2310.07221
Using Learnable Physics for Real-Time Exercise Form Recommendations
Good posture and form are essential for safe and productive exercising. Even in gym settings, trainers may not be readily available for feedback. Rehabilitation therapies and fitness workouts can thus benefit from recommender systems that provide real-time evaluation. In this paper, we present an algorithmic pipeline that can diagnose problems in exercise techniques and offer corrective recommendations, with high sensitivity and specificity, in real time. We use MediaPipe for pose recognition, count repetitions using peak-prominence detection, and use a learnable physics simulator to track motion evolution for each exercise. A test video is diagnosed based on deviations from the prototypical learned motion using statistical learning. The system is evaluated on six full and upper body exercises. These real-time recommendations, delivered via low-cost equipment like smartphones, will allow exercisers to rectify potential mistakes, making self-practice feasible while reducing the risk of workout injuries.
Abhishek Jaiswal, Gautam Chauhan, Nisheeth Srivastava
2023-10-11T06:11:11Z
http://arxiv.org/abs/2310.07221v1
# Using Learnable Physics for Real-Time Exercise Form Recommendations

###### Abstract.

Good posture and form are essential for safe and productive exercising. Even in gym settings, trainers may not be readily available for feedback. Rehabilitation therapies and fitness workouts can thus benefit from recommender systems that provide real-time evaluation. In this paper, we present an algorithmic pipeline that can diagnose problems in exercise technique and offer corrective recommendations, with high sensitivity and specificity, in real time. We use MediaPipe for pose recognition, count repetitions using peak-prominence detection, and use a learnable physics simulator to track motion evolution for each exercise. A test video is diagnosed based on deviations from the prototypical learned motion using statistical learning. The system is evaluated on six full and upper body exercises. These real-time recommendations, delivered via low-cost equipment like smartphones, will allow exercisers to rectify potential mistakes, making self-practice feasible while reducing the risk of workout injuries.

Keywords: real-time exercise pose recommendations, physics-inspired neural networks
[MISSING_PAGE_POST]

At present, however, such proposals face considerable difficulties. Most vision-based approaches to exercise tracking work with predetermined heuristic parameters which vary across exercises and participants, requiring considerable hand-crafting (Wang et al., 2017; Wang et al., 2018). While vision-based approaches to exercise type recognition and rep-counting are plentiful, approaches that seek to track exercise form are limited to simple upper body exercises with relatively little body movement (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018). Further, most such approaches offer exercise diagnoses retrospectively, after processing entire recorded exercise sessions (Wang et al., 2018; Wang et al., 2019). Taking the conventional idea of recommendation from collaborative filtering a step further, we frame the task of exercise supervision as that of recommending correct forms. To this end, we identify the over-general nature of the deep learning architectures used in vision-based exercise tracking pipelines as a key problem blocking progress in this area. Rather than use generic neural network architectures, we propose using a specific variety of neural networks, specifically designed to learn relationships between physical objects, as the base inference engine in such recommender systems. Using one such architecture - Interaction Networks (Beng et al., 2015) - we describe a novel recommender system for real-time exercise form correction in this paper. We show that our solution works with very high sensitivity and specificity for a wide variety of full-body and upper-body exercises. Section 2 places our proposal in line with existing approaches to exercise-specific recommendations. We explain the working of our physics inference model in Section 3 and the components of our recommender system in Section 4. The value of our system is demonstrated through experiments on different exercises in Section 5. Finally, we conclude by highlighting the salient points of our recommender system and identifying directions for future work in Section 6.

## 2. Related Work

Several recently proposed systems (Liu and Chu, 2017; Liu and Chu, 2018; Liu and Chu, 2018) have used state-of-the-art pose estimation techniques to craft heuristic joint angle thresholds for pose feedback. Recently, real-time pose diagnosis was done by Alatiah and Chen (Alatiah and Chen, 2018) using pre-calculated ranges of motion. Similarly, Ying et al.
(Ying et al., 2019) used pre-stored correct exercises to detect incorrect moves. Such systems offer only binary feedback without any corrective recommendations. More granular diagnoses are possible in a system recently proposed by Liu and Chu (Liu and Chu, 2018), who learned three joint angle indicators for each rep using an RNN and provided visual diagnosis for two simple upper body dumbbell exercises with high accuracy. However, their approach requires per-frame annotation for training. Similarly, Gharasuie et al. (Gharasuie et al., 2018) developed a low-cost system using AlphaPose (9)-based arm angles for upper-body exercises to count reps and also quantify exercise phase parameters to estimate user fatigue levels. While such heuristic-based methods provide helpful textual feedback in some instances, they tend to work well only for isolation arm exercises involving only a few joints and do not achieve significant diagnostic accuracy without extensive frame-level annotation. Our system, in contrast, with a more sophisticated inference engine, works well for compound exercises using only video-level annotation. Closer technically to our approach, Pose Trainer (Pose et al., 2017) uses OpenPose (B

[MISSING_PAGE_POST]

...the next state \(P\) of each landmark: \[P=f_{O}(O;E)\] For more information, readers are requested to refer to the IN paper (Bahasiwal et al., 2019).

Figure 1. **Example showing the working of an Interaction Network. Vector b includes sender and receiver object details, vector e is the resulting effect of an interaction, and vector c includes interaction effects and the exercised object.**

## 4. Proposing Pose Correcting Recommendations

We first outline the overall methodology of our pipeline, followed by a detailed description of its sub-components. To begin with, we feed a recorded or live video to our pipeline, which predicts per-frame keypoints for 25 joints through the MediaPipe API (Bahasiwal et al., 2019). Depending on motion evolution, we select exercise-specific landmarks (Table 1), followed by normalization and smoothing for physics modeling. The ML model predicts the motion rollouts for all the landmarks with visibility of only the initial rep state. Using these predictions, we calculate the Mean Squared Error (MSE) for individual landmarks and then transform them to the frequency domain for further processing, as described in subsequent sections. Essentially, we use frequency domain information from the MSE signals to classify exercise reps as either correct or incorrect (in one of multiple predefined modes of failure) using a Random Forest multi-class classifier (Figure 2). Thus, our pipeline receives visual input from the user side and emits textual recommendations from the model side.

### Rep Counting using Peak Prominence

We exploit cyclic movements within each exercise for rep segregation. To that end, we find peaks in the periodic landmark displacement plot. These peaks may contain extraneous motion data, such as fragments between successive reps and other discontinuities from tired and distracted performers. To detect genuine peaks, we measure each peak's importance using peak prominence and use the standard deviation of the prominences as a cutoff for our high-pass filter. Displacement values above the cutoff delineate the start and stop of a valid rep. Figure 3 shows the result of rep counting for a single lunges video.

Figure 3. **Peak prominence over vertical periodicity for rep counting.**
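A minimal sketch of this rep-segmentation step, assuming a 1-D vertical-displacement series as input; the function name and return convention are ours, not the paper's:

```python
import numpy as np
from scipy.signal import find_peaks, peak_prominences

def count_reps(displacement: np.ndarray):
    """Segment reps from a periodic landmark-displacement series.

    Peaks are filtered by prominence: the standard deviation of all
    peak prominences serves as the high-pass cutoff, as in Section 4.1.
    """
    peaks, _ = find_peaks(displacement)
    prominences = peak_prominences(displacement, peaks)[0]
    cutoff = prominences.std()
    valid = peaks[prominences > cutoff]
    # Consecutive surviving peaks delimit the start and stop of a rep.
    return list(zip(valid[:-1], valid[1:]))
```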
### Preprocessing

The MediaPipe API provides 3D positional time series data for 25 body landmarks for each exercise. We transform these coordinates for unidirectional facing and use the resulting view, along with the landmark's displacement amplitude, to fix the representative landmarks for an exercise. We apply Locally Weighted Scatterplot Smoothing (LOWESS) to each time series (K

### Error Analysis and Rep Classification

The MSE time series emitted by the IN informs the subsequent stages. We hypothesize that our physics engine predicts the correct method of exercising, such that considerable deviation from it would hint at an incorrect rep. Further, the specific combination of MSE from different body components would hint at the precise mistake made by the exerciser. To extract this information, we transform the MSE time series from all the representative landmarks to the frequency domain using the discrete-time Fourier transform (DTFT). The DTFT provides magnitude and phase values for each time series. Conversion to the frequency domain helps in two ways: it gives a fixed-size representation of the variable-length time series, and it helps to extract the features of the time series. This output of the DTFT, called the error signature (Figure 5), is a vector representation of an exercise rep of variable duration. For our case, we take the 11 principal amplitudes and the corresponding phase values to build the error signature of each exercise rep. At the final stage of our pipeline, we use a Random Forest classifier in a multi-class classification setting, with recommendation labels used as shown in Table 2. We found Random Forest to be the most consistent classifier across all the exercises (Table 3) and tuned its hyperparameters using randomized search cross-validation.

Table 2. Recommendations offered for full body exercises.

| Exercise | Recommendations |
| --- | --- |
| Lunges | Keep your knees behind the toes. Keep your legs closer, they are too wide apart. |
| Squats | Keep your knees behind the toes. Don't bend your knees inward. Keep your feet shoulder-width apart. |
| Situps | Your back should rise up completely. |
| Pushups | Keep your knees-hips-shoulders in a straight line. Lower your chest to align it with the hip. Lower your hips. Your chest should not touch the ground. |

Figure 4. **Push-ups** - stick figure and corresponding video frame.
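The error-signature construction lends itself to a compact sketch. Here the DTFT is approximated with an FFT of the MSE series, and the exact feature layout and classifier settings are our assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_COMPONENTS = 11  # principal DTFT amplitudes (and phases) kept per landmark

def error_signature(mse_series: np.ndarray) -> np.ndarray:
    """Fixed-size frequency-domain representation of one landmark's
    variable-length MSE time series (Section 4.3)."""
    spectrum = np.fft.rfft(mse_series)[:N_COMPONENTS]
    return np.concatenate([np.abs(spectrum), np.angle(spectrum)])

# One feature vector per rep: concatenated signatures of the representative
# landmarks; labels are the recommendations of Table 2 plus a correct class.
# X, y = ...  (built from training reps)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X, y); clf.predict(features_of_test_rep.reshape(1, -1))
```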
## 5. Empirical Evaluation

### Data

For the evaluation of full-body exercises, we used a proprietary dataset from E-Trainer Analytics Wizard Pvt. Ltd. This dataset contains the front and side views of seven exercisers performing more than 150 reps for four exercises - squats, push-ups, lunges, and sit-ups. Each exercise has one correct class, whereas incorrectly done exercises could belong to multiple classes. Incorrect videos were annotated with corrective recommendations by expert physical trainers. We used a train-test split of 60%-40% to train the classifier on incorrect classes. Similarly, 60% of the correct class data was used to train our physics engine. To compare against existing approaches to form prediction, we conducted the evaluation using annotated data for shoulder press and front raise from a publicly available dataset (Krishnan et al., 2019).

Table 4. **Baseline comparisons for four full body exercises (left) and two upper body exercises (right). Classification results reported using weighted F1 score with standard deviations over five train-test runs (*Shoulder Press results reported for two incorrect classes).**

| Model | Squats | Push-ups | Lunges | Sit-ups |
| --- | --- | --- | --- | --- |
| MLP | 0.91 ± 0.02 | 0.98 ± 0.03 | 0.95 ± 0.03 | **0.99 ± 0.01** |
| RNN | 0.85 ± 0.04 | 0.98 ± 0.01 | 0.94 ± 0.01 | 0.98 ± 0.02 |
| GRU | 0.87 ± 0.03 | 0.98 ± 0.01 | 0.93 ± 0.02 | 0.94 ± 0.04 |
| IN | **0.94 ± 0.02** | **0.98 ± 0.01** | **0.97 ± 0.01** | 0.98 ± 0.01 |

| Model | Shoulder Press* | Front Raise |
| --- | --- | --- |
| Ng (20) | 0.90 | 0.77 |
| PoseTrainer (7) | 0.49 | 0.76 |
| MLP | **0.99 ± 0.01** | 0.82 ± 0.04 |
| RNN | 0.99 ± 0.01 | 0.79 ± 0.05 |
| GRU | 0.95 ± 0.06 | 0.80 ± 0.04 |
| IN | 0.98 ± 0.01 | **0.88 ± 0.03** |

Figure 5. **DTFT representations of error signatures of different squats labels. Note how the phase plots for the incorrect squats differ from each other as well as from a correct squat systematically.**

### Baseline Comparisons

Among available learnable dynamics predictors, we exploit INs for their interpretability and simplicity. Our pipeline can also function with other motion predictors to the degree that they can accurately mimic the dynamics of the exercise. To test this hypothesis, we evaluate our model against several baselines. The **Multi-layer perceptron (MLP) baseline**, with three 256-length hidden layers and ReLU activation, has all the information needed to learn the interaction dynamics, requiring it to assimilate relation indices implicitly without explicit scene factorization. The **Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU)** baselines, capable of modeling posture evolution, have three recurrent units with three features in the hidden state. The final hidden state output is fed to a fully connected layer to predict future dynamics. All the above baselines use flattened node attributes as input. Additionally, we compare our pipeline against popular heuristic techniques (Krishnan et al., 2019), which examine reps using geometric thresholds over joint features.
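All of these predictors are scored the same way: rolled out from the first frame of a rep and compared against the observed trajectory. A sketch of that evaluation loop, with the predictor interface as our assumption:

```python
import numpy as np

def rollout_mse(predict, initial_state, observed):
    """Roll a learned dynamics model forward from the first frame of a
    rep and score it against the observed landmark trajectory.

    predict: maps a state (landmarks x 3) to the predicted next state.
    observed: array of shape (T, landmarks, 3) for one rep.
    Returns the per-landmark MSE time series, shape (T - 1, landmarks).
    """
    state = initial_state
    errors = []
    for t in range(1, observed.shape[0]):
        state = predict(state)  # the model sees only its own predictions
        errors.append(((state - observed[t]) ** 2).mean(axis=-1))
    return np.asarray(errors)
```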
Next, we describe ablation modifications of the IN's architecture and input. The **Attribute Hidden IN** uses the same IN architecture but with an empty relation attribute matrix, which, in principle, could be deduced from position data at the cost of estimating complex distance and inclination functions. The **Independent object IN** simulates removing the relation-centric component by zeroing out the interaction effects, which incapacitates it from modeling object-object interactions. The **Fully Connected (FC) IN** connects each joint with every other joint, providing the same capacity as the IN but involving additional irrelevant inputs. The **Global Connection (GC) IN** additionally connects all the landmark points to the two stationary reference points, modeling both local and global interactions for superior information propagation.

### Results

We evaluate our diagnostic system on six exercises using three criteria: rep counting, posture diagnosis, and real-time prediction. Our peak-prominence based algorithm perfectly counted all the reps in the full-body exercises. For each rep detected, we measured recommendation accuracy using weighted F1 scores in a multi-class classification setting.

Figure 6. **Average rollout prediction errors (MSE) over exercise reps for baseline models and the Interaction Network. Even though the MLP has good F1 scores (Table 4) in some cases, its high prediction error makes its performance unreliable.**

#### 5.3.1. Posture diagnosis

Our results (Table 4) indicate that a pipeline endowed with physics learning capability outperforms all baselines, effectively differentiating correct and incorrect exercise reps. Performance can be comparable for fewer prediction classes (e.g., Sit-ups, Shoulder Press) or simple exercises with a small range of joint motion (e.g., Push-ups). However, the performance gain is evident as the number of incorrect classes increases (Front Raise - Table 5). Since classification results are an indirect measure of physics modeling, we also explore the MSE for next-state prediction (Figure 6). In all cases, a physics learning engine best describes the motion dynamics. Even though the MLP showed good classification (Table 4), it significantly deviates from the actual exercise, causing erratic performance, especially with increasing exercise complexity (Table 5). Even an ill-suited model can classify well if the error signatures are separable. However, such arbitrary performance gains do not scale well as the number of classes increases. Our ablation study on the IN's performance gain indicates that relational attributes are critical for learning the physics of interactions (Figure 7). Reasonable variations in the joint-to-joint links (as with FC-IN and GC-IN) show similar effects, with GC-IN edging slightly ahead of the vanilla IN, probably due to faster information propagation. Stochastic interactions with involuntary factors (like fatigue and distractions) possibly cap the expected performance gain from global propagation. The FC-IN, despite the redundant information in its irrelevant relations, competes well, presumably because the IN learns to weigh the importance of exercise-specific relations. This architecture may thus bypass the requirement of an explicit relational matrix, provided that the pose estimation accurately detects all body landmarks. As expected, the independent object IN without the relation function \(f_{r}\) finds it challenging to model the motion dynamics. Similarly, the Attribute Hidden IN is unable to exploit the joints' information without relational attributes.
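Figure 1 and the update rule \(P=f_{O}(O;E)\) suggest a minimal sketch of one IN step; the layer sizes and names below are ours, and the comments note how the ablations above map onto the relation term:

```python
import torch
import torch.nn as nn

class InteractionNetwork(nn.Module):
    """One IN step: relation effects e = f_R(b), next state p = f_O(o; e)."""

    def __init__(self, state_dim, rel_attr_dim, effect_dim=32):
        super().__init__()
        self.f_r = nn.Sequential(  # relation model over sender/receiver pairs
            nn.Linear(2 * state_dim + rel_attr_dim, 64), nn.ReLU(),
            nn.Linear(64, effect_dim))
        self.f_o = nn.Sequential(  # object model
            nn.Linear(state_dim + effect_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim))

    def forward(self, states, senders, receivers, rel_attrs):
        # states: (n_objects, state_dim); senders/receivers: (n_relations,)
        b = torch.cat([states[senders], states[receivers], rel_attrs], dim=-1)
        e = self.f_r(b)                      # per-relation interaction effects
        agg = torch.zeros(states.shape[0], e.shape[-1])
        agg.index_add_(0, receivers, e)      # sum incoming effects per object
        # "Independent object IN" zeroes `agg`; "Attribute Hidden IN" passes
        # an empty rel_attrs; FC-IN / GC-IN change the sender/receiver lists.
        return self.f_o(torch.cat([states, agg], dim=-1))
```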
#### 5.3.2. Diagnosis latency

The MediaPipe Android API, fed with the exercise's camera feed, outputs the joints' coordinate time series. After rep segregation, each rep's data is passed to the server, where our pipeline classifies it as correct or diagnoses it as a mistake of a particular type. A corrective recommendation specific to the estimated diagnosis is displayed to the user through our mobile application. For exercisers operating at a normal tempo, this feedback arrives before their next rep is halfway complete, prompting the user to instantly correct any mistakes in technique (see Table 6 for a quantitative summary and Table 7 for examples of live recommendation videos).

Table 6. **Lag time (seconds) for new rep recognition.**

| Exercise | Mean (sec) | Standard deviation (sec) |
| --- | --- | --- |
| Squats | 0.55 | 0.13 |
| Sit-ups | 0.39 | 0.07 |
| Push-ups | 0.36 | 0.11 |
| Lunges | 0.54 | 0.09 |

Table 7. **Links to videos of interaction network predictions rolled out over time and of exercise sessions diagnosed using our system in real-time.**

| Data type | Link |
| --- | --- |
| IN prediction simulations | Link |
| Exercise demo | Link |

## 6. Discussion and Limitations

Self-training is gaining popularity as more and more people feel disinclined to commit to a dedicated gym routine. By providing accurate real-time recommendations, we can benefit such users without compromising their day-to-day schedules. Our recommender system focuses on rep counting and diagnosis, assuming that the exercise performed is known (or is easily knowable). For instance, Moran et al. (2019) used the MediaPipe API (Dian et al., 2016) for pose recognition to detect the type of exercise performed in real-time, a capability that could easily inform exercise type in our pipeline. Thus, to summarize, this paper offers a novel system for recommending form corrections to exercisers performing rep-based training in real-time with high precision. We introduce the use of learnable physics engines to model body physics, a task for which they are very well-suited. The success of our physics model permits downstream classifiers to accurately diagnose modes of failure of exercises using differential prediction error residuals between the model prediction and actual observations. Empirical evaluations show that our system diagnoses defective techniques in complex full-body exercises with high sensitivity and specificity. We expect the adoption of such interactive systems to help healthcare providers scale up access to supervised physical exercise. We conclude with a brief exploration of the limitations of our system and possible directions for future work. The most critical technical limitation of the present system is its reliance on pre-defined relational attributes for each exercise's Interaction Network. These attributes depend on the nature of human biomechanics and must be decided beforehand. Learning relational attributes from data could improve performance even further, a clear direction for future work. Our system is currently tested only on exercises with significant vertical periodicity, an artifact of our peak-prominence based rep-counting scheme, though vertical periodicity also exists in many other exercises. Replacing this with a more sophisticated rep-counting method could extend our system's capabilities to a more general set of exercises.
In particular, given the known diagnostic value of gait analysis in predicting health outcomes for the elderly (Dian et al., 2016; Raza et al., 2017), extending this system's digital diagnostic capabilities to monitoring and diagnosing gait-related problems presents a very promising direction for future work.
2303.07319
Observations of GRB 230307A by TESS
We present the TESS light curve of GRB 230307A. We find two distinct components: a bright, prompt optical component at the time of the Fermi observation that peaked at TESS magnitude 14.49 (averaged over 200 seconds), followed by a gradual rise and fall over 0.5 days, likely associated with the afterglow, that peaked at 17.65 mag. The prompt component is observed in a single 200s Full Frame Image and was undetectable in the next TESS image ($T_{\rm mag} > 17.79$). Assuming that the onset of the optical transient was coincident with the gamma-ray emission, the prompt emission lasted less than 73.6 seconds, which implies the true peak was actually brighter than $T_{\rm mag} =$ 13.40. We also fit parametric models to the afterglow to characterize its shape. The TESS light curve can be retrieved at https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt.
Michael M. Fausnaugh, Rahul Jayaraman, Roland Vanderspek, George R. Ricker, Christopher J. Burke, Knicole D. Colon, Scott W. Fleming, Hannah M. Lewis, Susan Mullally, Allison Youngblood, Thomas Barclay, Eric Burns, David W. Latham, S. Seager, Joshua N. Winn, Jon M. Jenkins
2023-03-13T17:41:39Z
http://arxiv.org/abs/2303.07319v2
# Observations of GRB 230307A by TESS

###### Abstract

We present the TESS light curve of GRB 230307A. We find two distinct components: a bright, prompt optical component at the time of the _Fermi_ observation that peaked at TESS magnitude \(14.49\pm 0.05\) (averaged over 200 seconds), followed by a gradual rise and fall over 0.5 days, likely associated with the afterglow, that peaked at \(17.65\pm 0.06\) mag. The prompt component is observed in a single 200s Full Frame Image and was undetectable in the next TESS image (\(T_{\rm mag}>17.79\)). Assuming that the onset of the optical transient was coincident with the gamma-ray emission, the prompt emission lasted less than 73.6 seconds, which implies the true peak was actually brighter than \(T_{\rm mag}=13.40\). We also fit parametric models to the afterglow to characterize its shape. The TESS TICA light curve can be retrieved at [https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt](https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt).

## 1 Introduction

The Transiting Exoplanet Survey Satellite (TESS, Ricker et al., 2015) observed the optical afterglow of GRB 230307A in the Full Frame Images (FFIs), sampled at 200 seconds. In this research note, we present the TESS light curve and observed properties of the optical emission.

## 2 Observations and Data Reduction

TESS observed GRB 230307A as part of its normal operations in Sector 62, which lasted from 2023 February 12 to March 10. The FFI data from March 3-10 were downloaded from the spacecraft on 2023 March 10, processed by the Payload Operations Center (POC) at MIT, and delivered to the Mikulski Archive for Space Telescopes (MAST) as a TICA High Level Science Product, doi:10.17909/t9-9j8c-7d30 (Fausnaugh et al., 2020).1 The order of processing events was:

* March 10, 16:23 UTC--Deep Space Network (DSN) begins contact with the TESS spacecraft to downlink the data.
* March 11, 10:56 UTC--Compressed data arrive at POC from the DSN; POC begins decoding data delivery and writing image files.
* March 11, 16:35 UTC--POC begins calibrating data and fitting World Coordinate Solutions as part of the TICA data delivery.
* March 11, 18:42 UTC--POC begins staging and verifying the TICA delivery to MAST.
* March 12, 1:50 UTC--MAST begins data ingest.
* March 12, 6:40 UTC--GRB data are made public on MAST, announced on the MAST holdings page,2 and announced on MAST social media.

Footnote 1: [https://archive.stsci.edu/hlsp/tica](https://archive.stsci.edu/hlsp/tica)

Footnote 2: [https://outerspace.stsci.edu/display/TESS/TESS+Holdings+Available+by+MAST+Service](https://outerspace.stsci.edu/display/TESS/TESS+Holdings+Available+by+MAST+Service)

Once the data were public, we processed the images with the difference imaging analysis pipeline described by Fausnaugh et al. (2021). We identified a transient point source in the TESS difference image overlapping with the _Fermi_ trigger at BJD\(-2,460,000=11.15518\) days, close to the coordinates reported by Levan et al. (2023). We measured the position of the source using flux-weighted centroids in a 5x5 square pixel aperture. We found a position of RA = 04:03:25.36, Dec = \(-\)75:22:41.31, which agrees with the position from ULTRACAM to 1.8 arcseconds. This difference is consistent with the typical 1\(\sigma\) precision of TICA World Coordinate Solutions, about 2 arcseconds.
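The flux-weighted centroid over a small cutout reduces to a few lines; a sketch assuming `cutout` is a background-subtracted 5x5 difference-image stamp around the source (names are ours):

```python
import numpy as np

def flux_weighted_centroid(cutout: np.ndarray):
    """Flux-weighted centroid of a 5x5 pixel difference-image cutout.

    Returns (row, col) in cutout coordinates; adding the cutout's corner
    offset and applying the WCS would give RA/Dec.
    """
    rows, cols = np.indices(cutout.shape)
    total = cutout.sum()
    return (rows * cutout).sum() / total, (cols * cutout).sum() / total
```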
We extracted light curves from the difference images using forced PSF photometry at the location of the afterglow reported by Levan et al. (2023). Figure 1 shows the resulting light curve. Based on the scatter of the light curve prior to the GRB discovery, we estimate a 5\(\sigma\) limiting magnitude of 18.63 in 1600 seconds. The TESS light curve is available at [https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt](https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt). Information on the file format and processing steps is available on the same website.3

Footnote 3: [https://tess.mit.edu/public/tesstransients/pages/readme.html](https://tess.mit.edu/public/tesstransients/pages/readme.html)

The light curve shows two distinct components. First is a prompt rise from zero flux to TESS magnitude \(T_{\rm mag}=14.49\pm 0.05\) in a single FFI at BJD\(-2,460,000=11.15518\) days (mid exposure). The quoted uncertainty includes a statistical uncertainty of 0.02 mag, but is dominated by uncertainty in the TESS instrument absolute flux calibration (Vanderspek et al., 2018). The source is not detected in the next FFI, with a 3\(\sigma\) upper limit of 17.79 mag. The second emission component starts just after the prompt emission, and consists of a gradual rise and fall in the TESS light curve over 0.5 days, likely associated with the afterglow.

## 3 Analysis of the Afterglow

We fit two parametric models to the afterglow light curve: a fast-rise exponential-decay (FRED) model and a double power law model. We fit the original data sampled at 200 seconds, excluding the prompt emission at \(T_{\rm mag}=14.49\) from the fits to the afterglow emission. The reported uncertainties correspond to the 68% confidence intervals of the fits. For both models, we fit a residual background error as a nuisance parameter, which is consistent with zero at \(1\sigma\). From the FRED model, we find that

* the peak magnitude of the afterglow is \(17.65\pm 0.06\),
* the rise-time \(t_{\rm rise}=0.057\pm 0.007\) days,
* the time containing 90% of the flux \(t_{90}=0.5\pm 0.1\) days,
* and the exponential decay timescale \(t_{\rm exp}=0.23\pm 0.03\) days.

For the double power law model, we used the parameterization of Li et al. (2012): \[F=F_{02}\left[\left(\frac{t}{t_{\rm break}}\right)^{\alpha_{1}\omega}+\left(\frac{t}{t_{\rm break}}\right)^{\alpha_{2}\omega}\right]^{-1/\omega} \tag{1}\] where \(F_{02}\) is the flux normalization at the break time \(t_{\rm break}\), while \(\alpha_{1}\) and \(\alpha_{2}\) are slope parameters. We fixed the smoothing parameter \(\omega\) to a value of 5.0. The best-fit model yields

* \(t_{\rm break}=0.09\pm 0.02\) days after the _Fermi_ trigger,
* \(\alpha_{1}=0.6\pm 0.2\),
* and \(\alpha_{2}=-0.21\pm 0.07\).

We also fit the TESS light curve together with the deep \(r\)-band magnitudes reported by Levan et al. (2023) and O'Connor et al. (2023), and found consistent results.

## 4 Discussion

The end time of the FFI with the prompt emission is BJD\(-2,460,000=11.15634\) days, while the _Fermi_ trigger is at BJD\(-2,460,000=11.15549\) days. Assuming that the onset of the optical light curve is coincident with the onset of the gamma-ray emission observed by _Fermi_, the prompt emission observed by TESS lasted less than 73.6 seconds. A shorter duration implies a higher peak magnitude for the same fluence; we find \(T_{\rm mag}=13.40\) mag for a 73.6 second burst, and the peak could be even brighter if the duration of the prompt emission is shorter.
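Spelling out the arithmetic behind this scaling (the same fluence compressed from the 200 s exposure into a 73.6 s burst):

\[T_{\rm mag}^{\rm peak}=14.49-2.5\log_{10}\!\left(\frac{200\ {\rm s}}{73.6\ {\rm s}}\right)\approx 14.49-1.09=13.40,\]

so the implied peak brightens by about 1.1 mag relative to the exposure-averaged value, and by more if the burst was shorter still.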
For a given luminosity distance \(D\), the isotropic monochromatic luminosity at 7839 Å (the pivot wavelength of the TESS filter) would be greater than \(9\times 10^{44}(D/1\ {\rm Gpc})^{2}\ {\rm erg\ s^{-1}}\). This calculation is corrected for Galactic extinction at 7839 Å assuming a Cardelli, Clayton, & Mathis (1989) extinction law and \(E(B-V)=0.0758\) mag from Schlafly & Finkbeiner (2011), but does not take into account the unknown spectral shape of the GRB in the broad TESS filter (6,000-10,000 Å). These data demonstrate the utility of wide-field, continuous monitoring for studies of fast extragalactic transients. TESS data, released approximately every 7 days as TICA High Level Science Products, will facilitate these studies over the next several years.

Figure 1: TESS light curve of GRB 230307A. The large star shows the prompt emission coincident with the _Fermi_ discovery at BJD\(-2,460,000=11.15549\) days (vertical red line). The grey marks show the 200 second cadence light curve, and the black points show the light curve binned to 1600 seconds. The limiting magnitude in 1600 seconds is 18.63, based on the scatter of the light curve prior to the GRB discovery. All timestamps are given relative to BJD 2,460,000. Fits for two models of the afterglow are shown: a double power law (PL) and a fast-rise exponential-decay (FRED) model. For both fits, we excluded the prompt emission at the time of the _Fermi_ trigger. Models were fit to the original light curve sampled at 200 seconds; data are binned here only for display purposes. The horizontal dashed line shows the residual background of the FRED model, which is a nuisance parameter in our fits. The vertical blue line shows the break time \(t_{\rm break}\) of the double power law. The TESS light curve is available at [https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt](https://tess.mit.edu/public/tesstransients/light_curves/lc_grb230307A_cleaned.txt).

## Acknowledgments

We thank the TESS-POC team at MIT and MAST for facilitating the TICA HLSP delivery. The TESS mission is funded by NASA's Science Mission Directorate. Software: Matplotlib (Hunter, 2007), Numpy (van der Walt et al., 2011), Scipy (Oliphant, 2007), Astropy (Astropy Collaboration et al., 2018), Astroquery (Ginsburg et al., 2019), gwcs (Dencheva et al., 2020), TICA (Fausnaugh et al., 2020)
2308.12300
A fully-coupled nonlinear magnetoelastic thin shell formulation
A geometrically exact dimensionally reduced order model for the nonlinear deformation of thin magnetoelastic shells is presented. The Kirchhoff-Love assumptions for the mechanical fields are generalised to the magnetic variables to derive a consistent two-dimensional theory based on a rigorous variational approach. The general deformation map, as opposed to the mid-surface deformation, is considered as the primary variable, resulting in a more accurate description of the nonlinear deformation. The commonly used plane stress assumption is discarded due to the Maxwell stress in the surrounding free-space requiring careful treatment on the upper and lower shell surfaces. The complexity arising from the boundary terms when deriving the Euler-Lagrange governing equations is addressed via a unique application of Green's theorem. The governing equations are solved analytically for the problem of an infinite cylindrical magnetoelastic shell. This clearly demonstrates the model's capabilities and provides a physical interpretation of the new variables in the modified variational approach. This novel formulation for magnetoelastic shells serves as a valuable tool for the accurate design of thin magneto-mechanically coupled devices.
Abhishek Ghosh, Andrew McBride, Zhaowei Liu, Luca Heltai, Paul Steinmann, Prashant Saxena
2023-08-18T17:56:38Z
http://arxiv.org/abs/2308.12300v1
# A fully-coupled nonlinear magnetoelastic thin shell formulation

###### Abstract

A geometrically exact dimensionally reduced order model for the nonlinear deformation of thin magnetoelastic shells is presented. The Kirchhoff-Love assumptions for the mechanical fields are generalised to the magnetic variables to derive a consistent two-dimensional theory based on a rigorous variational approach. The general deformation map, as opposed to the mid-surface deformation, is considered as the primary variable, resulting in a more accurate description of the nonlinear deformation. The commonly used plane stress assumption is discarded due to the Maxwell stress in the surrounding free-space requiring careful treatment on the upper and lower shell surfaces. The complexity arising from the boundary terms when deriving the Euler-Lagrange governing equations is addressed via a unique application of Green's theorem. The governing equations are solved analytically for the problem of an infinite cylindrical magnetoelastic shell. This clearly demonstrates the model's capabilities and provides a physical interpretation of the new variables in the modified variational approach. This novel formulation for magnetoelastic shells serves as a valuable tool for the accurate design of thin magneto-mechanically coupled devices.

**Keywords:** Nonlinear magnetoelasticity; Magnetoelastic shells; Kirchhoff-Love; Large deformation

## 1 Introduction

Large deformation of thin structures made from soft rubbers or elastomers is critical for numerous engineering components, including tyres, airbags, air springs, buffers, pneumatic actuators, and soft grippers (Galley et al., 2019; Hao et al., 2017). The analysis of slender structures undergoing large deformation, such as rods, membranes, plates, and shells, in which one or more characteristic dimensions are negligible compared to the others, is challenging. They exhibit both material and geometric nonlinearities, often leading to instabilities. Slender structures are generally modelled as lower-dimensional manifolds embedded in three-dimensional space with appropriate kinematic simplifications (Niordson, 1985; Simo and Fox, 1989). Novel developments in smart materials with multi-physics coupling have led to a dramatic increase in technological applications in soft robotics, actuators, and sensors. These materials often rely on a non-mechanical stimulus from electric, magnetic, thermal or chemical fields (Jolly et al., 1996; McKay et al., 2010; Kim et al., 2012; Sheng et al., 2012) and are difficult to model due to their complex physics. Of particular relevance is magneto-mechanical coupling in thin structures due to the ability to produce extremely large reversible deformations in a short time-scale. The presence of strong magneto-mechanical coupling in some manufactured materials, such as magnetorheological elastomers (MREs) (Jolly et al., 1996), has the potential to underpin future engineering and technological applications, for example, in micro-robotics (Hu et al., 2018; Ziyu et al., 2019), as sensors and actuators (Bose et al., 2012; Psarra et al., 2017), in active vibration control (Ginder et al., 2000), and as waveguides (Saxena, 2018; Karami Mohammadi et al., 2019). Magnetoelastostatics concerns the analysis of suitable phenomenological models to describe the equilibrium of deformable solids associated with multifunctional processes involving magnetic and elastic effects.
The main constituent of the theory is the coupling between elastic deformation and magnetisation in the presence of externally applied mechanical and magnetic force fields. The magnetoelastic coupling occurs in response to a phenomenon involving reconfigurations of small magnetic domains. This is observable as a continuum vector field emerging from an averaging of microscopic and distributed subfields. Thus, the imposition of a magnetic field also induces a deformation of the material in addition to the magnetic effects caused by the traditional mechanical forces. With a rich history spanning six decades (Tiersten, 1964; Brown, 1966; Maugin and Eringen, 1972; Maugin, 1988; DeSimone and James, 2002; Kankanala and Triantafyllidis, 2004; Dorfmann and Ogden, 2004; Keip and Sridhar, 2019; Sharma and Saxena, 2020; Moreno-Mateos et al., 2023a), the mathematical and computational modelling of magnetoelasticity continues to be an active area of research. The coupling between magnetic fields and mechanical deformations of a shell structure introduces additional complexities compared to purely mechanical or electromagnetic analyses, making the modelling and solution process considerably more challenging.

Magneto-active soft materials are broadly divided into two sub-classes based on the type of embedded particles: soft-magnetic soft materials (SMSMs) and hard-magnetic soft materials (HMSMs). SMSMs contain particles with low coercivity, such as iron or iron oxides, and their magnetisation vector varies under external magnetic loading. They are often modelled as three-dimensional solid continua (Danas et al., 2012; Saxena et al., 2013; Ethiraj and Miehe, 2016; Mehnert et al., 2017; Mukherjee et al., 2020; Bustamante et al., 2021; Akbari and Khajehsaeid, 2020; Hu et al., 2022). HMSMs consist of particles with high coercivity, such as CoFe\({}_{2}\)O\({}_{4}\) or NdFeB. The magnetisation vector, or remnant magnetic flux, of HMSMs remains unchanged over a wide range of applied external magnetic flux (Lee et al., 2020; Schumann et al., 2021; Moreno-Mateos et al., 2023). The viscoelastic material behaviour of HMSMs significantly affects the magnetic actuation behaviour of hard-magnetic soft actuators (Lucarini et al., 2022; Nandan et al., 2023; Stewart and Anand, 2023).

Motivated by the need to model thin magnetoelastic structures, Steigmann (2004) presented a dimensionally reduced-order model for thin magnetoelastic membranes. Barham et al. (2008) investigated the limit point instability for a finitely deforming circular magnetoelastic membrane in an axisymmetric dipole field under one-way magneto-mechanical coupling. This analysis was extended by Reddy and Saxena (2017, 2018); Saxena et al. (2019); Ali et al. (2021); Mishra et al. (2023) to study wrinkling, bifurcation, and limit point instabilities in axisymmetric inflating magnetoelastic membranes. However, a shell theory for fully-coupled magnetoelasticity that can account for bending resistance is still lacking.

An overview of the classical shell theory is given, for example, in Simo and Fox (1989); Cirak et al. (2000); Kiendl et al. (2009) or in the books by Basar and Kratzig (1985); Niordson (1985); Blauwendraad and Hoefakker (2014). When modelling physical phenomena on curved surfaces, defining geometric quantities (normal vectors, curvatures, etc.) and differential surface operators (gradients, divergence, etc.) is crucial (Steinmann, 2015).
Reduced-order theories for hard-magnetic linear and nonlinear beams (Wang et al., 2020; Yan et al., 2022) and rods (Sano et al., 2022) have been derived based on the three-dimensional model presented in Zhao et al. (2019). These studies involved a dimensional reduction procedure on the three-dimensional magneto-elastic energy, assuming reduced kinematics based on the Kirchhoff-Love assumptions (Niordson, 1985). Green and Naghdi (1983) focused on the nonlinear and linear thermomechanical theories of deformable shell-like bodies, considering electromagnetic effects. The development was carried out using a direct approach, utilising the two-dimensional theory of directed media known as Cosserat surfaces. Yan et al. (2020) studied linear elastic magneto-active axisymmetric shells made of HMSMs. They leveraged the coupling between mechanics and magnetism to tune the onset of instability of shells undergoing pressure buckling. Magnetoelastic shell models for axisymmetric deformation and geometrically exact strain measures were compared with experimental results. Their findings demonstrated that the magnetic field can control the critical buckling pressure of highly sensitive spherical shells with imperfections (Hutchinson, 2016; Hutchinson and Thompson, 2018). Pezzulla et al. (2022) performed a dimensional reduction of the three-dimensional magneto-elastic energy contribution presented in Zhao et al. (2019) by assuming a reduced kinematics according to the Kirchhoff-Love assumptions, focussing specifically on hard-magnetic, linear-elastic shells. Models for non-axisymmetric deformations of magnetoelastic shells have been derived for shallow shells (Seffen and Vidoli, 2016; Loukaides et al., 2014). Dadgar-Rad and Hossain (2023) proposed a micropolar-based shell model to predict the deformation of thin HMSMs, incorporating a ten-parameter formulation that considers the micro-rotation of the microstructure with the enhanced assumed strain method to alleviate locking phenomena. Lee et al. (2023) have presented a direct two-dimensional formulation to couple non-mechanical stimuli with large deformation of shells. However, despite this wealth of research, models for general deformation cases in the context of SMSM shells have received limited attention, with the form of the coupling between magnetism and mechanics remaining an open question.

To address the aforementioned shortcoming, a theory is derived to model large deformations of soft-magnetic hyperelastic thin shells using the Kirchhoff-Love assumption for mechanical deformation and a linearly varying magnetic scalar potential across the thickness of the structure. The salient features of the theory are the following:

1. In the present work, a derived theory approach is adopted for SMSM shells by considering the total energy of a three-dimensional incompressible magnetoelastic body and its surrounding space as the starting point. A two-dimensional system is derived based on appropriate approximations for a thin shell by incorporating a new set of generalised solution variables in a modified variational setting. The magnetic potential in the free-space is treated as an independent solution variable to capture the underlying physics and strongly couple the magnetoelastic interactions between the shell and the free space. This approach is required to formulate appropriate boundary and interface conditions and ensure consistency in the mathematical modelling of the system.
2. In numerical simulations involving thin structures, the common practice is to apply external hydrostatic pressure at the mid-surface. In the present derived theory approach, a distinction is made between the applied pressures at the top and bottom surfaces. The implications of this departure from the conventional practice are discussed.

3. In the present context, obtaining a dimensionally reduced-order theory entails linearly approximating the variation of the magnetic potential across the shell's thickness. This leads to the total potential in the magnetoelastic body adopting a form similar to the Kirchhoff-Love assumption used for the mechanical behaviour of hyperelastic thin shells. Such an approximation is well-suited for modelling the physics of thin magnetoelastic shells and facilitates mathematical simplifications.

4. The plane-stress assumption, commonly employed in structural mechanics, assumes negligible stresses in the thickness direction of thin plates or shells. However, it is not directly applicable to magnetoelastic shells due to the coupling between the magnetic field and mechanical deformation. This coupling results in three-dimensional stress and strain states, exemplified by the study of an inflating soft magnetoelastic cylindrical shell presented here. The plane-stress assumption fails to consider magnetic field-induced stresses in the thickness direction, leading to inaccuracies.

5. The physical three-dimensional shell is conceptualised as a stack of surfaces. The overall deformation of the shell structure is thereby described by the deformed mid-surface position vector, augmented by a term accounting for the through-thickness stretch and the deformed normal. Introducing the first variation of the thickness stretch and the deformed normal in the modified variational format adds richness and complexity to the derivation of the shell system of equations. Notably, when obtaining a reduced-order model for the thin soft magnetoelastic shell, a unique application of Green's theorem is required. The present work addresses this complexity and provides a suitable generalisation by deriving a system of partial differential equations with boundary terms that encompass these effects.

### Outline

The paper is organised as follows: In Section 2, the mathematical preliminaries and the fundamentals of nonlinear magnetoelastostatics are introduced. Sections 3.1 and 3.2 define the geometry and kinematics of nonlinear magnetoelastic thin shells, respectively. In Section 3.3, the expressions for the divergence of the total stress tensor and the magnetic induction vector in the shell are provided. These are essential for deriving the equilibrium equations. Section 3.4 presents the interface condition on the magnetic potential, which is imposed by the continuity of the tangent-space components of the magnetic field at the shell boundaries. Section 4 introduces the variational formulation accounting for the magnetoelastic body and the corresponding free space under suitable loading situations. Then, in Section 5, a new set of generalised solution variables for a modified variational format suitable for deriving the shell system of equations is introduced. Sections 5.2, 5.3, and 5.4 demonstrate the contributions of the stress tensors, magnetic induction vector, and external loads to the modified variational setting, respectively.
Section 6 is dedicated to obtaining the governing equations for the system, and in Section 7, an example of an inflating magnetoelastic thin cylinder is illustrated to derive the response equations for a given boundary-value problem using the derived equations. Finally, in Section 8, concluding remarks are presented.

### Notation

A variable typeset in a normal weight font represents a scalar. A bold weight font denotes a first or second-order tensor. A scalar variable with superscript or subscript indices normally represents the components of a vector or second-order tensor. Latin indices \(i,j,k,\dots\) vary from 1 to 3 while Greek indices \(\alpha,\beta,\gamma,\dots\), used as surface variable components, vary from 1 to 2. Einstein summation convention is used throughout. \(\mathbf{e}_{i}\) represent the basis vectors of an orthonormal system in three-dimensional Euclidean space with \(x\), \(y\), and \(z\) as its components. The three covariant basis vectors for a surface point are denoted as \(\mathbf{a}_{i}\), where \(\mathbf{a}_{\alpha}\) are the two tangential vectors and \(\mathbf{a}_{3}\) is the normal vector, with \(\theta^{\alpha}\) and \(\eta\) as the respective coordinate components. The comma symbol in a subscript represents the partial derivative with respect to the surface parameters; for example, \(A_{,\beta}\) is the partial derivative of \(A\) with respect to \(\theta^{\beta}\). The scalar product of two vectors \(\boldsymbol{p}\) and \(\boldsymbol{q}\) is denoted \(\boldsymbol{p}\cdot\boldsymbol{q}\), and the tensor product of these vectors is a second-order tensor \(\boldsymbol{H}=\boldsymbol{p}\otimes\boldsymbol{q}\). Operation of a second-order tensor \(\boldsymbol{H}\) on a vector \(\boldsymbol{p}\) is given by \(\boldsymbol{H}\boldsymbol{p}\). The scalar product of two tensors, \(\boldsymbol{H}\) and \(\boldsymbol{G}\), is denoted \(\boldsymbol{H}:\boldsymbol{G}\). The notation \(\left\|\cdot\right\|\) represents the usual (Euclidean) norm. For a second-order tensor in its component form \(\boldsymbol{H}=H^{ij}\mathbf{a}_{i}\otimes\mathbf{a}_{j}\), the component matrix is denoted \(\left[H^{ij}\right]\). Circular brackets ( ) are used to denote the parameters of a function. Square brackets \(\left[\,\right]\) are used to group expressions. If brackets are used to denote an interval, then ( ) stands for an open interval and \(\left[\,\right]\) is a closed interval.

## 2 Nonlinear Magnetoelastostatics

A brief review of the key equations of nonlinear magnetoelastostatics is provided; see Dorfmann and Ogden (2014); Pelteret and Steinmann (2020) for further details. Either the magnetic field, magnetic induction, or the magnetisation vector can be selected as the independent magnetic variable. The present work is established on the variational formulation based on the magnetic field. Consider a magnetoelastic body that occupies the regions \(\mathcal{B}_{0}\subset\mathbb{R}^{3}\) and \(\mathcal{B}\subset\mathbb{R}^{3}\) in its reference and deformed configurations, respectively, with corresponding boundaries denoted as \(\partial\mathcal{B}_{0}\) and \(\partial\mathcal{B}\). A point \(\mathbf{X}_{\mathrm{B}}\in\mathcal{B}_{0}\) is related to a point \(\mathbf{x}_{\mathrm{B}}\in\mathcal{B}\) through a one-to-one map \(\mathbf{\chi}_{\mathrm{B}}(\mathbf{X}_{\mathrm{B}}):\mathcal{B}_{0}\to\mathcal{B}\).
The three-dimensional region \(\mathcal{B}\) is enclosed within the region \(\mathcal{V}\), as schematically shown in Figure 1, so that the surrounding free space is \(\mathcal{B}^{\prime}=\mathcal{V}\setminus\mathcal{B}\cup\partial\mathcal{B}\). \(\mathcal{V}_{0}\) is the referential region corresponding to \(\mathcal{V}\) in \(\mathbb{R}^{3}\), such that \(\mathcal{B}^{\prime}_{0}=\mathcal{V}_{0}\setminus\mathcal{B}_{0}\cup\partial\mathcal{B}_{0}\).

Figure 1: Schematic of a thin shell that occupies regions \(\mathcal{B}_{0}\) and \(\mathcal{B}\) in its reference and current configurations, respectively. The body is embedded in volumes \(\mathcal{V}_{0}\) and \(\mathcal{V}\) in the two configurations connected by a deformation map \(\chi\). The two-dimensional parametric coordinate system is denoted by \(P\) and the local triads in the two configurations are also shown.

The deformation gradient \(\mathbf{F}\) is defined by \[\mathbf{F}=\frac{\partial\mathbf{\chi}}{\partial\mathbf{X}}, \tag{2.1}\] with the Jacobian \(J=\mathrm{det}\,\mathbf{F}>0\), such that \[dv=J\,dV, \tag{2.2}\] where \(dV\) and \(dv\) are the volume elements in the reference and deformed configurations, respectively. The right Cauchy-Green tensor is defined by \[\mathbf{C}=\mathbf{F}^{\mathrm{T}}\mathbf{F}. \tag{2.3}\] To facilitate the description of fields exterior to the body, consider a fictitious deformation map \(\mathbf{\chi}_{\mathrm{F}}(\mathbf{X}_{\mathrm{F}}):\mathcal{B}^{\prime}_{0}\to\mathcal{B}^{\prime}\) with \(\mathbf{X}_{\mathrm{F}}\in\mathcal{B}^{\prime}_{0}\), satisfying \[\mathbf{\chi}_{\mathrm{F}}=\mathbf{\chi}_{\mathrm{B}}\text{ on }\partial\mathcal{B}. \tag{2.4}\] The boundaries \(\partial\mathcal{V}_{0}\) and \(\partial\mathcal{V}\) coincide, implying \[\mathbf{x}_{\mathrm{F}}=\mathbf{X}_{\mathrm{F}}\text{ on }\partial\mathcal{V}. \tag{2.5}\] The magnetostatic problem is governed by Maxwell's equations involving the spatial magnetic induction \(\mathbf{b}\) and magnetic field \(\mathbf{h}\), given by \[\mathrm{div}\,\mathbf{b}=0\quad\text{and}\quad\mathrm{curl}\,\mathbf{h}=\mathbf{0},\quad\text{in}\quad\mathcal{V}, \tag{2.6}\] along with the boundary (or jump) conditions \[\llbracket\mathbf{b}\rrbracket\cdot\mathbf{n}=0\quad\text{and}\quad\llbracket\mathbf{h}\rrbracket\times\mathbf{n}=\mathbf{0},\quad\text{on}\quad\partial\mathcal{B}, \tag{2.7}\] where \(\llbracket\bullet\rrbracket\) represents the jump in a quantity across the boundary with unit outward normal vector \(\mathbf{n}\). Equation (2.6)\({}_{2}\) motivates the introduction of a magnetic scalar potential \(\phi\) such that \[\mathbf{h}=-\mathrm{grad}\,\phi. \tag{2.8}\] Moreover, \(\phi\) is continuous across the boundary between the magnetoelastic body and the surrounding space. The vectors \(\mathbf{b}\) and \(\mathbf{h}\) are related via the well-known constitutive relation \[\mathbf{b}=\mu_{0}\mathbf{h}+\mathbf{m}, \tag{2.9}\] where \(\mu_{0}\) is the constant magnetic permeability of free space and \(\mathbf{m}\) is the spatial magnetisation, which vanishes in \(\mathcal{B}^{\prime}\).
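A quick numerical check of the kinematic definitions (2.1)-(2.3), with an arbitrary homogeneous deformation chosen purely for illustration:

```python
import numpy as np

# An arbitrary homogeneous deformation x = F X (illustrative values only).
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.0, 0.0, 1.1]])

J = np.linalg.det(F)        # Jacobian, must satisfy J > 0
C = F.T @ F                 # right Cauchy-Green tensor (2.3)

assert J > 0
assert np.allclose(C, C.T)  # C is symmetric by construction
# Volume mapping dv = J dV (2.2): a unit reference cube maps to volume J.
print(f"J = {J:.4f}")
```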
The pull-back transformations of \(\mathbf{b}\), \(\mathbf{h}\), and \(\mathbf{m}\) to the reference configuration are given by \[\mathbb{B}=J\mathbf{F}^{-1}\mathbf{b},\qquad\mathbb{H}=\mathbf{F}^{\mathrm{T}}\mathbf{h},\quad\text{and}\quad\mathbf{M}=\mathbf{F}^{\mathrm{T}}\mathbf{m}, \tag{2.10}\] thereby allowing one to rewrite the governing Maxwell's equations (2.6) and the boundary conditions (2.7) in the reference configuration as \[\mathrm{Div}\mathbb{B}=0,\qquad\mathrm{Curl}\,\mathbb{H}=\mathbf{0}, \tag{2.11}\] and \[\llbracket\mathbb{B}\rrbracket\cdot\mathbf{N}=0,\qquad\llbracket\mathbb{H}\rrbracket\times\mathbf{N}=\mathbf{0}, \tag{2.12}\] with \(\mathbf{N}\) being the outward unit normal to the boundary in the reference configuration. Denoting \(\Phi\) as the referential counterpart of \(\phi\), one has \(\Phi=\phi\circ\boldsymbol{\chi}\), that is, \(\Phi\left(\mathbf{X}\right)=\phi(\boldsymbol{\chi}\left(\mathbf{X}\right))\), and \[\mathbb{H}=-\mathrm{Grad}\Phi. \tag{2.13}\] Using the transformations (2.10) in the constitutive relation (2.9), one obtains \[J^{-1}\mathbf{C}\,\mathbb{B}=\mu_{0}\mathbb{H}+\mathbf{M}\text{ in }\mathcal{V}_{0}. \tag{2.14}\] Since \(\mathbf{m}\) and \(\mathbf{M}\) vanish in the vacuum, the constitutive relation simplifies to \[\mathbb{B}=\mu_{0}J\mathbf{C}^{-1}\mathbb{H}\,\text{ in }\mathcal{B}_{0}^{{}^{\prime}}. \tag{2.15}\] Coupled magnetoelastic constitutive relations in the body are established by assuming a free energy density function per unit reference volume, \(\varOmega\), that is of the form \(\varOmega=\varOmega\left(\mathbf{F},\mathbb{H}\right).\) Objectivity and isotropy require that the free energy take the form \[\varOmega=\varOmega\left(\mathbf{C},\mathbb{H}\right)=\widetilde{\varOmega}\left(I_{1},I_{2},I_{3},I_{4},I_{5},I_{6}\right), \tag{2.16}\] where \(I_{1},I_{2},I_{3}\) are scalar invariants of \(\mathbf{C}\), that is \[I_{1}=\mathrm{tr}\mathbf{C},\quad I_{2}=\frac{1}{2}\left[\left[\mathrm{tr}\mathbf{C}\right]^{2}-\mathrm{tr}\mathbf{C}^{2}\right],\quad\text{and}\quad I_{3}=\mathrm{det}\mathbf{C}=J^{2}, \tag{2.17}\] and the remaining three scalar invariants are given by \[I_{4}=\mathbb{H}\cdot\mathbb{H},\quad I_{5}=\left[\mathbf{C}\,\mathbb{H}\right]\cdot\mathbb{H},\quad\text{and}\quad I_{6}=\left[\mathbf{C}^{2}\,\mathbb{H}\right]\cdot\mathbb{H}. \tag{2.18}\] Incompressibility requires \(J\equiv 1\), so that \(I_{3}=1\) and the energy density function is further simplified to \[\varOmega\left(I_{1},I_{2},I_{3},I_{4},I_{5},I_{6}\right)=\varOmega\left(I_{1},I_{2},I_{4},I_{5},I_{6}\right). \tag{2.19}\] The total Piola stress tensor is given for an incompressible solid as \[\mathbf{P}=\frac{\partial\varOmega}{\partial\mathbf{F}}-p\mathbf{F}^{-\mathrm{T}}, \tag{2.20}\] where \(p\) is the Lagrange multiplier due to the incompressibility constraint. The constitutive relation for the magnetic induction is given as \[\mathbb{B}=-\frac{\partial\varOmega}{\partial\mathbb{H}}. \tag{2.21}\] The Maxwell stress tensor outside the body is given by \[\boldsymbol{\sigma}_{\mathrm{M}}=\frac{1}{\mu_{0}}\left[\mathbf{b}\otimes\mathbf{b}-\frac{1}{2}\left[\mathbf{b}\cdot\mathbf{b}\right]\mathbb{1}\right], \tag{2.22}\] where \(\mathbb{1}\) is the spatial identity tensor.
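The pull-back relations (2.10) together with the spatial law (2.9) imply the referential form (2.14). A minimal numerical sketch verifying this, with random data and illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)
mu0 = 4e-7 * np.pi

# Random deformation gradient near the identity (positive Jacobian)
# and random spatial magnetic field and magnetisation.
F = np.eye(3) + 0.3 * rng.standard_normal((3, 3))
J = np.linalg.det(F)
assert J > 0
h = rng.standard_normal(3)
m = rng.standard_normal(3)

b = mu0 * h + m                  # spatial constitutive relation, Eq. (2.9)
B = J * np.linalg.solve(F, b)    # pull-back B = J F^{-1} b, Eq. (2.10)_1
H = F.T @ h                      # pull-back H = F^T h,       Eq. (2.10)_2
M = F.T @ m                      # pull-back M = F^T m,       Eq. (2.10)_3
C = F.T @ F

# Referential constitutive relation (2.14): J^{-1} C B = mu0 H + M
print(np.allclose(C @ B / J, mu0 * H + M))   # True
```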
Using the Piola transform \(\mathbf{P}_{\mathrm{M}}=J\boldsymbol{\sigma}_{\mathrm{M}}\mathbf{F}^{-\mathrm{T}}\), this can be written in the reference configuration as \[\mathbf{P}_{\mathrm{M}}=\mu_{0}J\left[\left[\mathbf{F}^{-\mathrm{T}}\mathbb{H}\right]\otimes\left[\mathbf{F}^{-\mathrm{T}}\mathbb{H}\right]-\frac{1}{2}\left[\left[\mathbf{F}^{-\mathrm{T}}\mathbb{H}\right]\cdot\left[\mathbf{F}^{-\mathrm{T}}\mathbb{H}\right]\right]\mathbb{1}\right]\mathbf{F}^{-\mathrm{T}}. \tag{2.23}\]

## 3 Kirchhoff-Love magnetoelastic thin shell

### Geometry

Consider the magnetoelastic body to be a thin shell. Each point \(\mathbf{X}_{\mathrm{B}}\in\mathcal{B}_{0}\) is mapped from the parametric domain defined by the coordinate system \(\{\theta^{1},\theta^{2},\eta\}\). The Kirchhoff-Love hypothesis states that for thin shell structures, lines perpendicular to the mid-surface of the shell remain straight and perpendicular to the mid-surface after deformation (see e.g. Niordson, 1985). Hence, assuming the shell has a thickness \(T\left(\theta^{\alpha}\right)\) in the reference configuration, the point \(\mathbf{X}_{\mathrm{B}}\) can be defined using a point on the mid-surface \(S_{\mathrm{m}}\) of the shell, \(\mathbf{R}\in S_{\mathrm{m}}\), and the associated unit normal vector \(\mathbf{N}\) as \[\mathbf{X}_{\mathrm{B}}=\mathbf{R}+\eta\mathbf{N}, \tag{3.1}\] where \(\eta\in[-T/2,T/2]\). The points on the mid-surface in the deformed configuration are denoted by \(\mathbf{r}\). The mid-surface point in the deformed configuration corresponds to the mid-surface point in the reference configuration after a motion, as shown in Figure 1, and can be expressed as \[\mathbf{r}=\mathbf{R}+\mathbf{u}, \tag{3.2}\] where \(\mathbf{u}\) denotes the mid-surface displacement. A point \(\mathbf{x}_{\mathrm{B}}\in\mathcal{B}\) can therefore be expressed as \[\mathbf{x}_{\mathrm{B}}=\mathbf{r}+\eta\mathbf{d}, \tag{3.3}\] where \(\mathbf{d}=\lambda\mathbf{n}\), and \(\lambda\) is the through-thickness stretch for a finitely deformed shell defined by \[\lambda=\frac{t}{T}, \tag{3.4}\] where \(t(\theta^{\alpha})\) is the shell thickness after deformation. Further, in \(\mathcal{B}_{0}\), one can assume a form for the magnetic potential as \[\Phi\left(\eta,\theta^{\alpha}\right)=\Phi_{0}\left(\theta^{\alpha}\right)+\eta\Phi_{1}\left(\theta^{\alpha}\right)+\frac{\eta^{2}}{2}\Phi_{2}\left(\theta^{\alpha}\right)+\mathcal{O}\left(\eta^{3}\right), \tag{3.5}\] where \(\Phi_{0}=\Phi\left(0,\theta^{\alpha}\right)\). Furthermore, assume \[\Phi_{1}\left(\theta^{\alpha}\right)=\frac{\Phi\left(T/2,\theta^{\alpha}\right)-\Phi\left(-T/2,\theta^{\alpha}\right)}{T}, \tag{3.6}\] implying a linear variation of the magnetic potential along the thickness of the thin shell. Therefore, the higher-order terms vanish, allowing one to express the Kirchhoff-Love assumption as \[\Phi\left(\eta,\theta^{\alpha}\right)=\Phi_{0}\left(\theta^{\alpha}\right)+\eta\Phi_{1}\left(\theta^{\alpha}\right), \tag{3.7}\] which is similar in form to Equation (3.1). Table 1 provides a list of surface parameters used to describe the geometry of the shell and Table 2 presents the surface and volume elements of the shell. The expressions and associated derivations are elaborated on in Appendix A.
The boundaries, \(\partial\mathcal{B}_{0}\) and \(\partial\mathcal{B}\), can be written as \(\partial\mathcal{B}_{0}=S_{\mathrm{t}}\cup S_{\mathrm{b}}\cup S_{\ell}\), and \(\partial\mathcal{B}=s_{\mathrm{t}}\cup s_{\mathrm{b}}\cup s_{\ell}\), where the subscripts, t, b, and \(\ell\), represent the top, bottom, and lateral surfaces in the two configurations, and the top surface is the side of the boundary that is reached along the unit outward normal vector. The incorporation of the variation of the through-thickness stretch and the deformed normal in obtaining the reduced-order model for the soft thin magnetoelastic shell requires the evaluation of the integral \(\int\limits_{P}\left[A^{1/2}T^{\alpha}\right]_{,\alpha}dP\) for an arbitrary \(T^{\alpha}\left(\theta^{\beta}\right)\), as discussed in Sections 5.1 and 5.3. This integral can be expressed as: \[\int\limits_{P}\left[A^{1/2}T^{\alpha}\right]_{,\alpha}dP=\int\limits_{\mathcal{C}_{\mathrm{m}}}T^{\alpha}\nu_{\alpha}dl, \tag{3.8}\] where \(\mathcal{C}_{\mathrm{m}}\) represents the boundary of the curved mid-surface \(S_{\mathrm{m}}\). This is elaborated upon further in Appendix B.

### Kinematics

The deformation gradient and its inverse for a shell-point can be written so as to separate the thickness variable from the surface parameters as \[\mathbf{F} = \left[F^{\alpha}_{0_{\beta}}+\eta F^{\alpha}_{1_{\beta}}+\eta^{2}F^{\alpha}_{2_{\beta}}\right]\mathbf{a}_{\alpha}\otimes\mathbf{A}^{\beta}+\lambda\mathbf{n}\otimes\mathbf{N}+\boldsymbol{\mathcal{O}}(\eta^{3}), \tag{3.9}\] with \[F^{\alpha}_{0_{\beta}}=\delta^{\alpha}_{\beta},\quad F^{\alpha}_{1_{\beta}}=-\lambda b_{\beta}^{\ \alpha}+B_{\beta}^{\ \alpha},\quad\text{and}\quad F^{\alpha}_{2_{\beta}}=B_{\delta}^{\ \alpha}B_{\beta}^{\ \delta}-\lambda b_{\delta}^{\ \alpha}B_{\beta}^{\ \delta}, \tag{3.10}\] and \[\mathbf{F}^{-1} = \left[F^{-1\alpha}_{0_{\beta}}+\eta F^{-1\alpha}_{1_{\beta}}+\eta^{2}F^{-1\alpha}_{2_{\beta}}\right]\mathbf{A}_{\alpha}\otimes\mathbf{a}^{\beta}+\frac{1}{\lambda}\mathbf{N}\otimes\mathbf{n}+\boldsymbol{\mathcal{O}}(\eta^{3}), \tag{3.11}\] with \[F^{-1\alpha}_{0_{\beta}}=\delta^{\alpha}_{\beta},\quad F^{-1\alpha}_{1_{\beta}}=\lambda b_{\beta}^{\ \alpha}-B_{\beta}^{\ \alpha},\quad\text{and}\quad F^{-1\alpha}_{2_{\beta}}=\lambda^{2}b_{\delta}^{\ \alpha}b_{\beta}^{\ \delta}-\lambda B_{\delta}^{\ \alpha}b_{\beta}^{\ \delta}. \tag{3.12}\] Here \(B_{\beta}^{\ \alpha}\) and \(b_{\beta}^{\ \alpha}\) are the components of the curvature tensors \(\mathbf{K}\) and \(\boldsymbol{\kappa}\), respectively.
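The component expressions (3.12) follow the Neumann-series structure \(\mathbf{F}^{-1}=\mathbb{1}-\eta\mathbf{F}_{1}+\eta^{2}\left[\mathbf{F}_{1}^{2}-\mathbf{F}_{2}\right]+\mathcal{O}(\eta^{3})\). A short SymPy sketch verifying this matrix identity with generic 2x2 blocks (the symbol names are illustrative):

```python
import sympy as sp

eta = sp.symbols('eta')
F1 = sp.Matrix(2, 2, sp.symbols('f1_0:4'))
F2 = sp.Matrix(2, 2, sp.symbols('f2_0:4'))
I2 = sp.eye(2)

F = I2 + eta * F1 + eta**2 * F2
Finv = I2 - eta * F1 + eta**2 * (F1 * F1 - F2)   # candidate inverse series

# F * Finv should equal the identity up to terms of order eta^3.
residual = sp.expand(F * Finv - I2)
low_order = residual.applyfunc(
    lambda e: e.coeff(eta, 0) + e.coeff(eta, 1) * eta + e.coeff(eta, 2) * eta**2
)
print(sp.simplify(low_order))   # zero matrix
```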
Further, using the relation \(\mathbf{a}_{\alpha}\cdot\mathbf{n}=0\), the right Cauchy-Green deformation tensor can be written as \[\mathbf{C}=\left[C_{0_{\alpha\beta}}+\eta C_{1_{\alpha\beta}}+\eta^{2}C_{2_{\alpha\beta}}\right]\mathbf{A}^{\alpha}\otimes\mathbf{A}^{\beta}+\lambda^{2}\mathbf{N}\otimes\mathbf{N}+\boldsymbol{\mathcal{O}}(\eta^{3}), \tag{3.13}\]

\begin{table}
\begin{tabular}{||l l l||} \hline Surface Parameters & Reference Configuration & Deformed Configuration \\ \hline \hline Covariant basis vectors at the mid-surface & \(\mathbf{A}_{\alpha}\) & \(\boldsymbol{a}_{\alpha}\) \\ Covariant metric tensor at the mid-surface & \(A_{\alpha\beta}\) & \(a_{\alpha\beta}\) \\ Determinant of the covariant metric tensor at the mid-surface & \(A\) & \(a\) \\ Contravariant basis vectors at the mid-surface & \(\mathbf{A}^{\alpha}\) & \(\boldsymbol{a}^{\alpha}\) \\ Contravariant metric tensor at the mid-surface & \(A^{\alpha\beta}\) & \(a^{\alpha\beta}\) \\ Determinant of the contravariant metric tensor at the mid-surface & \(A^{-1}\) & \(a^{-1}\) \\ Christoffel symbols at the mid-surface & \(\Gamma^{\alpha}_{\beta\gamma}\) & \(\gamma^{\alpha}_{\beta\gamma}\) \\ Parametric derivative of the metric at the mid-surface & \(A_{,\alpha}\) & \(a_{,\alpha}\) \\ Normal at the mid-surface & \(\mathbf{N}\) & \(\boldsymbol{n}\) \\ Tangent on the bounding curve of the mid-surface & \(\boldsymbol{\tau}\) & \(--\) \\ In-plane normal on the bounding curve of the mid-surface & \(\boldsymbol{\nu}\) & \(--\) \\ Projection tensor at the mid-surface & \(\mathbf{I}\) & \(\boldsymbol{i}\) \\ Curvature tensor at the mid-surface & \(\mathbf{K}\) & \(\boldsymbol{\kappa}\) \\ Mean curvature at the mid-surface & \(H\) & \(h\) \\ Gaussian curvature at the mid-surface & \(K\) & \(\kappa\) \\ Covariant basis vectors at a shell-point & \(\mathbf{G}_{\alpha}=\mathbf{M}\mathbf{A}_{\alpha}\) and \(\mathbf{M}=\mathbf{I}-\eta\mathbf{K}\) & \(\boldsymbol{g}_{\alpha}=\boldsymbol{\mu}\boldsymbol{a}_{\alpha}\) and \(\boldsymbol{\mu}=\boldsymbol{i}-\eta\lambda\boldsymbol{\kappa}\) \\ Covariant metric tensor at a shell-point & \(G_{\alpha\beta}\) & \(g_{\alpha\beta}\) \\ Contravariant basis vectors at a shell-point & \(\mathbf{G}^{\alpha}=\mathbf{M}^{-\mathrm{T}}\mathbf{A}^{\alpha}\) & \(\boldsymbol{g}^{\alpha}=\boldsymbol{\mu}^{-\mathrm{T}}\boldsymbol{a}^{\alpha}\) \\ Tangent at a shell-point on the lateral surface & \(\boldsymbol{\tau}_{\ell}=\mathbf{M}\boldsymbol{\tau}/c\) and \(c=\|\mathbf{M}\boldsymbol{\tau}\|\) & \(--\) \\ In-plane normal at a shell-point on the lateral surface & \(\boldsymbol{\nu}_{\ell}=c^{-1}\left[\mathbf{I}+\eta\left[\mathbf{K}-2H\mathbf{I}\right]\right]\boldsymbol{\nu}\) & \(--\) \\ \hline \end{tabular}
\end{table} Table 1: The parameters used to describe the geometry of the thin shell in reference and deformed configurations.
\begin{table}
\begin{tabular}{||l l l||} \hline Volume/Surface elements & Reference Configuration & Deformed Configuration \\ \hline \hline Elemental area for the convected coordinates & \(dP\) & \(dP\) \\ Elemental area at the curved mid-surface & \(dS_{\mathrm{m}}\) & \(ds_{\mathrm{m}}=\hat{a}^{1/2}\,dS_{\mathrm{m}}\) \\ Elemental area at a shell-point & \(dS=M\,dS_{\mathrm{m}}\) and \(M=\mathrm{det}\mathbf{M}\) & \(ds=\mu\hat{a}^{1/2}\,dS_{\mathrm{m}},\ \mu=\mathrm{det}\boldsymbol{\mu},\ \text{and}\ \hat{a}=a/A\) \\ Elemental area at the top surface & \(dS_{\mathrm{t}}\) & \(ds_{\mathrm{t}}\) \\ Elemental area at the bottom surface & \(dS_{\mathrm{b}}\) & \(ds_{\mathrm{b}}\) \\ Elemental area on the lateral surface & \(dS_{\ell}=c\,dl\,d\eta\) & \(--\) \\ Elemental volume at a shell-point & \(dV=dS\,d\eta\) & \(--\) \\ \hline \end{tabular}
\end{table} Table 2: Description of the surface and volume elements used for integration.

where \[C_{0_{\alpha\beta}} = a_{\alpha\beta},\] \[C_{1_{\alpha\beta}} = -\lambda b_{\alpha}^{\ \gamma}a_{\gamma\beta}-\lambda b_{\beta}^{\ \gamma}a_{\alpha\gamma}+B_{\alpha}^{\ \gamma}a_{\gamma\beta}+B_{\beta}^{\ \gamma}a_{\gamma\alpha},\] \[C_{2_{\alpha\beta}} = \lambda^{2}b_{\alpha}^{\ \gamma}b_{\beta}^{\ \delta}a_{\gamma\delta}+B_{\alpha}^{\ \gamma}B_{\beta}^{\ \delta}a_{\gamma\delta}-\lambda b_{\alpha}^{\ \gamma}B_{\beta}^{\ \delta}a_{\gamma\delta}-\lambda B_{\alpha}^{\ \gamma}b_{\beta}^{\ \delta}a_{\gamma\delta} \tag{3.14}\] \[+B_{\delta}^{\ \gamma}B_{\beta}^{\ \delta}a_{\alpha\gamma}+B_{\delta}^{\ \gamma}B_{\alpha}^{\ \delta}a_{\gamma\beta}-\lambda b_{\delta}^{\ \gamma}B_{\beta}^{\ \delta}a_{\alpha\gamma}-\lambda b_{\delta}^{\ \gamma}B_{\alpha}^{\ \delta}a_{\gamma\beta}.\] The mid-surface right Cauchy-Green tensor is defined by \[\mathbf{C}_{\mathrm{m}}=\mathbf{C}\Big{|}_{\eta=0}=C_{0_{\alpha\beta}}\mathbf{A}^{\alpha}\otimes\mathbf{A}^{\beta}+\lambda^{2}\mathbf{N}\otimes\mathbf{N}, \tag{3.15}\] with \[J_{0}=\mathrm{det}\mathbf{C}_{\mathrm{m}}=\mathrm{det}\left[C_{\mathrm{m}\,j}^{\ \ \ i}\right]=\mathrm{det}\left[C_{\mathrm{m}\,jk}A^{ki}\right]=\frac{\mathrm{det}\left[C_{\mathrm{m}\,ij}\right]}{\mathrm{det}\left[A_{ij}\right]}=\frac{\mathrm{det}\left[a_{\alpha\beta}\right]\lambda^{2}}{\mathrm{det}\left[A_{\alpha\beta}\right]}=\frac{a\lambda^{2}}{A}, \tag{3.16}\] where \(A_{ij}=\mathbf{A}_{i}\cdot\mathbf{A}_{j}\) and \(A^{ij}=\mathbf{A}^{i}\cdot\mathbf{A}^{j}\) are the components of the three-dimensional covariant and contravariant metric tensors on the mid-surface, respectively, with \(\mathbf{A}_{3}=\mathbf{A}^{3}=\mathbf{N}\). The incompressibility constraint, \(J=1\), implies that \[\lambda=\hat{a}^{-1/2}, \tag{3.17}\] where the surface stretch \(\hat{a}\) is defined in Table 2.

### Divergence of the total stress tensor and magnetic induction vector

The divergences of the total Piola stress tensor and of the magnetic induction vector enter the governing equations for the Kirchhoff-Love magnetoelastic thin shell; these arise from the variational formulation, in which the mechanical deformation and an independent field representing the magnetic component, namely the magnetic field vector, are the primary unknowns.
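The determinant reduction in (3.16) can be confirmed symbolically. A SymPy sketch, assuming generic symmetric 2x2 mid-surface metrics and the block structure of \(\mathbf{C}_{\mathrm{m}}\) (the symbol names are illustrative):

```python
import sympy as sp

lam = sp.symbols('lambda', positive=True)
A11, A12, A22 = sp.symbols('A11 A12 A22')
a11, a12, a22 = sp.symbols('a11 a12 a22')

A2 = sp.Matrix([[A11, A12], [A12, A22]])   # reference mid-surface metric
a2 = sp.Matrix([[a11, a12], [a12, a22]])   # deformed mid-surface metric

# 3x3 covariant components, with A_3 = N so the normal block decouples.
Cm = sp.diag(a2, lam**2)
Am = sp.diag(A2, 1)

J0 = (Cm * Am.inv()).det()                 # determinant of mixed components
print(sp.simplify(J0 - a2.det() * lam**2 / A2.det()))   # 0, as in Eq. (3.16)
```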
The divergence of the total stress tensor can be expressed as \[\mathrm{Div}\mathbf{P}=\mathbf{A}_{0}+\eta\mathbf{A}_{1}+\eta^{2}\mathbf{A}_{2}+\boldsymbol{\mathcal{O}}(\eta^{3}), \tag{3.18}\] where \[\mathbf{A}_{0} = \mathbf{P}_{0,\alpha}\mathbf{A}^{\alpha}+\mathbf{P}_{1}\mathbf{N},\] \[\mathbf{A}_{1} = B_{\delta}^{\ \alpha}\mathbf{P}_{0,\alpha}\mathbf{A}^{\delta}+\mathbf{P}_{1,\alpha}\mathbf{A}^{\alpha}+\mathbf{P}_{2}\mathbf{N},\] \[\mathbf{A}_{2} = B_{\zeta}^{\ \alpha}B_{\delta}^{\ \zeta}\mathbf{P}_{0,\alpha}\mathbf{A}^{\delta}+B_{\delta}^{\ \alpha}\mathbf{P}_{1,\alpha}\mathbf{A}^{\delta}+\frac{1}{2}\mathbf{P}_{2,\alpha}\mathbf{A}^{\alpha}+\frac{1}{2}\mathbf{P}_{3}\mathbf{N}, \tag{3.19}\] with \(\mathbf{P}=\mathbf{P}(\eta,\theta^{\alpha})=\mathbf{P}_{0}+\eta\mathbf{P}_{1}+\frac{\eta^{2}}{2}\mathbf{P}_{2}+\boldsymbol{\mathcal{O}}(\eta^{3})\). Similarly, for the magnetic induction vector at a shell-point, \[\mathrm{Div}\mathbb{B}=B_{0}+\eta B_{1}+\eta^{2}B_{2}+\mathcal{O}(\eta^{3}), \tag{3.20}\] where \[B_{0} = \mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\alpha}+\mathbb{B}_{1}\cdot\mathbf{N},\] \[B_{1} = B_{\delta}^{\ \alpha}\mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\delta}+\mathbb{B}_{1,\alpha}\cdot\mathbf{A}^{\alpha}+\mathbb{B}_{2}\cdot\mathbf{N},\] \[B_{2} = B_{\zeta}^{\ \alpha}B_{\delta}^{\ \zeta}\mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\delta}+B_{\delta}^{\ \alpha}\mathbb{B}_{1,\alpha}\cdot\mathbf{A}^{\delta}+\frac{1}{2}\mathbb{B}_{2,\alpha}\cdot\mathbf{A}^{\alpha}+\frac{1}{2}\mathbb{B}_{3}\cdot\mathbf{N}, \tag{3.21}\] with \(\mathbb{B}=\mathbb{B}(\eta,\theta^{\alpha})=\mathbb{B}_{0}+\eta\mathbb{B}_{1}+\frac{\eta^{2}}{2}\mathbb{B}_{2}+\mathcal{O}(\eta^{3})\). The total Piola stress tensor and magnetic induction vector at the top and bottom boundaries are obtained by setting \(\eta=\pm T/2\) in their respective through-thickness expansions.

### Interface condition on magnetic field

Equation (2.12)\({}_{2}\) implies the continuity, across the boundaries of the thin shell and the surrounding space, of the magnetic field components projected onto the tangent space. Thereby, at the interfaces, by equating these components and using Equation (A.40), the resulting expression can be written as \[-\mathrm{Grad}\Phi\cdot\mathbf{A}_{\alpha}+\eta B_{\alpha}^{\ \gamma}\,\mathrm{Grad}\Phi\cdot\mathbf{A}_{\gamma}+\Phi_{0,\alpha}+\eta\left[\Phi_{0,\beta}B_{\alpha}^{\ \beta}+\Phi_{1,\alpha}\right]+\mathcal{O}(\eta^{2})=0, \tag{3.22}\] where \(\mathrm{Grad}\Phi\) is evaluated in the free space. This imposes a constraint on the potential at the top, bottom, and lateral surfaces of the shell. Note that, owing to the continuity of the potential across the boundaries of the thin shell with the surrounding space, this constraint is imposed explicitly and is not obtained from the modified variational setting while deriving the reduced-order theory, as discussed in Section 5.
## 4 Variational formulation in three dimensions

Defining \(\hat{\boldsymbol{\chi}}=\{\boldsymbol{\chi},\Phi,p\}\) as the generalised set of the solution variables, the total potential energy of the system is written as (Dorfmann and Ogden, 2014): \[\Pi[\hat{\boldsymbol{\chi}}] = \int\limits_{\mathcal{B}_{0}}\Omega\left(\boldsymbol{F},\mathbb{H}\right)dV-\int\limits_{\mathcal{B}_{0}}p[J-1]dV-\frac{\mu_{0}}{2}\int\limits_{\mathcal{B}^{\prime}}\mathbf{h}\cdot\mathbf{h}\,dv-\int\limits_{\partial\mathcal{V}}\phi\mathbf{b}_{\mathrm{e}}\cdot\boldsymbol{n}^{{}^{\prime}}ds \tag{4.1}\] \[-\int\limits_{\mathcal{B}_{0}}\boldsymbol{\mathfrak{B}}\cdot\boldsymbol{\chi}dV-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\boldsymbol{\chi}dl-\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\boldsymbol{\chi}ds_{\mathrm{t}}-\int\limits_{s_{\mathrm{b}}}\boldsymbol{p}_{\mathrm{b}}\cdot\boldsymbol{\chi}ds_{\mathrm{b}}\.\] The external spatial magnetic induction is denoted as \(\mathbf{b}_{\mathrm{e}}\). Its normal component is prescribed on \(\partial\mathcal{V}\) and its counterpart in the reference configuration is denoted as \(\mathbb{B}_{\mathrm{e}}\). The fourth term in Equation (4.1), representing the work done by the external magnetic induction, is expressed in the current configuration and, using Equation (2.5), can be rewritten in the reference configuration as \[\int\limits_{\partial\mathcal{V}}\phi\mathbf{b}_{\mathrm{e}}\cdot\boldsymbol{n}^{{}^{\prime}}ds=\int\limits_{\partial\mathcal{V}_{0}}\Phi\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{{}^{\prime}}dS\, \tag{4.2}\] with the associated unit normals on the outer boundary of the free space denoted by \(\boldsymbol{N}^{\prime}\) and \(\boldsymbol{n}^{{}^{\prime}}\) in the reference and deformed configurations, respectively. The body force field per unit reference volume is \(\boldsymbol{\mathfrak{B}}\) while \(\boldsymbol{t}_{\ell}\) is the applied traction at \(C_{\mathrm{m}}\). Also, \(C_{\mathrm{m}}^{\mathrm{u}}\) are the parts of the boundary where displacements are specified. \(p_{\mathrm{t}}\left(\theta^{\alpha}\right)\) and \(p_{\mathrm{b}}\left(\theta^{\alpha}\right)\) are the magnitudes of the external pressure at the top and bottom surfaces of the shell, respectively, such that \[\boldsymbol{p}_{\mathrm{t}}=-p_{\mathrm{t}}\boldsymbol{n},\quad\text{and}\quad\boldsymbol{p}_{\mathrm{b}}=p_{\mathrm{b}}\boldsymbol{n}\, \tag{4.3}\] where \(\boldsymbol{n}\) is the mid-surface unit normal in the current configuration. Let \(\delta\hat{\boldsymbol{\chi}}=\{\delta\boldsymbol{\chi},\delta\Phi,\delta p\}\) denote the corresponding set of variations; refer to Appendix C for details of the variations of the key variables used in the subsequent calculations.
From Equations (2.2) and (2.10), the first variation of the total energy is given by \[\delta\Pi[\hat{\boldsymbol{\chi}},\delta\hat{\boldsymbol{\chi}}] = \delta\left[\int\limits_{\mathcal{B}_{0}}\Omega\left(\boldsymbol{F},\mathbb{H}\right)dV\right]-\int\limits_{\mathcal{B}_{0}}p\delta JdV-\int\limits_{\mathcal{B}_{0}}\delta p[J-1]dV-\frac{\mu_{0}}{2}\delta\left[\int\limits_{\mathcal{B}_{0}^{\prime}}\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]\cdot\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]JdV\right] \tag{4.4}\] \[-\delta\left[\int\limits_{\partial\mathcal{V}_{0}}\Phi\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{{}^{\prime}}dS\right]-\int\limits_{\mathcal{B}_{0}}\boldsymbol{\mathfrak{B}}\cdot\delta\boldsymbol{\chi}dV-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\delta\boldsymbol{\chi}dl-\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{t}}-\int\limits_{s_{\mathrm{b}}}\boldsymbol{p}_{\mathrm{b}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{b}}\.\] The Euler-Lagrange equations are obtained by setting \(\delta\Pi=0\).

* The first and second terms in Equation (4.4) can be combined as \[\delta\left[\int\limits_{\mathcal{B}_{0}}\Omega\left(\boldsymbol{F},\mathbb{H}\right)dV\right]-\int\limits_{\mathcal{B}_{0}}p\delta JdV=\int\limits_{\mathcal{B}_{0}}\left[\frac{\partial\Omega}{\partial\boldsymbol{F}}:\delta\boldsymbol{F}+\frac{\partial\Omega}{\partial\mathbb{H}}\cdot\delta\mathbb{H}\right]dV-\int\limits_{\mathcal{B}_{0}}pJ\boldsymbol{F}^{-\mathrm{T}}:\delta\boldsymbol{F}dV\, \tag{4.5}\] and taking into account the incompressibility condition (\(J=1\)), and using Equations (2.20), (2.21), and (2.13), the expression reduces to \[=\int\limits_{\mathcal{B}_{0}}\boldsymbol{P}:\delta\boldsymbol{F}dV+\int\limits_{\mathcal{B}_{0}}\mathbb{B}\cdot\frac{\partial\delta\Phi}{\partial\boldsymbol{X}}dV\. \tag{4.6}\] On an application of the divergence theorem, \[- \int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}dV+\int\limits_{S_{\mathrm{t}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}}-\int\limits_{S_{\mathrm{b}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}}+\int\limits_{S_{\ell}}\mathbf{P}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell} \tag{4.7}\] \[- \int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbb{B}\,\delta\Phi dV+\int\limits_{S_{\mathrm{t}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}}-\int\limits_{S_{\mathrm{b}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{b}}+\int\limits_{S_{\ell}}\mathbb{B}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell}\. \tag{4.8}\]

* The fourth term in Equation (4.4) can be written as \[-\frac{\mu_{0}}{2}\delta\left[\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]\cdot\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]JdV\right] =-\frac{\mu_{0}}{2}\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}J\boldsymbol{F}^{-\mathrm{T}}:\delta\boldsymbol{F}\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]\cdot\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]dV\] \[+\mu_{0}\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}J\left[\boldsymbol{F}^{-\mathrm{T}}\delta\boldsymbol{F}^{\mathrm{T}}\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]\right]\cdot\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]dV\] \[+\mu_{0}\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}J\left[\boldsymbol{F}^{-\mathrm{T}}\mathbb{H}\right]\cdot\left[\boldsymbol{F}^{-\mathrm{T}}\frac{\partial\delta\Phi}{\partial\boldsymbol{X}}\right]dV\, \tag{4.9}\]
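The variation \(\delta J=J\boldsymbol{F}^{-\mathrm{T}}:\delta\boldsymbol{F}\) used in Equations (4.4), (4.5), and (4.9) can be checked numerically. A small NumPy sketch with random data (names and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.eye(3) + 0.2 * rng.standard_normal((3, 3))
dF = 1e-7 * rng.standard_normal((3, 3))       # small perturbation delta F

J = np.linalg.det(F)
dJ_formula = J * np.tensordot(np.linalg.inv(F).T, dF)  # J F^{-T} : delta F
dJ_numeric = np.linalg.det(F + dF) - J                 # finite difference

print(np.isclose(dJ_formula, dJ_numeric, rtol=1e-4))   # True
```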
and from Equations (2.23) and (2.15), \[= \int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}\left[\boldsymbol{P}_{\mathrm{M}}:\delta\boldsymbol{F}+\mathbb{B}\cdot\frac{\partial\delta\Phi}{\partial\boldsymbol{X}}\right]dV\. \tag{4.10}\] Applying the divergence theorem, one obtains \[\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}\left[\boldsymbol{P}_{\mathrm{M}}:\delta\boldsymbol{F}+\mathbb{B}^{{}^{\prime}}\cdot\frac{\partial\delta\Phi}{\partial\boldsymbol{X}}\right]dV\] \[= -\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}\mathrm{Div}\boldsymbol{P}_{\mathrm{M}}\cdot\delta\boldsymbol{\chi}dV+\int\limits_{\partial\mathcal{V}_{0}}\boldsymbol{P}_{\mathrm{M}}\boldsymbol{N}^{{}^{\prime}}\cdot\delta\boldsymbol{\chi}dS-\int\limits_{S_{\mathrm{t}}}\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}}-\int\limits_{S_{\ell}}\boldsymbol{P}_{\mathrm{M}\ell}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell}\] \[-\int\limits_{\mathcal{B}_{0}^{{}^{\prime}}}\mathrm{Div}\mathbb{B}^{{}^{\prime}}\delta\Phi dV+\int\limits_{\partial\mathcal{V}_{0}}\mathbb{B}^{{}^{\prime}}\cdot\boldsymbol{N}^{{}^{\prime}}\delta\Phi dS-\int\limits_{S_{\mathrm{t}}}\mathbb{B}_{\mathrm{t}}^{{}^{\prime}}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\mathbb{B}_{\mathrm{b}}^{{}^{\prime}}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{b}}-\int\limits_{S_{\ell}}\mathbb{B}_{\ell}^{{}^{\prime}}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell}. \tag{4.11}\] From Equation (2.5) it follows that the second term is zero. The exterior magnetic induction is denoted as \(\mathbb{B}^{{}^{\prime}}\). Further, at the top and bottom boundaries, \(\mathbb{B}^{{}^{\prime}}\) is denoted as \(\mathbb{B}_{\mathrm{t}}^{{}^{\prime}}\) and \(\mathbb{B}_{\mathrm{b}}^{{}^{\prime}}\), respectively. Similarly, the Maxwell stress tensors at the top and bottom surfaces of the shell are given by \(\boldsymbol{P}_{\mathrm{Mt}}\) and \(\boldsymbol{P}_{\mathrm{Mb}}\), respectively. For the lateral surface, the expressions for \(\boldsymbol{P}_{\mathrm{M}\ell}\) and \(\mathbb{B}_{\ell}^{{}^{\prime}}\) are as follows: \[\boldsymbol{P}_{\mathrm{M}\ell}=\boldsymbol{P}_{\mathrm{M}0}+\eta\boldsymbol{P}_{\mathrm{M}1}+\boldsymbol{\mathcal{O}}(\eta^{2})\quad\text{and}\quad\mathbb{B}_{\ell}^{{}^{\prime}}=\mathbb{B}_{0}^{{}^{\prime}}+\eta\mathbb{B}_{1}^{{}^{\prime}}+\boldsymbol{\mathcal{O}}(\eta^{2}). \tag{4.12}\]

* Since \(\mathbb{B}_{\mathrm{e}}\) is the applied magnetic induction, the fifth term in Equation (4.4) can be written as \[\delta\left[\int\limits_{\partial\mathcal{V}_{0}}\Phi\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{{}^{\prime}}dS\right] = \int\limits_{\partial\mathcal{V}_{0}}\delta\Phi\,\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{{}^{\prime}}dS\. \tag{4.13}\] The remaining terms in Equation (4.4), that is, the virtual work done by the dead load traction, body force, and pressures are dealt with in Section 5.3, where their contributions to a modified variational form for a Kirchhoff-Love thin shell are discussed.

## 5 Two dimensional variational formulation for magnetoelastic shells

The following discussions outline the key steps involved in deriving the Kirchhoff-Love shell equations. It is important to note that the derived equations are accurate up to linear order in the through-thickness parameter.
The generalised set of solution variables is now extended to \(\widetilde{\boldsymbol{\chi}}=\{\boldsymbol{r},\boldsymbol{\chi}_{\mathrm{F}},\Phi_{0},\Phi^{{}^{\prime}},p_{0}\}\), with \(\delta\widetilde{\boldsymbol{\chi}}=\{\delta\boldsymbol{r},\delta\boldsymbol{\chi}_{\mathrm{F}},\delta\Phi_{0},\delta\Phi^{{}^{\prime}},\delta p_{0}\}\), such that \[\delta\Pi[\hat{\boldsymbol{\chi}},\delta\hat{\boldsymbol{\chi}}]=\delta\Pi[\widetilde{\boldsymbol{\chi}},\delta\widetilde{\boldsymbol{\chi}}]. \tag{5.1}\] Here, the magnetic potential in \(\mathcal{B}_{0}^{{}^{\prime}}\) is denoted as \(\Phi^{{}^{\prime}}\), and the Lagrange multiplier is expressed as follows: \[p\left(\eta,\theta^{\alpha}\right)=p_{0}\left(\theta^{\alpha}\right)+\eta p_{1}\left(\theta^{\alpha}\right)+\mathcal{O}\left(\eta^{2}\right). \tag{5.2}\] The contribution of each integral in the first variation (4.4) to the above modified format is discussed in detail in the following subsections.

### Contribution to the first variation due to the total stress and the Maxwell stress

#### 5.1.1 Integrals related to the total Piola stress tensor

Taking into account the definition of \(\boldsymbol{\chi}_{\mathrm{B}}\), and following Equations (A.43) and (A.44), the first term in Equation (4.7), which is the domain term related to the total Piola stress tensor, can be written as \[-\int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}dV=-\int\limits_{S}\int\limits_{\eta}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}_{\mathrm{B}}d\eta dS=-\int\limits_{S_{\mathrm{m}}}\int\limits_{\eta}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}_{\mathrm{B}}Md\eta dS_{\mathrm{m}}\, \tag{5.3}\] where \(M\) is defined in Table 2, which outlines the surface and volume elements of the shell. From Equations (3.18) and (A.46), and noting that \(\int\limits_{\eta}d\eta=T\) and \(\int\limits_{\eta}\eta d\eta=0\), \[-\int\limits_{S_{\mathrm{m}}}\int\limits_{\eta}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}_{\mathrm{B}}Md\eta dS_{\mathrm{m}} = \int\limits_{S_{\mathrm{m}}}\left[-T\mathbf{P}_{0,\alpha}\mathbf{A}^{\alpha}\cdot\delta\boldsymbol{r}-T\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{3}\right)\right]dS_{\mathrm{m}}\, \tag{5.4}\] where \[\mathbf{P}_{0}=\boldsymbol{\mathcal{P}}_{0}-p_{0}\boldsymbol{F}_{0}^{-\mathrm{T}},\quad\text{and}\quad\mathbf{P}_{1}=\boldsymbol{\mathcal{P}}_{1}-p_{0}\boldsymbol{F}_{1}^{-\mathrm{T}}-p_{1}\boldsymbol{F}_{0}^{-\mathrm{T}}\, \tag{5.5}\] with \(\boldsymbol{\mathcal{P}}(\eta,\theta^{\alpha})=\dfrac{\partial\Omega}{\partial\boldsymbol{F}}=\boldsymbol{\mathcal{P}}_{0}+\eta\boldsymbol{\mathcal{P}}_{1}+\boldsymbol{\mathcal{O}}(\eta^{2})\). The boundary term contribution related to the total Piola stress tensor at the top surface in Equation (4.7) can be expressed as \[\int\limits_{S_{\mathrm{t}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}} = \int\limits_{S_{\mathrm{t}}}\left[\mathbf{P}_{0}\,\mathbf{N}\cdot\delta\boldsymbol{r}+\frac{T}{2}\delta\lambda\mathbf{P}_{0}\mathbf{N}\cdot\boldsymbol{n}+\frac{T}{2}\lambda\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{n}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{t}}. \tag{5.6}\]
Similarly, for the bottom surface, \[-\int\limits_{S_{\mathrm{b}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}} = \int\limits_{S_{\mathrm{b}}}\left[-\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{r}+\frac{T}{2}\delta\lambda\mathbf{P}_{0}\mathbf{N}\cdot\boldsymbol{n}+\frac{T}{2}\lambda\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{n}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}. \tag{5.7}\] In Equations (5.6) and (5.7), the second and third integrals can be rewritten with the help of Equation (A.23) as \[\int\limits_{S_{\mathrm{m}}}\frac{T}{2}\delta\lambda\mathbf{P}_{0}\mathbf{N}\cdot\boldsymbol{n}M\Big{|}_{\eta=\pm T/2}dS_{\mathrm{m}}=\int\limits_{S_{\mathrm{m}}}\left[\frac{T}{2}\delta\lambda\mathbf{P}_{0}\mathbf{N}\cdot\boldsymbol{n}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\, \tag{5.8a}\] \[\int\limits_{S_{\mathrm{m}}}\frac{T}{2}\lambda\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{n}M\Big{|}_{\eta=\pm T/2}dS_{\mathrm{m}}=\int\limits_{S_{\mathrm{m}}}\left[\frac{T}{2}\lambda\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{n}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}. \tag{5.8b}\] Further, using Equations (A.62) and (A.65), the integral for the lateral boundary in Equation (4.7) can be represented as follows: \[\int\limits_{S_{\ell}}\mathbf{P}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell}=\int\limits_{C_{\mathrm{m}}}\left[T\mathbf{P}_{0}\boldsymbol{\nu}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{3}\right)\right]dl=\int\limits_{C_{\mathrm{m}}}\left[T\nu_{\alpha}\mathbf{P}_{0}\mathbf{A}^{\alpha}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{3}\right)\right]dl. \tag{5.9}\]

#### 5.1.2 Integrals related to the Maxwell stress tensor

Following Equation (4.11), the terms concerning the Maxwell stress tensor at the inner boundaries of the free space can be written for the modified format as follows: \[\int\limits_{S_{\mathrm{t}}}\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}} =\int\limits_{S_{\mathrm{t}}}\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{r}dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{m}}}\left[\frac{T}{2}\delta\lambda\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\boldsymbol{n}+\frac{T}{2}\lambda\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{n}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\, \tag{5.10a}\] \[\int\limits_{S_{\mathrm{b}}}\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}} =\int\limits_{S_{\mathrm{b}}}\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{r}dS_{\mathrm{b}}-\int\limits_{S_{\mathrm{m}}}\left[\frac{T}{2}\delta\lambda\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\boldsymbol{n}+\frac{T}{2}\lambda\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{n}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\, \tag{5.10b}\] \[\int\limits_{S_{\ell}}\boldsymbol{P}_{\mathrm{M}\ell}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell} =\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\left[T\nu_{\alpha}\boldsymbol{P}_{\mathrm{M}0}\mathbf{A}^{\alpha}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{3}\right)\right]dl. \tag{5.10c}\]
#### 5.1.3 Contribution arising from both stress tensors

Now, considering the fictitious map \(\boldsymbol{\chi}_{\mathrm{F}}\), the net contribution due to the Maxwell and total stress tensors to the modified variational form can be written as shown below: \[-\int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbf{P}\cdot\delta\boldsymbol{\chi}dV+\int\limits_{S_{\mathrm{t}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}}-\int\limits_{S_{\mathrm{b}}}\mathbf{P}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}}+\int\limits_{S_{\ell}}\mathbf{P}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell}-\int\limits_{\mathcal{B}_{0}^{\prime}}\mathrm{Div}\boldsymbol{P}_{\mathrm{M}}\cdot\delta\boldsymbol{\chi}dV \tag{5.11}\] \[-\int\limits_{S_{\mathrm{t}}}\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{\chi}dS_{\mathrm{b}}-\int\limits_{S_{\ell}}\boldsymbol{P}_{\mathrm{M}\ell}\boldsymbol{\nu}_{\ell}\cdot\delta\boldsymbol{\chi}dS_{\ell}\] \[= \int\limits_{S_{\mathrm{m}}}\left[-T\mathbf{P}_{0,\alpha}\mathbf{A}^{\alpha}\cdot\delta\boldsymbol{r}-T\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}+T\delta\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}+T\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\delta\boldsymbol{n}\right]dS_{\mathrm{m}}\] \[+\int\limits_{S_{\mathrm{t}}}\left[\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{r}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}-\boldsymbol{P}_{\mathrm{Mt}}\mathbf{N}\cdot\delta\boldsymbol{r}\right]dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\left[-\mathbf{P}_{0}\mathbf{N}\cdot\delta\boldsymbol{r}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}\cdot\delta\boldsymbol{r}+\boldsymbol{P}_{\mathrm{Mb}}\mathbf{N}\cdot\delta\boldsymbol{r}\right]dS_{\mathrm{b}}\] \[+\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}T\nu_{\alpha}\left[\mathbf{P}_{0}-\boldsymbol{P}_{\mathrm{M}0}\right]\mathbf{A}^{\alpha}\cdot\delta\boldsymbol{r}dl-\int\limits_{\mathcal{B}_{0}^{\prime}}\mathrm{Div}\boldsymbol{P}_{\mathrm{M}}\cdot\delta\boldsymbol{\chi}_{\mathrm{F}}dV\,\] with \(\hat{\boldsymbol{P}}=\mathbf{P}_{0}-\overline{\boldsymbol{P}}_{\mathrm{M}}\) and \(\overline{\boldsymbol{P}}_{\mathrm{M}}=\frac{1}{2}\left[\boldsymbol{P}_{\mathrm{Mt}}+\boldsymbol{P}_{\mathrm{Mb}}\right]\). The third and fourth terms in the integral over the mid-surface of the shell can be rewritten by using Equation (A.45) as \[\int\limits_{S_{\mathrm{m}}}T\delta\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}dS_{\mathrm{m}}=\int\limits_{P}T\delta\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}A^{1/2}dP\,\quad\text{and}\quad\int\limits_{S_{\mathrm{m}}}T\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\delta\boldsymbol{n}dS_{\mathrm{m}}=\int\limits_{P}T\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\delta\boldsymbol{n}A^{1/2}dP. \tag{5.12}\] Further, using Equations (A.19), (B.5), and (B.10), Equation (5.12)\({}_{1}\) can be expressed as \[\int\limits_{P}T\delta\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}A^{1/2}dP = -\int\limits_{P}T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{a}_{\alpha}A^{1/2}dP \tag{5.13}\] \[= -\int\limits_{P}\left[C^{\alpha}A^{1/2}\right]_{,\alpha}dP+\int\limits_{S_{\mathrm{m}}}\left[T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\right]_{,\alpha}\cdot\delta\boldsymbol{r}dS_{\mathrm{m}}+\int\limits_{S_{\mathrm{m}}}C^{\alpha}\Gamma_{\beta\alpha}^{\beta}dS_{\mathrm{m}},\] with \(C^{\alpha}=T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{r}\), and the Christoffel symbols of the second kind \(\boldsymbol{\Gamma}\) are defined in Table 1.
Similarly, Equation (5.12)\({}_{2}\) can be written as \[\int\limits_{P}T\lambda\hat{\boldsymbol{P}}\mathbf{N}\cdot\delta\boldsymbol{n}A^{1/2}dP=-\int\limits_{P}\left[D^{\alpha}A^{1/2}\right]_{,\alpha}dP+\int\limits_{S_{\mathrm{m}}}\left[T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{a}^{\alpha}\right]\boldsymbol{n}\right]_{,\alpha}\cdot\delta\boldsymbol{r}dS_{\mathrm{m}}+\int\limits_{S_{\mathrm{m}}}D^{\alpha}\Gamma_{\beta\alpha}^{\beta}dS_{\mathrm{m}}\, \tag{5.14}\] with \(D^{\alpha}=T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{a}^{\alpha}\right]\boldsymbol{n}\cdot\delta\boldsymbol{r}\). Moreover, following Equation (3.8), and excluding the parts of the boundary where displacements are specified, one obtains \[-\int\limits_{P}\left[C^{\alpha}A^{1/2}\right]_{,\alpha}dP=-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}C^{\alpha}\nu_{\alpha}dl\,\quad\text{and}\quad-\int\limits_{P}\left[D^{\alpha}A^{1/2}\right]_{,\alpha}dP=-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}D^{\alpha}\nu_{\alpha}dl. \tag{5.15}\]

### Contribution to the first variation due to the magnetic induction vector

#### 5.2.1 Integrals related to the magnetic induction vector inside the shell

In Equation (4.8), using Equation (3.20), the domain term involving the magnetic induction vector can be written as \[-\int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbb{B}\,\delta\Phi dV = \int\limits_{S_{\mathrm{m}}}\left[-T\,\mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\alpha}\delta\Phi_{0}-T\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi_{0}+\mathcal{O}\left(T^{3}\right)\right]dS_{\mathrm{m}}. \tag{5.16}\] From Equations (3.6) and (3.7), the corresponding boundary term at the top surface is given by \[\int\limits_{S_{\mathrm{t}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}} = \int\limits_{S_{\mathrm{t}}}\left[\mathbb{B}_{0}\cdot\mathbf{N}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}+\mathcal{O}\left(T^{2}\right)\right]\delta\Phi_{0}dS_{\mathrm{t}} \tag{5.17}\] \[+\int\limits_{S_{\mathrm{t}}}\frac{1}{2}\left[\mathbb{B}_{0}\cdot\mathbf{N}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}+\mathcal{O}\left(T^{2}\right)\right]\delta\Phi_{\mathrm{t}}^{{}^{\prime}}dS_{\mathrm{t}}\] \[-\int\limits_{S_{\mathrm{t}}}\frac{1}{2}\left[\mathbb{B}_{0}\cdot\mathbf{N}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}+\mathcal{O}\left(T^{2}\right)\right]\delta\Phi_{\mathrm{b}}^{{}^{\prime}}dS_{\mathrm{t}}\.\] The continuity of the magnetic potential at the shell boundaries is enforced, allowing \(\Phi_{1}\) to be expressed as \(\Phi_{1}\left(\theta^{\alpha}\right)=\frac{\Phi_{\mathrm{t}}^{{}^{\prime}}-\Phi_{\mathrm{b}}^{{}^{\prime}}}{T}\), with \(\Phi_{\mathrm{t}}^{{}^{\prime}}\) and \(\Phi_{\mathrm{b}}^{{}^{\prime}}\) as the potential at the top and bottom boundaries, respectively.
The first integral can be rewritten as follows: \[\int\limits_{S_{\mathrm{t}}}\left[\mathbb{B}_{0}\cdot\mathbf{N}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}+\mathcal{O}\left(T^{2}\right)\right]\delta\Phi_{0}dS_{\mathrm{t}} = \int\limits_{S_{\mathrm{m}}}\left[\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi_{0}-TH\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\, \tag{5.18}\] and from Equation (A.52), noting that \[dS_{\mathrm{t}}=\left[1-TH+\frac{T^{2}}{4}\,K\right]\left[1+TH+\frac{T^{2}}{4}\,K\right]^{-1}dS_{\mathrm{b}}=\left[1-2TH+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}, \tag{5.19}\] one can write for the third integral \[-\int\limits_{S_{\mathrm{t}}}\frac{1}{2}\left[\mathbb{B}_{0}\cdot\mathbf{N}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}+\mathcal{O}\left(T^{2}\right)\right]\delta\Phi_{\mathrm{b}}^{{}^{\prime}}dS_{\mathrm{t}} = \int\limits_{S_{\mathrm{b}}}\frac{1}{2}\left[-\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{\mathrm{b}}^{{}^{\prime}}-\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi_{\mathrm{b}}^{{}^{\prime}}+2TH\,\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{\mathrm{b}}^{{}^{\prime}}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}\. \tag{5.20}\] Incorporating Equations (5.18) and (5.20), and omitting the subscripts t and b for the exterior magnetic potential, one obtains \[\int\limits_{S_{\mathrm{t}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}} = \int\limits_{S_{\mathrm{m}}}\left[\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi_{0}-TH\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}} \tag{5.21}\] \[+\int\limits_{S_{\mathrm{t}}}\frac{1}{2}\left[\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{t}}\] \[+\int\limits_{S_{\mathrm{b}}}\frac{1}{2}\left[-\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}-\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+2TH\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}\.\] Similarly, for the bottom surface of the shell, \[-\int\limits_{S_{\mathrm{b}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{b}} = \int\limits_{S_{\mathrm{m}}}\left[-\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi_{0}-TH\,\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi_{0}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\] \[+\int\limits_{S_{\mathrm{b}}}\frac{1}{2}\left[-\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}\] \[+\int\limits_{S_{\mathrm{t}}}\frac{1}{2}\left[\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}-\frac{T}{2}\mathbb{B}_{1}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+2TH\mathbb{B}_{0}\cdot\mathbf{N}\delta\Phi^{{}^{\prime}}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{t}}. \tag{5.22}\] Note that the effect of the external field on the response of the soft magnetoelastic thin shell leads to integrals over the top, bottom, and mid-surfaces during the derivation of the reduced-order model.
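The thickness expansion used in (5.19) is a standard series; a one-line SymPy sketch confirming it (the symbol names are illustrative):

```python
import sympy as sp

T, H, K = sp.symbols('T H K')
ratio = (1 - T*H + T**2*K/4) / (1 + T*H + T**2*K/4)
print(sp.series(ratio, T, 0, 2))   # 1 - 2*H*T + O(T**2)
```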
In Equation (4.8), the contribution corresponding to the lateral surface of the shell can be expressed as \[\int\limits_{S_{\ell}}\mathbb{B}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell} = \int\limits_{C_{\mathrm{m}}}\left[T\mathbb{B}_{0}\cdot\boldsymbol{\nu}\delta\Phi_{0}+\mathcal{O}\left(T^{3}\right)\right]dl. \tag{5.23}\]

#### 5.2.2 Contribution arising from the magnetic induction vector

In Equation (4.11), for the exterior magnetic induction, the integral over the lateral surface can be written as follows: \[-\int\limits_{S_{\ell}}\mathbb{B}^{{}^{\prime}}_{\ell}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell} = -\int\limits_{C_{\mathrm{m}}}\left[T\mathbb{B}^{{}^{\prime}}_{0}\cdot\boldsymbol{\nu}\delta\Phi_{0}+\mathcal{O}\left(T^{3}\right)\right]dl. \tag{5.24}\] The total contribution resulting from the magnetic induction vector in the shell and free space in the modified variational form can now be expressed as \[-\int\limits_{\mathcal{B}^{\prime}_{0}}\mathrm{Div}\mathbb{B}^{{}^{\prime}}\delta\Phi dV+\int\limits_{\partial\mathcal{V}_{0}}\mathbb{B}^{{}^{\prime}}\cdot\boldsymbol{N}^{{}^{\prime}}\delta\Phi dS-\int\limits_{S_{\mathrm{t}}}\mathbb{B}^{{}^{\prime}}_{\mathrm{t}}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\mathbb{B}^{{}^{\prime}}_{\mathrm{b}}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{b}}-\int\limits_{S_{\ell}}\mathbb{B}^{{}^{\prime}}_{\ell}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell} \tag{5.25}\] \[-\int\limits_{\mathcal{B}_{0}}\mathrm{Div}\mathbb{B}\,\delta\Phi dV+\int\limits_{S_{\mathrm{t}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{t}}-\int\limits_{S_{\mathrm{b}}}\mathbb{B}\cdot\mathbf{N}\delta\Phi dS_{\mathrm{b}}+\int\limits_{S_{\ell}}\mathbb{B}\cdot\boldsymbol{\nu}_{\ell}\delta\Phi dS_{\ell}\] \[= -\int\limits_{\mathcal{B}^{\prime}_{0}}\mathrm{Div}\mathbb{B}^{{}^{\prime}}\delta\Phi^{{}^{\prime}}dV+\int\limits_{\partial\mathcal{V}_{0}}\mathbb{B}^{{}^{\prime}}\cdot\boldsymbol{N}^{{}^{\prime}}\delta\Phi^{{}^{\prime}}dS\] \[+\int\limits_{S_{\mathrm{t}}}\left[\mathbb{B}_{0}\cdot\mathbf{N}+TH\mathbb{B}_{0}\cdot\mathbf{N}-\mathbb{B}^{{}^{\prime}}_{\mathrm{t}}\cdot\mathbf{N}\right]\delta\Phi^{{}^{\prime}}dS_{\mathrm{t}}+\int\limits_{S_{\mathrm{b}}}\left[-\mathbb{B}_{0}\cdot\mathbf{N}+TH\mathbb{B}_{0}\cdot\mathbf{N}+\mathbb{B}^{{}^{\prime}}_{\mathrm{b}}\cdot\mathbf{N}\right]\delta\Phi^{{}^{\prime}}dS_{\mathrm{b}}\] \[+\int\limits_{S_{\mathrm{m}}}\left[-T\mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\alpha}-2TH\mathbb{B}_{0}\cdot\mathbf{N}\right]\delta\Phi_{0}dS_{\mathrm{m}}+\int\limits_{C_{\mathrm{m}}}\left[T\mathbb{B}_{0}\cdot\boldsymbol{\nu}-T\mathbb{B}^{{}^{\prime}}_{0}\cdot\boldsymbol{\nu}\right]\delta\Phi_{0}dl\.\]

### Contribution to the first variation due to external loads

The terms related to the external mechanical loads, as they appear in Equation (4.4), will now be expanded upon.

#### 5.3.1 Integrals related to externally applied pressure

A noteworthy aspect of the present work is that the applied pressures at the top and bottom surfaces of the shell structure are treated separately, instead of being lumped directly onto the mid-surface, during the derivation of the shell system of equations.
From Equations (A.52) and (A.53), the virtual work due to the external pressure at the top surface of the shell is given by \[\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{t}} = -\int\limits_{S_{\mathrm{t}}}p_{\mathrm{t}}\boldsymbol{n}\cdot\delta\boldsymbol{\chi}_{\mathrm{B}}\Big{|}_{\eta=T/2}\left.\widehat{\mu}\right|_{\eta=T/2}\widehat{a}^{1/2}dS_{\mathrm{t}}\, \tag{5.26}\] where \[\widehat{\mu}=\frac{\mu}{M}=\left[1-2\eta\lambda h+\eta^{2}\lambda^{2}\kappa\right]\left[1-2\eta H+\eta^{2}K\right]^{-1}=\left[1-2\eta\left[\lambda h+H\right]+\mathcal{O}\left(\eta^{2}\right)\right]. \tag{5.27}\] Using the expression for \(\widehat{\mu}\), and taking into account that \(\boldsymbol{n}\cdot\boldsymbol{n}=1\), \(\boldsymbol{n}\cdot\delta\boldsymbol{n}=0\), and \(\lambda\widehat{a}^{1/2}=1\), Equation (5.26) can be rewritten as \[\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{t}} = \int\limits_{S_{\mathrm{t}}}\left[-\lambda^{-1}p_{\mathrm{t}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{a}_{\alpha}\frac{T}{2}p_{\mathrm{t}}+T\left[h+\lambda^{-1}H\right]p_{\mathrm{t}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{t}}. \tag{5.28}\] Similarly, the virtual work due to the external pressure at the bottom surface of the thin shell can be expressed as \[\int\limits_{s_{\mathrm{b}}}\boldsymbol{p}_{\mathrm{b}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{b}} = \int\limits_{S_{\mathrm{b}}}\left[\lambda^{-1}p_{\mathrm{b}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{a}_{\alpha}\frac{T}{2}p_{\mathrm{b}}+T\left[h+\lambda^{-1}H\right]p_{\mathrm{b}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}. \tag{5.29}\] Therefore, using Equation (A.44), \[\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{t}}+\int\limits_{s_{\mathrm{b}}}\boldsymbol{p}_{\mathrm{b}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{b}} = \int\limits_{S_{\mathrm{t}}}\left[-\lambda^{-1}p_{\mathrm{t}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+T\left[h+\lambda^{-1}H\right]p_{\mathrm{t}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{t}}+ \tag{5.30}\] \[\int\limits_{S_{\mathrm{b}}}\left[\lambda^{-1}p_{\mathrm{b}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+T\left[h+\lambda^{-1}H\right]p_{\mathrm{b}}\boldsymbol{n}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{b}}+\] \[\int\limits_{S_{\mathrm{m}}}\left[\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{a}_{\alpha}T\overline{p}+\mathcal{O}\left(T^{2}\right)\right]dS_{\mathrm{m}}\,\] with \(\overline{p}=\dfrac{p_{\mathrm{t}}+p_{\mathrm{b}}}{2}\), and the integral over the mid-surface, excluding the higher-order terms, can be further simplified as \[\int\limits_{S_{\mathrm{m}}}\boldsymbol{a}^{\alpha}\cdot\delta\boldsymbol{a}_{\alpha}T\overline{p}dS_{\mathrm{m}} = \int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}T\overline{p}\boldsymbol{a}^{\alpha}\nu_{\alpha}\cdot\delta\boldsymbol{r}dl-\int\limits_{S_{\mathrm{m}}}\left[T\overline{p}\boldsymbol{a}^{\alpha}\right]_{,\alpha}\cdot\delta\boldsymbol{r}dS_{\mathrm{m}}-\int\limits_{S_{\mathrm{m}}}T\overline{p}\boldsymbol{a}^{\alpha}\Gamma_{\beta\alpha}^{\beta}\cdot\delta\boldsymbol{r}dS_{\mathrm{m}}. \tag{5.31}\]
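The exact determinant expansions entering \(\widehat{\mu}\) in (5.27), namely \(\mu=1-2\eta\lambda h+\eta^{2}\lambda^{2}\kappa\) and \(M=1-2\eta H+\eta^{2}K\), follow from the 2x2 identity \(\mathrm{det}\left[\boldsymbol{i}-\epsilon\boldsymbol{\kappa}\right]=1-\epsilon\,\mathrm{tr}\boldsymbol{\kappa}+\epsilon^{2}\mathrm{det}\boldsymbol{\kappa}\). A SymPy sketch with generic symmetric curvature components (illustrative symbols):

```python
import sympy as sp

eta, lam = sp.symbols('eta lambda')
k11, k12, k22 = sp.symbols('k11 k12 k22')

kappa = sp.Matrix([[k11, k12], [k12, k22]])   # curvature components
h = kappa.trace() / 2                          # mean curvature
K_gauss = kappa.det()                          # Gaussian curvature

mu = (sp.eye(2) - eta * lam * kappa).det()
print(sp.expand(mu - (1 - 2*eta*lam*h + eta**2 * lam**2 * K_gauss)))  # 0
```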
#### 5.3.2 Integral related to the dead load traction

For the lateral surface of the shell, the contribution due to the dead load traction applied at the bounding curve of the mid-surface can be written as \[\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\delta\boldsymbol{\chi}dl=\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\delta\boldsymbol{\chi}_{\mathrm{B}}\Big{|}_{\eta=0}dl=\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\delta\boldsymbol{r}dl. \tag{5.32}\]

#### 5.3.3 Integral related to the body force

The virtual work due to the body force per unit volume can be further simplified as \[\int\limits_{\mathcal{B}_{0}}\boldsymbol{\mathfrak{B}}\cdot\delta\boldsymbol{\chi}dV = \int\limits_{S_{\mathrm{m}}}\left[T\boldsymbol{\mathfrak{B}}_{0}\cdot\delta\boldsymbol{r}+\mathcal{O}\left(T^{3}\right)\right]dS_{\mathrm{m}}\, \tag{5.33}\] with \(\boldsymbol{\mathfrak{B}}(\eta,\theta^{\alpha})=\boldsymbol{\mathfrak{B}}_{0}+\eta\boldsymbol{\mathfrak{B}}_{1}+\boldsymbol{\mathcal{O}}(\eta^{2})\) in \(\mathcal{B}_{0}\).

#### 5.3.4 Net contribution due to the external loads

The applied magnetic induction, as defined by Equation (4.12), along with the overall role of external mechanical loads on the modified variational framework, is considered. Consequently, the combined effect of the external stimulus can be expressed as \[-\int\limits_{s_{\mathrm{t}}}\boldsymbol{p}_{\mathrm{t}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{t}}-\int\limits_{s_{\mathrm{b}}}\boldsymbol{p}_{\mathrm{b}}\cdot\delta\boldsymbol{\chi}ds_{\mathrm{b}}-\int\limits_{\mathcal{B}_{0}}\boldsymbol{\mathfrak{B}}\cdot\delta\boldsymbol{\chi}dV-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\boldsymbol{t}_{\ell}\cdot\delta\boldsymbol{\chi}dl-\int\limits_{\partial\mathcal{V}_{0}}\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{\prime}\,\delta\Phi dS\] \[= \int\limits_{S_{\mathrm{t}}}\left[\lambda^{-1}p_{\mathrm{t}}\boldsymbol{n}-T\left[h+\lambda^{-1}H\right]p_{\mathrm{t}}\boldsymbol{n}\right]\cdot\delta\boldsymbol{r}dS_{\mathrm{t}}-\int\limits_{S_{\mathrm{b}}}\left[\lambda^{-1}p_{\mathrm{b}}\boldsymbol{n}+T\left[h+\lambda^{-1}H\right]p_{\mathrm{b}}\boldsymbol{n}\right]\cdot\delta\boldsymbol{r}dS_{\mathrm{b}}\] \[+\int\limits_{S_{\mathrm{m}}}\left[\left[T\overline{p}\boldsymbol{a}^{\alpha}\right]_{,\alpha}+T\overline{p}\boldsymbol{a}^{\alpha}\Gamma_{\beta\alpha}^{\beta}-T\boldsymbol{\mathfrak{B}}_{0}\right]\cdot\delta\boldsymbol{r}dS_{\mathrm{m}}-\int\limits_{C_{\mathrm{m}}\backslash C_{\mathrm{m}}^{\mathrm{u}}}\left[T\overline{p}\nu_{\alpha}\boldsymbol{a}^{\alpha}+\boldsymbol{t}_{\ell}\right]\cdot\delta\boldsymbol{r}dl\] \[-\int\limits_{\partial\mathcal{V}_{0}}\mathbb{B}_{\mathrm{e}}\cdot\boldsymbol{N}^{{}^{\prime}}\delta\Phi^{{}^{\prime}}dS. \tag{5.34}\]

### Contribution to the first variation due to the incompressibility constraint

For the volume-preserving magnetoelastic body, when expanding the Lagrange multiplier along the thickness of the thin shell and considering only the first-order terms with respect to the through-thickness parameter in the modified variational form, the contribution arising from the incompressibility constraint can be expressed as follows: \[\int\limits_{\mathcal{B}_{0}}\delta p\left[J-1\right]dV = \int\limits_{S_{\mathrm{m}}}\left[T\left[J_{0}-1\right]+\mathcal{O}\left(T^{3}\right)\right]\delta p_{0}dS_{\mathrm{m}}. \tag{5.35}\]

## 6 Governing equations for the Kirchhoff-Love magnetoelastic shell and accompanying free space

The equations for a nonlinear magnetoelastostatic Kirchhoff-Love thin shell are now derived using the modified variational form.
In Section 5, the contributions of the stress tensors, external loads, and magnetic induction vector to the modified variational format were determined. By adding Equations (5.11), (5.25), (5.34), and (5.35), the variation of the total potential energy of the system is obtained. The state of magnetoelastic equilibrium corresponds to an extremum of the potential energy; since the variations \(\delta\widetilde{\boldsymbol{\chi}}\) are arbitrary, the first variation \(\delta\Pi\) must vanish. Now, from the arbitrary variation \(\delta\boldsymbol{r}\), the shell system of equations is obtained as follows: \[-T\mathbf{P}_{0,\alpha}\mathbf{A}^{\alpha}-T\mathbf{P}_{1}\mathbf{N}+\left[T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\right]_{,\alpha}+\left[T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{a}^{\alpha}\right]\boldsymbol{n}\right]_{,\alpha}\] \[+T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\Gamma^{\beta}_{\beta\alpha}+T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{a}^{\alpha}\right]\boldsymbol{n}\Gamma^{\beta}_{\beta\alpha}+\left[T\overline{p}\boldsymbol{a}^{\alpha}\right]_{,\alpha}+T\overline{p}\boldsymbol{a}^{\alpha}\Gamma^{\beta}_{\beta\alpha}-T\boldsymbol{\mathfrak{B}}_{0} = \mathbf{0}\quad\forall\mathbf{X}\in S_{\mathrm{m}}, \tag{6.1a}\] \[T\left[\mathbf{P}_{0}-\boldsymbol{P}_{\mathrm{M}0}\right]\mathbf{A}^{\alpha}\nu_{\alpha}-T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{n}\right]\boldsymbol{a}^{\alpha}\nu_{\alpha}-T\lambda\left[\hat{\boldsymbol{P}}\mathbf{N}\cdot\boldsymbol{a}^{\alpha}\right]\boldsymbol{n}\nu_{\alpha}-T\overline{p}\boldsymbol{a}^{\alpha}\nu_{\alpha}-\boldsymbol{t}_{\ell} = \mathbf{0}\quad\forall\mathbf{X}\in C_{\mathrm{m}}\setminus C_{\mathrm{m}}^{\mathrm{u}}, \tag{6.1b}\] \[\left[\mathbf{P}_{0}-\boldsymbol{P}_{\mathrm{Mt}}\right]\mathbf{N}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}+\lambda^{-1}p_{\mathrm{t}}\boldsymbol{n}-T\left[h+\lambda^{-1}H\right]p_{\mathrm{t}}\boldsymbol{n} = \mathbf{0}\quad\forall\mathbf{X}\in S_{\mathrm{t}}, \tag{6.1c}\] \[-\left[\mathbf{P}_{0}-\boldsymbol{P}_{\mathrm{Mb}}\right]\mathbf{N}+\frac{T}{2}\mathbf{P}_{1}\mathbf{N}-\lambda^{-1}p_{\mathrm{b}}\boldsymbol{n}-T\left[h+\lambda^{-1}H\right]p_{\mathrm{b}}\boldsymbol{n} = \mathbf{0}\quad\forall\mathbf{X}\in S_{\mathrm{b}}. \tag{6.1d}\] Also, considering the arbitrary variations \(\delta\Phi_{0}\) and \(\delta\Phi^{\prime}\) in the shell and the free space, respectively, the equations obtained are given by \[-T\mathbb{B}_{0,\alpha}\cdot\mathbf{A}^{\alpha}-2TH\mathbb{B}_{0}\cdot\mathbf{N} = 0\quad\forall\mathbf{X}\in S_{\mathrm{m}}, \tag{6.2a}\] \[\left[\mathbb{B}_{0}-\mathbb{B}^{{}^{\prime}}_{0}\right]\cdot\boldsymbol{\nu} = 0\quad\forall\mathbf{X}\in C_{\mathrm{m}}, \tag{6.2b}\] \[\left[\mathbb{B}_{0}-\mathbb{B}^{{}^{\prime}}_{\mathrm{t}}\right]\cdot\mathbf{N}+TH\mathbb{B}_{0}\cdot\mathbf{N} = 0\quad\forall\mathbf{X}\in S_{\mathrm{t}}, \tag{6.2c}\] \[-\left[\mathbb{B}_{0}-\mathbb{B}^{{}^{\prime}}_{\mathrm{b}}\right]\cdot\mathbf{N}+TH\mathbb{B}_{0}\cdot\mathbf{N} = 0\quad\forall\mathbf{X}\in S_{\mathrm{b}}, \tag{6.2d}\] and \[\mathrm{Div}\mathbb{B}^{{}^{\prime}} = 0\quad\forall\mathbf{X}\in\mathcal{B}^{{}^{\prime}}_{0}, \tag{6.3a}\] \[\left[\mathbb{B}^{{}^{\prime}}-\mathbb{B}_{\mathrm{e}}\right]\cdot\boldsymbol{N}^{{}^{\prime}} = 0\quad\forall\mathbf{X}\in\partial\mathcal{V}_{0}. \tag{6.3b}\] The arbitrary variations \(\delta\boldsymbol{\chi}_{\mathrm{F}}\) and \(\delta p_{0}\) lead to \[\mathrm{Div}\boldsymbol{P}_{\mathrm{M}}=\mathbf{0}\quad\forall\mathbf{X}\in\mathcal{B}^{{}^{\prime}}_{0}\qquad\text{and}\qquad J_{0}-1=0\quad\forall\mathbf{X}\in S_{\mathrm{m}}. \tag{6.4}\]
Equation (6.4)\({}_{2}\) returns the incompressibility relation (3.17) at the mid-surface, as derived in Section 3. Furthermore, by neglecting the higher-order terms, the condition on the magnetic potential at the surfaces of the shell structure, as given by Equation (3.22), can be rewritten as: \[-\Phi_{0,\alpha}-\eta\Phi_{0,\beta}B_{\alpha}^{\ \beta}-\eta\left[\frac{\Phi^{{}^{\prime}}_{\mathrm{t}}-\Phi^{{}^{\prime}}_{\mathrm{b}}}{T}\right]_{,\alpha}+\mathrm{Grad}\Phi^{{}^{\prime}}\cdot\mathbf{A}_{\alpha}-\eta B_{\alpha}^{\ \beta}\,\mathrm{Grad}\Phi^{{}^{\prime}}\cdot\mathbf{A}_{\beta}=0, \tag{6.5}\] with \(\eta=\pm T/2\) at the top and bottom surfaces, respectively.

## 7 Finite inflation and magnetisation of a long cylindrical shell

The main objective of this section is to illustrate, via an example, how the equilibrium equations for a Kirchhoff-Love magnetoelastic thin shell introduced in Section 6 can be used to derive the response equations for the boundary-value problem at hand. Consider the problem of an inflating infinite magnetoelastic thin cylindrical shell. The body is subjected to two loading situations, as depicted in Figure 2. The first scenario is a purely mechanical case in which external pressures are applied at the inner and outer surfaces of the shell structure. The second scenario is the magnetoelastic case, with a wire carrying a current \(i\) along the axis of the thin cylinder. The inner boundary of the free space is at an infinitesimal distance \(\Delta\) from the wire while the outer boundary extends to infinity, as shown in Figure 2. The axisymmetric deformation of an infinite cylinder under a unit axial stretch is given by \[\boldsymbol{r} =\boldsymbol{R}+\boldsymbol{u}=\left[R+u\right]\mathbf{e}_{\rho}+Z\mathbf{e}_{z}=r\mathbf{e}_{\rho}+Z\mathbf{e}_{z}, \tag{7.1a}\] \[\theta =\Theta,\qquad z=Z. \tag{7.1b}\] Here, \(\theta\) and \(z\) are the deformed coordinates corresponding to their azimuthal and axial counterparts in the reference configuration (\(\Theta\) and \(Z\), respectively). The unit vectors along the axial and radial directions are denoted by \(\mathbf{e}_{z}\) and \(\mathbf{e}_{\rho}\), respectively. Additionally, \(R\) and \(r\) represent the radius at the mid-surface of the cylindrical shell in the two configurations. The displacement vector is given by \(\boldsymbol{u}=u(\rho)\mathbf{e}_{\rho}\). The covariant and contravariant vectors at the mid-surface in the two configurations, as well as the reference and deformed normals, are given by \[\mathbf{A}_{1}=R\mathbf{e}_{\theta},\quad\mathbf{A}_{2}=\mathbf{e}_{z},\quad\mathbf{A}^{1}=\frac{1}{R}\mathbf{e}_{\theta},\quad\mathbf{A}^{2}=\mathbf{e}_{z},\] \[\boldsymbol{a}_{1}=r\mathbf{e}_{\theta},\quad\boldsymbol{a}_{2}=\mathbf{e}_{z},\quad\boldsymbol{a}^{1}=\frac{1}{r}\mathbf{e}_{\theta},\quad\boldsymbol{a}^{2}=\mathbf{e}_{z},\] \[\boldsymbol{n}=\mathbf{N}=\mathbf{e}_{\rho}, \tag{7.2}\] where \(\mathbf{e}_{\theta}\) is the azimuthal unit vector. The normal vectors in the two configurations coincide for the deforming cylinder, implying \(\delta\boldsymbol{n}=\mathbf{0}\).
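The mid-surface quantities of the reference cylinder can be recovered directly from its parameterisation. A SymPy sketch, assuming the curvature convention \(B_{\alpha\beta}=-\boldsymbol{N}_{,\alpha}\cdot\mathbf{A}_{\beta}\), which reproduces the sign in (7.7); the symbol names are illustrative:

```python
import sympy as sp

R, Theta, Z = sp.symbols('R Theta Z', positive=True)

# Reference mid-surface of the cylinder, with theta^1 = Theta, theta^2 = Z.
X = sp.Matrix([R * sp.cos(Theta), R * sp.sin(Theta), Z])

A1, A2 = X.diff(Theta), X.diff(Z)             # covariant basis vectors
N = A1.cross(A2) / A1.cross(A2).norm()        # unit outward normal e_rho

A_cov = sp.Matrix([[A1.dot(A1), A1.dot(A2)],
                   [A2.dot(A1), A2.dot(A2)]])
print(sp.simplify(A_cov))                     # diag(R**2, 1), Eq. (7.3)

B11 = -N.diff(Theta).dot(A1)                  # covariant curvature component
print(sp.simplify(B11 / A_cov[0, 0]))         # B_1^1 = -1/R, Eq. (7.7)
```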
The components of the covariant and contravariant metric tensors at the mid-surface in the reference configuration are respectively \[\begin{bmatrix}A_{\alpha\beta}\end{bmatrix}=\begin{bmatrix}R^{2}&0\\ 0&1\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}A^{\alpha\beta}\end{bmatrix}=\begin{bmatrix}R^{-2}&0\\ 0&1\end{bmatrix}, \tag{7.3}\] and similarly, in the deformed configuration, \[\begin{bmatrix}a_{\alpha\beta}\end{bmatrix}=\begin{bmatrix}r^{2}&0\\ 0&1\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}a^{\alpha\beta}\end{bmatrix}=\begin{bmatrix}r^{-2}&0\\ 0&1\end{bmatrix}, \tag{7.4}\] along with the determinant of the covariant metric tensors at the mid-surface given by \[A=R^{2},\quad\text{and}\quad a=r^{2}. \tag{7.5}\]

Figure 2: Deformed configuration of an inflated infinite cylindrical shell depicting (a) the purely mechanical case with hydrostatic pressure applied at the inner and outer surfaces, and (b) the magnetoelastic case: a conductor carrying an electric current \(i\) is placed along the axis of the cylinder, and the inner boundary of the free-space is at an infinitesimal radial distance of \(\Delta\), whereas the outer boundary of the free-space is at infinity.

From Equation (6.4), one obtains \[J_{0}-1=0\Rightarrow\lambda=\sqrt{\frac{A}{a}}=\lambda_{\theta}^{-1}, \tag{7.6}\] with \(\lambda_{\theta}=r/R\) as the azimuthal stretch. The non-zero components of the curvature tensor at the mid-surface are \[B_{1}{}^{1}=-\frac{1}{R},\quad\text{and}\quad b_{1}{}^{1}=-\frac{1}{r}. \tag{7.7}\] Furthermore, \[H=-\frac{1}{2R},\quad h=-\frac{1}{2r},\quad\text{and}\quad\Gamma_{\beta 1}^{\beta}=\Gamma_{\beta 2}^{\beta}=0. \tag{7.8}\] A generalised neo-Hookean constitutive relation for magnetoelasticity (Dorfmann and Ogden, 2014) is chosen, where \[\Omega=\frac{\mu_{\text{s}}}{4}\left[I_{1}-3\right]+\mu_{0}\left[\alpha I_{4}+\beta I_{5}\right], \tag{7.9}\] with \(\beta=n\alpha\) and \(\mu_{\text{s}}\) the shear modulus of the material. The constants \(\alpha\) and \(\beta\) must be negative to ensure stability. Therefore, for convenience, \(\alpha=-1\), and \(n\in\mathbb{R}^{+}\). From Equation (2.20), the total Piola stress can be calculated as \[\mathbf{P}=\mu_{\text{s}}\mathbf{F}+2\mu_{0}\beta\mathbf{F}\mathbf{H}\otimes\mathbf{H}-p\mathbf{F}^{-\mathrm{T}}, \tag{7.10}\] and from Equation (2.21), the magnetic field induction vector in the reference configuration of the shell is \[\mathbf{B}=-2\mu_{0}\alpha\mathbf{H}-2\mu_{0}\beta\mathbf{C}\mathbf{H}. \tag{7.11}\] Now, the zeroth and first-order terms along the thickness of the shell of the total Piola stress are given as \[\mathbf{P}_{0}=\mu_{\text{s}}\mathbf{F}_{0}+2\mu_{0}\beta\mathbf{F}_{0}\mathbf{H}_{0}\otimes\mathbf{H}_{0}-p_{0}\mathbf{F}_{0}{}^{-T}, \tag{7.12a}\] \[\mathbf{P}_{1}=\mu_{\text{s}}\mathbf{F}_{1}+2\mu_{0}\beta\left[\mathbf{F}_{0}\mathbf{H}_{0}\otimes\mathbf{H}_{1}+\mathbf{F}_{0}\mathbf{H}_{1}\otimes\mathbf{H}_{0}+\mathbf{F}_{1}\mathbf{H}_{0}\otimes\mathbf{H}_{0}\right]-\left[p_{0}\mathbf{F}_{1}{}^{-T}+p_{1}\mathbf{F}_{0}{}^{-T}\right], \tag{7.12b}\] and similarly, the components of the magnetic induction vector are \[\mathbf{B}_{0}=-2\mu_{0}\alpha\mathbf{H}_{0}-2\mu_{0}\beta\mathbf{C}_{0}\mathbf{H}_{0}, \tag{7.13a}\] \[\mathbf{B}_{1}=-2\mu_{0}\alpha\mathbf{H}_{1}-2\mu_{0}\beta\mathbf{C}_{0}\mathbf{H}_{1}-2\mu_{0}\beta\mathbf{C}_{1}\mathbf{H}_{0}.
\tag{7.13b}\] Here the applied magnetic field in the spatial configuration at a shell-point is \(\mathbf{h}=\frac{i}{2\pi\left[r+\eta\lambda\right]}\mathbf{e}_{\theta}=\mathbf{h}_{0}+\eta\mathbf{h}_{1}=\frac{i\lambda}{2\pi R}\mathbf{e}_{\theta}-\eta\frac{i\lambda^{2}}{2\pi R^{2}}\mathbf{e}_{\theta}\) with \(i\in\mathbb{R}^{+}\), and from the relation \(\mathbf{H}=\mathbf{F}^{\mathrm{T}}\mathbf{h}=\mathbf{H}_{0}+\eta\mathbf{H}_{1}\), the following expressions are obtained: \[\mathbf{H}_{0}=\mathbf{F}_{0}^{\mathrm{T}}\mathbf{h}_{0},\quad\text{and}\quad\mathbf{H}_{1}=\mathbf{F}_{0}^{\mathrm{T}}\mathbf{h}_{1}+\mathbf{F}_{1}^{\mathrm{T}}\mathbf{h}_{0}. \tag{7.14}\] The components of the deformation gradient and its inverse are calculated using Equations (3.9) and (3.11) as \[\mathbf{F}_{0}=\lambda\mathbf{e}_{\rho}\otimes\mathbf{e}_{\rho}+\lambda^{-1}\mathbf{e}_{\theta}\otimes\mathbf{e}_{\theta}+\mathbf{e}_{z}\otimes\mathbf{e}_{z},\qquad\mathbf{F}_{1}=\frac{1}{R}\left[\lambda-\lambda^{-1}\right]\mathbf{e}_{\rho}\otimes\mathbf{e}_{\rho}, \tag{7.15a}\] \[\mathbf{F}_{0}^{-\mathrm{T}}=\lambda^{-1}\mathbf{e}_{\rho}\otimes\mathbf{e}_{\rho}+\lambda\mathbf{e}_{\theta}\otimes\mathbf{e}_{\theta}+\mathbf{e}_{z}\otimes\mathbf{e}_{z},\qquad\mathbf{F}_{1}{}^{-\mathrm{T}}=\frac{1}{R}\left[\lambda-\lambda^{-3}\right]\mathbf{e}_{\rho}\otimes\mathbf{e}_{\rho}. \tag{7.15b}\] The expressions for the zeroth and first-order components of the deformation gradient and its inverse are required for evaluating the total Piola stress and the referential magnetic induction vector. Therefore, considering the shell system of equations, specifically Equations (6.1a), (6.1c), and (6.1d), for the purely mechanical loading of the infinite soft cylinder in the absence of body force, the response equation is given by \[\frac{p_{\text{b}}-p_{\text{t}}}{\mu_{\text{s}}}=\frac{T}{R}\left[\frac{p_{\text{b}}}{\mu_{\text{s}}}+1-\lambda^{4}\right]\left[\frac{1}{2}\frac{T}{R}\left[1+\lambda^{2}\right]+\frac{\lambda^{2}}{2}\frac{T^{2}}{R^{2}}+1\right]^{-1}. \tag{7.16}\] This relationship is plotted in Figure 3 for different shell thickness values (\(\widetilde{T}=T/R\)) and external pressure values (\(\widetilde{p}=p_{t}/\mu_{s}\)). As the pressure difference (\(\Delta p=[p_{b}-p_{t}]/\mu_{s}\)) between the inner and the external shell surface increases, the stretch \(\lambda_{\theta}\) increases monotonically until a critical value of \(\Delta p\) corresponding to a limit point instability is reached. At this point, massive changes in inflation occur for a minor change in the applied pressure. Similar limit point instabilities have been observed for the inflation of thin hyperelastic shells as well as soft cylindrical cavities (Kiendl et al., 2015; Cheewaruangroj et al., 2019; Mehta et al., 2022). The critical limit point pressure reduces as the shell thickness is reduced. We further demonstrate the distinction between considering the pressure on the top and bottom surfaces of the shell separately, as opposed to the common convention of considering a pressure difference on the mid-surface. The shell's response to the applied pressure difference \(\Delta p\) can change significantly when the pressure on the external surface \(p_{t}/\mu_{s}\) is varied. Reducing the shell thickness brings these response curves closer together, as observed from \(T/R=1/30\) to \(1/10\).
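As an illustration of Equation (7.16), the following minimal sketch (Python/NumPy; the rearrangement solving for \(\Delta p\) at a fixed outer pressure is ours, and all variable names are illustrative, not from the formulation) traces the inflation curve and estimates the critical pressure at which the response saturates:

```python
import numpy as np

def delta_p(lam_theta, T_over_R, p_t=0.0):
    # Eq. (7.16) solved explicitly for Delta p = (p_b - p_t)/mu_s at a
    # fixed dimensionless outer pressure p_t/mu_s.
    lam = 1.0 / lam_theta  # Eq. (7.6): lambda = 1/lambda_theta
    x = T_over_R
    D = 1.0 + 0.5 * x * (1.0 + lam**2) + 0.5 * lam**2 * x**2
    return x * (1.0 + p_t - lam**4) / (D - x)

lam_theta = np.linspace(1.0, 50.0, 5000)
for x in (1 / 10, 1 / 20, 1 / 30):
    dp = delta_p(lam_theta, x)
    # the curve saturates at large stretch; beyond this pressure no further
    # equilibrium exists, reproducing the limit-point behaviour of Figure 3
    print(f"T/R = 1/{round(1 / x)}: limit-point pressure ~ {dp.max():.4f}")
```

Consistent with the discussion above, the estimated critical pressure decreases as the shell is made thinner.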
For the magnetoelastic deformation of the cylindrical shell due to an applied current along its axis, the equilibrium Equations (6.2a), (6.2c), and (6.2d) governing the magnetic induction vector are trivially satisfied. Furthermore, by considering the shell equilibrium Equations (6.1a), (6.1c), and (6.1d), the following response relation for the system is obtained: \[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{s}}}=\sqrt{2}\pi\left[\lambda^{4}-1\right]^{\frac{1}{2}}\left[\frac{\left[1-\lambda^{2}\right]\left[4\lambda^{-2}+\lambda^{2}\frac{T^{2}}{R^{2}}\right]-8}{\left[4\lambda^{-2}-\lambda^{2}\frac{T^{2}}{R^{2}}\right]^{2}}+\beta\right]^{-\frac{1}{2}}. \tag{7.17}\] Since \(\beta=-n\) and \(\lambda<1\), it is evident that the condition \[n>\frac{\left[1-\lambda^{2}\right]\left[4\lambda^{-2}+\lambda^{2}\frac{T^{2}}{R^{2}}\right]-8}{\left[4\lambda^{-2}-\lambda^{2}\frac{T^{2}}{R^{2}}\right]^{2}}, \tag{7.18}\] must be satisfied by the constitutive parameter \(n\) to ensure a physical deformation. This is further elaborated by plotting \(n\) against the azimuthal stretch for multiple \(T/R\) values in Figure 4(a). Based on this analysis, \(n>0.02\) is necessary for an inflating cylindrical shell to ensure that the condition (7.18) is satisfied for all deformation states.

Figure 4: (a) Bounds on the constitutive parameter \(n\) based on the inequality (7.18). The curves for different thickness values \((T/R)\) do not differ significantly from one another, and it is found that choosing \(n>0.02\) ensures a physically consistent deformation for all \(\lambda_{\theta}\) values. (b) Variation of the inflation \(\lambda_{\theta}\) of the cylindrical shell with the applied dimensionless magnetic loading. (c) Variation of the azimuthal magnetic induction at the shell mid, top, and bottom surfaces with the applied dimensionless magnetic loading. (d) Variation of the principal components of the zeroth-order Piola stress and the Lagrange multiplier with the applied dimensionless magnetic loading for \(T/R=1/30\) and \(n=0.5\).

The deformation of the magnetoelastic cylinder based on Equation (7.17) is shown in Figure 4(b) for different \(T/R\) and \(n\) values. Application of a magnetic field via the conductor causes the cylinder to inflate, and the amount of inflation is higher for larger values of the coupling parameter \(n\). For a given \(n\neq 0\), there is a critical value of applied current (\(i/R\sqrt{\mu_{0}/\mu_{s}}\)) beyond which the cylinder experiences rapid inflation, akin to a limit point instability (Barham et al., 2008; Reddy and Saxena, 2017). For \(n=1.5\), this critical value is \(i/R\sqrt{\mu_{0}/\mu_{s}}\approx 3.64\), while for \(n=0.5\), it is close to 6.36. Notably, when \(n=0\) for \(T/R=1/30\), the cylinder exhibits slower inflation due to a weak magnetoelastic coupling, eventually saturating at \(\lambda_{\theta}\approx 1.5\). This behaviour can alternatively be explained by the requirement that the radial expansion satisfy the imposed condition (7.18) on \(n\), as presented in Figure 4(a). Furthermore, reducing the cylinder's thickness below a certain magnitude has a negligible effect for a given \(n\), as indicated by the overlapping response curves for \(T/R=1/20\) and \(1/30\) at \(n=0.5\).
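The response relation (7.17) and the admissibility bound (7.18) can be explored numerically in the same spirit. The sketch below (Python/NumPy; function and variable names are our own, not from the paper) evaluates the dimensionless current over a range of stretches and recovers the critical values quoted above:

```python
import numpy as np

def n_bound(lam, x):
    # right-hand side of inequality (7.18): n must exceed this value
    num = (1.0 - lam**2) * (4.0 / lam**2 + lam**2 * x**2) - 8.0
    return num / (4.0 / lam**2 - lam**2 * x**2) ** 2

def current(lam_theta, x, n):
    # dimensionless applied current (i/R)sqrt(mu0/mu_s), Eq. (7.17)
    lam = 1.0 / lam_theta          # Eq. (7.6)
    beta = -n                      # alpha = -1 and beta = n * alpha
    return np.sqrt(2.0) * np.pi * np.sqrt((lam**4 - 1.0) / (n_bound(lam, x) + beta))

lam_theta = np.linspace(1.001, 8.0, 8000)
print("sup of bound (7.18):", n_bound(1.0 / lam_theta, 1 / 30).max())  # ~0.02
for n in (0.5, 1.5):
    # the maximum over the sampled stretches is the limit-point current:
    # ~6.36 for n = 0.5 and ~3.64 for n = 1.5, as reported above
    print(f"n = {n}: critical current ~ {current(lam_theta, 1 / 30, n).max():.2f}")
```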
The constitutive relation (7.13a) for the azimuthal component of the magnetic induction at the shell's mid-surface can be expressed as \[\zeta_{0}:=\frac{\mathbb{B}_{0\theta}}{\sqrt{\mu_{0}\mu_{\mathrm{s}}}}=-\pi^{-1}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]\left[\alpha+\beta\lambda^{-2}\right]. \tag{7.19}\] The above indicates that \(\zeta_{0}\) remains positive as the cylinder deforms. Additionally, the azimuthal components of the exterior magnetic induction at the outer and inner boundaries of the cylindrical shell, denoted as \(\mathbb{B}^{\prime}_{\mathrm{t}\theta}\) and \(\mathbb{B}^{\prime}_{\mathrm{b}\theta}\), respectively, are given by \[\zeta_{t}:=\frac{\mathbb{B}^{\prime}_{\mathrm{t}\theta}}{\sqrt{\mu_{0}\mu_{\mathrm{s}}}}=\pi^{-1}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]\left[\lambda+\frac{1}{2}\frac{T}{R}\left[\lambda-\lambda^{3}\right]\right]\left[2\lambda^{-1}+\lambda\frac{T}{R}\right]^{-1}, \tag{7.20a}\] \[\zeta_{b}:=\frac{\mathbb{B}^{\prime}_{\mathrm{b}\theta}}{\sqrt{\mu_{0}\mu_{\mathrm{s}}}}=\pi^{-1}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]\left[\lambda-\frac{1}{2}\frac{T}{R}\left[\lambda-\lambda^{3}\right]\right]\left[2\lambda^{-1}-\lambda\frac{T}{R}\right]^{-1}. \tag{7.20b}\] The above expressions, along with the azimuthal magnetic induction at the mid-surface of the shell, are graphically presented in Figure 4(c) for \(T/R=1/30\). Since \(T/R=1/30\ll 1\) and \(\lambda<1\), \(\zeta_{t}\approx\zeta_{b}\) and the two curves overlap; hence, only \(\zeta_{t}\) is plotted. The plot demonstrates that the magnetic induction at the mid-surface increases monotonically as the applied magnetic field (via the electric current) is increased. Beyond a critical value of the applied current, the magnetic induction \(\zeta_{0}\) increases abruptly, similar to the limit point instability for the mechanical problem. In the surrounding space, the magnetic induction at the shell boundaries exhibits an initial monotonic increase, followed by a gradual decrease, culminating in a sharp decline in magnitude. This decline occurs at relatively higher operational currents for smaller values of \(n\), with slightly higher magnetic induction observed at this juncture for lower \(n\). Additionally, for a specific \(n\), the magnitude of the exterior magnetic induction at the inner surface is marginally greater than that at the outer boundary, although this difference diminishes as the cylinder inflates. The radial, azimuthal, and axial components of the zeroth-order term of the total Piola stress (\(\mathbf{P}_{0}\)) are \[\frac{P_{0\rho\rho}}{\mu_{\mathrm{s}}}=\left[\lambda-\frac{p_{0}}{\mu_{\mathrm{s}}}\lambda^{-1}\right],\quad\frac{P_{0\theta\theta}}{\mu_{\mathrm{s}}}=\left[\lambda^{-1}+\frac{\lambda^{-1}\beta}{2\pi^{2}}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]^{2}-\frac{p_{0}}{\mu_{\mathrm{s}}}\lambda\right],\quad\text{and}\quad\frac{P_{0zz}}{\mu_{\mathrm{s}}}=\left[1-\frac{p_{0}}{\mu_{\mathrm{s}}}\right], \tag{7.21}\] respectively, where \[\frac{p_{0}}{\mu_{\mathrm{s}}}=\left[\lambda^{2}-\frac{P_{\mathrm{M}_{\mathrm{t}}\rho\rho}+P_{\mathrm{M}_{\mathrm{b}}\rho\rho}}{2\mu_{\mathrm{s}}}\lambda\right].
\tag{7.22}\] The radial component of the Maxwell stress tensor in the surrounding space can be determined at the shell boundaries using Equation (2.23) and is given by \[\frac{P_{\mathrm{M}_{\mathrm{t}}\rho\rho}}{\mu_{\mathrm{s}}}=-\frac{\lambda^{-1}\pi^{-2}}{2}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]^{2}\left[2\lambda^{-1}+\lambda\frac{T}{R}\right]^{-2}\quad\text{and}\quad\frac{P_{\mathrm{M}_{\mathrm{b}}\rho\rho}}{\mu_{\mathrm{s}}}=-\frac{\lambda^{-1}\pi^{-2}}{2}\left[\frac{i}{R}\sqrt{\frac{\mu_{0}}{\mu_{\mathrm{s}}}}\right]^{2}\left[2\lambda^{-1}-\lambda\frac{T}{R}\right]^{-2}, \tag{7.23}\] at the outer and inner surfaces of the cylinder, respectively. Figure 4(d) presents the principal components of \(\mathbf{P}_{0}\) for \(n=0.5\) and \(T/R=1/30\). The axial stress shows a monotonic increase with the applied magnetic loading until the limit point at \(i/R\sqrt{\mu_{0}/\mu_{s}}=6.36\). The radial component of the total Piola stress remains negative, suggesting compression. It drops to a minimum value of \(-0.26\) before increasing slightly until the limit point. The azimuthal component displays a non-monotonic behaviour: it initially increases and remains positive for low magnetic loads, but then starts to decrease and becomes negative for \(i/R\sqrt{\mu_{0}/\mu_{s}}>5.2\). It is notable that the radial stress is non-negligible compared to the other components, indicating the presence of inaccuracies when employing plane-stress assumptions in modelling soft magnetoelastic shells. Also, the zeroth-order component of the Lagrange multiplier is plotted, and it decreases as the magnetoelastic cylindrical shell inflates. The presence of compressive stresses in the azimuthal direction indicates a possibility of wrinkling instability, the analysis of which will form part of a future study.

## 8 Concluding Remarks

In this work, the governing equations for the large deformation of Kirchhoff-Love magnetoelastic thin shells have been rigorously derived. The free space, in which the magnetostatic energy is bounded to finite volumes, is accounted for. The equilibrium equations have been obtained using the derived theory approach. The point of departure was a variational form for a three-dimensional continuum magnetoelastic body involving mechanical deformation, magnetic field, and a Lagrange multiplier in the presence of body force, dead-load traction along the bounding curve of the mid-surface, external pressures at the top and bottom surfaces, and an external magnetic field. Treating the shell as a stack of surfaces, the general deformation map in the body has been restated in terms of a point on the deformed mid-surface. This requires an additional term that incorporates the through-thickness stretch and the deformed normal (i.e., the first director). By defining a new set of generalised solution variables and thereby modifying the variational form, the shell equilibrium equations have been obtained. The thickness variable has been separated from the surface parameters, and the field variables expanded along the thickness of the thin shell. Additionally, the governing equations for the corresponding three-dimensional free space are derived. The new formulation relies on a Kirchhoff-Love type kinematic assumption for the magnetic scalar potential, thereby ensuring a consistent derivation of the governing equations. The top and bottom surfaces of the shell are considered in addition to the mid-surface to consistently account for the magnetic field in vacuum.
This leads to a departure from the commonly used plane-stress assumption in thin shell theory. Furthermore, a distinction between the hydrostatic pressures applied on the top and bottom surfaces has been considered. Variations of the through-thickness stretch and the deformed normal introduce richness into the formulation, together with additional complexity through the distribution of the parametric derivative of the mid-surface position vector to the lateral bounding curve. The novel magnetoelastic shell theory and the implications of these factors have been illustrated by analysing the inflation of a cylindrical magnetoelastic shell. Capabilities of the present theory to model large deformation and limit point instabilities have been demonstrated. The possibility of wrinkling instabilities due to the presence of compressive in-plane stresses in the shell has been detailed. The present analysis provides a new perspective into a strongly-coupled shell system of equations, which is challenging to obtain due to strong kinematic and constitutive nonlinearities. The geometrically exact formulation ensures a high level of accuracy. The focus here is on formulating and demonstrating the capabilities of the derived equations. The derivation from a variational formulation ensures that the theory is amenable to numerical implementation via the finite element method. Details of the numerical implementation will be presented in a future contribution.

## Acknowledgements

This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/V030833/1 and EP/R008531/1, and a Royal Society grant IES/R1/201122.

## Appendix A Geometry of a Kirchhoff-Love thin shell

### The natural basis at the mid-surface

The covariant basis vectors for the mid-surface in the reference and deformed configurations, respectively, can be expressed as \[\mathbf{A}_{\alpha}=\frac{\partial\mathbf{R}}{\partial\theta^{\alpha}},\quad\text{and}\quad\mathbf{a}_{\alpha}=\frac{\partial\mathbf{r}}{\partial\theta^{\alpha}}.\] (A.1) Thus, the unit normal vectors in the two configurations are defined by \[\mathbf{N}=\frac{\mathbf{A}_{1}\times\mathbf{A}_{2}}{A^{1/2}},\quad\text{and}\quad\mathbf{n}=\frac{\mathbf{a}_{1}\times\mathbf{a}_{2}}{a^{1/2}},\] (A.2) where \(A\) and \(a\) are \[A=\left\|\mathbf{A}_{1}\times\mathbf{A}_{2}\right\|^{2},\quad\text{and}\quad a=\left\|\mathbf{a}_{1}\times\mathbf{a}_{2}\right\|^{2}.\] (A.3) Further, it can be shown that \[A=\det\left[A_{\alpha\beta}\right],\quad\text{and}\quad a=\det[a_{\alpha\beta}].\] (A.4) The covariant components of the metric tensor for the mid-surface points \(\mathbf{R}\) and \(\mathbf{r}\) are respectively given by \[A_{\alpha\beta}=\mathbf{A}_{\alpha}\cdot\mathbf{A}_{\beta},\quad\text{and}\quad a_{\alpha\beta}=\mathbf{a}_{\alpha}\cdot\mathbf{a}_{\beta}.\] (A.5) Also, the contravariant metric tensor components for the mid-surface are \[A^{\alpha\gamma}A_{\gamma\beta}=\delta^{\alpha}_{\beta},\quad\text{and}\quad a^{\alpha\gamma}a_{\gamma\beta}=\delta^{\alpha}_{\beta},\] (A.6) where \(\delta^{\alpha}_{\beta}\) denotes the Kronecker delta.
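To make the definitions (A.1)-(A.6) concrete, here is a small numerical sketch (Python/NumPy; the example surface, a cylinder, and all names are our own illustrative choices) that builds the covariant basis, metric, and unit normal, and checks \(A=\det[A_{\alpha\beta}]=\|\mathbf{A}_{1}\times\mathbf{A}_{2}\|^{2}\):

```python
import numpy as np

rho = 2.0  # radius of an example cylindrical mid-surface

def R(t1, t2):
    # mid-surface parametrisation R(theta^1, theta^2)
    return np.array([rho * np.cos(t1), rho * np.sin(t1), t2])

def covariant_basis(t1, t2, eps=1e-6):
    # A_alpha = dR/dtheta^alpha (Eq. A.1), via central differences
    A1 = (R(t1 + eps, t2) - R(t1 - eps, t2)) / (2 * eps)
    A2 = (R(t1, t2 + eps) - R(t1, t2 - eps)) / (2 * eps)
    return A1, A2

A1, A2 = covariant_basis(0.7, 0.3)
A_cov = np.array([[A1 @ A1, A1 @ A2], [A2 @ A1, A2 @ A2]])  # A_{ab}, Eq. (A.5)
A_con = np.linalg.inv(A_cov)                                 # A^{ab}, Eq. (A.6)
n_vec = np.cross(A1, A2)
N = n_vec / np.linalg.norm(n_vec)                            # unit normal, Eq. (A.2)

# A = det[A_{ab}] = ||A_1 x A_2||^2   (Eqs. A.3-A.4)
print(np.isclose(np.linalg.det(A_cov), n_vec @ n_vec))       # True
# contravariant basis A^a = A^{ab} A_b satisfies A^a . A_b = delta (A.7 below)
A_up = A_con @ np.stack([A1, A2])
print(np.allclose(A_up @ np.stack([A1, A2]).T, np.eye(2), atol=1e-6))  # True
```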
Again, \(\mathbf{A}^{\alpha}\) and \(\mathbf{a}^{\alpha}\) denote the contravariant basis vectors for the mid-surface in the two configurations, defined by \[\mathbf{A}^{\alpha}\cdot\mathbf{A}_{\beta}=\delta^{\alpha}_{\beta},\quad\text{and}\quad \mathbf{a}^{\alpha}\cdot\mathbf{a}_{\beta}=\delta^{\alpha}_{\beta}.\] (A.7) ### The unit alternator and permutation symbol In general, for a surface tensor \(\mathbf{Q}=Q_{\alpha\beta}\mathbf{A}^{\alpha}\otimes\mathbf{A}^{\beta}\), the surface inverse \(\mathbf{Q}^{-1}\) defined from \[\mathbf{Q}^{-1}\mathbf{Q}=\mathbf{I},\] (A.8) with \(\mathbf{I}=\mathbf{A}^{\beta}\otimes\mathbf{A}_{\beta}=\mathbf{A}_{\beta}\otimes\mathbf{A}^{\beta}\) (\(\mathbf{I}\) denotes the projection onto the tangent plane of \(S_{\text{m}}\)) has the contravariant components as \[Q^{\alpha\beta}_{\text{inv}}=\frac{1}{Q}e^{\alpha\gamma}Q_{\delta\gamma}e^{ \beta\delta},\] (A.9) where \(Q=\det\left[Q_{\alpha\beta}\right]\), and the so-called unit alternator given as \[[e^{\alpha\gamma}]=\begin{bmatrix}0&1\\ -1&0\end{bmatrix}.\] (A.10) Further, the permutation tensor is defined by \[\mathbf{E}=E^{\alpha\beta}\ \mathbf{A}_{\alpha}\otimes\mathbf{A}_{\beta}=\frac{1}{A^{1/2}} e^{\alpha\beta}\ \mathbf{A}_{\alpha}\otimes\mathbf{A}_{\beta},\quad\text{and}\quad\mathbf{\varepsilon}= \varepsilon^{\alpha\beta}\ \mathbf{a}_{\alpha}\otimes\mathbf{a}_{\beta}=\frac{1}{a^{1/2}}e^{ \alpha\beta}\ \mathbf{a}_{\alpha}\otimes\mathbf{a}_{\beta},\] (A.11) in the two configurations. In particular, Equation (A.9) yields \[A^{\alpha\beta}=\frac{1}{A}e^{\alpha\gamma}A_{\gamma\delta}e^{\beta\delta} \quad\text{and}\quad a^{\alpha\beta}=\frac{1}{a}e^{\alpha\gamma}a_{\gamma \delta}e^{\beta\delta}.\] (A.12) Using the relation, \(e^{\gamma\alpha}e_{\gamma\beta}=\delta^{\alpha}_{\beta}\), \[A^{\alpha\beta}e_{\beta\gamma}A=e^{\alpha\beta}A_{\beta\gamma}\quad\text{and }\quad a^{\alpha\beta}e_{\beta\gamma}a=e^{\alpha\beta}a_{\beta\gamma},\] (A.13) which can be further used to rewrite the permutation tensors as \[\mathbf{E}=E_{\alpha\beta}\ \mathbf{A}^{\alpha}\otimes\mathbf{A}^{\beta}=A^{1/2}e_{\alpha \beta}\ \mathbf{A}^{\alpha}\otimes\mathbf{A}^{\beta},\quad\text{and}\quad\mathbf{\varepsilon}= \varepsilon_{\alpha\beta}\ \mathbf{a}^{\alpha}\otimes\mathbf{a}^{\beta}=a^{1/2}e_{\alpha\beta}\ \mathbf{a}^{\alpha}\otimes\mathbf{a}^{\beta}.\] (A.14) Again, multiplying Equations (A.12)\({}_{1}\) and (A.12)\({}_{2}\) by \(A_{\alpha\beta}\) and \(a_{\alpha\beta}\), respectively, one obtains \[A=\frac{1}{2}e^{\alpha\gamma}e^{\beta\delta}A_{\alpha\beta}A_{\gamma\delta}, \quad\text{and}\quad a=\frac{1}{2}e^{\alpha\gamma}e^{\beta\delta}a_{\alpha \beta}a_{\gamma\delta}.\] (A.15) From the above, one can write \[A_{\zeta}=e^{\alpha\gamma}e^{\beta\delta}A_{\alpha\beta,\zeta}A_{\gamma\delta},\quad\text{and}\quad a_{\zeta}=e^{\alpha\gamma}e^{\beta\delta}a_{\alpha \beta,\zeta}a_{\gamma\delta},\] (A.16) which can be further rewritten as \[A_{\zeta}=AA^{\alpha\beta}A_{\alpha\beta,\zeta},\quad\text{and}\quad a_{ \zeta}=aa^{\alpha\beta}a_{\alpha\beta,\zeta},\] (A.17) by using Equations (A.12)\({}_{1}\) and (A.12)\({}_{2}\). 
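A quick numerical check of the alternator identities (Python/NumPy; the sample metric values are hypothetical):

```python
import numpy as np

e = np.array([[0.0, 1.0], [-1.0, 0.0]])       # unit alternator, Eq. (A.10)
A_cov = np.array([[4.0, 0.3], [0.3, 1.0]])    # a sample covariant metric A_{ab}

# A = (1/2) e^{ag} e^{bd} A_{ab} A_{gd}, Eq. (A.15)
A_det = 0.5 * np.einsum('ag,bd,ab,gd->', e, e, A_cov, A_cov)
print(np.isclose(A_det, np.linalg.det(A_cov)))               # True

# A^{ab} = (1/A) e^{ag} A_{gd} e^{bd}, Eq. (A.12)
A_con = np.einsum('ag,gd,bd->ab', e, A_cov, e) / A_det
print(np.allclose(A_con, np.linalg.inv(A_cov)))              # True
```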
Now, \[A_{\alpha\beta,\zeta}=\mathbf{A}_{\alpha,\zeta}\cdot\mathbf{A}_{\beta}+\mathbf{A}_{\alpha}\cdot\mathbf{A}_{\beta,\zeta},\quad\text{and}\quad a_{\alpha\beta,\zeta}=\mathbf{a}_{\alpha,\zeta}\cdot\mathbf{a}_{\beta}+\mathbf{a}_{\alpha}\cdot\mathbf{a}_{\beta,\zeta}.\] (A.18) Therefore, \[A_{\zeta}=2A\varGamma^{\alpha}_{\alpha\zeta},\quad\text{and}\quad a_{\zeta}=2a\gamma^{\alpha}_{\alpha\zeta},\] (A.19) with the Christoffel symbols of the second kind in the two configurations defined by \[\varGamma^{\alpha}_{\zeta\gamma}=\mathbf{A}^{\alpha}\cdot\mathbf{A}_{\zeta,\gamma},\quad\text{and}\quad\gamma^{\alpha}_{\zeta\gamma}=\mathbf{a}^{\alpha}\cdot\mathbf{a}_{\zeta,\gamma}.\] (A.20)

### The natural basis at a shell-point

A point \(\mathbf{x}_{\text{B}}\in\mathcal{B}\) can be written as \[\mathbf{x}_{\text{B}}=\mathbf{r}+\eta\mathbf{d},\] (A.21) where \(\mathbf{d}=\lambda\mathbf{n}\) and \(\lambda=\frac{t}{T}\). The covariant basis vectors at a point \(\mathbf{X}_{\rm B}\) in the shell are given by \[\mathbf{G}_{\alpha}=\frac{\partial\mathbf{X}_{\rm B}}{\partial\theta^{\alpha}}=\frac{\partial\mathbf{R}}{\partial\theta^{\alpha}}+\eta\frac{\partial\mathbf{N}}{\partial\mathbf{R}}\frac{\partial\mathbf{R}}{\partial\theta^{\alpha}}=\mathbf{M}\mathbf{A}_{\alpha},\] (A.22) where \[\mathbf{M}=\mathbf{I}-\eta\mathbf{K},\] (A.23) with \[\mathbf{K}=-\frac{\partial\mathbf{N}}{\partial\mathbf{R}}=-\mathbf{N}_{,\beta}\otimes\mathbf{A}^{\beta}.\] (A.24) Again, the covariant basis vectors at a point \(\mathbf{x}_{\rm B}\) in the shell are \[\mathbf{g}_{\alpha}=\frac{\partial\mathbf{x}_{\rm B}}{\partial\theta^{\alpha}}=\frac{\partial\mathbf{r}}{\partial\theta^{\alpha}}+\eta\mathbf{n}\frac{\partial\lambda}{\partial\theta^{\alpha}}+\eta\lambda\frac{\partial\mathbf{n}}{\partial\mathbf{r}}\frac{\partial\mathbf{r}}{\partial\theta^{\alpha}}=\mathbf{\mu}\mathbf{a}_{\alpha}.\] (A.25) Here, the long-wave assumption has been considered (Kiendl et al., 2015; Liu et al., 2023). That is, \[\lambda_{,\alpha}\approx 0.\] (A.26) While this assumption is strong, it is reasonable since the thickness of the shell is typically very small, resulting in negligible out-of-plane shearing and localised necking. Therefore, \[\mathbf{\mu}=\mathbf{i}-\eta\lambda\mathbf{\kappa},\] (A.27) where \(\mathbf{i}=\mathbf{a}^{\beta}\otimes\mathbf{a}_{\beta}=\mathbf{a}_{\beta}\otimes\mathbf{a}^{\beta}\) denotes the projection onto the tangent plane of \(s_{\rm m}\), the deformed counterpart of \(S_{\rm m}\).
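The curvature tensor \(\mathbf{K}\) in (A.24) can also be checked numerically. The sketch below (Python/NumPy; the cylindrical example surface and all names are ours) recovers the covariant curvature components \(B_{\alpha\beta}=\mathbf{N}\cdot\mathbf{A}_{\alpha,\beta}\) (defined in (A.31) below) by finite differences, and from them the mean and Gaussian curvatures, which for a cylinder of radius \(\rho\) should equal \(-1/(2\rho)\) and \(0\) (compare Equations (7.7)-(7.8)):

```python
import numpy as np

rho, eps = 2.0, 1e-5

def R(t1, t2):
    return np.array([rho * np.cos(t1), rho * np.sin(t1), t2])

def d(f, t1, t2, i):
    # central-difference derivative of f with respect to theta^i
    h = (eps, 0.0) if i == 0 else (0.0, eps)
    return (f(t1 + h[0], t2 + h[1]) - f(t1 - h[0], t2 - h[1])) / (2 * eps)

t1, t2 = 0.7, 0.3
A = [d(R, t1, t2, 0), d(R, t1, t2, 1)]                 # covariant basis A_a
N = np.cross(A[0], A[1]); N /= np.linalg.norm(N)       # unit normal
A_cov = np.array([[a @ b for b in A] for a in A])
A_con = np.linalg.inv(A_cov)

# B_{ab} = N . A_{a,b} via nested central differences (second derivatives)
B_cov = np.array([[N @ d(lambda u, v: d(R, u, v, b), t1, t2, a)
                   for b in (0, 1)] for a in (0, 1)])
B_mix = B_cov @ A_con                                   # B_a^b, cf. (A.32)
H = 0.5 * np.trace(B_mix)                               # mean curvature
K_gauss = np.linalg.det(B_mix)                          # Gaussian curvature
print(round(H, 4), round(K_gauss, 6))   # -0.25 and 0.0 for rho = 2
```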
Also, \[\mathbf{\kappa}=-\frac{\partial\mathbf{n}}{\partial\mathbf{r}}=-\mathbf{n}_{,\beta}\otimes\mathbf{a}^{\beta}.\] (A.28) Again, \(\mathbf{N}_{,\beta}\) and \(\mathbf{n}_{,\beta}\) are given by \[\mathbf{N}_{,\beta}=-B_{\beta}{}^{\gamma}\ \mathbf{A}_{\gamma},\quad\text{and}\quad\mathbf{n}_{,\beta}=-b_{\beta}{}^{\gamma}\ \mathbf{a}_{\gamma},\] (A.29) with the surface curvature tensors in the two configurations defined by \[\mathbf{B}=B_{\beta\delta}\ \mathbf{A}^{\beta}\otimes\mathbf{A}^{\delta}\quad\text{and}\quad\mathbf{b}=b_{\beta\delta}\ \mathbf{a}^{\beta}\otimes\mathbf{a}^{\delta},\] (A.30) where \[B_{\beta\delta}=\mathbf{N}\cdot\mathbf{A}_{\beta,\delta},\quad\text{and}\quad b_{\beta\delta}=\mathbf{n}\cdot\mathbf{a}_{\beta,\delta},\] (A.31) and further, \[B_{\beta}{}^{\gamma}=B_{\beta\delta}A^{\delta\gamma},\quad\text{and}\quad b_{\beta}{}^{\gamma}=b_{\beta\delta}a^{\delta\gamma}.\] (A.32) Therefore, \[\mathbf{K}=B_{\beta}{}^{\gamma}\ \mathbf{A}_{\gamma}\otimes\mathbf{A}^{\beta},\quad\text{and}\quad\mathbf{\kappa}=b_{\beta}{}^{\gamma}\ \mathbf{a}_{\gamma}\otimes\mathbf{a}^{\beta}.\] (A.33) For a point in the shell, the components of the covariant and contravariant metric tensors in the reference configuration are \[G_{\alpha\beta}=\mathbf{G}_{\alpha}\cdot\mathbf{G}_{\beta}\quad\text{and}\quad G^{\alpha\beta}=\mathbf{G}^{\alpha}\cdot\mathbf{G}^{\beta},\] (A.34) with their deformed counterparts as \[g_{\alpha\beta}=\mathbf{g}_{\alpha}\cdot\mathbf{g}_{\beta}\quad\text{and}\quad g^{\alpha\beta}=\mathbf{g}^{\alpha}\cdot\mathbf{g}^{\beta},\] (A.35) where \(\mathbf{G}^{\alpha}\) and \(\mathbf{g}^{\alpha}\) denote the contravariant basis vectors in the two configurations defined by \[\mathbf{G}^{\alpha}\cdot\mathbf{G}_{\beta}=\delta_{\beta}^{\alpha},\quad\text{and}\quad\mathbf{g}^{\alpha}\cdot\mathbf{g}_{\beta}=\delta_{\beta}^{\alpha}.\] (A.36) It can be shown that \[\mathbf{G}^{\alpha}=\mathbf{M}^{-\rm T}\mathbf{A}^{\alpha},\] (A.37) and \(\mathbf{M}^{-1}\) can be expanded as \[\mathbf{M}^{-1}=\left[M_{0_{\beta}}^{-1\gamma}+\eta M_{1_{\beta}}^{-1\gamma}+\eta^{2}M_{2_{\beta}}^{-1\gamma}\right]\mathbf{A}_{\gamma}\otimes\mathbf{A}^{\beta}+\mathbf{\mathcal{O}}(\eta^{3}),\] (A.38) with \[M_{0_{\beta}}^{-1\gamma}=\delta_{\beta}^{\gamma},\quad M_{1_{\beta}}^{-1\gamma}=B_{\beta}{}^{\gamma},\quad\text{and}\quad M_{2_{\beta}}^{-1\gamma}=B_{\delta}{}^{\gamma}B_{\beta}{}^{\delta}.\] (A.39) Similarly, \[\mathbf{g}^{\alpha}=\mathbf{\mu}^{-\mathrm{T}}\mathbf{a}^{\alpha},\] (A.40) and \(\mathbf{\mu}^{-1}\) can be expanded as \[\mathbf{\mu}^{-1}=\left[\mu_{0_{\beta}}^{-1\gamma}+\eta\mu_{1_{\beta}}^{-1\gamma}+\eta^{2}\mu_{2_{\beta}}^{-1\gamma}\right]\mathbf{a}_{\gamma}\otimes\mathbf{a}^{\beta}+\mathbf{\mathcal{O}}(\eta^{3}),\] (A.41) with \[\mu_{0_{\beta}}^{-1\gamma}=\delta_{\beta}^{\gamma},\quad\mu_{1_{\beta}}^{-1\gamma}=\lambda b_{\beta}^{\ \gamma},\quad\text{and}\quad\mu_{2_{\beta}}^{-1\gamma}=\lambda^{2}b_{\delta}^{\ \gamma}\ b_{\beta}^{\ \delta}.\] (A.42)

### The volume and surface elements

The volume element in the reference configuration can be expressed as \[dV=\left[\mathbf{G}_{1}\times\mathbf{G}_{2}\right]\cdot\mathbf{N}\ d\theta^{1}d\theta^{2}d\eta=\left[\mathbf{A}_{1}\times\mathbf{A}_{2}\right]\cdot\mathbf{N}Md\theta^{1}d\theta^{2}d\eta=dSd\eta,\] (A.43) where the undeformed elemental area \(dS\) is given by \[dS=MdS_{\mathrm{m}},\] (A.44) with \(dS_{\mathrm{m}}\) as the area element on \(S_{\mathrm{m}}\) written as \[dS_{\mathrm{m}}=A^{1/2}dP,\] (A.45) and the
area element for the convected coordinates is \(dP=d\theta^{1}d\theta^{2}\). Also, \[M=\det\mathbf{M}=1-2\eta H+\eta^{2}K,\] (A.46) where \(H\) and \(K\) are the mean and Gaussian curvatures of the undeformed mid-surface, respectively, and are expressed as \[H=\frac{1}{2}\mathrm{tr}\mathbf{K}=\frac{1}{2}\frac{\partial\mathbf{N}}{\partial\mathbf{ R}}:\mathbf{I}=\frac{1}{2}B_{\alpha\beta}A^{\alpha\beta}=\frac{1}{2}B_{\alpha}^{ \alpha},\] (A.47) and \[K=\det\mathbf{K}=\det\left[B_{\alpha}^{\ \beta}\right]=\det\left[B_{\alpha\gamma}A^{ \gamma\beta}\right]=\frac{B}{A},\] (A.48) with \(B=\det\left[B_{\alpha\beta}\right]\). Further, an elemental area in the deformed configuration is given by \[ds=\mu\hat{a}^{1/2}dS_{\mathrm{m}},\] (A.49) with the surface stretch \(\hat{a}=\frac{a}{A}\), and \[\mu=1-2\eta\lambda h+\eta^{2}\lambda^{2}\kappa,\] (A.50) where the mean and Gaussian curvatures of the deformed mid-surface are \[h=\frac{1}{2}b_{\alpha}^{\alpha},\quad\text{and}\quad\kappa=\frac{b}{a},\] (A.51) with \(b=\det[b_{\alpha\beta}]\). Therefore, \[dS_{\mathrm{t}}=M\Big{|}_{\eta=T/2}dS_{\mathrm{m}},\quad\text{and}\quad dS_{ \mathrm{b}}=M\Big{|}_{\eta=-T/2}dS_{\mathrm{m}}.\] (A.52) Also, \[ds_{\mathrm{t}}=\mu\Big{|}_{\eta=T/2}\hat{a}^{1/2}dS_{\mathrm{m}},\quad\text{ and}\quad ds_{\mathrm{b}}=\mu\Big{|}_{\eta=-T/2}\hat{a}^{1/2}dS_{\mathrm{m}}.\] (A.53) If the bounding curve \(C_{\mathrm{m}}\) of the mid-surface \(S_{\mathrm{m}}\) is characterised by the arc-length parameter \(l\), then the infinitesimal length \(dl\) between two points \(\mathbf{R}(\theta^{1},\theta^{2})\) and \(\mathbf{R}(\theta^{1}+d\theta^{1},\theta^{2}+d\theta^{2})\) is given by \[dl=\!\Big{\|}\mathbf{R}(\theta^{1}+d\theta^{1},\theta^{2}+d\theta^{2})-\mathbf{R}( \theta^{1},\theta^{2})\Big{\|}=\!\big{\|}\mathbf{A}_{\gamma}d\theta^{\gamma}\! \big{\|}=\sqrt{\mathbf{A}_{\alpha}d\theta^{\alpha}\cdot\mathbf{A}_{\beta}d\theta^{ \beta}}=\sqrt{A_{\alpha\beta}d\theta^{\alpha}d\theta^{\beta}}.\] (A.54) The tangent vector at a point \(\mathbf{R}\) on \(C_{\mathrm{m}}\) is defined as \[\mathbf{\tau}=\frac{d\mathbf{R}}{dl}=\frac{\partial\mathbf{R}}{\partial\theta^{\beta}} \frac{d\theta^{\beta}}{dl}=\mathbf{A}_{\beta}\frac{d\theta^{\beta}}{dl},\] (A.55) and using Equation (A.54), \[\mathbf{\tau}\cdot\mathbf{\tau}=\mathbf{A}_{\alpha}\cdot\mathbf{A}_{\beta}\frac{d\theta^{\alpha}} {dl}\frac{d\theta^{\beta}}{dl}=A_{\alpha\beta}\frac{d\theta^{\alpha}}{dl}\frac{d \theta^{\beta}}{dl}=1,\] (A.56) implying that \(\mathbf{\tau}\) is a unit tangent vector. 
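The expansion \(M=1-2\eta H+\eta^{2}K\) in (A.46) is just the characteristic polynomial of the \(2\times 2\) mixed curvature tensor. A short numerical check (Python/NumPy; random sample data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K_mix = rng.normal(size=(2, 2))   # a sample mixed curvature tensor B_a^b
H = 0.5 * np.trace(K_mix)         # mean curvature, Eq. (A.47)
K = np.linalg.det(K_mix)          # Gaussian curvature, Eq. (A.48)
eta = 0.1

# det(I - eta K_mix) = 1 - 2*eta*H + eta^2*K, i.e. Eq. (A.46)
lhs = np.linalg.det(np.eye(2) - eta * K_mix)
print(np.isclose(lhs, 1 - 2 * eta * H + eta**2 * K))   # True
```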
Further, define \[\mathbf{\nu}=\mathbf{E}\mathbf{\tau}=E^{\alpha\beta}A_{\beta\gamma}\frac{d\theta^{\gamma}}{dl}\mathbf{A}_{\alpha}=E_{\eta\delta}\frac{d\theta^{\delta}}{dl}\mathbf{A}^{\eta},\] (A.57) such that \[\mathbf{\nu}\cdot\mathbf{\tau}=E_{\alpha\beta}\frac{d\theta^{\alpha}}{dl}\frac{d\theta^{\beta}}{dl}=\frac{1}{2}\left[E_{\alpha\beta}+E_{\beta\alpha}\right]\frac{d\theta^{\alpha}}{dl}\frac{d\theta^{\beta}}{dl}=0.\] (A.58) Again, \[\mathbf{\nu}\cdot\mathbf{\nu}=E^{\alpha\beta}A_{\beta\gamma}\frac{d\theta^{\gamma}}{dl}\mathbf{A}_{\alpha}\cdot E_{\eta\delta}\frac{d\theta^{\delta}}{dl}\mathbf{A}^{\eta}=E^{\alpha\beta}E_{\eta\delta}\delta_{\alpha}^{\eta}A_{\beta\gamma}\frac{d\theta^{\gamma}}{dl}\frac{d\theta^{\delta}}{dl},\] (A.59) and following the relation \(E^{\alpha\beta}E_{\eta\delta}\delta_{\alpha}^{\eta}=\left[\delta_{\eta}^{\alpha}\delta_{\delta}^{\beta}-\delta_{\delta}^{\alpha}\delta_{\eta}^{\beta}\right]\delta_{\alpha}^{\eta}=\delta_{\delta}^{\beta}\), \[\mathbf{\nu}\cdot\mathbf{\nu}=\delta_{\delta}^{\beta}A_{\beta\gamma}\frac{d\theta^{\gamma}}{dl}\frac{d\theta^{\delta}}{dl}=A_{\delta\gamma}\frac{d\theta^{\gamma}}{dl}\frac{d\theta^{\delta}}{dl}=\mathbf{\tau}\cdot\mathbf{\tau}=1,\] (A.60) implying that \(\mathbf{\nu}\) is the in-plane unit normal to \(\mathbf{\tau}\) on \(\mathcal{C}_{\rm m}\), and \[\mathbf{\nu}=\mathbf{\tau}\times\mathbf{N}.\] (A.61) An elemental area, \(dS_{\ell}\), at a point \(\mathbf{X}_{\rm B}\) on the lateral surface is given by \[dS_{\ell}=\left\|\frac{\partial\mathbf{X}_{\rm B}}{\partial l}\times\frac{\partial\mathbf{X}_{\rm B}}{\partial\eta}\right\|dl\,d\eta.\] (A.62)
## Appendix B Application of Green's Theorem at the mid-surface of the shell

For scalar components \(T^{\alpha}\), consider the following integral: \[\int\limits_{P}\left[A^{1/2}T^{\alpha}\right]_{,\alpha}dP,\] (B.1) which can be rewritten by applying Green's theorem as \[\int\limits_{P}\left[A^{1/2}T^{\alpha}\right]_{,\alpha}dP=\int\limits_{P}\left[\left[A^{1/2}T^{1}\right]_{,1}+\left[A^{1/2}T^{2}\right]_{,2}\right]dP=\int\limits_{\mathcal{C}_{\mathrm{p}}}A^{1/2}e_{\alpha\beta}T^{\alpha}d\theta^{\beta},\] (B.2) where \(\mathcal{C}_{\mathrm{p}}\) is the boundary of the parametric domain \(P\), and the above boundary integral can be simplified as \[\int\limits_{\mathcal{C}_{\mathrm{p}}}A^{1/2}e_{\alpha\beta}T^{\alpha}d\theta^{\beta}=\int\limits_{\mathcal{C}_{\mathrm{p}}}E_{\alpha\beta}T^{\alpha}d\theta^{\beta}=\int\limits_{\mathcal{C}_{\mathrm{m}}}\left[E_{\alpha\beta}\frac{d\theta^{\beta}}{dl}\right]T^{\alpha}dl,\] (B.3) which on using Equation (A.57) can be further written as \[\int\limits_{\mathcal{C}_{\mathrm{p}}}A^{1/2}e_{\alpha\beta}T^{\alpha}d\theta^{\beta}=\int\limits_{\mathcal{C}_{\mathrm{m}}}T^{\alpha}\nu_{\alpha}dl.\] (B.4) This establishes a relation between the integral over the parametric domain and the line integral along the boundary of the curved mid-surface.

## Appendix C Variation of some relevant quantities

Here, the first variations of some key quantities, essential for the calculations in Sections 5 and 6, are listed. For example, \[\delta\mathbf{F}=\delta\frac{\partial\mathbf{\chi}}{\partial\mathbf{X}}=\frac{\partial\delta\mathbf{\chi}}{\partial\mathbf{X}},\] (C.1) and following \(\mathbf{F}\mathbf{F}^{-1}=\mathbb{1}\), one obtains \[\delta\mathbf{F}^{-1}=-\mathbf{F}^{-1}\delta\mathbf{F}\mathbf{F}^{-1}.\] (C.2) Also, \[\delta J=J\mathbf{F}^{-T}:\delta\mathbf{F}.\] (C.3) Again, \[\delta\mathbf{\chi}_{\mathrm{B}}=\delta\mathbf{r}+\eta\delta\mathbf{d}=\delta\mathbf{r}+\eta\delta\lambda\mathbf{n}+\eta\lambda\delta\mathbf{n},\] (C.4) where \(\delta\mathbf{n}\) can be obtained by using the relations \(\mathbf{n}\cdot\mathbf{n}=1\) and \(\mathbf{a}_{\alpha}\cdot\mathbf{n}=0\) as \[\delta\mathbf{n}=-\left[\mathbf{a}^{\alpha}\otimes\mathbf{n}\right]\delta\mathbf{a}_{\alpha}=-\mathbf{a}^{\alpha}\left[\mathbf{n}\cdot\delta\mathbf{a}_{\alpha}\right],\] (C.5) with \[\delta\mathbf{a}_{\alpha}=\delta\frac{\partial\mathbf{r}}{\partial\theta^{\alpha}}=\left[\delta\mathbf{r}\right]_{,\alpha},\] (C.6) and moreover, from Equation (3.17) it follows that \[\delta\lambda=-\frac{\lambda}{2}a^{-1}\delta a,\] (C.7) where, from Equation (A.15), \[\delta a=aa^{\alpha\beta}\delta a_{\alpha\beta},\] (C.8) which can be rewritten by using \(\delta a_{\alpha\beta}=\delta\mathbf{a}_{\alpha}\cdot\mathbf{a}_{\beta}+\mathbf{a}_{\alpha}\cdot\delta\mathbf{a}_{\beta}\) as \[\delta a=2a\mathbf{a}^{\alpha}\cdot\delta\mathbf{a}_{\alpha}.\] (C.9) Therefore, the variation in the through-thickness stretch can be rewritten as \[\delta\lambda=-\lambda\mathbf{a}^{\alpha}\cdot\delta\mathbf{a}_{\alpha}.\] (C.10) Apart from the kinematic variables, the variation in the magnetic field vector is given by \[\delta\mathbf{H}=-\delta\frac{\partial\Phi}{\partial\mathbf{X}}=-\frac{\partial\delta\Phi}{\partial\mathbf{X}},\] (C.11) and further, in \(\mathcal{B}_{0}\), \[\delta\Phi=\delta\Phi_{0}+\eta\delta\Phi_{1}.\] (C.12) Also, neglecting the higher-order terms, \[\delta p=\delta p_{0}+\eta\delta p_{1}.\] (C.13)
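The variational identities above lend themselves to simple finite-difference checks. A minimal sketch (Python/NumPy; random sample data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # a sample deformation gradient
dF = 1e-6 * rng.normal(size=(3, 3))             # a small variation delta F

# delta J = J F^{-T} : delta F, Eq. (C.3)
J = np.linalg.det(F)
print(np.isclose(J * np.sum(np.linalg.inv(F).T * dF),
                 np.linalg.det(F + dF) - J, rtol=1e-3))        # True

# delta F^{-1} = -F^{-1} (delta F) F^{-1}, Eq. (C.2)
Finv = np.linalg.inv(F)
print(np.allclose(-Finv @ dF @ Finv,
                  np.linalg.inv(F + dF) - Finv, rtol=1e-3))    # True
```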
2305.05858
Vārta: A Large-Scale Headline-Generation Dataset for Indic Languages
We present Vārta, a large-scale multilingual dataset for headline generation in Indic languages. This dataset includes 41.8 million news articles in 14 different Indic languages (and English), which come from a variety of high-quality sources. To the best of our knowledge, this is the largest collection of curated articles for Indic languages currently available. We use the data collected in a series of experiments to answer important questions related to Indic NLP and multilinguality research in general. We show that the dataset is challenging even for state-of-the-art abstractive models and that they perform only slightly better than extractive baselines. Owing to its size, we also show that the dataset can be used to pretrain strong language models that outperform competitive baselines in both NLU and NLG benchmarks.
Rahul Aralikatte, Ziling Cheng, Sumanth Doddapaneni, Jackie Chi Kit Cheung
2023-05-10T03:07:17Z
http://arxiv.org/abs/2305.05858v1
# Varta: A Large-Scale Headline-Generation Dataset for Indic Languages

###### Abstract

We present Varta, a large-scale multilingual dataset for headline generation in Indic languages. This dataset includes 41.8 million news articles in 14 different Indic languages (and English), which come from a variety of high-quality sources. To the best of our knowledge, this is the largest collection of curated articles for Indic languages currently available. We use the data collected in a series of experiments to answer important questions related to Indic NLP and multilinguality research in general. We show that the dataset is challenging even for state-of-the-art abstractive models and that they perform only slightly better than extractive baselines. Owing to its size, we also show that the dataset can be used to pretrain strong language models that outperform competitive baselines in both NLU and NLG benchmarks. The data and models are available at [https://github.com/rahular/varta](https://github.com/rahular/varta).

## 1 Introduction

Headline generation is a special case of abstractive summarization where the goal is to create a brief, often single-sentence 'summary' of a news article. Unlike traditional summaries, which are a few sentences long and concisely convey the most important information from an article, a headline typically highlights one key fact from the article that is considered to be the most significant. In recent years, headlines have also been written to be catchy and increase click-through rates. This trend has made the task harder, as the lexical overlap between headlines and articles is low, while their compression ratio is high. There are several datasets available to train and evaluate headline generation and abstractive summarization models, but they are largely limited to English (Narayan et al., 2018; Nallapati et al., 2016; Rush et al., 2015; Volske et al., 2017). Although there have been efforts to create multilingual datasets such as MLSum (Scialom et al., 2020) and XLSum (Hasan et al., 2021), the representation of Indic languages in these datasets is still minimal.1 As a result, it is challenging to create good headline generation or summarization systems for these languages, which are spoken by approximately one in five people worldwide.

Footnote 1: The only exception to this is the recently released headline-generation dataset proposed by Kumar et al. (2022). We will compare this with our dataset in §2.

**Motivation** Indic headline generation is particularly interesting because this family of languages presents unique challenges. For example, though most Indic languages are closely related, they use different scripts, which makes transfer learning harder. Many of the languages are morphologically rich, making headlines compact and thus harder to predict. With few exceptions, most Indic languages are low-resource and are underrepresented in multilingual models, which makes few-shot learning ineffective. Recent efforts have shown that language family-specific pretraining enables better transfer among languages (Doddapaneni et al., 2022) and that quality data is required in large quantities for good downstream performance on text generation tasks like summarization (Zhang et al., 2020, HugeNews). Thus, there is a strong need for a multilingual, large-scale, high-quality dataset for Indic languages. To fill this gap, we introduce Varta, a dataset consisting of more than 41 million article-headline pairs in 15 languages (14 Indic languages + English). The data is crawled from DailyHunt, a popular news aggregator in India that pulls high-quality articles from multiple trusted and reputed news publishers. It also covers a wide range of topics such as politics, science, entertainment, sports, business, etc., which makes the dataset diverse both in terms of domains and writing styles. Table 1 compares Varta with other multilingual summarization and headline generation datasets.

\begin{table} \begin{tabular}{r c c c c} \hline \hline & MLSum & XLSum & \begin{tabular}{c} Indic- \\ Headline \\ \end{tabular} & Varta \\ \hline \# of langs. & 5 & 44 & 11 & 15 \\ \# of Indic langs. & 0 & 9 & 11 & 14 \\ Headline present & ✗ & ✓ & ✓ & ✓ \\ Summary present & ✓ & ✓ & ✗ & ✗ \\ Size of Indic parts & 0 & 165K & 1.3M & 34.5M \\ Total size & 1.5M & 1M & 1.3M & 41.8M \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of existing multilingual headline generation and summarization datasets with Varta.

**Contributions** The main contribution of this work is the Varta dataset, which we hope will push the research on headline generation and summarization in Indic languages. We run two sets of experiments: (i) The first set of experiments deals only with the task of headline generation under different training and evaluation settings. By analyzing the results of these experiments, we answer several important research questions related to text generation from an Indic perspective. (ii) Owing to the size and multilingual nature of Varta, the second set of experiments treats the dataset as a pretraining corpus. We pretrain a BERT-style encoder-only model and a T5-style encoder-decoder model and show that the models outperform strong baselines on the IndicXTREME (Doddapaneni et al., 2022) and IndicNLG (Kumar et al., 2022) benchmarks, respectively. Finally, we release the data and the pretrained models so that the community can build on top of our research.

## 2 Related Work

**English Data** There have been several proposed datasets for the generation of English headlines and other similar tasks. The DUC 2003 and 2004 datasets, which contain 500 and 624 article-headline pairs from the Associated Press and New York Times, respectively, were among the first to be released. Rush et al. (2015) introduced a dataset based on the Gigaword corpus (Graff et al., 2003), which pairs the first sentence of a news article with its headline for sentence summarization. This dataset has around 4 million pairs in total. Other notable datasets from the literature include XSUM (Narayan et al., 2018) for 1-2 sentence summaries of BBC articles, the Google dataset for sentence compression (Filippova and Altun, 2013), and the TLDR corpus (Volske et al., 2017) for summarizing Reddit posts.

**Multilingual Data** The Columbia Newsblaster (McKeown et al., 2002) was one of the first datasets to include multilingual data for summarization. It includes around 500 news articles in English, Russian, and Japanese, with annotations for articles, texts, titles, images, and captions. In recent years, larger datasets with a greater variety of languages have been curated. MLSum (Scialom et al., 2020) comprises 1.5 million pairs of articles and summaries in French, German, Spanish, Russian, and Turkish. XLSum (Hasan et al., 2021) follows the format of XSUM and crawls the BBC websites to obtain 1 million pairs in 44 languages.2 This dataset also includes the headlines of the articles, along with their summaries. Kumar et al. (2022) propose a headline generation dataset specifically for Indic languages, which includes 1.3 million data points in 11 Indic languages.

Footnote 2: The BBC publishes articles in 44 languages, nine of which are Indic.
**Modeling Approaches** The task of headline generation has been approached in several ways over the years. An early approach was proposed by Banko et al. (2000), who viewed the task as a machine translation problem. Subsequently, as sequence-to-sequence models became more popular (Sutskever et al., 2014), many works have used encoder-decoder architectures to tackle this problem [22]. More recently, pretrained language models such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2022) have been used for headline generation and other related tasks with great success.

## 3 Data

Varta contains 41.8 million high-quality news articles in 14 Indic languages and English. With 34.5 million non-English article-headline pairs, it is the largest headline-generation dataset of its kind. In this section, we detail how the dataset is collected along with the various processing steps involved, before moving on to some interesting statistics and analysis of the data. Table 9 in the Appendix showcases some randomly selected articles from Varta.

### DailyHunt

**Source** DailyHunt is a popular aggregator platform for news in India.3 It curates articles from over 1773 publishers in English and 14 Indic languages: Assamese, Bhojpuri, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Tamil, Telugu, and Urdu. We collect all our data from this platform and restrict ourselves to articles that were published between January 2010 and August 2022.

**Data Collection** Since DailyHunt does not have an external-facing API, we crawl their website using Scrapy,4 a Python-based scraping tool that collects data efficiently without burdening their servers. To maintain the quality of the collected data, we: (i) discard articles that are less than 50 words long, (ii) discard articles where an image or video is prominently placed (since these articles cannot be understood without the aid of the embedded media), and (iii) discard articles which require us to navigate to the publisher website to get the entire text.

Footnote 4: [https://scrapy.org/](https://scrapy.org/)

**Processing** We only retain text from the articles and remove other HTML content like embedded media and hyperlinks to other websites. We also de-duplicate articles by using the headline as the identifier. We do not use the URLs as identifiers since DailyHunt re-aggregates an article if the original article text is changed by the publisher. We find and remove around 7.2 million such 'stale' articles, and keep only the latest versions.5

Footnote 5: These duplicates themselves form an interesting dataset of how news articles are edited over time.

### Statistics

The language-wise sizes of the processed data are shown in Table 2. To better understand the various properties of the dataset, we also compute the following metrics for each language: (i) the ratio of novel n-grams between the headline and the article, (ii) the average article and headline lengths, and (iii) the compression ratio between the headline and the article. Additionally, we also report the number of distinct words ('Vocab Size' in Table 2), and the number of publishers ('Domain Count' in Table 2) for each language.

**Novel n-grams** The ratio of novel n-grams between a headline and its article text is a proxy for the abstractiveness of the headline, with higher values indicating more abstract headlines. Across languages, 29% and 63% of the unigrams and bigrams are unique, respectively. In 7 of the 15 languages, the novel unigram ratio exceeds \(1/3\), indicating a high degree of abstraction and low lexical overlap.
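For concreteness, here is how two of the statistics described in this section can be computed (a minimal sketch; we assume simple whitespace tokenization, which the text above does not specify):

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(zip(*(tokens[i:] for i in range(n))))

def novel_ngram_ratio(article, headline, n):
    """Fraction of headline n-grams absent from the article: a proxy
    for abstractiveness (higher = more abstract)."""
    art, head = ngrams(article.split(), n), ngrams(headline.split(), n)
    if not head:
        return 0.0
    novel = sum(c for g, c in head.items() if g not in art)
    return novel / sum(head.values())

def compression_ratio(article, headline):
    """Headline-to-article token ratio (lower = more concise)."""
    return len(headline.split()) / len(article.split())

article = "the city council approved the new budget on friday after a long debate"
headline = "council passes budget"
print(novel_ngram_ratio(article, headline, 1))   # 0.333...
print(compression_ratio(article, headline))      # 0.23...
```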
**Article and Headline lengths** On average, Varta articles have 17 sentences, and the headlines have just over one sentence. A typical article sentence contains about 18 words, and a headline sentence contains 11 words. This indicates that headlines in our dataset are usually 39% shorter than the average sentence in an article, which gives an idea of the extreme nature of summarization in the dataset.

**Compression Ratio** The ratio between the number of tokens in the headline and the article gives a sense of the conciseness of the headline, with lower values indicating more concise headlines. Overall, Varta has a compression ratio of 5.72%, with English and Assamese headlines being the most and least concise, respectively.

### Extractive Baselines

We evaluate two extractive summarization methods on the data to see how far we can get by only selecting a sentence from the article as the headline: (i) Lead-1: choose the first sentence of the article as the headline. This can be seen as a strong performance lower bound [20]. (ii) Extractive Oracle: choose the sentence from the article which gives the highest rouge score with respect to the gold headline. This provides an upper bound for extractive models [16]. We report the language-wise rouge-l scores in Table 2, with more detailed numbers in Table 8 of the Appendix. A sketch of both baselines is given below.

### Data Splits

From every language, we randomly sample 10,000 articles each for validation and testing. We also ensure that at least 80% of a language's data is available for training. Therefore, if a language has fewer than 100,000 articles, we restrict its validation and test splits to 10% of its size. Table 7 in the Appendix shows the splits for each language. Since the full training set has more than 41 million articles, performing experiments and ablations on it would be compute- and time-intensive. Therefore, we create a small training set by limiting the number of articles from each language to 100K. This small training set, with a size of 1.3M, is used in all our fine-tuning experiments.
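The two extractive baselines described above can be made precise with a short sketch (our own minimal rouge-l implementation, for illustration only; the paper's exact scorer and tokenization may differ):

```python
def lcs_len(a, b):
    # longest common subsequence length via dynamic programming
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

def lead_1(article_sentences):
    # performance lower bound: first sentence as the headline
    return article_sentences[0]

def extractive_oracle(article_sentences, headline):
    # upper bound for extractive methods: best-scoring article sentence
    return max(article_sentences, key=lambda s: rouge_l_f1(s, headline))

sents = ["the council met on friday .", "it passed the new budget unanimously ."]
print(lead_1(sents))
print(extractive_oracle(sents, "budget passed unanimously"))
```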
## 4 Experiments

Rather than trying to squeeze optimal performance from models, the aim of our experiments is to perform a detailed study of how different training settings affect model behavior. In this section, we first present a list of research questions that our experiments try to answer. We then provide a brief overview of the models used in the experiments. Lastly, we describe the different training and evaluation configurations of our experiments before presenting our findings in §5.

### Research Questions

**RQ1: What is the best bridge language?** A bridge or pivot language is an intermediary for transferring knowledge learned by training in one language to others. English is commonly used as the bridge language in the literature. We test if English is indeed the best bridge language for our data, or whether we can use Hindi, since it is typologically similar to the other languages in Varta.

**RQ2: Are different scripts a hindrance?** Varta contains languages with 11 unique scripts. Can we achieve better transfer if we use a single script during fine-tuning? Which script would help the most? Why?

**RQ3: Which setup performs the best?** We can finetune models under different settings with different data configurations. Which of them would give us the best result? What is required for effective transfer? Does training language-family-specific models give us an advantage over massive multilinguality?

**RQ4: Can Varta be used for pretraining?** The objective of this work is to curate a high-quality dataset for headline generation in Indic languages. Upon realizing that the final dataset size is comparable to existing pretraining corpora for Indic languages, we decided to compare the downstream performance of models pretrained on Varta against those pretrained on similar corpora. Therefore, we ask the following questions: Can Varta be used to pretrain both NLU and NLG models? If yes, are they competitive? Can our finetuned models generalize to other similar tasks, like abstractive summarization?

### Models

**mT5** Introduced by Xue et al. (2021), mT5 is the multilingual variant of T5 (Raffel et al., 2022). It is pretrained on the mC4 corpus, comprising 101 languages. The pretraining objective of mT5 is "span corruption", i.e., regenerating spans of text that are masked in the input.

**mBERT-seq2seq** uses a multilingual variant of BERT (Devlin et al., 2019) pretrained on Wikipedia, on a total of 104 languages.6 Since BERT is an encoder-only model trained on masked language modeling, it cannot be used directly for generation. However, Rothe et al. (2020) show that initializing encoder-decoder models with pretrained encoder-only or decoder-only checkpoints, or "warm-starting", yields competitive results in multiple sequence-to-sequence tasks. We warm-start both the encoder and decoder of our model with mBERT weights, and the encoder-decoder attention weights are initialized randomly.7 We denote this model as just mBERT from here on.

Footnote 7: We do not share parameters between the encoder and decoder since early experiments showed that sharing parameters slightly hurt performance. Another reason for choosing such a model configuration as a baseline is that we can compare and contrast how models learn when their encoder-decoder attention weights are (not) initialized from pretrained weights. Aralikatte et al. (2021) show models with randomly initialized encoder-decoder attention learn data characteristics much better than their pre-initialized counterparts.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Lang.} & \multirow{2}{*}{Size} & Domain & Vocab & \multicolumn{4}{c}{Ratio of novel n-grams (\%)} & CR & \multicolumn{2}{c}{\(|\)article\(|\)} & \multicolumn{2}{c}{\(|\)headline\(|\)} & \multirow{2}{*}{Lead-1} & \multirow{2}{*}{Ext Or.} \\ & & Count & Size & 1-gram & 2-gram & 3-gram & 4-gram & (\%) & Tok. & Sent. & Tok. & Sent. & & \\ \hline as & 87.5K & 20 & 964K & 33.59 & 60.67 & 72.76 & 78.16 & 8.01 & 207.79 & 12.05 & 11.76 & 1.1 & 18.41 & 35.57 \\ bh & 1.56K & 2 & 62.1K & 23.61 & 58.56 & 74.08 & 80.03 & 5.37 & 446.08 & 23.07 & 14.31 & 1.03 & 27.11 & 35.26 \\ bn & 2.25M & 155 & 6.00M & 33.65 & 67.55 & 80.71 & 85.58 & 5.22 & 290.9 & 21.58 & 12.15 & 1.19 & 20.2 & 35.62 \\ en & 7.27M & 488 & 17.9M & 28.98 & 65.23 & 81.18 & 87.43 & 4.15 & 467.94 & 17.92 & 14.21 & 1.05 & 23.78 & 32.83 \\ gu & 2.00M & 105 & 7.49M & 33.69 & 67.9 & 80.78 & 86.18 & 5.52 & 325.94 & 17.74 & 13.83 & 1.1 & 14.59 & 30.55 \\ hi & 14.4M & 413 & 19.1M & 19.58 & 57.84 & 76.06 & 84.95 & 5.06 & 375.93 & 17.59 & 15.51 & 1.03 & 17.71 & 35.85 \\ kn & 1.47M & 74 & 8.44M & 34.83 & 86.43 & 82.63 & 87.35 & 6.2 & 233.34 & 11.69 & 10.29 & 1.12 & 19.36 & 32.4 \\ ml & 3.47M & 136 & 20.6M & 30.62 & 59.73 & 72.12 & 76.14 & 7.28 & 178.02 & 14.8 & 10.04 & 1.07 & 32.41 & 42.07 \\ mr & 2.67M & 159 & 10.1M & 33.8 & 67.99 & 81.51 & 86.44 & 4.6 & 321.77 & 20.77 & 11.94 & 1.15 & 14.53 & 31.33 \\ ne & 32.5K & 2 & 435K & 26.37 & 62.97 & 79.89 & 88.37 & 4.99 & 253.17 & 16.06 & 9.45 & 1.01 & 4.25 & 36.38 \\ or & 1.09M & 59 & 3.43M & 28.75 & 64.3 & 78.96 & 84.07 & 6.12 & 214.66 & 14.49 & 10.26 & 1.06 & 21.52 & 35.89 \\ pa & 842K & 32 & 2.10M & 20.16 & 57.1 & 73.95 & 82.42 & 5.81 & 328.87 & 12.26 & 14.62 & 1.03 & 22.24 & 31.75 \\ ta & 2.64M & 120 & 10.0M & 36.21 & 67.63 & 81.58 & 86.59 & 7.18 & 210.73 & 1.49 & 14.13 & 1.64 & 23.77 & 34.23 \\ te & 3.27M & 113 & 13.4M & 37.94 & 71.22 & 82.2 & 79.8 & 5.13 & 238.62 & 19.04 & 9.17 & 1.25 & 16.67 & 30.31 \\ ur & 303K & 21 & 1.56M & 15.2 & 47.67 & 66.08 & 76.23 & 5.08 & 442.61 & 15.51 & 14.29 & 1.08 & 25.91 & 35.08 \\ \hline Avg. & - & - & - & 29.13 & 62.96 & 77.63 & 83.32 & 5.72 & 302.43 & 16.99 & 12.22 & 1.13 & 20.16 & 34.34 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics and extractive baseline results on Varta. CR and Ext Or. stand for the Compression Ratio and Extractive Oracle, respectively. Only Rouge-l F1-scores are reported for Lead-1 and Extractive Oracle; see Table 8 in the Appendix for Rouge-1 and Rouge-2 scores.

**Varta-T5** In the pretraining corpus used for mT5 and mBERT, the size of the Indic languages we are interested in is relatively small. Previous works have shown that such underrepresented languages in massively multilingual models suffer due to a lack of model capacity (Khanuja et al., 2022) and poor transfer (Ponti et al., 2021, Section 2), and that pretraining only on a group of closely related languages results in better downstream performance (Doddapaneni et al., 2022). Therefore, we use the full training set from Varta to pretrain a T5 model from scratch. We use span corruption and gap-sentence generation as the pretraining objectives. Both objectives are sampled uniformly during pretraining. Span corruption is similar to masked language modeling except that instead of masking random tokens, we mask spans of tokens with an average length of 3. In gap-sentence generation, whole sentences are masked instead of spans. We follow the original work (Zhang et al., 2020) and select sentences based on their 'importance': the rouge-1 F1-score between the sentence and the document is used as a proxy for importance. We use 0.15 and 0.2 as the masking ratios for span corruption and gap-sentence generation, respectively. We use a standard _T5-base_ architecture with 12 encoder and decoder layers, and 12 attention heads. Since data sizes across languages in Varta vary from 1.5K (Bhojpuri) to 14.4M articles (Hindi), we use standard temperature-based sampling to upsample data when necessary.8 For other training details, see Appendix A.3.

Footnote 8: We use the IndicTrans (Bhat et al., 2015) and IndicNLP (Kunchukuttan, 2020) libraries to transliterate text to English and Devanagari, respectively. For Urdu, we use IndicTrans in both cases. Though Bhojpuri uses Devanagari, we ignore it while performing the unified script experiments since it does not have a good transliteration system.
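To illustrate the temperature-based sampling used to balance Varta's highly skewed language distribution, here is a minimal sketch (the exact temperature used for Varta-T5 is a training detail not given in this excerpt; T = 3.0 below is only an assumption, and the sizes are the approximate per-language counts from Table 2):

```python
import numpy as np

def sampling_probs(sizes, temperature=3.0):
    """Temperature-based sampling: p_i proportional to (n_i / N) ** (1/T).
    T = 1 keeps the natural distribution; larger T upsamples
    low-resource languages."""
    p = np.asarray(sizes, dtype=float)
    p /= p.sum()
    p = p ** (1.0 / temperature)
    return p / p.sum()

sizes = {"hi": 14_400_000, "bn": 2_250_000, "as": 87_500, "bh": 1_560}
probs = sampling_probs(list(sizes.values()))
total = sum(sizes.values())
for lang, pr in zip(sizes, probs):
    print(f"{lang}: natural {sizes[lang] / total:.4%} -> sampled {pr:.4%}")
```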
First, we finetune each model described in §4.2 on Varta in five settings: (i) _en_: finetune only on English data from the small training set, and evaluate on all language test sets in original scripts. (ii) _hi_: finetune only on Hindi data from the small training set, and evaluate on all language test sets in original scripts. (iii) _latin_: finetune on the small training set transliterated to Latin (English) script, and evaluate on all language test sets transliterated to Latin script.9 (iv) _dvn._: finetune on the small training set transliterated to Devanagari script, and evaluate on all language test sets transliterated to Devanagari script.10 (v) _all_: finetune on all languages of the small training set, and evaluate on all language test sets, in their original scripts. Footnote 10: We denote the _latin_ (_dvn._) model as the English (Devanagari) 'unified model' since it is trained on a single, unified script. Next, to assess the generalizability of our pretrained model, we conduct the following two experiments on the Varta-T5 model: (i) evaluate on the XL-Sum dataset Hasan et al. (2021). XL-Sum is particularly useful as it contains article-headline-summary triplets, which allows for the evaluation of both headline generation and abstractive summarization tasks. We evaluate the best models from the previous set of experiments in a zero-shot setting. (ii) To determine the generalizability of Varta-T5 on text generation tasks, we evaluate the model on the IndicNLG benchmark Kumar et al. (2022), which contains five diverse generation tasks in 11 Indic languages. Finally, to determine if Varta can be used to train good NLU models, we train Varta-BERT, an encoder-only model trained with the masked language modeling objective and evaluated on IndicXTREME Doddapaneni et al. (2022), a zero-shot cross-lingual NLU benchmark for Indic languages consisting of nine tasks in 18 languages.

## 5 Results

### Headline Generation Results

The rouge-l scores for each of the three models in all five settings can be found in Table 3. Overall, Varta-T5 finetuned in the _all_ setting performs the best with an average rouge-l of 40.48. Although better, it is just six points above the Extractive Oracle baseline. This indicates the difficulty of the Varta dataset, and that there is much room for improvement even among state-of-the-art models.

#### 5.1.1 mBERT Observations

On average, mBERT has inferior performance compared to other models, which is expected since it has to learn a fraction of its parameters from scratch. However, when trained on the Devanagari and original scripts, mBERT either outperforms or is comparable to the other models in Bhojpuri and Hindi (both use the Devanagari script). It also achieves comparable results to mT5 (with a margin of 0.5 rouge-l or less) in five languages when trained on the original script data. But when trained only on English and Hindi data, the model is unable to transfer anything meaningful to other languages, showing that warm starting is not effective in zero-shot cross-lingual settings. This effect is particularly visible in the case of Oriya, which does not share its script with any other language in the dataset, and on which the model has not been pretrained. However, we observe that when trained only on Hindi data, the model demonstrates faster learning and better transfer capabilities than when trained only on English.
This might be because (i) 4/15 languages in the dataset use the Devanagari script, which is also used by Hindi, and (ii) Hindi is closely related to the other languages in the dataset. This suggests that Hindi might be a better bridge language for Varta than English. This hypothesis is further supported by the fact that the Devanagari unified model (_dvn._) performs significantly better than the Latin unified model.

#### 5.1.2 mT5 Observations

**Finetuning on Original Scripts.** The model performs the best in this setting, which is expected since it is pretrained on 12 out of 15 languages in the dataset. More importantly, it performs much better than mBERT in Assamese, Bhojpuri, and Oriya, the only three languages neither model has seen during pretraining. This indicates better intra-script and inter-script zero-shot transfer in mT5 (Assamese shares its script with Bengali, and Bhojpuri with Hindi, but Oriya is unique). **Bridge Language.** We find that the mT5 model fine-tuned only on English data demonstrates an average improvement of 3.81 rouge-l points over the model fine-tuned only on Hindi. This difference in performance can potentially be attributed to the fact that English is the model's largest pretraining language, and it also has the highest share in the vocabulary of the model. Further analysis of the Varta-T5 model will support this hypothesis. **Unified Script.** On average, the Devanagari unified model demonstrates better performance, with an improvement of 0.67 rouge-l, when compared
& 32.69 & - & 29.42 & 41.77 & 26.65 & 37.45 & 34.28 & 39.65 & 29.56 & 41.39 & 35.50 & 39.66 & 32.09 & 28.25 & 42.53 & 35.01 \\ & all & 36.14 & 36.15 & 33.70 & 42.06 & 27.40 & 37.25 & 37.80 & 43.62 & 30.09 & 41.68 & 33.08 & 40.69 & 37.44 & 32.49 & 43.69 & 36.88 \\ \hline \multirow{8}{*}{ \begin{tabular}{} \end{tabular} } & en & 32.48 & 20.09 & 32.17 & 43.79 & 26.75 & 33.90 & 33.35 & 39.73 & 25.61 & 27.60 & 33.45 & 35.38 & 31.75 & 26.27 & 36.59 & 31.93 \\ & hi & 29.26 & 29.26 & 33.35 & 38.13 & 28.53 & 40.11 & 35.06 & 41.74 & 29.62 & 34.40 & 35.99 & 40.09 & 33.19 & 28.76 & 41.67 & 34.61 \\ \cline{1-1} & latin & 32.81 & - & 29.56 & 43.62 & 26.04 & 35.17 & 33.32 & 39.68 & 27.25 & 38.41 & 34.62 & 39.82 & 32.34 & 27.54 & 41.86 & 34.43 \\ \cline{1-1} & dvn. & 33.99 & - & 30.33 & 43.82 & 28.14 & 40.06 & 35.01 & 38.93 & 33.72 & 44.50 & 36.38 & 40.82 & 31.85 & 28.92 & 44.40 & 36.49 \\ \cline{1-1} & all & **41.21** & **39.17** & **37.10** & **43.87** & **32.00** & **40.29** & **41.13** & **45.16** & **34.60** & **44.63** & **40.56** & **44.13** & **39.91** & **36.69** & **46.66** & **40.48** \\ \hline \hline \end{tabular} \end{table} Table 3: Headline generation results for the three baseline models trained in all five data settings: English only (en), Hindi only (hi), Latin transliterated data (latin), Devanagari transliterated data (dvn.), and original script data (all). Only rouge-l scores are shown here. See Appendix A.1 for more details. to the Latin unified model. Although the performance of the two models is similar for the majority of languages, the Devanagari model shows significant improvement on Marathi, Nepali, and Hindi, all of which also use the Devanagari script. This observation of improved transfer across languages that share a script is a recurrent theme across models and settings. However, it is noteworthy that the Latin model performed better on Assamese and Tamil. #### 5.1.3 Varta-T5 observations Finetuning on Original ScriptsVarta-T5 fine-tuned in this setting has the highest performance among all models, with a rouge-l score of 40.48. This result is expected, as it is pretrained on the same data and one of its pretraining objectives (gap-sentence generation) is almost equivalent to generating sentences that the Extractive Oracle would extract. Bridge LanguageWe see that the model fine-tuned only on Hindi outperforms the model fine-tuned only on English by a margin of 2.68 rouge-l points. This supports our previous hypothesis that the language with the largest pretraining data is the most effective bridge language. In addition to being the largest language with the largest vocabulary share, Hindi also shares its script with three other languages in the dataset and is typologically similar to the majority of other languages in the dataset. This should ideally make the Hindi-only models significantly better than the English-only models. However, we should note that English is the second biggest language in Varta with 7.2M articles (half the size of Hindi), and therefore the difference in performance between English-only and Hindi-only models, or Latin unified and Devanagari unified models is not as significant as expected. Unified ScriptOn average, the Devanagari unified model is 2.06 rouge-l points better than the Latin unified model. The Devanagari model performs much better on Gujarati, Hindi, Marathi, Nepali, and Urdu, with improvements of more than 2 rouge-l points.11 In general, all Indo-European languages see improvements with the Devanagari unified model. 
However, such a claim cannot be made for the Dravidian languages. While Kannada and Telugu see improvements, we see a performance drop in Malayalam and Tamil.12 Footnote 11: Hindi, Marathi, and Nepali use the Devanagari script. Footnote 12: Surprisingly, we do not see this drop in mT5 models where English is the dominant pretraining language.

### Generalization Results

We see that the models pretrained on Varta consistently outperform strong baselines, often by significant margins. We argue (particularly in §5.2.2) that Varta is a good resource for pretraining large language models on Indic languages, especially since it has data from diverse, high-quality sources.

#### 5.2.1 XL-Sum

To test the generalizability of our models on other datasets and tasks, we evaluate models finetuned on Varta on the Indic subset of the XL-Sum dataset (Hasan et al., 2021), for both abstractive summarization and headline generation. We select the best mT5 and Varta-T5 models obtained previously and evaluate them on nine Indic languages and English, without any additional fine-tuning. The results are shown in Table 4. We find that Varta-T5 consistently outperforms mT5 on all languages in both tasks. On average, we see gains of around 2 rouge-l points on abstractive summarization and 4 rouge-l points on headline generation, respectively.

\begin{table} \begin{tabular}{r r r r r} \hline \hline \multirow{2}{*}{Lang.} & \multicolumn{2}{c}{Head. Gen.} & \multicolumn{2}{c}{Abs. Sum.} \\ \cline{2-5} & mT5 & V-T5 & mT5 & V-T5 \\ \hline bn & 22.32 & **27.02** & 16.12 & **17.68** \\ en & 19.32 & **21.41** & 13.24 & **14.80** \\ gu & 14.22 & **17.72** & 10.73 & **12.68** \\ hi & 19.13 & **23.34** & 18.87 & **22.04** \\ mr & 16.63 & **20.65** & 9.3 & **10.85** \\ ne & 15.81 & **20.14** & 9.94 & **12.84** \\ pa & 19.15 & **23.45** & 15.88 & **18.19** \\ ta & 19.03 & **22.88** & 12.55 & **14.64** \\ te & 19.32 & **22.04** & 11.14 & **13.02** \\ ur & 20.15 & **24.53** & 19.05 & **21.24** \\ \hline avg. & 18.51 & **22.32** & 13.68 & **15.80** \\ \hline \hline \end{tabular} \end{table} Table 4: Zero-shot results on XL-Sum headline generation and abstractive summarization tasks. Only rouge-l scores are shown here. See Table 13 in the Appendix for more details.

#### 5.2.2 Varta on NLU tasks

To verify whether Varta can be used to train good NLU models, we use it to pretrain a masked language model with the BERT-Base architecture. We name this model Varta-BERT; more information about the pretraining can be found in Appendix A.2.
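As an aside on the masked language modeling objective just mentioned, below is a minimal sketch of standard BERT-style token corruption. The 80/10/10 split follows Devlin et al. (2019); treating Varta-BERT's recipe as this standard setup is an assumption.

```python
import numpy as np

def mask_tokens(token_ids, mask_id, vocab_size, p=0.15, rng=None):
    """Select ~p of positions for prediction; of those, 80% become [MASK],
    10% become a random token, and 10% stay unchanged."""
    rng = rng or np.random.default_rng()
    ids = np.array(token_ids)
    labels = np.full_like(ids, -100)   # -100 = position ignored by the loss
    sel = rng.random(len(ids)) < p     # positions to predict
    labels[sel] = ids[sel]
    roll = rng.random(len(ids))
    ids[sel & (roll < 0.8)] = mask_id                  # 80% -> [MASK]
    rand = sel & (roll >= 0.8) & (roll < 0.9)          # 10% -> random token
    ids[rand] = rng.integers(0, vocab_size, rand.sum())
    return ids, labels

# Toy usage with hypothetical token ids and a hypothetical [MASK] id.
ids, labels = mask_tokens([5, 17, 42, 8, 99, 7], mask_id=103, vocab_size=30_000)
```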
#### 5.2.3 Varta on NLG tasks Finally, we evaluate Varta-T5 on IndicNLG and compare its performance against two strong baselines: IndicBART Dabre et al. (2022) and mTS.13 The comparison is presented in Table 6. It is to be noted that, at the time of writing we could not independently reproduce all the results presented in Kumar et al. (2022). We, therefore, present the results as originally reported along with the ones obtained during our experiments. We see that, overall, Varta-T5 is the best-performing model in 3 out of 5 tasks. But when compared with the reproduced results only, it performs better in 4 out of 5 tasks. Footnote 13: Here we use the original mT5 model trained on the mC4 corpus. ### Key Takeaways Based on our experiments and analyses, we now try to answer the research questions posed in SS4.1. Rq1We find that the largest pretraining language typically acts as the best bridge language. It helps if the language is typologically similar to other languages and shares a common script with them. In the case of dataset Varta, Hindi is the ideal bridge language as it has all of the above properties (though it shares a common script with only three other languages). Rq2 and RQ3The performance of the headline generation models is not negatively impacted by the variety of scripts. In fact, models fine-tuned in this setting yield the best results. It is also interesting to note that the transliterated models perform better than the monolingual models. This suggests that, in general, multilingual training enables positive transfer among related languages, with or without the use of their original scripts. Rq4We find that Varta-T5 generalizes well on other, similar tasks like abstractive summarization. Varta-BERT and Varta-T5 generally perform on par or better than strong baselines on both NLU and NLG benchmarks for Indic languages. ## 6 Conclusion In this work, we create Varta, a large-scale headline-generation dataset for Indic languages. We show that this dataset is challenging even for state-of-the-art text generation models. Utilizing the size and quality of the dataset, we answer pertinent research questions about multilingual models, \begin{table} \begin{tabular}{l c c c c} \hline \hline Task & \begin{tabular}{c} Indic \\ BART\({}^{*}\) \\ \end{tabular} & mT5\({}^{*}\) & mT5 & \begin{tabular}{c} Varta- \\ mT5 \\ \end{tabular} \\ \hline Wikibio & 53.8 & **54.6** & 53.3 & **54.5** \\ HeadGen. & 42.4 & **45.5** & 35.0 & 37.8 \\ SentSum. & 54.9 & **55.1** & 36.1 & 32.6 \\ ParaGen. & 16.2 & 7.5 & 20.5 & **26.3** \\ QuestGen. & 26.6 & 25.1 & 25.4 & **28.5** \\ \hline \hline \end{tabular} \end{table} Table 6: IndicNLG evaluation results. Columns marked with \({}^{*}\) are directly taken from Kumar et al. (2022). More details on tasks and results can be found in Appendix E. \begin{table} \begin{tabular}{r|c|c c} \hline \hline \multirow{2}{*}{Task} & \multicolumn{2}{c}{\begin{tabular}{c} IndicCorp \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} Varta \\ \end{tabular} } \\ \cline{2-3} & v2 & & & v1 \\ \hline Sentiment & 90.2 & 85.7 & **87.6** \\ NLI & 73.0 & 66.4 & **73.6** \\ COPA & 62.7 & 52.4 & **60.5** \\ XPara & 56.9 & 49.6 & **61.9** \\ Intent Clf. & 78.8 & 25.8 & **78.0** \\ NER & 73.2 & 58.3 & **65.2** \\ Slot Fill. & 56.7 & 34.4 & **57.7** \\ QA & 48.3 & 37.6 & **48.3** \\ Retrieval & 69.4 & **54.9** & 47.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Results on the nine IndicXTREME tasks. Task descriptions, metrics, and detailed results can be found in Appendix D. 
from an Indic text generation perspective. We also show that Varta can be used as a pretraining corpus to train strong NLU and NLG models. ## Acknowledgements We would like to thank the anonymous reviewers for their comments and suggestions. We also thank Google's TPU Research Cloud (TRC) for giving us free access to their v3-128 TPUs for pretraining our models. We also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding Ziling Cheng through their grant program. ## Limitations This work is mainly dedicated to the curation of a new multilingual dataset for Indic languages, many of which are low-resource languages. During data collection, we face several limitations that can potentially result in ethical concerns. Some of the important ones are mentioned below: * Our dataset contains only those articles written by DailyHunt's partner publishers. This has the potential to result in a bias towards a particular narrative or ideology that can affect the representativeness and diversity of the dataset. * Another limitation is the languages represented in Varta. Out of 22 languages with official status in India, our dataset has only 13. There are 122 major languages spoken by at least 10,000 people and 159 other languages which are extremely low-resourced.14 None of these languages are represented in our dataset. Footnote 14: [https://en.wikipedia.org/wiki/Languages_of_India](https://en.wikipedia.org/wiki/Languages_of_India) * We do not perform any kind of debiasing on Varta. This means that societal and cultural biases may exist in the dataset, which can adversely affect the fairness and inclusivity of the models trained on it. ## Ethics Statement The ethical considerations that arise from the limitations of our data collection process are already detailed in the previous section. In this section, we discuss the implications of releasing the data, how we intend to do so in a safe manner, and the license under which it would be released. While Varta has the potential to advance NLP research for Indic languages,15 it can also be used in ways not intended by the authors. Since Varta can be used to pretrain text generation models, it can be used to build models that generate hate speech, fake news, etc. Footnote 15: Appendix F has a detailed datasheet describing the rationale behind the creation of Varta and other essential information. Since our data is aggregated from different sources and each source may have different restrictions on distributing their data, we only release a list of URLs pointing to the original articles and not the articles themselves, which is a standard and acceptable way of sharing data.16 However, we provide a sample script that can be used to crawl the URLs and rebuild Varta. We release the URL list under a CC-BY license17 and dedicate it to the public domain. The released code and models will have an Apache License 2.0.18 Footnote 16: Other works like Narayan et al. (2018) also follow this method for sharing their data. Footnote 17: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/) Footnote 18: [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
2307.02796
VerifAI: Verified Generative AI
Generative AI has made significant strides, yet concerns about the accuracy and reliability of its outputs continue to grow. Such inaccuracies can have serious consequences such as inaccurate decision-making, the spread of false information, privacy violations, legal liabilities, and more. Although efforts to address these risks are underway, including explainable AI and responsible AI practices such as transparency, privacy protection, bias mitigation, and social and environmental responsibility, misinformation caused by generative AI will remain a significant challenge. We propose that verifying the outputs of generative AI from a data management perspective is an emerging issue for generative AI. This involves analyzing the underlying data from multi-modal data lakes, including text files, tables, and knowledge graphs, and assessing its quality and consistency. By doing so, we can establish a stronger foundation for evaluating the outputs of generative AI models. Such an approach can ensure the correctness of generative AI, promote transparency, and enable decision-making with greater confidence. Our vision is to promote the development of verifiable generative AI and contribute to a more trustworthy and responsible use of AI.
Nan Tang, Chenyu Yang, Ju Fan, Lei Cao, Yuyu Luo, Alon Halevy
2023-07-06T06:11:51Z
http://arxiv.org/abs/2307.02796v2
# VerifAI: Verified Generative AI

###### Abstract.

Generative AI has made significant strides, yet concerns about the accuracy and reliability of its outputs continue to grow. Such inaccuracies can have serious consequences such as inaccurate decision-making, the spread of false information, privacy violations, legal liabilities, and more. Although efforts to address these risks are underway, including explainable AI and responsible AI practices such as transparency, privacy protection, bias mitigation, and social and environmental responsibility, misinformation caused by generative AI will remain a significant challenge. We propose that verifying the outputs of generative AI from a data management perspective is an emerging issue for generative AI. This involves analyzing the underlying data from multi-modal data lakes, including text files, tables, and knowledge graphs, and assessing its quality and consistency. By doing so, we can establish a stronger foundation for evaluating the outputs of generative AI models. Such an approach can ensure the correctness of generative AI, promote transparency, and enable decision-making with greater confidence. Our vision is to promote the development of verifiable generative AI and contribute to a more trustworthy and responsible use of AI.
Verifying generative AI from a data management perspective involves discovering relevant data from multi-modal data lakes and reasoning with it in order to assess the quality and consistency of the generated data. By doing so, we can establish a more robust foundation for evaluating the accuracy and reliability of the data generated by generative AI. We believe that a verification approach can complement the aforementioned approaches in order to ensure that systems using generative AI are deployed responsibly, effectively, and in a trustworthy manner. This process can help maximize the potential benefits of generative AI while minimizing the risks of unintended consequences or negative impacts. It is important to clarify that our proposed approach focuses on verifying generative data that has ground truth, as opposed to subjective data [20]. We illustrate our approach with the following examples based on running our proposed framework VerifAI.

**Example 1**: [Tuple generation/completion/augmentation (Figure 1(a))] Consider a prompt that provides a serialized table with missing values in attribute **incumbent** and asks ChatGPT to complete these tuples; the new table with tuples completed by ChatGPT is shown below the prompt. For the first tuple, VerifAI can "search" a tuple in the data lake that verifies the imputed value, i.e., confirms that the generated **incumbent** value is correct. For the third tuple, VerifAI can find a tuple and a text file in the data lake that both show the imputed value to be incorrect. [Text generation (Figure 1(b))] We asked ChatGPT the question "Does Meagan Good play a role in Stomp the Yard?" and got an answer as shown in the figure. VerifAI can find a text file and a tuple in the data lake that both show the generated text to be incorrect.

Figure 1. Generative AI can generate (a) values in tuples and (b) text. Our system, VerifAI, tries to either verify or refute a generated value by reasoning over the (generated data, evidence) pair, where the evidence is discovered from data lakes.

The examples above only scratch the surface of generated (bad) data. These data can be leveraged for downstream applications such as analytical (_e.g._, OLAP) queries and data visualization, as well as query acquisition. However, if not properly managed, generative data can have negative and even disastrous consequences. Despite efforts from model providers (_e.g._, OpenAI) and retrieval-enhanced methods [3, 4] to improve the accuracy of generative AI during the data generation process, the spread of misinformation from generative AI will remain a significant problem. Our proposed post-generation verification approach can complement these methods by enhancing the overall reliability of generative AI. By combining our verification approach with these existing and rapidly evolving generative methods, we can create a synergistic effect that further improves the accuracy and reliability of generative AI.

**Challenges.** Generative AI can generate data in various contexts and domains, utilizing the "world knowledge" that the model has learned [4]. However, when the generated data is dirty or inaccurate, traditional data cleaning [21, 27, 35] and data integration [10] approaches that solely rely on the data at hand may not be sufficient. Furthermore, when generative AI is used in the context of a specific enterprise, we may need to consult data that is specific to
the enterprise in order to verify the correctness of the generated data, raising several new challenges:

**C1. Indexing and searching multi-modal data lakes.** Although there have been efforts to manage data lakes with relational data [(7; 15; 16; 23)] and textual data [(24)], indexing multi-modal data lakes at scale and effectively retrieving top-\(k\) data instances across data modalities remains an unsolved problem.

**C2. Cross-modal data verification.** Data matching is a key concept in data integration [(10)] and data cleaning [(14)]. However, matching and reasoning across different data modalities are not well-addressed.

**C3. Trust of heterogeneous datasets in multi-modal data lakes.** Evaluating the trustworthiness of web sources for knowledge fusion has been well studied [(11)]. However, evaluating the trustworthiness of different datasets in data lakes, particularly when they are not well curated, remains an open problem.

**C4. Provenance of the verification process.** Fully automated verification of generative AI outputs can be challenging. It is important to store the lineage of the end-to-end verification process, in case the retrieved data from data lakes is flawed or incomplete, or the verification process itself makes mistakes. This allows for later human checks or debugging.

**Contributions.** In this paper, we present **VerifAI**, a framework for verified generative AI, which offers the following notable contributions:

(1) _A modularized framework_. We propose a modular framework for verifying generative data that is extendable to different types of data sources and generative data. The framework comprises three main components: an **Indexer**, a **Reranker**, and a **Verifier**, as illustrated in Figure 2 (more details can be found in Section 3). * The **Indexer** module serves the purpose of indexing datasets from diverse modalities, including but not limited to tuples, tables, text, and knowledge graphs. It comprises generic and coarse-grained indexes that aid in this task. * The **Reranker** module is responsible for reranking the top-\(k\) data sources from the **Indexer** with respect to a specific generated data object. This step is more fine-grained and task-specific in nature, in order to further optimize the ranking of the retrieved data sources. * For each retrieved data instance, the **Verifier** module determines whether it verifies or invalidates the generated data object.

(2) _Experiments_. We show that **VerifAI** achieves high accuracy for generated tables and text (see Section 4), which demonstrates the feasibility of using multi-modal data lakes to verify generative AI.

(3) _Open problems_. Contribution (1) highlights the potential of **VerifAI** in addressing challenges **C1** and **C2**. However, there are still open problems for these challenges, as well as for challenges **C3** and **C4**, which will be discussed in Section 5.

## 2. Verified Generative AI: The Problem

**Generative Data**. Generative data generally refers to data that is created or generated by a model or algorithm, rather than being directly observed or collected from the real world. In the context of this work, we specifically focus on data generated by large language models, such as ChatGPT, using natural language generation techniques. This includes text data such as paragraphs and sentences, or tabular data such as tuples and tables, but not other data modalities such as image data or graph data.
**Multi-Modal Data Lakes.** A multi-modal data lake provides a single repository or data store for storing and managing multiple types of data, including structured, semi-structured, and unstructured data. This can include tables, text, knowledge graphs, images, audio, and other forms of digital content. This work focuses on showcasing data lakes that are designed to handle and integrate tabular data and text data. In the rest of this paper, we will use the term _data object_ for generated data and _data instance_ for a specific unit of data within a data lake, which can take the form of a tuple, table, or text. We will briefly discuss potential strategies for supporting and integrating other data modalities (_e.g._, knowledge graphs) into a multi-modal data lake in Section 5.

**Verified Generative AI**. Given a generated data object \(\mathbf{g}\) and a multi-modal data lake \(\mathbf{L}\), the problem of _verified generative AI_ involves discovering a set of data instances \(\mathbf{L}^{\mathbf{g}}\) from \(\mathbf{L}\) relevant to \(\mathbf{g}\), and verifying each data instance \(\mathbf{x}\in\mathbf{L}^{\mathbf{g}}\) as a mapping from \((\mathbf{g},\mathbf{x})\) to a ternary value: \[\mathbf{verify}(\mathbf{g},\mathbf{x})\to 0\mid 1\mid 2\;(\mathbf{verified}\mid\mathbf{refuted}\mid\mathbf{not}\;\mathbf{related})\]

_Remark_. The verification process is specific to the application and may require additional metadata from the user. For instance, when given a tuple, the verification requirement could be either on the entire tuple or on a specific column, _e.g._, only verifying the attribute **incumbent** in Figure 1(a). Another example is when we task a generative AI to fill in the blank for the statement '_The average score of Michael Jordan's basketball career is ____' and then only verify the correctness of the generated number.

Figure 2. An overview of **VerifAI**.

## 3. The Design of VerifAI

An overview of the different modules of VerifAI is given in Figure 3.

### Indexer

The Indexer module is designed based on two key principles: 1. _Task-agnostic_: it can support a wide range of tasks and use cases, making it versatile and adaptable to different needs. 2. _Support for both content- and semantic-based search_: it can handle traditional string similarity search as well as vector-based search, enabling more advanced search and retrieval capabilities. Based on these principles, VerifAI currently consists of two types of indexes. * _Content-based index_. Tables or text files are serialized as strings and then indexed using string-similarity-based indexes such as Elasticsearch (Kumar et al., 2017), or special data structures such as tries or suffix trees. * _Semantic-based index_. We first use embedding techniques (_e.g._, tuple-to-vec (Srivastava et al., 2016) or text-to-vec using BERT (Devlin et al., 2017)) to convert tuples or chunked text files into vectors1, which are then indexed by off-the-shelf vector-based indexes such as Meta Faiss (Krishnan et al., 2017) or PostgreSQL's pgvector2 extension. Footnote 1: see [https://www.dudatascience.com/the-best-document-similarity-algorithm-in-2020-a-beginners-guide-a019fe8cf05](https://www.dudatascience.com/the-best-document-similarity-algorithm-in-2020-a-beginners-guide-a019fe8cf05) for a more comprehensive guide. Footnote 2: [https://supabase.com/docs/guides/database/extensions/pgvector](https://supabase.com/docs/guides/database/extensions/pgvector).
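To make the semantic-based index concrete, here is a minimal sketch built on Faiss, one of the vector indexes named above. The `embed` function is a random-vector stand-in for an actual tuple-to-vec or BERT-style encoder, and the two data-lake strings are illustrative.

```python
import numpy as np
import faiss  # Meta's vector-index library (pip install faiss-cpu)

d = 384  # embedding dimensionality; depends on the chosen encoder

def embed(texts):
    """Stand-in for a real encoder (e.g., BERT text-to-vec or tuple-to-vec).
    Serialized tuples and chunked text files share one encoder so they land
    in the same vector space; random vectors are used here only so the
    sketch runs end to end."""
    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((len(texts), d)).astype("float32")
    faiss.normalize_L2(vecs)  # unit norm: inner product == cosine similarity
    return vecs

# Data-lake instances of mixed modality, serialized to strings (illustrative).
instances = [
    "district: 3rd | incumbent: J. Smith | party: Whig",
    "Meagan Good starred in the 2007 dance film Stomp the Yard.",
]
index = faiss.IndexFlatIP(d)   # exact inner-product index
index.add(embed(instances))

# Retrieve top-k candidate evidence for a generated data object g.
g = "Does Meagan Good play a role in Stomp the Yard?"
scores, ids = index.search(embed([g]), 2)
for i, s in zip(ids[0], scores[0]):
    print(round(float(s), 3), instances[i])
```

A content-based Elasticsearch index would sit alongside this one, serving exact and fuzzy string matches over the same serialized instances.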
The above two types of indexes are commonly used for indexing data lakes; Aurum is one example.

### Verifier

In **VerifAI**, we utilize two types of **Verifiers**. The first type is a one-size-fits-all model such as ChatGPT. The second type is composed of specific and localized models designed for different tasks, such as the table fact verification model (Kumar et al., 2017; Kumar et al., 2018) for (text, table) verification, and a fine-tuned RoBERTa (Roh et al., 2017) model for (tuple, tuple) verification (Beng et al., 2019); both are from our previous work. While ChatGPT can be used by default for simplicity, there are two main reasons why we support specific and localized models, which were also discussed in RetClean (Beng et al., 2019). 1. _Data privacy_. Many applications (_e.g._, healthcare, government) contain highly sensitive information. One legitimate concern when using externally hosted models such as ChatGPT is data privacy: the model could potentially learn and retain sensitive information from the data it has seen (Beng et al., 2019). 2. _Better accuracy_. Specific and local models, when fine-tuned on specific datasets and tasks, can often outperform generic models such as ChatGPT, as will be shown later in Section 4.

_Remark_. When retrieving data for a generated data instance, it is possible that we retrieve multiple data instances that either verify or refute the generated data object. This can occur for several reasons, _e.g._, incorrect data instances being retrieved. Therefore, understanding the trustworthiness of different data sources (Kumar et al., 2017) and maintaining the provenance (Roh et al., 2017) of verification are important.

## 4. Preliminary Results

In this section, we showcase preliminary experimental results that highlight the initial achievements of **VerifAI** in facilitating the verification of generative AI.

**Setting.** We consider three verification tasks, pairing each type of generative data with the retrieved data used to check it: tuple completion verified against tuples, tuple completion verified against text, and text generation verified against tables. Note that the (text, text) verification problem is essentially equivalent to the standard fact-checking problem in the natural language processing community (Kumar et al., 2017), which has already been demonstrated to be viable. Therefore, for the sake of this discussion, we will focus primarily on scenarios that involve tuples or tables. * _Tuples in need of verification._ We collected 100 tuples from web tables. For each tuple, we randomly removed a non-key attribute cell value and then asked ChatGPT to infer the missing value using the template below. If multiple tuples share the same schema, we can handle them together as a batch. **Group template of tuple completion with ChatGPT.** **Question:** [Table name]: column 1, column 2, ..., column n. Due to space limitations, we will not report the effectiveness of reranking in our study. However, it is worth noting that reranking (text, text) pairs has been proven effective in ColBERT (Zhu et al., 2017), and reranking (tuple, tuple) pairs has been discussed in RetClean (Bordes and McAllester, 2017).

**Evaluation Metric for Verifier.** For evaluating the **Verifier**'s performance, we use accuracy as the measure. A **Verifier**'s decision is considered correct in one of the following three cases: 1. when the retrieved data instance supports the imputed tuple or claim, the **Verifier** outputs "true"; 2. when the retrieved table refutes the imputed tuple or claim, the **Verifier** outputs "false"; 3. when the retrieved data instance can neither support nor refute it, the **Verifier** outputs "not related".
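Before the model comparison below, here is a sketch of how the ternary verify(g, x) mapping from Section 2 can be wrapped around a one-size-fits-all Verifier. The prompt wording and the `llm` callable are illustrative assumptions, not the exact setup used in these experiments.

```python
VERIFIED, REFUTED, NOT_RELATED = 0, 1, 2
LABELS = {"verified": VERIFIED, "refuted": REFUTED, "unrelated": NOT_RELATED}

def build_verify_prompt(generated: str, evidence: str) -> str:
    """Ternary verification prompt; the wording is an assumed template."""
    return (
        "You are a fact verifier.\n"
        f"Generated data: {generated}\n"
        f"Evidence from the data lake: {evidence}\n"
        "Answer with exactly one word: 'verified' if the evidence supports "
        "the generated data, 'refuted' if it contradicts it, or 'unrelated' "
        "if it can neither support nor refute it."
    )

def verify(generated: str, evidence: str, llm) -> int:
    """`llm` is any callable mapping a prompt to a text answer, e.g., a
    ChatGPT wrapper or a local fine-tuned model; it is kept abstract here."""
    answer = llm(build_verify_prompt(generated, evidence)).strip().lower()
    return LABELS.get(answer, NOT_RELATED)  # fall back to 'not related'
```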
To compare ChatGPT with PASTA, which only offers two answers ("true" or "false"), we also consider PASTA correct in this third case when it outputs "false".

**Results.** The accuracy of ChatGPT in imputing missing values for tuples and determining the correctness of claims is only **0.52** and **0.54**, respectively, in the absence of additional data. These findings emphasize the significance of verifying generative data to guarantee accuracy and dependability. Table 1 displays the retrieval outcomes, revealing that the retrieval module performs well for (tuple, tuple) and (textual claim, table). However, it does not retrieve the associated text files effectively based on a tuple. This is because we simply utilized Elasticsearch as the **Indexer** and only retrieved three text files. We anticipate that the retrieval performance will improve when we expand the number of retrieved files and conduct further experiments with the **Reranker** added. Table 2 reveals that, as the **Verifier**, ChatGPT can accurately determine whether the imputed value is correct or not, with a high accuracy of **0.88**. Regarding textual claims, we present results in two settings. When a relevant table is retrieved and provided as evidence to the **Verifier** in the form of a (text, relevant table) pair, PASTA is able to verify the textual claim with higher accuracy than ChatGPT based on the table. However, many retrieved tables are irrelevant to the claim. Over all (text, retrieved table) pairs, PASTA's accuracy drops to 0.72 because it has not encountered this scenario during training. On the other hand, ChatGPT has superior generalization capabilities and performs better than PASTA when dealing with many irrelevant tables.

\begin{table} \begin{tabular}{|c|c|c|} \hline & **ChatGPT** & **PASTA** \\ \hline \hline (tuple, tuple+text) & 0.88 & NA \\ \hline (text, relevant table) & 0.75 & **0.89** \\ (text, retrieved table) & **0.91** & 0.72 \\ \hline \end{tabular} \end{table} Table 2. Evaluation on Verifier.

In Figure 4, we present a case of verifying a textual claim based on retrieved tables using ChatGPT. VerifAI retrieves two tables \(E_{1}\) and \(E_{2}\), where \(E_{1}\) can be used with an aggregation query to refute the claim, while \(E_{2}\) is not related because it is for the year 1959. The red boxes in Figure 4 show that ChatGPT can provide not only a verification result but also some explanation. Hence, when the retrieved data is highly related to the generative data, local models like PASTA have higher accuracy while protecting privacy. In contrast, ChatGPT is better at generalizing and providing explanations for further judgments. Users can select the appropriate model based on their requirements.

Figure 4. Verifying a textual claim using retrieved tables.

In summary, we have evaluated the usefulness of VerifAI through various experiments and demonstrated several use cases in Figures 1 and 4.

## 5. Open Problems and Call to Arms

**Open Problems.** We have identified important open problems. * _Cross-Modal Data Discovery._ Data discovery is a challenging problem in data preparation (Dwork et al., 2017), particularly in data lakes that contain heterogeneous data stored in various formats (Dwork et al., 2017), including structured (_e.g._, tables), semi-structured (_e.g._, graphs), and unstructured data. Unlike data lakes that contain only relational tables, discovering data from different modalities requires addressing the heterogeneity of the data.
To address this issue, a promising direction is to explore _cross-modal representation learning_, which involves encoding data from different modalities into a homogeneous vector space. This approach can facilitate a unified data discovery process, such as using a semantic-based indexer, as illustrated in Figure 3. * _Cross-Modal Verification_. In addition to textual and relational data, datasets in other modalities, such as knowledge graph entities (or small subgraphs), can contain valuable information for verifying generative AI. As discussed earlier, generic models like ChatGPT may not provide a comprehensive solution for reasoning due to challenges such as privacy and accuracy. Therefore, a promising direction is to develop local models that are specifically trained for certain use cases, such as (text, knowledge graph entity) or (tuple, text). By focusing on specific cases, these local models can provide more accurate and effective solutions for verifying generative AI. * _Trustworthiness of Data Sources_. The accuracy of discovering and verifying data across different modalities in a data lake can be influenced by the quality and reliability of the underlying data sources. Therefore, it is crucial to assess the trustworthiness of different sources accurately to enhance the overall accuracy and reliability of the entire verification process. * _Provenance Management_. It is important to maintain a record of the provenance of data instances or sources used in verification to facilitate further human checking or debugging. * _Managing Data Generated by Generative AI_. Generative AI solution providers, including OpenAI, are continuously collecting prompts and generated data. While this information is valuable for improving generative AI models, it can also be useful for end-users, particularly enterprise users. Therefore, a promising direction is to explore how to manage the (conversational) prompts and data generated by generative AI to enable better prompt engineering and data lineage tracking, similar to ModelDB [34], which is used for managing machine learning models. Such a solution would enable end-users to collaborate more effectively with different stakeholders.

**Call to Arms.** In this paper, we propose a framework called VerifAI that addresses the growing concern about the reliability of generative AI, which is leading to the spread of misinformation at an alarming rate. The modular design of VerifAI enables the verification of generated data using multi-modal data lakes, paving the way for research activities that will produce practical solutions for enhancing the reliability of generative AI. We discuss several promising directions and describe the early successes of using VerifAI to verify common inaccuracies in data generated by generative AI.
We urge the database community to take advantage of this opportunity to make significant advances in this field and join us to improve the reliability of generative AI.
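As a closing recap of the modular design, the sketch below wires the three modules into one callable pipeline. Every name and signature here is illustrative; the point is only that the Indexer, Reranker, and Verifier stay independently swappable.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class VerifAIPipeline:
    # Indexer: generated object and k -> coarse top-k candidate instances
    index_search: Callable[[str, int], List[str]]
    # Reranker: task-specific reordering of the candidates
    rerank: Callable[[str, List[str]], List[str]]
    # Verifier: (g, x) -> 0 (verified) | 1 (refuted) | 2 (not related)
    verify: Callable[[str, str], int]

    def check(self, g: str, k: int = 10) -> List[Tuple[str, int]]:
        candidates = self.rerank(g, self.index_search(g, k))
        # Return (evidence, verdict) pairs so provenance is preserved.
        return [(x, self.verify(g, x)) for x in candidates]
```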
2310.01888
Deterioration modeling of sewer pipes via discrete-time Markov chains: A large-scale case study in the Netherlands
Sewer pipe network systems are an important part of civil infrastructure, and in order to find a good trade-off between maintenance costs and system performance, reliable sewer pipe degradation models are essential. In this paper, we present a large-scale case study in the city of Breda in the Netherlands. Our dataset has information on sewer pipes built since the 1920s and contains information on different covariates. We also have several types of damage, but we focus our attention on infiltrations, surface damage, and cracks. Each damage has an associated severity index ranging from 1 to 5. To account for the characteristics of sewer pipes, we defined 6 cohorts of interest. Two types of discrete-time Markov chains (DTMC), which we called Chain `Multi' and `Single' (where Chain `Multi' contains additional transitions compared to Chain `Single'), are commonly used to model sewer pipe degradation at the pipeline level, and we want to evaluate which suits better our case study. To calibrate the DTMCs, we define an optimization process using Sequential Least-Squares Programming to find the DTMC parameter that best minimizes the root mean weighted square error. Our results show that for our case study, there is no substantial difference between Chain `Multi' and `Single', but the latter has fewer parameters and can be easily trained. Our DTMCs are useful to compare the cohorts via the expected values, e.g., concrete pipes carrying mixed and waste content reach severe levels of surface damage more quickly compared to concrete pipes carrying rainwater, which is a phenomenon typically identified in practice.
L. A. Jimenez-Roa, T. Heskes, T. Tinga, H. Molegraaf, M. Stoelinga
2023-10-03T08:44:32Z
http://arxiv.org/abs/2310.01888v1
Deterioration modeling of sewer pipes via discrete-time Markov chains: A large-scale case study in the Netherlands ###### Abstract Sewer pipe network systems are an important part of civil infrastructure, and in order to find a good trade-off between maintenance costs and system performance, reliable sewer pipe degradation models are essential. In this paper, we present a large-scale case study in the city of Breda in the Netherlands. Our dataset has information on sewer pipes built since the 1920s and contains information on different covariates. We also have several types of damage, but we focus our attention on infiltrations, surface damage, and cracks. Each damage has an associated severity index ranging from 1 to 5. To account for the characteristics of sewer pipes, we defined 6 cohorts of interest. Two types of discrete-time Markov chains (DTMC), which we called Chain 'Multi' and 'Single' (where Chain 'Multi'contains additional transitions compared to Chain 'Single'), are commonly used to model sewer pipe degradation at the pipeline level, and we want to evaluate which suits better our case study. To calibrate the DTMCs, we define an optimization process using Sequential Least-Squares Programming to find the DTMC parameter that best minimizes the root mean weighted square error. Our results show that for our case study there is no substantial difference between Chain 'Multi' and 'Single', but the latter has fewer parameters and can be easily trained. Our DTMCs are useful to compare the cohorts via the expected values, e.g., concrete pipes carrying mixed and waste content reach severe levels of surface damage more quickly compared to concrete pipes carrying rainwater, which is a phenomenon typically identified in practice. Degradation modeling, discrete-time Markov chain, sewer pipe network, large-scale case study, reliability engineering. ## 1 Introduction Sewer network systems are an important part of the civil infrastructure required to achieve an adequate level of social and economic welfare. The management of these systems has become increasingly challenging due to the need to cope with limited budgets, environmental changes, uncertainty about network deterioration, and a lack of rigorous degradation analysis. This often leads to conservative approaches that result in the early replacement of sewer pipes. Thus, aiming at finding a good trade-off between maintenance costs and system performance, robust and reliable _sewer pipe degradation models_ are needed to prioritize pipes at high risk of failure for proactive maintenance, support decision making, and strategic rehabilitation planning (Scheidegger et al., 2011; Egger et al., 2013). Moreover, there is a need in the research community for sharing existing case studies aiming at increasing the evidence on sewer pipe degradation models (Tscheikner-Gratl et al., 2019). Concerning sewer pipe deterioration models, three types can be identified: those based on physics, artificial intelligence, and statistics. A detailed review of the different types of models used to predict the degradation of sewer networks is presented in Hawari et al. (2020). Physics-based models may be too complex to capture the complete degradation behavior, and artificial intelligence models require high computational costs and demands of data (Ana and Bauwens, 2010). Thus, given the limitations of these types of models, we center our attention on statistical methods. 
In particular, we are interested in Markov chain models, since they proved to be among the most reliable and widely used approaches to simulate sewer pipe deterioration (Kobayashi et al., 2012; Tscheikner-Gratl et al., 2019), and enable the modeling of sequential events, such as sewer pipe deterioration (Ana and Bauwens, 2010). Several types of Markov chains (MC) have been implemented for the modeling of sewer pipe networks; examples include discrete-time MC (Micevski et al., 2002; Baik et al., 2006), continuous-time MC (Lin et al., 2019), non-homogeneous MC (Le Gat, 2008), hidden MC (Kobayashi et al., 2012), and semi-MC (Scheidegger et al., 2011). As a first step, we decided to use discrete-time Markov chains (DTMCs) because these proved to be a straightforward approach to model degradation patterns associated with sewer pipe networks. Moreover, we are interested in two typical types of DTMCs (see Fig. 1) that we call Chains '_Multi_' and '_Single_', where the former contains additional transitions compared to the latter, and we evaluate which of them best suits our case study. Similar to Caradot et al. (2018), our goal with these DTMCs is to predict the probability for a pipe to be in a severity class for a certain type of damage, based on the pipe age and a set of numerical or categorical variables (called covariates) organized in 6 cohorts (i.e., groups of pipes with the same characteristics) of interest. Our research questions are both application-oriented (RQ1) and methodological (RQ2): **RQ1** how do the predefined cohorts compare in terms of deterioration rate? **RQ2** how can DTMCs assist in getting this insight, and how do Chains 'Multi' and 'Single' compare in terms of performance? The experimental evaluation is based on a large-scale case study in the city of Breda in the Netherlands, where we have information on sewer pipes built since the 1920s, including different covariates. We focus on three typical types of sewer pipe damage, namely _infiltration_, _surface damage_, and _cracks_. Each damage has an associated severity index ranging from 1 to 5. Our main contribution is to demonstrate the application of existing degradation models in a large-scale case study. The present work is a valuable step toward the development of an evidence-based asset management framework. The scripts and comparative figures can be found at zenodo.org/record/6535853. The structure of this paper is as follows. Section 2 provides the theoretical background on DTMCs. Section 3 presents our methodology. In Section 4 we present the case study, the experimental evaluation, and the main results. We discuss and conclude in Section 5. ## 2 Homogeneous discrete-time Markov chain A Markov chain is a stochastic process used to describe the deterioration of sewer pipes through condition states (Hawari et al., 2020). Among the different types of Markov chains, we adopt a discrete-time Markov chain (DTMC) because it is the simplest type suitable for modeling the degradation of sewer pipes (Micevski et al., 2002). A DTMC is a directed graph whose nodes are called states, and whose edges are called transitions. In our case, the states are the possible sewer pipe damage severities (see Fig. 1). When the transition probabilities remain constant over time, we talk about _homogeneous Markov chains_. The _time interval_ is a discrete time period (e.g., one year) representing a single transition (or step \(n\)). From Baik et al.
(2006), a Markov chain is a discrete-time stochastic process, where the probability of any future event depends only on the present state and is independent of the past states. The latter is known as the _Markov property_ and can be formally expressed in Eq. 1 for the states \(i_{0},i_{1},\ldots,i_{n},i_{n+1}\) and all \(n\geq 0\) as, \[P(X_{n+1}=i_{n+1}\,|\,X_{n}=i_{n},X_{n-1}=i_{n-1},\ldots,X_{0}=i_{0})=P(X_{n+1}=i_{n+1}\,|\,X_{n}=i_{n}), \tag{1}\] where \(X_{n}\) is the state of a sewer pipe at step \(n\). The DTMC assumes that the conditional probability does not change over time, i.e., for all the states \(i\) and \(j\) and all \(n\), \(P(X_{n+1}=j|X_{n}=i)\) is independent of \(n\), \[p_{ij}=P(X_{n+1}=j|X_{n}=i), \tag{2}\] where \(p_{ij}\) is the _transition probability_ that, given a system in state \(i\) at step \(n\), it will be in state \(j\) at step \(n+1\). Since \(p_{ij}\) is a probability, it is bounded in the closed interval \(0\leq p_{ij}\leq 1\). The transition probabilities are often expressed in a \(K\times K\) _transition probability matrix_ \(P\), where \(K\) is the total number of states in the DTMC, \[P=\begin{bmatrix}p_{11}&p_{12}&p_{13}&\ldots&p_{1K}\\ p_{21}&p_{22}&p_{23}&\ldots&p_{2K}\\ p_{31}&p_{32}&p_{33}&\ldots&p_{3K}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ p_{K1}&p_{K2}&p_{K3}&\ldots&p_{KK}\end{bmatrix} \tag{3}\] Here \(\sum_{j=1}^{K}p_{ij}=1\) for all \(i=1,2,\ldots,K\). The values \(p_{ij}\) in Fig. 1 are discussed in Section 4.2.4. The _initial state vector_ \(\bar{S}^{(0)}\) indicates the probability that the system is in the state \(k\) at the step \(n=0\) (where \(1\leq k\leq K,k\in\mathbb{N}\)), \[\bar{S}^{(0)}=\begin{bmatrix}S^{(0)}_{1},S^{(0)}_{2},S^{(0)}_{3},\ldots,S^{(0)}_{K}\end{bmatrix}^{T}, \tag{4}\] where \(\sum_{l=1}^{K}S^{(n)}_{l}=1\) at any step \(n\). In order to calculate the state probabilities associated with the \(n\)-th step, \(\bar{S}^{(n)}\), we apply the Chapman-Kolmogorov equation (Stewart, 2009), \[\bar{S}^{(n)}=\bar{S}^{(0)}P^{(n)}=\begin{bmatrix}S^{(n)}_{1},S^{(n)}_{2},S^{(n)}_{3},\ldots,S^{(n)}_{K}\end{bmatrix}^{T} \tag{5}\] In Eq. 5, the \(n\)-step transition probability matrix \(P^{(n)}\) is obtained by multiplying the matrix \(P\) by itself \(n\) times, i.e., \(P^{(n)}=P^{n}\). We can compute the expected severity \(E^{n}\) as done in Baik et al. (2006), by multiplying the state probability vector \(\bar{S}^{(n)}\) at a step \(n\) by the severity class vector \(\bar{C}=[1,2,3,4,5]\). Here \(1\) indicates pristine condition and \(5\) is the maximal severity that can possibly be assigned to a type of damage. \[E^{n}=\bar{S}^{(0)}P^{(n)}\bar{C}^{T} \tag{6}\] We adopt two typical chains to model the degradation of sewer pipes via DTMCs (Ana and Bauwens, 2010) (Fig. 1), where: 1. the states \(k\) are associated with the damage severities, i.e., the classes in \(\bar{C}\); 2. for both chains, transitions can only occur from the current state to worse states (i.e., \(p_{ij}=0\) for \(i>j\)), since it is impossible for a pipe to improve its condition without interventions (i.e., repairs); 3. only the last state in both chains is absorbing, that is \(p_{55}=1.00\); 4. the Chain 'Multi' (Fig. 1.a) allows transitions between consecutive and non-consecutive states (i.e., \(0\leq p_{ij}\leq 1\) for all \(i\leq j\)) (Micevski et al., 2002; Baik et al., 2006; Scheidegger et al., 2011); 5. the Chain 'Single' (Fig. 1.b) is a simplified version of Chain 'Multi', and allows transitions only between consecutive states (i.e., \(0\leq p_{ij}\leq 1\) for all \(i\) and \(j=i+1\), and \(p_{ij}=0\) for \(j>i+1\)) (Le Gat, 2008; Scheidegger et al., 2011; Lin et al., 2019). Figure 1: Example of DTMC modeling the degradation of sewer pipes considering five degradation states (i.e., \(K=5\)) and no repairs. (a) Chain 'Multi'; (b) Chain 'Single'.
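The projection machinery of Eqs. 5 and 6 is compact enough to be sketched in a few lines. The following is a minimal NumPy illustration of the Chapman-Kolmogorov projection and the expected severity; the transition probabilities below are placeholder values for a 'Single'-type chain, not calibrated ones.

```python
import numpy as np

# Illustrative 5-state 'Single' chain: only i -> i+1 transitions are
# allowed and state 5 is absorbing; the values are placeholders,
# not calibrated probabilities.
P = np.array([
    [0.97, 0.03, 0.00, 0.00, 0.00],
    [0.00, 0.95, 0.05, 0.00, 0.00],
    [0.00, 0.00, 0.96, 0.04, 0.00],
    [0.00, 0.00, 0.00, 0.98, 0.02],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
S0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all pipes pristine at step 0
C = np.array([1, 2, 3, 4, 5])             # severity class vector

def state_probabilities(S0, P, n):
    """Chapman-Kolmogorov projection (Eq. 5): S^(n) = S^(0) P^n."""
    return S0 @ np.linalg.matrix_power(P, n)

def expected_severity(S0, P, n, C):
    """Expected severity class at step n (Eq. 6)."""
    return state_probabilities(S0, P, n) @ C

# With dt = 3 years, step n = 16 corresponds to a pipe age of ~49.5 years.
print(state_probabilities(S0, P, 16))
print(expected_severity(S0, P, 16, C))
```

Because the matrix is upper triangular with an absorbing worst state, probability mass can only flow toward higher classes, so the expected severity is non-decreasing in \(n\).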
## 3 Methodology Our goal is to describe the degradation of sewer pipes based on a historic set of inspection data. To achieve this we calibrate discrete-time Markov chains (DTMCs) that quantify the probability of a pipe (from the historic data set) being in a condition class given the age of the pipe. These DTMCs represent cohorts of pipes, i.e., they are trained with data from pipes that share similar characteristics. An overview of the four steps we follow is: 1. pre-processing the data (cleaning); 2. definition of cohorts; 3. creating a _discretized table_ per cohort, which is the input data to calibrate the DTMC; 4. calibration of the DTMC. The details about each of the steps are provided in the following sections. ### data pre-processing Our work is based on the case study described in Section 4.1. For each sewer pipe the data set contains information about inspections carried out over the years, including (i) inspection's unique identifiers, (ii) inspection date, (iii) damage size, (iv) damage codes (unique identifiers of a type of damage), (v) damage class (this is what the severity vector \(\bar{C}\) in Eq. 6 describes), and (vi) relative position of the damage. It is worth mentioning that each damage code has an associated damage class. We ignore data associated with pipes built before 1920 and those that have a missing/erroneous construction year. ### definition of cohorts In order to account for explanatory variables (other than pipe age) in deterioration modeling, it is necessary to construct cohorts (i.e., groups of sewer pipes that share similar characteristics) and calibrate a DTMC for each cohort. Table 1 presents the 6 cohorts of interest and the number of pipes with certain characteristics (as a fraction of the total number of inspected pipes). We note that a drawback of defining cohorts is that it could result in small subsets (e.g., Cohort PR), which might not be statistically representative. \begin{table} \begin{tabular}{l l r} \hline \hline Cohort name & Description & Fraction (\%) \\ \hline CMW & Material: Concrete \& Content: Mixed and Waste & \\ CR & Material: Concrete \& Content: Rainwater & \\ PMW & Material: PVC \& Content: Mixed and Waste & \\ PR & Material: PVC \& Content: Rainwater & \\ CdL & Material: Concrete \& Width \(\leq\) 500 mm & 50.16 \\ CdG & Material: Concrete \& Width \(\geq\) 500 mm & 22.02 \\ \hline \hline \end{tabular} \end{table} Table 1: Cohorts of interest and fraction (\%) of the total number of inspected pipes (25'507). ### discretized table We build a _discretized table_ per cohort (Table 2) and damage code; this is the input to calibrate the DTMCs. For this, we identify the _state of a sewer pipe_ as the _maximum damage class found during an inspection_ for the damage codes of interest. This approach is conservative and is important in determining which pipes should be repaired in the near future. To build Table 2, we define a time interval \(\Delta t\) and make groups of pipes by age at the time of inspection, then count, per group, how many pipes were found in each damage class and normalize with respect to the total number of pipes in the group. For example, in Table 2, for the Cohort CMW, damage code _BAF_ (surface damage), and \(\Delta t=3\) years, there were \(2^{\prime}339\) pipes with \(48\leq\) PipeAge \(<51\) years at the time of inspection. Here the _count_ vector (\(c\)) indicates the total number of pipes found within a PipeAge interval. The time \(t\) is calculated as the mean value of the PipeAge interval, and \(\hat{n}\) is the step associated with the discretization. Thus, the step \(\hat{n}=16\) is associated with the time interval \(48\leq\) PipeAge \(<51\) at \(t=49.5\) years. In this interval, 35% of the pipes were in State 1 (i.e., \(\hat{S}_{k=1}^{(\hat{n}=16)}=0.35\)), 50% in State 2 (i.e., \(\hat{S}_{k=2}^{(\hat{n}=16)}=0.50\)), and so on. Therefore, \(\hat{S}_{k}^{(\hat{n})}\) is a \(|\hat{n}|\times K\) matrix and represents the _ground truth_ used to calibrate the DTMCs. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Count (\(c\)) & PipeAge (years) & Time (\(t\)) & Step (\(\hat{n}\)) & \(k=1\) & \(k=2\) & \(k=3\) & \(k=4\) & \(k=5\) \\ \hline 832 & [0,3) & 1.5 & 0 & 0.95 & 0.03 & 0.01 & 0.01 & 0.00 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ 2'339 & [48,51) & 49.5 & 16 & 0.35 & 0.50 & 0.12 & 0.02 & 0.01 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ 64 & [75,78) & 76.5 & 25 & 0.44 & 0.20 & 0.28 & 0.05 & 0.03 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \hline \end{tabular} \end{table} Table 2: Discretized table \(\hat{S}_{k}^{(\hat{n})}\) for cohort CMW, surface damage (BAF), with \(\Delta t=3\) years; the last five columns report \(\hat{S}_{k}^{(\hat{n})}\). The sum of the counts (\(c\)) yields the total number of pipes in the network. Notice that \(c\) varies in each PipeAge interval. We consider this when calibrating the DTMCs by defining a _weight_ vector. It is worth mentioning that we ignore the _right-censoring_ in our data set; this means that we assume that the sewer pipe has just moved to the condition observed during the inspection, which may not be the case, as the pipe could have entered that condition prior to the inspection.
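The grouping-and-normalization procedure just described maps directly onto a few dataframe operations. Below is a minimal pandas sketch of the construction of the discretized table; the column names 'pipe_age' and 'state' are hypothetical stand-ins for the inspection records.

```python
import pandas as pd

def discretized_table(df, dt=3.0, K=5):
    """Build the per-cohort discretized table of Section 3.3.

    `df` is assumed to hold one row per inspection, with hypothetical
    columns 'pipe_age' (years at inspection) and 'state' (worst damage
    class observed, 1..K).
    """
    step = (df["pipe_age"] // dt).astype(int)             # step n_hat
    counts = pd.crosstab(step, df["state"])               # pipes per (step, state)
    counts = counts.reindex(columns=range(1, K + 1), fill_value=0)
    c = counts.sum(axis=1)                                # count vector c
    table = counts.div(c, axis=0)                         # normalized shares S_hat
    table["count"] = c
    table["t"] = (table.index + 0.5) * dt                 # mid-interval pipe age
    return table
```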
### calibration of the DTMC To calibrate a DTMC we implement an optimization process where the objective is to find the set of parameters in the DTMC, namely \(\bar{S}^{(0)}\) (Eq. 4) and \(P\) (Eq. 3), that minimize the _Root Mean Weighted Square Error_ (\(Err\)) (Eq. 8). For this we first compute a weight vector \(\bar{w}\), which results from the normalization of the counts (\(\bar{c}\)) (Table 2) with respect to their maximum value, \[\bar{w}=\frac{\bar{c}}{\max(\bar{c})}, \tag{7}\] \(Err\) is computed as the difference between the discretized table \(\left(\hat{S}_{k}^{(\hat{n})}\right)\) and the predictions made with the DTMC for the same steps \(\hat{n}\) using Eq. 5 \(\left(\bar{S}_{k}^{(\hat{n})}\right)\), \[Err=\sqrt{\frac{\sum_{\hat{n},k}\left[\left(\bar{S}_{k}^{(\hat{n})}-\hat{S}_{k}^{(\hat{n})}\right)^{2}\,\bar{w}_{\hat{n}}\right]}{|\hat{n}|\times K}} \tag{8}\] The minimization of \(Err\) is carried out through the _Sequential Least-Squares Programming_ (SLSQP) algorithm available in SciPy (Virtanen et al., 2020), and we use the default parameters. All the optimization parameters are bounded in the closed interval \([0,1]\), and are always initialized in the same form: (i) in \(\bar{S}^{(0)}\), \(\bar{S}_{k=1}^{(0)}=1\) and \(\bar{S}_{k>1}^{(0)}=0\); (ii) \(P\) is the identity matrix. We adopt the constraints described at the end of Section 2 for both Markov chains. We calibrate the DTMCs by randomly selecting 50% of the available data per cohort using repeated half-sample bootstrap (Saigo et al., 2001). After convergence, the output of the calibration process is the \(\bar{S}^{(0)}\) and \(P\) with the smallest error \(Err\) for a given \(\Delta t\).
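A sketch of this calibration loop with SciPy's SLSQP is given below for Chain 'Single', whose free parameters are the initial state vector and the \(K-1\) probabilities \(p_{i,i+1}\). As a simplification with respect to the procedure above, the initial state vector is renormalized inside the objective rather than constrained explicitly.

```python
import numpy as np
from scipy.optimize import minimize

def build_P_single(p_next, K=5):
    """Chain 'Single': only i -> i+1 transitions; the last state is absorbing."""
    P = np.eye(K)
    for i, p in enumerate(p_next):            # p_next holds the K-1 off-diagonals
        P[i, i], P[i, i + 1] = 1.0 - p, p
    return P

def err(theta, steps, S_hat, w, K=5):
    """Root mean weighted square error of Eq. 8."""
    S0, p_next = theta[:K], theta[K:]
    S0 = S0 / max(S0.sum(), 1e-12)            # keep S0 a probability vector
    P = build_P_single(p_next, K)
    pred = np.array([S0 @ np.linalg.matrix_power(P, int(n)) for n in steps])
    return np.sqrt(np.sum((pred - S_hat) ** 2 * w[:, None]) / (len(steps) * K))

def calibrate(steps, S_hat, w, K=5):
    # Initialization as in the text: pristine S0 and P equal to the identity.
    theta0 = np.concatenate([np.eye(K)[0], np.zeros(K - 1)])
    return minimize(err, theta0, args=(steps, S_hat, w, K),
                    method="SLSQP", bounds=[(0.0, 1.0)] * len(theta0))
```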
## 4 Experimental Evaluation ### Case study Our case study consists of a large-scale sewer pipe network in the city of Breda. The network is composed of 25'723 (1'052 km) sewer pipes, mostly built from 1950 onwards. Most of the pipes are made out of concrete (72%) and PVC (27%), have rounded (94%) and ovoid (5.4%) shapes, are used for transport (98%), are less than 170 meters long (99.9%), have a diameter up to 1 meter (98.3%), and have different content such as mixed (63%), rainwater (21%), and waste (16%). Through visual inspections conducted in different sections along the sewer pipe length and based on the European standard EN 13508 (EN13508, 2012; EN13508-2, 2011), the damage codes and classes present in the sewer pipe (if any) are recorded. A total of 29'667 inspections are registered for 25'507 sewer pipes. We observe that about 88% of the inspections were carried out between 2005 and 2016. About 87% of the pipes were inspected at most once, 12% twice, and less than 1% three or more times. As recommended by domain experts, we focus our attention on the damage codes (d.c.): infiltration (BBF), surface damage (BAF), and crack (BAB), which were observed respectively in 44%, 35% and 18% of the inspections. ### Results We compare the cohorts and chains visually. For this, we use the results in Fig. 2. In order to build these figures, we train a thousand DTMCs using repeated half-sample bootstrap (Section 3.4), with the aim of accounting for uncertainty. Fig. 2 presents a few of our results, and the figures for all the analyses can be found in Zenodo1. Footnote 1: All comparative figures and scripts are available at zenodo.org/record/6535853 Fig. 2.(a)-(d) present the probability of being in the state \(k\) given PipeAge, cohort, chain, and damage code. The markers correspond to the discretized table \(\hat{S}_{k}^{(\hat{n})}\) (with \(\Delta t=3\) years) and their size visualizes the counts. The dashed lines indicate a 95th percentile _confidence interval_ calculated from the projections of the thousand calibrated DTMCs, and the solid line is the median value. The figures comparing cohorts and chains can be found in Zenodo1 in the folders _/comparing_cohorts_ and _/comparing_chains_, respectively. Fig. 2.(e)-(g) present the expectations computed with Eq. 6, which correspond to the expected (average) severity class for a given damage code at a certain PipeAge; the dashed and solid lines are the confidence interval and the median value, respectively. We use these results to ease the comparison between cohorts; they can be found in Zenodo1 in _/comparing_expectations_.
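Under the stated assumptions, the confidence bands just described can be sketched by repeatedly recalibrating on random half-samples; the snippet below reuses the hypothetical discretized_table, build_P_single, and calibrate helpers from the previous sketches.

```python
import numpy as np

def bootstrap_projections(df, n_boot=1000, horizon=42, dt=3.0, K=5):
    """Repeated half-sample bootstrap of the DTMC projections."""
    rng = np.random.default_rng(0)
    curves = []
    for _ in range(n_boot):
        half = df.sample(frac=0.5, random_state=int(rng.integers(1 << 31)))
        tab = discretized_table(half, dt, K)
        steps = tab.index.to_numpy()
        S_hat = tab[list(range(1, K + 1))].to_numpy()
        w = (tab["count"] / tab["count"].max()).to_numpy()
        theta = calibrate(steps, S_hat, w, K).x
        S0 = theta[:K] / theta[:K].sum()
        P = build_P_single(theta[K:], K)
        curves.append([S0 @ np.linalg.matrix_power(P, n) for n in range(horizon)])
    curves = np.asarray(curves)                # shape (n_boot, horizon, K)
    # Median and a 95% band across the bootstrap replicates.
    return np.percentile(curves, [2.5, 50.0, 97.5], axis=0)
```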
#### 4.2.1 Comparing Cohorts CMW and CR Fig. 2.(a)-(b) show that older concrete pipes (25 - 75 years) carrying Mixed and Waste content (Cohort CMW) have a higher probability of being at more severe _surface damage_ levels (BAF) than concrete pipes carrying Rainwater (Cohort CR). We can also see the same behavior in Fig. 2.(e), where the expected severity class of Cohort CMW is higher than that of Cohort CR for any PipeAge, and where the maximum expected class in a 125-year time horizon is \(k=3\). For _cracks_ (BAB), we observe complex changes in the degradation pattern that were not properly captured by the DTMCs (e.g., Cohort CMW in Fig. 2.(c)). In addition, we find that it is unlikely to find cracks in states \(k=2,3,4\)1. For _infiltration_ (BBF), Fig. 2.(d) shows that there is an initial probability of roughly 25% that pipes in Cohorts CMW and CR experience at least some mild infiltrations (i.e., \(k>1\)). Figure 2: Comparisons between different cohorts and chains, for various degradation states and damage types. Find all the figures at zenodo.org/record/6535853. #### 4.2.2 Comparing Cohorts PMW and PR Fig. 2.(f) shows that Cohorts PMW (PVC pipes carrying Mixed and Waste content) and PR (PVC pipes carrying Rainwater) present a similar pattern under _surface damage_ (BAF), with a maximum expected severity in a 125-year time horizon of \(k<2\). Under the condition of cracks and infiltration, we did not find a significant difference between cohorts1. #### 4.2.3 Comparing Cohorts CdL and CdG Fig. 2.(g) compares Cohorts CdL (Concrete pipes and Width\(<\)500 mm) and CdG (Concrete pipes and Width\(\geq\)500 mm), and shows that narrow concrete pipes appear to have more severe _surface damage_ (BAF) than the wider pipes. The same conclusion holds under the condition of _cracks_ (BAB)1. Regarding _infiltration_ (BBF), wider pipes seem to reach the probabilities of being in a more severe state faster; however, there is large uncertainty1. #### 4.2.4 Comparing Chains 'Multi' and 'Single' When comparing Chains 'Multi' and 'Single', for most of the cases we did not observe a significant difference between the projections made with one or the other. However, for a few cases, e.g., Cohort CdG with surface damage (BAF) in Fig. 2.(h), Chain 'Single' transitions faster to more severe states compared to Chain 'Multi' when PipeAge \(>\) 60 years (find the associated values of \(P\) in Fig. 1; we used \(\Delta t=3\) years). We suspect this is because in Chain 'Multi' the diagonal values of \(P\) can converge to values (very) close to \(1\) (e.g., Fig. 1.(a), \(p_{3,3}=p_{4,4}=0.9999\)), making these states (almost) absorbing, something that was not observed for Chain 'Single' (Fig. 1.(b)). ## 5 Discussion and Conclusions We model sewer pipe degradation in a large-scale case study in the city of Breda by means of discrete-time Markov chains (DTMCs). We describe a methodology to calibrate DTMCs, and visually1 compare degradation patterns across cohorts (i.e., groups of sewer pipes sharing similar characteristics) for three types of damage, namely infiltration, surface damage, and cracks.
We find our DTMCs useful for projecting and estimating future degradation states of sewer pipes and comparing cohorts, for example, across expected severity classes. Using this method we conclude, for example, that concrete pipes carrying Mixed and Waste content degrade faster than those carrying Rainwater, which is a phenomenon typically identified in practice. Comparing the DTMC types, we find that the chains 'Multi' and 'Single' have similar performance. The chain 'Single' is simpler and can be more easily calibrated because it has fewer parameters, and it is suitable for this case study. As for the 'Multi' chain, it requires a better implementation to avoid the formation of absorbing intermediate states. In terms of limitations and future directions, when computing the discretized table, we assume that the condition of the sewer pipes is observed at the time of inspection, which may not be the case due to right-censoring (i.e., damage of a certain severity happens before the inspection). This makes our DTMCs biased, predicting failures later than they actually occur. To address this, we plan to explore _multi-state survival models_ for _interval-censored data_ (van den Hout, 2016), which leverage survival analysis and better account for censored data. Additionally, we will improve the way the parameters of the DTMCs are inferred by accounting for the covariates via _Maximum Likelihood Estimation_. In this way, we will not need to discretize our data based on time intervals. ## Acknowledgement This research has been partially funded by NWO under the grant PrimaVera ([https://primavera-project.com](https://primavera-project.com)) number NWA.1160.18.238, and has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101008233.
2306.12874
Charting nanocluster structures via convolutional neural networks
A general method to obtain a representation of the structural landscape of nanoparticles in terms of a limited number of variables is proposed. The method is applied to a large dataset of parallel tempering molecular dynamics simulations of gold clusters of 90 and 147 atoms, silver clusters of 147 atoms, and copper clusters of 147 atoms, covering a plethora of structures and temperatures. The method leverages convolutional neural networks to learn the radial distribution functions of the nanoclusters and to distill a low-dimensional chart of the structural landscape. This strategy is found to give rise to a physically meaningful and differentiable mapping of the atom positions to a low-dimensional manifold, in which the main structural motifs are clearly discriminated and meaningfully ordered. Furthermore, unsupervised clustering on the low-dimensional data proved effective at further splitting the motifs into structural subfamilies characterized by very fine and physically relevant differences, such as the presence of specific punctual or planar defects or of atoms with particular coordination features. Owing to these peculiarities, the chart also enabled tracking of the complex structural evolution in a reactive trajectory. In addition to visualization and analysis of complex structural landscapes, the presented approach offers a general, low-dimensional set of differentiable variables which has the potential to be used for exploration and enhanced sampling purposes.
Emanuele Telari, Antonio Tinti, Manoj Settem, Luca Maragliano, Riccardo Ferrando, Alberto Giacomello
2023-06-22T13:35:34Z
http://arxiv.org/abs/2306.12874v1
# Charting nanocluster structures via convolutional neural networks ###### Abstract A general method to obtain a representation of the structural landscape of nanoparticles in terms of a limited number of variables is proposed. The method is applied to a large dataset of parallel tempering molecular dynamics simulations of gold clusters of 90 and 147 atoms, silver clusters of 147 atoms, and copper clusters of 147 atoms, covering a plethora of structures and temperatures. The method leverages convolutional neural networks to learn the radial distribution functions of the nanoclusters and to distill a low-dimensional chart of the structural landscape. This strategy is found to give rise to a physically meaningful and differentiable mapping of the atom positions to a low-dimensional manifold, in which the main structural motifs are clearly discriminated and meaningfully ordered. Furthermore, unsupervised clustering on the low-dimensional data proved effective at further splitting the motifs into structural subfamilies characterized by very fine and physically relevant differences, such as the presence of specific punctual or planar defects or of atoms with particular coordination features. Owing to these peculiarities, the chart also enabled tracking of the complex structural evolution in a reactive trajectory. In addition to visualization and analysis of complex structural landscapes, the presented approach offers a general, low-dimensional set of differentiable variables which has the potential to be used for exploration and enhanced sampling purposes. ## 1 Introduction Finite-size aggregates - of atoms, molecules or colloidal particles - can present a much broader variety of structures than infinite crystals, because they are not constrained by translational invariance on an infinite lattice. For example, the _structural landscape_ of small metal particles that consist of a few tens to a few hundreds of atoms is much richer than that of their bulk material counterparts [1, 2, 3, 4]. Several factors cooperate in rendering this variegated scenario: first of all, possible structures are not limited to fragments of bulk crystals, but they include non-crystalline motifs, such as icosahedra or decahedra, which contain fivefold symmetries that are forbidden in infinite crystals [5]. Moreover, for small sizes, also planar, cage-like, and amorphous clusters have been observed [6, 7, 8], along with hybrid structures that exhibit features associated to more than one motif within the same cluster [9]. Adding to this already complex scenario, metal nanoclusters are very likely to present defects, of which there are many different types. Volume defects, for instance stacking faults and twin planes, are frequently observed in experiments and simulations [10, 11, 12, 13, 14]. Furthermore, surface reconstructions are known to occur in several clusters [15, 16, 17, 18], and internal vacancies can also be stabilized in some cases [19, 20]. Owing to the complexity of the structural landscape of nanoclusters, there is an urgent need for a robust classification method that can separate their structures into physically meaningful groups, possibly producing an informative chart of the structural landscape in terms of a small number of collective variables (CVs).
In addition to providing a low-dimensional representation of the structural landscape, CVs are an essential tool of techniques to enhance sampling in configuration space, such as umbrella sampling [21], metadynamics [22], temperature-accelerated MD [23], and many others. A common trait of most enhanced sampling approaches is the requirement that the chart be differentiable with respect to the atomic coordinates, _i.e._, that the CVs are differentiable functions of the coordinates. Machine learning (ML) is emerging as an invaluable analysis tool in the field of nanoclusters, as it allows one to efficiently navigate the complexity of the structural landscape by extracting meaningful patterns from large collections of data. ML has already found application in microscopy image recognition [24], dimensionality reduction and exploration of potential energy surfaces [25], structural recognition [26, 25, 27], characterization of the local atomic environment [28, 29], and machine-learnt force fields for metals [30]. One of the main challenges in the study of nanoclusters concerns the identification of descriptors that can discriminate the various structural classes. The availability of such a tool is crucial for navigating the landscape of structures generated during simulations. In this context, the histogram of the interatomic distances, _i.e._, the radial distribution function (RDF), has been used to study solid-solid transitions in metallic/bimetallic clusters via metadynamics [31], owing to its capability to encode structural information. Another widely used approach is Common Neighbor Analysis (CNA) [32], a tool which relies on analyzing local atomic coordination signatures for individual atoms [33, 26]. Often, arbitrary rules [33, 9] are then applied to the CNA signatures of the atoms as a means to assign the whole nanocluster to a structural family. Albeit widely used and informative, CNA still presents certain drawbacks. First, CNA classifications are based on the arrangement of first neighbors around any given atom, and therefore they do not directly encode information on the overall shape of the nanoparticles. In addition, even though CNA can be used for charting the structural landscape and for unsupervised clustering to obtain very refined groupings of structures (_e.g._, along the lines developed by Roncaglia & Ferrando [26]), the resulting chart is non-differentiable. In this work, we propose to use a descriptor capable of capturing in full generality the most important structural features of metal nanoclusters, the RDF, and to feed it to an artificial neural network (ANN) that is trained to perform an unsupervised dimensionality reduction, yielding a low-dimensional, informative representation where data are distributed according to their structural similarities. We start off by showing that RDFs are excellent descriptors of nanocluster structures, given their capability to describe both the local [34] and global order together with the overall shape of diverse systems, and then we proceed to discuss the results obtained by using convolutional ANNs to reduce the dimensionality of the original descriptors. The combination of RDFs and ANNs allowed us to learn a differentiable map from the atomic positions to a low-dimensional (3D) chart of the structural features of nanoclusters of various sizes and metals. The employed datasets contain hundreds of thousands of unique structures obtained by parallel-tempering molecular dynamics (PTMD) simulations [9, 35].
It was possible to classify this wealth of structures in an unsupervised manner, reproducing the well-known CNA classes and, additionally, distinguishing subtle features present in metal nanoclusters, including the location of twinning planes and stacking faults, surface defects, central vacancies in icosahedra, and intermediate/distorted structures. The chart also allowed us to track and describe in detail dynamical structural transformations. Additional advantages of the present chart are its transferability and robustness, which were demonstrated using independent datasets of metal clusters of varying size and chemical nature, together with its differentiability (and hence suitability for CV-based exploration and biasing in molecular dynamics). ## 2 Results and discussion Our goal is to gain insights into the structural complexity of metal nanoclusters by means of a differentiable map of the configuration space onto a low-dimensional, yet sufficiently informative manifold (the chart). The method consists in generating, for every cluster configuration in the dataset, a set of high-dimensional descriptors, the RDFs, which are known to describe both the local structural order and the global shape, and in distilling this information into a low-dimensional, highly compressed form. The specific ANN architecture we chose to perform the unsupervised dimensionality reduction is that of an autoencoder (AE) [36] endowed with convolutional layers, which render it highly specialized at learning from numerical sequences [37]. A dimensionality reduction step follows the convolutions, yielding a physically informed three-dimensional (3D) chart of the structural landscape of our dataset, which allows one to navigate and easily understand it. Finally, we apply a clustering technique to the 3D chart to gauge its quality and to identify different structural families. AEs constitute a particular class of ANNs that is highly specialized in the unsupervised dimensionality reduction of data [36]. AEs are designed to reproduce the input while forcing the data through a bottleneck with severely reduced dimensionality (Fig. 1). In this way, the network needs to learn, in the first section of the network (encoder), an efficient representation of the data, in such a way that the information can then be reconstructed by the second half of the network (decoder) with sufficient accuracy. The quality of the reconstruction is measured by a loss function that is also used in training the network. Convolutional layers, which are specialized at learning from ordered sequences, are adopted in the AE presented here, because discretized RDFs are by all means sequences. They work by applying different kernels that slide along the data, allowing the recognition of local features and patterns, which makes them well suited to the analysis of inputs like signals (using 1D convolutional kernels) or images (2D kernels). Moreover, the connections between the nodes and the related parameters are considerably reduced as compared to the fully connected layers used in standard ANNs, which decreases the computational cost while allowing for better performance. Figure 1: Simple sketch of the autoencoder architecture, showing how encoder and decoder meet at a low-dimensional (3D) bottleneck. In order to test the method, we took advantage of the large dataset of nanocluster structures produced by the group [9, 35] via parallel tempering molecular dynamics (PTMD) for gold, silver, and copper nanoclusters of different sizes.
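To make the architecture concrete, the following is a minimal PyTorch sketch of a convolutional AE with a 3D bottleneck in the spirit of Fig. 1. The RDF mesh (256 points), channel counts, and kernel sizes are illustrative assumptions; the actual architecture parameters are deferred to the paper's Supporting Information.

```python
import torch
import torch.nn as nn

class RDFAutoencoder(nn.Module):
    """Sketch of a 1D convolutional AE with a 3D bottleneck (cf. Fig. 1)."""
    def __init__(self, n_bins=256, latent_dim=3):
        super().__init__()
        ch = [1, 8, 16, 32, 64, 128]            # illustrative channel counts
        enc = []
        for i in range(5):                      # five blocks: conv + ReLU + BN
            enc += [nn.Conv1d(ch[i], ch[i + 1], kernel_size=5, stride=2, padding=2),
                    nn.ReLU(),
                    nn.BatchNorm1d(ch[i + 1])]
        self.encoder_cnn = nn.Sequential(*enc)
        self.ch_last = ch[-1]
        self.n_down = n_bins // 2 ** 5          # length after 5 stride-2 convs
        flat = self.ch_last * self.n_down
        self.to_latent = nn.Linear(flat, latent_dim)
        self.from_latent = nn.Linear(latent_dim, flat)
        dec = []
        for i in reversed(range(5)):            # mirrored deconvolutional blocks
            dec.append(nn.ConvTranspose1d(ch[i + 1], ch[i], kernel_size=5,
                                          stride=2, padding=2, output_padding=1))
            if i > 0:
                dec += [nn.ReLU(), nn.BatchNorm1d(ch[i])]
        self.decoder_cnn = nn.Sequential(*dec)

    def encode(self, x):                        # x: (batch, 1, n_bins)
        return self.to_latent(self.encoder_cnn(x).flatten(1))   # the three CVs

    def forward(self, x):
        h = self.from_latent(self.encode(x)).view(-1, self.ch_last, self.n_down)
        return self.decoder_cnn(h)

model = RDFAutoencoder()
x = torch.randn(4, 1, 256)                      # a batch of four discretized RDFs
loss = nn.MSELoss()(model(x), x)                # reconstruction loss for training
```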
In the next section we discuss in detail the results obtained for the most challenging case, a gold cluster of 90 atoms (Au\({}_{90}\)), while results relative to other metals and sizes will be shown in later sections. ### Structural landscape of Au\({}_{90}\) Gold nanoclusters represent an ideal test case, owing to the broad variety of structures [6, 7, 8, 9, 15] they present, which include face-centered-cubic (fcc) lattice, twins, icosahedra (Ih), and decahedra (Dh). In the following, nanoclusters will be broadly classified into such standard structural families by CNA (in addition to the mix and amorphous classes), as used by Settem et al. [9], with the aim of having an independent benchmark for our unsupervised study. Here we focus on a small gold nanocluster, Au\({}_{90}\), which is characterized by an extremely challenging structural landscape, owing to the large fraction of surface atoms. In particular, we chart a set of Au\({}_{90}\) configurations extracted from PTMD simulations [9] exploring a total of 35 temperatures ranging from 250 K to 550 K. Starting from an initial set of 921,600 atom configurations, we performed a local minimization and filtered out duplicates, reducing the dataset to 49,016 independent configurations. As previously mentioned, RDFs were chosen because they are general descriptors of short and long range order [38, 39] that are invariant with respect to rototranslation and permutation of the atom coordinates. The aptness of RDFs as structural descriptors is well demonstrated by Fig. 2, in which the RDFs of all CNA classes (fcc, twin, Dh, Ih, mix, and amorphous) are well separated. We will show in the following that this descriptive power also applies to other metals and nanocluster sizes, which actually have a less rich structural landscape. However, a major drawback of using a probability distribution as a descriptor, even in its discretized version, is its high dimensionality. Our approach to provide an efficient charting of the structural landscape of metal nanoclusters, _i.e._, a low-dimensional representation, relies therefore on a dimensionality reduction step. Figure 2: Radial distribution function families for Au\({}_{90}\). Colors reflect the cluster structure classification provided by CNA. Blue is used for Dh, green for Twin, red for Fcc, orange for Ih, purple for Mix, and pink for Amorphous. Shaded areas represent intervals containing 90 % of the data for each CNA label, with the lower boundary representing the 0.05 quantile of the RDF population and the upper boundary the 0.95 quantile. A large number of RDFs, corresponding to individual PTMD-derived structures, are used to train an autoencoder (AE), which automatically learns to compress the high-dimensional RDF information to a 3D latent representation (Fig. 1). Our AE is composed of an input and an output layer, and a central block, comprising the bottleneck layer, formed by three fully-connected layers, while the cores of the encoder and the decoder are formed by convolutional layers (Fig. 1). The training was run feeding the AE with the RDF dataset (49,016 independent data), split into training and validation sets; the mean squared error (MSE) between the output and the input RDF is used as the loss function. We chose to adopt a latent space dimensionality of 3. This choice allowed for better performance in terms of the loss function as compared to higher compressions, while still allowing for a convenient visual representation.
We refer to the Supporting Information for a comparison of the results obtained varying the dimensionality of the latent space. The 3D chart obtained by the AE is shown in Fig. 3 with datapoints colored by their CNA label. This representation clearly indicates how each structural family is grouped in separate regions of the chart and how their spatial ordering and distances reflect affinities among these families: similar structures are placed close together (_e.g._, fcc and twin), while structures that share common features occupy intermediate regions (_e.g._, the twin region is interposed between fcc and Dh). Overall, the obtained chart allows for a physically meaningful representation of the structures. The scatter in the data suggests that the resolution allowed by the CNA summary labels is not fully conclusive, and that further analysis can allow for a better understanding of the physical information encoded in the distribution of the structures inside the latent space and, consequently, a finer discrimination of different families of structures. In order to increase the structural resolution and to gain deeper insights into the physical information encoded in the latent space, we applied a clustering technique to identify meaningful and coherent regions in the chart. In particular, we chose a non-parametric technique known as mean shift [40]. Applying this method to the 3D chart of Fig. 4 was justified not only by the non-parametric nature of the clustering technique but also by its aptness at dealing with clusters of different sizes and shapes. The only input variable required by mean shift is the bandwidth, which dictates the resolution of the analysis, with smaller bandwidths leading to a more detailed parceling of the data. We chose a bandwidth that yields a robust clustering of the chart with sufficient detail, as discussed in the Supporting Information. Our analysis resulted in a robust discrimination of 27 major regions for the \(\text{Au}_{90}\) chart, corresponding to 27 different major structural families, as reported in Fig. 4. From the figure it is immediately apparent how the mean shift classification is able to distinguish and split clusters that belong to spatially separated regions of the chart, properly reflecting the ordering of the data. Figure 3: Visualization of the 3D chart generated via the convolutional AE for the \(\text{Au}_{90}\) dataset, from different perspectives. Individual points refer to a given \(\text{Au}_{90}\) configuration in the dataset mapped according to their latent space representation. The three latent coordinates are referred to as _CVs_. Points are colored following their (independent) CNA label classification; the color code is the same used in Fig. 2. Figure 4: A) Representative samples for each of the 27 structural families identified via application of the mean shift clustering algorithm on the latent space representation of the Au\({}_{90}\) dataset. These 27 classes were subsequently grouped into 7 bigger families by similarity. Atom colors refer to their coordination: green represents atoms with fcc coordination, red stands for hcp coordination, white for neither of the previous ones. Atomistic representations with transparency report 3D views, whereas those in solid colors represent cross sections. Every structure is given a numeric index associated with the label of the cluster it belongs to and a particular color. The table on the right reports both the numeric and color labels of the clusters along with a description of the various structures. B) Single view of a 3D plot, analogous to the one on the extreme right of Fig. 3 except for the coloring, which is now representative of the labels assigned by mean shift through the same color coding reported in panel A. C) Mean shift family fractions as a function of the temperature in the whole PTMD dataset. The color code is the same as in panels A and B. More likely structures are represented with the same name of the macro-family, numeric index, and color of panel A. D) Plot analogous to panel C with the only difference that the PTMD data have been classified using the CNA label classification as in the work of Settem et al. [9]. Color code and labeling are the same used in Fig. 2.
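In code, this clustering step is short; a sketch with scikit-learn, where the latent coordinates would come from the trained encoder and the bandwidth value is a placeholder for the one selected as described in the Supporting Information:

```python
import numpy as np
from sklearn.cluster import MeanShift

# z: (n_structures, 3) latent coordinates of the chart; random stand-in
# data is used here in place of the encoder output.
z = np.random.rand(5000, 3)

ms = MeanShift(bandwidth=0.15)        # placeholder bandwidth
labels = ms.fit_predict(z)
print(f"{labels.max() + 1} structural families identified")
```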
Representative structures of each mean shift family are shown in Fig. 4A, while Fig. 4B shows the 3D chart with the points colored according to the same families. They are broadly categorised into Ih, Dh, fcc, faulted fcc, faulted hcp, intermediates, and amorphous. Faulted fcc nanoclusters are those with a predominant fcc part but which contain twin planes and/or stacking faults. Faulted hcp clusters are those with a predominant hcp part but which contain twin planes and/or stacking faults. Typically, structures observed in experiments and simulations are classified into basic structural families [41, 42, 43, 33, 9] which rarely capture the fine geometrical details within a given family. In contrast, our approach leads to a physically meaningful classification along with capturing the fine structural details, by splitting the broader families into several subfamilies. A closer look at the various fcc and hcp faulted nanoclusters illustrates this point. There are three subfamilies (cluster-3, cluster-11, cluster-15) which contain only one hcp plane. Cluster-3, referred to as 2:1 fcc, consists of two and one fcc plane(s) on either side of the hcp plane. Similarly, clusters-11, 15 are 1:1 fcc with differing shapes. When the hcp plane is adjacent to the surface layer, we have hcp islands (cluster-7). Cluster-10 has two converging hcp islands. In cluster-4, local surface reconstruction occurs along with a single hcp plane. Moving on to faulted hcp structures, three hcp planes converge in cluster-16. With the increase in the number of parallel hcp planes, we have either stacking faults (cluster-14) or an fcc island (cluster-21), which contains one fcc plane (the opposite of an hcp island). In the extreme case, we have full hcp particles (clusters-20, 25). Clusters-17 and 23 both undergo local surface reconstruction similar to cluster-4. In the fcc families, we have the conventional fcc structures (cluster-5) and fcc structures with local surface reconstruction (cluster-13). In the case of decahedra, there are five sub-families. Clusters-8, 9, and 12 are all conventional decahedra. In cluster-9, the decahedral axis is at the periphery as opposed to clusters-8 and 12. Additionally, cluster-12 has a partial cap on top (atoms belonging to the cap are shown in red color). Decahedra in cluster-2 have an hcp island on the surface. Finally, decahedra also exhibit reconstruction at the reentrant grooves resulting in icosahedron-like features (cluster-1). There are three icosahedral clusters: cluster-18 consists of incomplete non-compact icosahedra; cluster-19 is a combination of Ih and Ih+Dh (has features of both Ih and Dh), while cluster-26 is a combination of Ih+Dh and Ih+amor (has features of both Ih and amorphous). Similarly, there are three types of amorphous structures (clusters-0, 22, and 24).
Finally, we have intermediate structures in cluster-6. The structural distributions of Au\({}_{90}\), _i.e._, the fractions of the various families as a function of temperature, of the PTMD data according to mean shift and CNA labels are shown in Figs. 4C and D, respectively. In both cases, we find the conventional structure families. However, mean shift further refines the CNA-based classification [9]. For instance, with mean shift, we have a clear separation of the various types of Dh that were previously grouped together in a broad group of mixed structures. In the case of faulted structures, there is a prominent faulted fcc cluster (Faulted fcc-3) while all other faulted structures (band between Faulted fcc-3 and Dh-8 in Fig. 4C) have very low fractions. It is noteworthy that mean shift can classify even structures that have a very low probability of occurrence. In short, the Au\({}_{90}\) analysis showcased the descriptive power of RDFs and the capability of the unsupervised dimensionality reduction performed by the AE to properly compress information. Through the AE we were able to generate a highly physical representation of the data, which, rather than simply splitting different structures, coherently distributes them in a 3D chart according to their physical similarities. As a consequence, the subsequent independent classification via mean shift easily identified a wealth of distinct structures and underscored the capability of the approach to distinguish both local and global structural motifs: location of twinning planes, surface defects, distorted cluster shapes, etc. ### Generality of the approach In this section we show that the approach adopted for Au\({}_{90}\) is of general applicability. At the root of such generality is the wealth of structural information carried by RDFs, which is expected to be valuable for a broad class of systems that includes nanoclusters of other metals and sizes, as showcased below, but is not limited to them [3, 28]. Here we focus on larger cluster sizes that, as a general trend, show a lower variety of structures as compared to smaller ones. In particular, we study clusters of \(147\) atoms of elemental gold (Au\({}_{147}\)), copper (Cu\({}_{147}\)), and silver (Ag\({}_{147}\)). These two latter cases exhibit rather different properties as compared to the gold clusters; in particular, they exhibit a lower differentiation in the structural landscape, which is mainly dominated by Ih structures. We discuss only selected structural families identified by the method for the three cases, those that best showcase the discerning capabilities of the method: the faulted structures characteristic of Au\({}_{147}\) and the different types of Ih present in Ag\({}_{147}\). Results for Cu\({}_{147}\) are similar to Ag\({}_{147}\) and are reported in the Supporting Information. These two examples put our approach to the test, because the two families are characterized by distinct structural features: faulted structures mainly differ by small changes in the overall shape of the particles and by their atomic coordination, while Ih have more similar shapes and lower degrees of crystallinity. Figure 5A shows that, in the case of Au\({}_{147}\), our approach is capable of distinguishing fine features in the large family of faulted structures, which are broadly grouped into faulted fcc and faulted hcp, in analogy to Au\({}_{90}\).
In the standard faulted fcc (A5, corresponding to a standard double twin), there is a single hcp plane with at least one fcc plane on either side. When the hcp plane is adjacent to the surface layer, we have hcp islands (A10) or sometimes partial hcp islands (A13, A14). In addition, an hcp plane and an hcp island can occur within the same structure (A19). When there is more than one hcp plane, stacking defects are observed. In the extreme case, the cluster can be completely hcp (A20) or contain an fcc island (A16). When there are two hcp planes, depending on their location, we have either a central stacking fault (A15) or a peripheral stacking fault (A9). In the standard faulted hcp (A18), there is a single fcc plane with at least one hcp plane on either side. Finally, we have the faulted hcp cluster with converging hcp planes (A11). Figure 5: A) Cross-sections of the different types of twin families obtained by using mean shift clustering on the latent space representation of Au\({}_{147}\). The families were split into two groups, in the same fashion as our treatment of the Au\({}_{90}\) twin structures. Colors of the atoms refer to their individual coordination, similarly to Fig. 4. Every structure is labeled with the same alphanumeric index as in Fig. 6A, where the 3D chart of Au\({}_{147}\) is depicted. B) The four different families of icosahedral structures for Ag\({}_{147}\) are sketched. As customary, the families were extracted via mean shift clustering in the 3D space resulting from the encoding of the Ag\({}_{147}\) dataset. The six-atom rosette defects are highlighted in red. The first three figures on the left are three-dimensional representations; the last figure is a cross section. A complete description of the clustering of all the Ag\({}_{147}\) structures can be found in the Supporting Information. Owing to the particular characteristics of silver, the structural landscape of Ag\({}_{147}\) is largely dominated by icosahedra, which the clustering method is able to split into four subfamilies (Fig. 5B). Conventional Ih containing surface vacancies are the dominant subfamily among them. Icosahedra also undergo reconstruction and disordering through "rosette" defects on the surface. When the disordering increases further, we observe Ih with surface disordering. Finally, one can recognize Ih with a central vacancy, where the central atom is missing, as shown in the cross section in the rightmost panel of Fig. 5B. Distinguishing the latter structural subfamily with ease is a feature of our approach; indeed CNA can hardly recognize icosahedra with a central vacancy because it relies on the (missing) Ih-coordinated atom to identify the Ih class. In summary, for all the considered cases, the method proved to be transferable and robust, being capable of characterizing the wealth of structures of Au\({}_{147}\) and giving insights into the fine features distinguishing Ih subclasses for Cu\({}_{147}\) and Ag\({}_{147}\). ### Dynamical structural transitions The previous sections demonstrated how the method at hand is capable of generating reliable, low-dimensional structural charts from large datasets of nanocluster configurations for different metals and sizes. In all considered cases, the charts, informed by RDFs, excelled at distributing the different families of structures in a physically meaningful fashion, keeping similar structures closer while positioning different ones far apart.
The method was able to distinguish both structures presenting major shape differences (such as faulted fcc and hcp in Au nanoclusters) and structures with lower degrees of crystallinity and closer overall shape (Ih subfamilies). In other words, the three CVs defining the chart can discriminate between different metastable states of the systems studied while maintaining an insightful ordering among them. These features suggest that the approach can be used for describing structural transitions occurring along reactive trajectories, _e.g._, obtained by MD simulations. To test this idea, we use the chart to study a continuous dynamical trajectory (Fig. 6). We consider a 2 \(\mu\)s unbiased MD run of Au\({}_{147}\) at \(396\) K. At this temperature, the most probable structure for Au\({}_{147}\) is Dh [9]. By choosing as initial configuration an Ih structure, which is very unlikely in such thermodynamic conditions, it is possible to observe a spontaneous Ih \(\rightarrow\) Dh transition in an unbiased trajectory. In particular, we map 2 million individual MD snapshots on the chart through the AE in Fig. 1, which was previously trained on independent structures generated by PTMD. To be compatible with this representation, each snapshot undergoes a short local minimization. Figures 6A, B compare the structural chart of the entire PTMD dataset with the partial representation of the same chart as obtained from the unbiased MD trajectory. The trajectory progressively populates a connected, tube-shaped region of the chart, which smoothly joins the Ih and Dh domains, passing through intermediate, defected structures which belong to well defined families. In more detail, the following structural pathway is observed: Ih (cluster-4) \(\rightarrow\) distorted-Ih (cluster-2) \(\rightarrow\) distorted-Dh (cluster-7) \(\rightarrow\) Dh (cluster-3), which is confirmed by analyzing the structures along the trajectory (Fig. 6C). Beginning from Ih, there is an initial transition to distorted-Ih, where the disorder increases and we start observing fcc-coordinated atoms in the nanocluster. The distorted-Ih then changes to distorted-Dh, where the amount of fcc-coordinated atoms increases further. Apart from the difference in the amount of fcc, distorted-Ih is geometrically similar to Ih while distorted-Dh is closer to Dh. Finally, the distorted-Dh transitions to Dh, which completes a gradual change from Ih to Dh with physically meaningful changes along the tube-shaped region. In the absence of the chart, it would in principle be possible to perform a visual analysis of the Ih \(\rightarrow\) Dh trajectory of roughly two million structures. However, it would be extremely cumbersome to identify the main thermally activated transformation, and to track the fine structural changes and fluctuations along the trajectory which are crucial for understanding the transition mechanisms. This difficulty is easily overcome by tracking changes in the chart coordinates as reported in Fig. 6C, which shows the time evolution of the CVs along the trajectory. Changes in the CVs are found to correlate very well with structural changes. Three broad phases can then be distinguished during the evolution of the trajectory. In the initial phase (up to \(\sim\) 250 ns), the nanocluster is predominantly Ih (cluster-4) with intermittent fluctuations to distorted-Ih (cluster-2) and distorted-Dh (cluster-7). Figure 6: A) Structural chart of Au\({}_{147}\) containing 87,050 structures. Points are colored according to the structural families identified by mean shift clustering, see also Fig. S4, now labeled using alphanumeric indexes to distinguish them from the families of Fig. 4. B) Plot of an unbiased MD simulation of Au\({}_{147}\) undergoing a structural transition from Ih to Dh in the same chart as A. The points are colored using the mean shift classification obtained on the training dataset represented in panel A. Representative structures of the different regions are depicted in the plot. C) Scatter plots of the time evolution of the three CVs along the trajectory of panel B. Dark red dashed lines highlight two intervals in which the main transformations from Ih to Dh occur. The colors of the points correspond to their mean shift labels as in panels A and B. Black dashed lines represent a running average of the scatter plots. Bottom panels report magnifications of the two main transitions with snapshots of the main structures observed.
The actual Ih \(\rightarrow\) Dh transition occurs around \(\sim\) 245 ns, followed by a long intermediate phase (spanning \(\sim\) 245 ns to \(\sim\) 1820 ns), in which fluctuations between Dh (cluster-3, dominant) and distorted-Dh (cluster-7, minor) are observed. A final transition step at \(\sim\) 1820 ns leads to the final phase consisting of Dh with very few fluctuations to distorted-Dh. Here, we stress that this information can be obtained simply by following the CVs, even before analyzing the structures. We will now focus on the transition regions and look closely at the structural changes. For this purpose, we consider CV1. In the tube-like region, a continuous increase in CV1 is synonymous with a continuous change from Ih to Dh. A zoomed plot of the first transition (between 240 ns and 260 ns) is shown in the lower left panel of Fig. 6C; see Fig. S7 for CV2 and CV3. The initial Ih structures (I-A) transition to distorted-Ih structures (II-A, III-A), where we begin to see fcc-coordinated atoms along with Dh-like features. With a further increase in CV1, there is a gradual change to distorted-Dh structures (IV-A, V-A). Finally, these structures transition to Dh structures which have an hcp island (VI-A, VII-A). Decahedra with an hcp island dominate the middle phase, and hcp-island-free Dh are obtained after a final transition around \(\sim\) 1822 ns (shown in the lower right section of Fig. 6C). This second transition is marked by a slight increase in the mean CV1 value (black dashed line): initially, we have Dh with an hcp island (I-B, II-B), which transition to a better Dh (without the hcp island) around \(\sim\) 1823 ns (V-B). It appears that this transition is aided by fluctuations to distorted-Dh intermediates (III-B, IV-B). After the transition to a better Dh (beyond \(\sim\) 1825 ns), there are three distinct horizontal branches. The dominant one, which has the highest CV1 value, corresponds to the perfect defect-free Dh (V-B). However, this structure often undergoes two types of local reconstructions near the reentrant groove (VI-B, VII-B), which coincide with two distinct values of CV1.
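A sketch of this kind of on-the-fly tracking: once the minimized snapshots have been pushed through the encoder, candidate transitions can be flagged from the CV time series alone. The file name and the averaging window below are hypothetical.

```python
import numpy as np

# cv: (n_frames, 3) encoded CVs of the minimized MD snapshots; the file
# name is a hypothetical placeholder.
cv = np.load("au147_trajectory_cvs.npy")

def running_average(x, window=2000):
    """Smooth a CV time series to expose the main structural transitions."""
    return np.convolve(x, np.ones(window) / window, mode="same")

cv1 = running_average(cv[:, 0])
# Large jumps of the smoothed CV1 flag candidate Ih -> Dh transition frames.
jumps = np.abs(np.diff(cv1))
print("candidate transition frames:", np.argsort(jumps)[-5:])
```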
The preceding discussion underscores that the three deep CVs are capable of describing in a detailed and physical fashion what happens during a dynamical transition. The chart enables on-the-fly tracking of the system along its structural changes and describes transitions between different metastable states. This is further evidence of the physical insightfulness of the latent space generated starting from the RDFs, underscoring the reliability of the structural information contained in the charts and further showcasing the power of the approach. In particular, the method shows promise for characterizing and analyzing long trajectories generated via molecular simulations, enabling a fast and informed way to study and follow the time evolution of this type of system. Importantly, the differentiability of the coordinates of the latent space with respect to the atomic positions opens the way to addressing the challenge of biasing MD simulations of structural changes [31, 44]. The specific merit of this approach is to provide a natural route to devise a general, informative, and low-dimensional collective variable space capable of describing dozens of structural motifs. We plan to investigate structural transformations driven by deep-learnt collective variables in a separate communication.

## 3 Conclusions

This work presents an original machine learning method capable of charting the structural landscape of nanoparticles according to their radial distribution function. The approach comprises two subsequent information extraction steps. The first consists of translating the atomic coordinates into RDFs, which encode information about the structure in a translationally, rotationally, and permutationally invariant way. The high-dimensional information contained in the RDF is then reduced to a low-dimensional (3D) and yet informative representation ("chart") by exploiting convolutional autoencoders. These deep-learnt collective variables are surprisingly good at describing structural features in a physically meaningful way, discriminating the different states of the system. The 3D charts of different metal nanoclusters were then analysed using a non-parametric clustering technique, which allowed us to classify the datapoints into structural families. The method succeeded in disentangling the complex structural motifs of nanoclusters having different shapes and metals (\(\mathrm{Au}_{90}\), \(\mathrm{Au}_{147}\), \(\mathrm{Ag}_{147}\), and \(\mathrm{Cu}_{147}\)), also distinguishing fine differences between faulted and mixed structures as well as small defects (icosahedra with central vacancy, surface defects, etc.). Related structural motifs, _e.g._, fcc and faulted fcc/hcp, were found to occupy close regions of the chart, allowing us to garner insights also into dynamical structural transformations. Finally, the method further proved useful in the analysis of a long unbiased MD run of Au\({}_{147}\) undergoing a structural transition. The collective variables allowed us to accurately track and describe structural changes along the dynamics. This pushes the method's applicability beyond the simple analysis of structural differences in large datasets, making it a powerful tool for the inspection, interpretation, and possibly generation of reactive trajectories between metastable states.
Indeed, the ability to discriminate with a high level of detail between different metastable states, together with the intrinsic differentiability of neural networks, makes the encoded variables promising low-dimensional CVs for biased MD simulations. The excellent results obtained for metal nanoclusters, for which the method could learn to identify a variety of structures ranging from crystalline to faulted and amorphous, demonstrate the virtue of machine learning from radial distribution functions. Building on the generality of its descriptors, this machine learning framework could be used to chart the structural landscape of diverse kinds of systems, including non-metallic nanoparticles [45, 27] and colloidal assemblies [46, 47, 28], advancing our capability to classify, explore, and understand transitions in these systems.

## 4 Methods

The original datasets we considered included hundreds of thousands of structures for each particular cluster size and type. The structures were generated through Parallel-Tempering Molecular Dynamics (PTMD) simulations (see the Supporting Information). The original structures were then locally minimized to discount thermal noise. In order to avoid redundancy in the data due to duplicates among the locally minimized structures, the initial set of structures was filtered so as to select only unique samples. This selection was based on both the CNA classification and the potential energy. As a result, structures in the final dataset differed from each other by at least 0.1 meV in potential energy or by CNA label, reducing the number of structures to a few tens of thousands for every cluster type. The RDF of each configuration was obtained using kernel density estimation on the interatomic distances (using the KernelDensity class from the scikit-learn package [48]) with Gaussian kernels and a bandwidth of 0.2 nanometers. The RDFs were then discretized and processed by the autoencoder as described in Fig. 1. The input and output of the AE share the same size, equal to the total number of mesh points of the discretized RDFs. The convolutional part of the encoder is composed of 5 blocks, each made of a convolutional layer, a rectified linear unit activation function, and a batch normalization. After the convolutions, the outputs are flattened and fed to a fully connected linear layer which outputs the 3 CV values, closing the encoder section. The decoder follows, mirroring the encoder: the 3 outputs of the encoder are fed to another fully connected layer, whose output is reshaped and fed to 5 deconvolutional blocks that replicate, mirrored, the convolutional part of the encoder. Finally, in the output layer of the decoder, the data are returned to their initial size. During training, the output is compared to the input using an MSE loss. More details regarding the AE architecture parameters and the training can be found in the Supporting Information. After the training, the three-dimensional output of the bottleneck is evaluated for all the data to obtain a 3D chart, _e.g._, the one reported in Fig. 3. After the chart of the data has been generated, the mean shift [40] clustering technique is exploited to identify families of structures and evaluate the quality of the chart. Mean shift requires setting only one parameter, the bandwidth, which dictates the resolution of the analysis. The bandwidth was selected by looking for intervals of values yielding an (almost) constant number of clusters, see Fig. S3.
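As an illustration of this step, the following minimal sketch (our own, not the authors' code; the mesh range and number of mesh points are illustrative assumptions) shows how a smooth, discretized RDF can be estimated from a single configuration with scikit-learn, along the lines described above:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def discretized_rdf(positions, bandwidth=0.2, r_max=2.0, n_mesh=512):
    """Gaussian-KDE estimate of the RDF on a fixed mesh (lengths in nm).

    positions: (N, 3) array with the atomic coordinates of one cluster.
    Returns the mesh and the density evaluated on it, i.e. one AE input.
    """
    # All pairwise interatomic distances (upper triangle, i < j).
    diff = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    samples = dists[np.triu_indices(len(positions), k=1)].reshape(-1, 1)

    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(samples)
    mesh = np.linspace(0.0, r_max, n_mesh).reshape(-1, 1)
    return mesh.ravel(), np.exp(kde.score_samples(mesh))
```

The vectors produced this way are stacked into the training set of the convolutional AE; the 3D bottleneck values of the trained encoder can then be clustered, for instance with scikit-learn's MeanShift estimator, as described above.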
Finally, the 50 configurations closest to each centroid were analyzed visually, in order to inspect the major structural features characterizing the different regions identified by the clustering.

Acknowledgements

This research is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 803213). This work has been supported by the project "Understanding and Tuning FRiction through nanOstructure Manipulation (UTFROM)" funded by MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 - grant 20178PZCB5. The authors acknowledge PRACE for awarding us access to Marconi100 at CINECA, Italy.
2310.11443
Operations on the set of scalar and matrix-valued quiddity sequences
Our purpose with this paper is, in the first place, to recast the space of quiddity sequences corresponding to usual frieze patterns as a different type of SET operad, and second, to introduce and study $\mathfrak{M}$-quiddity sequences, where $\mathfrak{M}$ is a monodromy block matrix of order two. We also examine some related topics, such as the possibility of defining matrix-valued frieze patterns and noncommutative signed Chebyshev polynomials.
Raúl Felipe
2023-10-17T17:52:46Z
http://arxiv.org/abs/2310.11443v1
# Operations on the set of scalar and matrix-valued quiddity sequences

###### Abstract

Our purpose with this paper is, in the first place, to recast the space of quiddity sequences corresponding to usual frieze patterns as a different type of SET operad, and second, to introduce and study \(\mathfrak{M}\)-quiddity sequences, where \(\mathfrak{M}\) is a monodromy block matrix of order two. We also examine some related topics, such as the possibility of defining matrix-valued frieze patterns and noncommutative signed Chebyshev polynomials.

To the memory of my colleague Nancy Lopez-Reyes

_2020 Mathematics Subject Classification (MSC2020): 39A06, 15A24_

_Key words:_ Frieze pattern; operad; matrix periodic difference equations.

## 1 Introduction

We start with some necessary definitions. Given a block matrix \[\mathfrak{M}=\left(\begin{array}{cc}m_{11}&m_{12}\\ m_{21}&m_{22}\end{array}\right), \tag{1}\] where each \(m_{ij}\) is a complex matrix of order \(l\) for \(i,j=1,2\), such that \(|\mathfrak{M}|\neq 0\), we say that the finite sequence of complex block matrices \[\left(\begin{array}{cc}\mathfrak{p}_{1}&\mathfrak{q}_{1}\\ I&O\end{array}\right),\cdots,\left(\begin{array}{cc}\mathfrak{p}_{n}&\mathfrak{q}_{n}\\ I&O\end{array}\right),\] **decomposes \(\mathfrak{M}\)** if the following equality holds: \[\left(\begin{array}{cc}\mathfrak{p}_{n}&\mathfrak{q}_{n}\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{p}_{1}&\mathfrak{q}_{1}\\ I&O\end{array}\right)=\mathfrak{M}, \tag{2}\] in which case the bi-vector \((\mathfrak{p}_{1},\ldots,\mathfrak{p}_{n},\mathfrak{q}_{1},\ldots,\mathfrak{q}_{n})\) is said to be an \(\mathfrak{M}\)**-quiddity sequence** of length \(n\). The set of all \(\mathfrak{M}\)-quiddity sequences is denoted by \(\mathfrak{DS}(\mathfrak{M})\). It is clear that \(\mathfrak{DS}(\mathfrak{M})=\bigcup_{n=1}^{\infty}\mathfrak{DS}_{n}(\mathfrak{M})\), where for all \(n\in\mathbb{N}\), \(\mathfrak{DS}_{n}(\mathfrak{M})\) denotes the set of all \(\mathfrak{M}\)-quiddity sequences of length \(n\). Here, \(I\) is the identity matrix of order \(l\). We can also say that \(\mathfrak{DS}(\mathfrak{M})\) is the set of solutions of equation (2), or of the decomposition problem for the matrix \(\mathfrak{M}\), and that the \(\mathfrak{M}\)-quiddity sequences are its solutions.

The main question addressed in the present research is the following: to provide the space \(\mathfrak{DS}(\mathfrak{M})\) with certain products such that from two \(\mathfrak{M}\)-quiddity sequences we can construct a new \(\mathfrak{M}\)-quiddity sequence, in such a way that \(\mathfrak{DS}(\mathfrak{M})\) acquires a certain structure of SET operad. Taking into account that this topic is connected with difference equations with periodic coefficients, below \(\mathfrak{M}\) will be called the **monodromy matrix**. We have to distinguish two cases of special interest:

* \(l=1\) for \(\mathfrak{M}=\pm\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)=\pm\,I_{2}\), and \(\mathfrak{q}_{1}=\cdots=\mathfrak{q}_{n}=-1\). In this context, we say that the vector \((\mathfrak{p}_{1},\ldots,\mathfrak{p}_{n})\) is an \(\mathfrak{M}\)-quiddity sequence.
* \(l\) arbitrary for \(\mathfrak{M}=\pm\left(\begin{array}{cc}I&0\\ 0&I\end{array}\right)\), and \(\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n}\in M_{l}(\mathbb{R})\).

An important part of our work is dedicated to showing that this question is connected with extensions to the non-commutative setting (**matrix spaces of a certain order**) of various long-studied subjects from the scalar case.
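Before proceeding, we note that the decomposition problem (2) is easy to explore numerically. The following is a minimal sketch (our own illustration, not part of the paper); the scalar case \(l=1\) with \(\mathfrak{q}_{i}=-1\) recovers the classical quiddity condition of the first special case above:

```python
import numpy as np

def block(p, q):
    """Assemble the 2l x 2l block matrix [[p, q], [I, O]] of Eq. (2)."""
    l = p.shape[0]
    return np.block([[p, q], [np.eye(l), np.zeros((l, l))]])

def decomposes(ps, qs, M, tol=1e-10):
    """True if block(p_n, q_n) ... block(p_1, q_1) equals M, as in Eq. (2)."""
    prod = np.eye(2 * ps[0].shape[0])
    for p, q in zip(ps, qs):        # multiply on the left, in increasing order
        prod = block(p, q) @ prod
    return np.allclose(prod, M, atol=tol)

# Scalar case (l = 1): the bi-vector (1,1,1; -1,-1,-1) decomposes -I_2,
# i.e. (1,1,1) is a quiddity sequence in the sense of the first case above.
ps = [np.array([[1.0]])] * 3
qs = [np.array([[-1.0]])] * 3
print(decomposes(ps, qs, -np.eye(2)))   # True
```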
We list some of these themes:

1. a matrix version of the theory of frieze patterns;
2. difference equations with periodic matrix coefficients and non-commutative Chebyshev polynomials.

Our work suggests that many of these objects related to \(\mathfrak{M}\)-quiddity sequences can also be multiplied in some manner, giving rise to an object of the same type; this will be the subject of future research. For convenience, and for reasons which will become clear later, at this point we give a brief summary of the theory of frieze patterns. The finite frieze patterns of positive integers were introduced by Coxeter in 1971, and studied later in detail by Conway and Coxeter in 1973. They are nothing more than objects formed by \(n+1\) bi-infinite rows which turn out to be of period \(n\). The first and the last rows are composed of zeros; the second row and the row before the last are filled by \(1\)s. The entries in the remaining rows are positive integers, and they are calculated by the following unimodular rule: for every diamond with entries \(a,b,c\) and \(d\) we have \(ad=1+bc\). In general, a finite frieze can be seen in the form \[\begin{array}{ccccccccc}\cdots&0&&0&&0&&0&\cdots\\ &1&&1&&1&&1&\\ \cdots&&a_{i}&&a_{i+1}&&a_{i+2}&&\cdots\\ &&&\vdots&&\vdots&&&\\ &1&&1&&1&&1&\\ \cdots&0&&0&&0&&0&\cdots\end{array}\] Recently, V. Ovsienko has considered \(3D\)-dissections of a special type, which result in the notion of \(3D\)-quiddity sequences (the reader will find more details in this regard in section 3). We also wish to point out that interesting results on frieze patterns can be found in the following recent papers: [1], [2], [9], [10], [11], [21], [22], which represent only a small selection of all the articles that can be consulted on the subject (we apologize in advance for this).

Operads first appeared in the context of algebraic topology in the 70s of the last century. Recently, SET operads in particular have been used in other, more applied areas such as free probability and database theory in computer science; see [17] and [28] for more details. Next, we put operad theory to a different use. Below, the set \(\{1,2,\cdots,n-1,n\}\) is abbreviated to \([n]\). We now define a nonsymmetric (ns) SET **quiddity-operad** as a collection of sets \[\mathcal{P}=\bigsqcup_{n\geq 1}\mathcal{P}(n), \tag{5}\] together with partial composition maps \[\circ_{i}:\mathcal{P}(n)\times\mathcal{P}(m)\longrightarrow\mathcal{P}(n+m-1),\hskip 28.452756ptn,m\geq 1,\hskip 5.690551pti\in[n]=\{1,\cdots,n\}, \tag{6}\] and a distinguished element \(\mathbf{1}\in\mathcal{P}(1)\), the unit of \(\mathcal{P}\). This object has to satisfy the following properties: \[(x\circ_{i}y)\circ_{i+j-1}z=x\circ_{i}(y\circ_{j}z),\hskip 5.690551pt\forall x\in\mathcal{P}(n),\forall y\in\mathcal{P}(m),\forall z\in\mathcal{P}(k), \tag{7}\] for \(i\in[n]\) and \(j\in[m-1]\) where \(2\leq m\); \[(x\circ_{i}y)\circ_{j+m-1}z=(x\circ_{j}z)\circ_{i}y,\hskip 5.690551pt\forall x\in\mathcal{P}(n),\forall y\in\mathcal{P}(m),\forall z\in\mathcal{P}(k), \tag{8}\] for \(i,j\in[n]\) such that \(i<j\), where \(1<n\). Finally, \[\mathbf{1}\circ_{1}x=x=x\circ_{i}\mathbf{1},\hskip 5.690551pt\forall x\in\mathcal{P}(n), \tag{9}\] for \(i\in[n]\). The reason for not including, as usual, \(j=m\) in (7) will be shown below. The interested reader may find a simple axiomatization of the customary non-symmetric operad theory in the category of sets in [3], page 8.

This paper is organized as follows: section 2 has as its central purpose that of introducing a structure of SET quiddity-operad on the set of all Conway-Coxeter quiddity sequences of positive integers corresponding to frieze patterns.
To establish this structure, we must first define what we mean by a convex 1-gon and a convex 2-gon and assign their corresponding quiddity sequences to them. For this purpose, we can consider a point and a line segment as polygons without triangulations and then define their quiddity sequences as \((0)\) and \((00)\) respectively. Then, we introduce the products \((10)\), \((11)\) and \((13)\)-\((18)\), from which we obtain Theorem 1. This states that for any of these products, the multiplication of two Conway-Coxeter quiddity sequences of length at least 3 is again a Conway-Coxeter quiddity sequence; that is, the result is a vector of positive integers that has an associated triangulation of a convex polygon. The main result of this section is that the set of all Conway-Coxeter quiddity sequences constitutes a nonsymmetric SET quiddity-operad. In section 3, we show that the products defined in section 2 on the set of all Conway-Coxeter quiddity sequences also work in the space of \(3D\)-dissections introduced recently by Ovsienko; in other words, the multiplication of two \(3D\)-dissections with respect to any of these products is again a \(3D\)-dissection. This section concludes by showing a product on the space of quiddity sequences in the case for which the monodromy matrix is the identity matrix. Concretely, section 4 develops the notion of a matrix quiddity sequence in its most general form, trying to replicate the relationship that this concept has with difference equations in the scalar case. In this sense, we have developed the following subjects in the matricial setting: \(a)\) the space of matrix quiddity bi-sequences, a concept introduced by us, and products on this space; \(b)\) some elements towards a theory of matrix-valued frieze patterns; and \(c)\) a proposal for a theory of noncommutative signed Chebyshev polynomials.

## 2 Nonsymmetric SET quiddity-operad structure for quiddity sequences

### A first proposal of ns SET quiddity-operad structure

For \(n\geq 3\), let us denote by \(\mathcal{QS}(n)\) the set of all quiddity sequences for a plane convex \(n\)-gon, and let \(\mathcal{QS}=\bigsqcup_{n\geq 1}\mathcal{QS}(n)\), where the sets \(\mathcal{QS}(1)\) and \(\mathcal{QS}(2)\) will be defined below. By simple inspection we have \[\mathcal{QS}(3)=\{(1,1,1)\},\] \[\mathcal{QS}(4)=\{(2,1,2,1),(1,2,1,2)\},\] and \[\mathcal{QS}(5)=\{(3,1,2,2,1),(1,3,1,2,2),(2,1,3,1,2),(2,2,1,3,1),(1,2,2,1,3)\},\] for \(3\leq n\leq 5\). Assume for the moment that \(3\leq n,m\) and consider arbitrary \(\mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\) and \(\mathcal{S}=(s_{1},\ldots,s_{m})\in\mathcal{QS}(m)\); then we define, for any \(i\in[n]\), \[\mathcal{T}\circ_{i}\mathcal{S}=(t_{1},\ldots,t_{n})\circ_{i}(s_{1},\ldots,s_{m})=(t_{1},\ldots,t_{i-1},t_{i}+s_{1}+1,s_{2},\ldots,s_{m-1},s_{m}+1,t_{i+1}+1,t_{i+2},\ldots,t_{n}), \tag{10}\] where we must clarify that if \(i=n\) then one takes \(i+1\) as \(1\); in other words, \[\mathcal{T}\circ_{n}\mathcal{S}=(t_{1},\ldots,t_{n})\circ_{n}(s_{1},\ldots,s_{m})=(t_{1}+1,t_{2},\ldots,t_{n-1},t_{n}+s_{1}+1,s_{2},\ldots,s_{m-1},s_{m}+1). \tag{11}\] If \(\mathcal{S}\in\mathcal{QS}(n)\) then \(\mathcal{S}_{\triangle}\) denotes the triangulation for a plane convex \(n\)-gon which gives rise to the quiddity sequence \(\mathcal{S}\). Also, below we will use the notation \((i,i+1)(\mathcal{P})\) to indicate the side \((i,i+1)\) of a polygon \(\mathcal{P}\).
If the polygon \(\mathcal{P}\) has \(n\) vertices, we identify the vertex \(n+1\) with the vertex \(1\).

**Theorem 1**: _Let us suppose that \(\mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\) and \(\mathcal{S}=(s_{1},\ldots,s_{m})\in\mathcal{QS}(m)\), where \(n,m\geq 3\). Then \(\mathcal{T}\circ_{i}\mathcal{S}\in\mathcal{QS}(n+m-1)\) for all \(i\in[n]\)._

**Proof.** We take advantage of the one-to-one correspondence mentioned above between the set of triangulations of plane convex polygons and quiddity sequences. We claim that \(\mathcal{T}\circ_{i}\mathcal{S}\) is the quiddity sequence for a triangulation of a plane convex polygon \(\mathcal{P}_{n+m-1}\) with \(n+m-1\) vertices. In fact, we know that \(\mathcal{T}\) and \(\mathcal{S}\) correspond to triangulations \(\mathcal{T}_{\triangle}\) and \(\mathcal{S}_{\triangle}\) of two polygons \(\mathcal{P}_{n}\) and \(\mathcal{P}_{m}\) with \(n\) and \(m\) vertices respectively, the vertices of each of them labeled counterclockwise by the elements of \([n]\) for \(\mathcal{T}\) and by the set \([m]\) for \(\mathcal{S}\). Then, \(\mathcal{T}\circ_{i}\mathcal{S}\) is the quiddity sequence of a triangulation \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}\) for a polygon with \(n+m-1\) vertices constructed by identifying or overlapping the vertex \(i\) of \(\mathcal{P}_{n}\) with the vertex \(1\) of \(\mathcal{P}_{m}\), and joining, by means of introducing a side, the vertex \(i+1\) of \(\mathcal{P}_{n}\) with the vertex \(m\) of \(\mathcal{P}_{m}\). Next, we label the vertices of the new polygon starting from the vertex \(1\) of \(\mathcal{P}_{n}\). The triangulation \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}\) associated to \(\mathcal{T}\circ_{i}\mathcal{S}\) is formed by the union of the triangulations corresponding to \(\mathcal{T}\) and \(\mathcal{S}\), taking into account the new labeling of the vertices, plus the sides (now turned into internal sides of the new polygon) \((i,i+1)\) of \(\mathcal{P}_{n}\) and \((1,m)\) of \(\mathcal{P}_{m}\), but now with the new labels. In other words, \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}=\mathcal{T}_{\triangle}\cup\mathcal{S}_{\triangle}\cup(i,i+1)(\mathcal{P}_{n})\cup(m,1)(\mathcal{P}_{m})\). To see that \(\mathcal{T}\circ_{i}\mathcal{S}\) is the quiddity sequence of the above triangulation \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}\) for the polygon \(\mathcal{P}_{n+m-1}\), observe that the set of triangles incident to the vertex \(i\) with respect to \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}\) consists of the triangles incident to \(i\) as a vertex of \(\mathcal{P}_{n}\) for the triangulation \(\mathcal{T}_{\triangle}\), the triangles incident to the vertex \(1\) of \(\mathcal{P}_{m}\) for \(\mathcal{S}_{\triangle}\), plus \(1\) (the latter given by the triangle formed with the vertices \(i\) and \(i+1\) of \(\mathcal{P}_{n}\) and the vertex \(m\) of \(\mathcal{P}_{m}\) in the old labeling). On the other hand, it is clear that with respect to the constructed triangulation \((\mathcal{T}\circ_{i}\mathcal{S})_{\triangle}\) of the polygon \(\mathcal{P}_{n+m-1}\), the number of triangles incident to the vertices \(i+1\) of \(\mathcal{P}_{n}\) and \(m\) of \(\mathcal{P}_{m}\) has increased by one. The remaining vertices keep the same number of triangles incident to them.
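Theorem 1 can also be checked numerically. The following minimal sketch (our own illustration, not part of the paper) implements the product (10)-(11) and verifies, via the matrix characterization of Conway-Coxeter quiddity sequences recalled in section 3 (namely \(M_{n}(a_{1},\cdots,a_{n})=-Id\), Eq. (45)), that the composition worked out in Example 1 below is again a quiddity sequence:

```python
import numpy as np

def M_prod(seq):
    """M(a_n) ... M(a_1) with M(a) = [[a, -1], [1, 0]], as in Eq. (45)."""
    out = np.eye(2, dtype=int)
    for a in seq:
        out = np.array([[a, -1], [1, 0]]) @ out
    return out

def circ(T, S, i):
    """Partial composition T o_i S of Eq. (10); Eq. (11) when i = n (1-based)."""
    n = len(T)
    if i < n:
        return (T[:i-1] + [T[i-1] + S[0] + 1] + S[1:-1] + [S[-1] + 1]
                + [T[i] + 1] + T[i+1:])
    # i == n: the vertex i+1 wraps around to the vertex 1
    return [T[0] + 1] + T[1:n-1] + [T[n-1] + S[0] + 1] + S[1:-1] + [S[-1] + 1]

T, S = [2, 2, 1, 3, 1], [3, 1, 3, 1, 3, 1]          # two quiddity sequences
R = circ(T, S, 2)
print(R)                                             # [2, 6, 1, 3, 1, 3, 2, 2, 3, 1]
print((M_prod(R) == -np.eye(2, dtype=int)).all())    # True
```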
It follows from the proof of the previous theorem that the product (10) can be adapted to plane convex polygons with their triangulations when the number of vertices of both polygons is greater than or equal to \(3\).

**Example 1**: _To show this fact we give an example; thus, as was already defined,_ \[(2,2,1,3,1)\circ_{2}(3,1,3,1,3,1)=(2,6,1,3,1,3,2,2,3,1).\]

Now, we set \[\mathcal{QS}(1)=\{(0)=1_{\mathcal{QS}}\},\ \ \mathcal{QS}(2)=\{(00)\}. \tag{12}\] The assignation (12) is justified if a point and a line segment are considered as polygons without triangulations, labeled by \([1]=\{1\}\) and \([2]=\{1,2\}\) respectively; we then proceed by graphical considerations. Since superimposing a point on any vertex of a polygon recovers the polygon, this suggests defining \[(0)\circ_{1}(t_{1},\ldots,t_{n})=(t_{1},\ldots,t_{n})\circ_{i}(0)=(t_{1},\ldots,t_{n}), \tag{13}\] for all \((t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\) where \(n\geq 1\) and \(i\in[n]\). On the other hand, we set \[(t_{1},\cdots,t_{n})\circ_{i}(00)=(t_{1},\cdots,t_{i-1},t_{i}+1,1,t_{i+1}+1,t_{i+2},\cdots,t_{n}), \tag{14}\] in particular \[(t_{1},\cdots,t_{n})\circ_{n}(00)=(1,t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+1), \tag{15}\] and also we put \[(00)\circ_{1}(t_{1},\cdots,t_{n})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+1,1), \tag{16}\] \[(00)\circ_{2}(t_{1},\cdots,t_{n})=(1,t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+1), \tag{17}\] \(\forall\ \mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\) where \(n\geq 3\). Finally, we define \[(00)\circ_{1}(00)=(00)\circ_{2}(00)=(1,1,1). \tag{18}\]

Next, we can illustrate (16) and (17) with examples; translated into quiddity sequences, (16) gives us \[(00)\circ_{1}(1,2,1,2)=(2,2,1,3,1),\] while (17) gives \((00)\circ_{2}(1,2,1,2)=(1,2,2,1,3)\). The necessity of imposing (18) is evident.

**Corollary 2**: _Suppose that \(1\leq n,m\). Then, for all \(\mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\) and any \(\mathcal{S}=(s_{1},\ldots,s_{m})\in\mathcal{QS}(m)\), we have that \(\mathcal{T}\circ_{i}\mathcal{S}\in\mathcal{QS}(n+m-1)\)._

**Proof.** It follows from Theorem 1 and (13)-(18).

**Theorem 3**: _The collection \(\mathcal{QS}\) equipped with the products (10), (13)-(18) is a ns SET quiddity-operad._

**Proof.** (13) implies that (9) holds. Assume \(3\leq n\), \(k\) arbitrary, and recall that \(2\leq m\). Let us suppose that \(\mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\), \(\mathcal{S}=(s_{1},\ldots,s_{m})\in\mathcal{QS}(m)\) and \(\mathcal{R}=(r_{1},\ldots,r_{k})\in\mathcal{QS}(k)\) are arbitrary. We proceed to prove (7). Suppose that \(1<i<n\). For all \(j\in[m-1]\) we have \(i\leq i+j-1\leq i+m-2\). Hence \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i+j-1}\mathcal{R}=((t_{1},\ldots,t_{n})\circ_{i}(s_{1},\ldots,s_{m}))\circ_{i+j-1}(r_{1},\ldots,r_{k})=(t_{1},\ldots,t_{i-1},t_{i}+s_{1}+1,s_{2},\ldots,s_{m-1},s_{m}+1,t_{i+1}+1,t_{i+2},\ldots,t_{n})\circ_{i+j-1}(r_{1},\ldots,r_{k}).\] We have three cases: 1) \(i=i+j-1\), 2) \(i<i+j-1<i+m-2\), and 3) \(i+j-1=i+m-2\).
For the first case (\(i=i+j-1\), that is, \(j=1\)), we have \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i}\mathcal{R}=(t_{1},\ldots,t_{i-1},t_{i}+s_{1}+r_{1}+2,r_{2},\ldots,r_{k}+1,s_{2}+1,\ldots,s_{m-1},s_{m}+1,t_{i+1}+1,t_{i+2},\ldots,t_{n-1},t_{n}).\] On the other hand, \[\mathcal{T}\circ_{i}(\mathcal{S}\circ_{1}\mathcal{R})=(t_{1},\ldots,t_{n})\circ_{i}((s_{1},\ldots,s_{m})\circ_{1}(r_{1},\ldots,r_{k}))=(t_{1},\ldots,t_{n})\circ_{i}(s_{1}+r_{1}+1,r_{2},\cdots,r_{k}+1,s_{2}+1,\cdots,s_{m}).\] Hence, this shows that \[(\mathcal{T}\circ_{i+(1)-1}\mathcal{S})\circ_{i}\mathcal{R}=\mathcal{T}\circ_{i}(\mathcal{S}\circ_{1}\mathcal{R}), \tag{19}\] an equation which, in this case, is equivalent to (7) for \(1<i<n\) and \(j=1\). Let us assume that \(i+j-1=i+m-2\), thus \(j=m-1\). Then \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i+(m-1)-1}\mathcal{R}=(t_{1},\ldots,t_{i-1},t_{i}+s_{1}+1,s_{2},\ldots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k}+1,s_{m}+2,t_{i+1}+1,t_{i+2},\ldots,t_{n}),\] and \[\mathcal{T}\circ_{i}(\mathcal{S}\circ_{m-1}\mathcal{R})=(t_{1},\ldots,t_{n})\circ_{i}((s_{1},\ldots,s_{m})\circ_{m-1}(r_{1},\ldots,r_{k}))=(t_{1},\ldots,t_{n})\circ_{i}(s_{1},\ldots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+1).\] The last two equalities imply that \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i+(m-1)-1}\mathcal{R}=\mathcal{T}\circ_{i}(\mathcal{S}\circ_{m-1}\mathcal{R}), \tag{20}\] so (7) holds for \(1<i<n\) and \(j=m-1\). Assume now that \(i<i+j-1<i+m-2\); then \(1<j<m-1\). We have \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i+j-1}\mathcal{R}=(t_{1},\ldots,t_{i-1},t_{i}+s_{1}+1,s_{2},\cdots,s_{j}+r_{1}+1,r_{2},\cdots,r_{k}+1,s_{j+1}+1,\cdots,s_{m-1},s_{m}+1,t_{i+1}+1,t_{i+2},\ldots,t_{n}),\] and also \[\mathcal{T}\circ_{i}(\mathcal{S}\circ_{j}\mathcal{R})=(t_{1},\ldots,t_{n})\circ_{i}((s_{1},\ldots,s_{m})\circ_{j}(r_{1},\ldots,r_{k}))=(t_{1},\ldots,t_{n})\circ_{i}(s_{1},\ldots,s_{j}+r_{1}+1,r_{2},\cdots,r_{k}+1,s_{j+1}+1,\cdots,s_{m-1},s_{m}).\] Thus, the equation (7) holds in the case \(1<i<n\) and \(1<j<m-1\). Hence, taking into account also (19) and (20), we conclude that for \(1<i<n\) and \(j\in[m-1]\) the equation (7) is satisfied. Now suppose that \(i=1\) and \(j=1\); then \[(\mathcal{T}\circ_{1}\mathcal{S})\circ_{1}\mathcal{R}=((t_{1},\ldots,t_{n})\circ_{1}(s_{1},\ldots,s_{m}))\circ_{1}(r_{1},\ldots,r_{k})=(t_{1}+s_{1}+1,s_{2},\ldots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\ldots,t_{n})\circ_{1}(r_{1},\ldots,r_{k})=(t_{1}+s_{1}+r_{1}+2,r_{2},\ldots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\ldots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\ldots,t_{n}), \tag{21}\] and \[\mathcal{T}\circ_{1}(\mathcal{S}\circ_{1}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{1}((s_{1},\cdots,s_{m})\circ_{1}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{1}(s_{1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m})=(t_{1}+s_{1}+r_{1}+2,r_{2},\ldots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\ldots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\ldots,t_{n}). \tag{22}\] Hence, (21) and (22) imply that \((\mathcal{T}\circ_{1}\mathcal{S})\circ_{1}\mathcal{R}=\mathcal{T}\circ_{1}(\mathcal{S}\circ_{1}\mathcal{R})\). Consider now the case \(i=1\) and \(j=m-1\).
We have \[(\mathcal{T}\circ_{1}\mathcal{S})\circ_{m-1}\mathcal{R}=((t_{1},\cdots,t_{n})\circ_{1}(s_{1},\cdots,s_{m}))\circ_{m-1}(r_{1},\cdots,r_{k})=(t_{1}+s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\cdots,t_{n})\circ_{m-1}(r_{1},\cdots,r_{k})=(t_{1}+s_{1}+1,s_{2},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2,t_{2}+1,t_{3},\cdots,t_{n}), \tag{23}\] and furthermore \[\mathcal{T}\circ_{1}(\mathcal{S}\circ_{m-1}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{1}((s_{1},\cdots,s_{m})\circ_{m-1}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{1}(s_{1},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+1)=(t_{1}+s_{1}+1,s_{2},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2,t_{2}+1,t_{3},\cdots,t_{n}); \tag{24}\] it follows from (23) and (24) that \((\mathcal{T}\circ_{1}\mathcal{S})\circ_{m-1}\mathcal{R}=\mathcal{T}\circ_{1}(\mathcal{S}\circ_{m-1}\mathcal{R})\). Let us look at the case in which \(i=1\) and \(1<j<m-1\); we have \[(\mathcal{T}\circ_{1}\mathcal{S})\circ_{j}\mathcal{R}=(t_{1}+s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\cdots,t_{n}), \tag{25}\] and \[\mathcal{T}\circ_{1}(\mathcal{S}\circ_{j}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{1}((s_{1},\cdots,s_{m})\circ_{j}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{1}(s_{1},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m})=(t_{1}+s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1,t_{2}+1,t_{3},\cdots,t_{n}); \tag{26}\] hence (25) and (26) prove that \((\mathcal{T}\circ_{1}\mathcal{S})\circ_{j}\mathcal{R}=\mathcal{T}\circ_{1}(\mathcal{S}\circ_{j}\mathcal{R})\). Then, the axiom (7) is fulfilled in the case \(i=1\) and \(1\leq j\leq m-1\). We continue the proof for \(i=n\) and \(j=1\). Thus, one must prove that \[(\mathcal{T}\circ_{n}\mathcal{S})\circ_{n}\mathcal{R}=\mathcal{T}\circ_{n}(\mathcal{S}\circ_{1}\mathcal{R}). \tag{27}\] We have \[(\mathcal{T}\circ_{n}\mathcal{S})\circ_{n}\mathcal{R}=((t_{1},\cdots,t_{n})\circ_{n}(s_{1},\cdots,s_{m}))\circ_{n}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1)\circ_{n}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m-1},s_{m}+1), \tag{28}\] and \[\mathcal{T}\circ_{n}(\mathcal{S}\circ_{1}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{n}((s_{1},\cdots,s_{m})\circ_{1}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{n}(s_{1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m-1},s_{m}+1); \tag{29}\] then from (28) and (29) we have (27).
Suppose now that \(i=n\) and \(j=m-1\); then \[(\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+m-2}\mathcal{R}=((t_{1},\cdots,t_{n})\circ_{n}(s_{1},\cdots,s_{m}))\circ_{n+m-2}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1)\circ_{n+m-2}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2),\] while on the other hand \[\mathcal{T}\circ_{n}(\mathcal{S}\circ_{m-1}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{n}((s_{1},\cdots,s_{m})\circ_{m-1}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{n}(s_{1},\cdots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+1)=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2);\] the last two equalities imply that \((\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+m-2}\mathcal{R}=\mathcal{T}\circ_{n}(\mathcal{S}\circ_{m-1}\mathcal{R})\). For \(i=n\) and \(1<j<m-1\) we have \[(\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+j-1}\mathcal{R}=((t_{1},\cdots,t_{n})\circ_{n}(s_{1},\cdots,s_{m}))\circ_{n+j-1}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1)\circ_{n+j-1}(r_{1},\cdots,r_{k})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1),\] and also \[\mathcal{T}\circ_{n}(\mathcal{S}\circ_{j}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{n}((s_{1},\cdots,s_{m})\circ_{j}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{n}(s_{1},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m})=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1),\] thus \((\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+j-1}\mathcal{R}=\mathcal{T}\circ_{n}(\mathcal{S}\circ_{j}\mathcal{R})\). This implies that axiom (7) holds for \(3\leq n\), \(2\leq m\) and \(1\leq k\). Now, in any context, when \(n=1\), necessarily \(x=\mathcal{T}=\mathbf{1}=\mathbf{1}_{\mathcal{QS}}\); hence the axiom (7) reduces to the identity \[\mathcal{S}\circ_{j}\mathcal{R}=\mathcal{S}\circ_{j}\mathcal{R},\ \ for\ \ \mathcal{S}\in\mathcal{QS}(m)\ \ and\ \ \mathcal{R}\in\mathcal{QS}(k),\] provided (9) holds. Thus, in our case, taking into account that (9) is satisfied, (7) holds if \(n=1\), for all \(j\in[m-1]\) where \(2\leq m\) is arbitrary and for any \(k\) such that \(1\leq k\). We turn next to the proof when \(n=2\), for \(2\leq m\) and \(1\leq k\). Two identities shall be proved: \[((00)\circ_{1}\mathcal{S})\circ_{j}\mathcal{R}=(00)\circ_{1}(\mathcal{S}\circ_{j}\mathcal{R}), \tag{30}\] and \[((00)\circ_{2}\mathcal{S})\circ_{j+1}\mathcal{R}=(00)\circ_{2}(\mathcal{S}\circ_{j}\mathcal{R}), \tag{31}\] for all \(\mathcal{S}\in\mathcal{QS}(m)\) and for any \(\mathcal{R}\in\mathcal{QS}(k)\). One divides the proof of (30) into 3 cases.
Suppose \(j=1\); then \[((00)\circ_{1}\mathcal{S})\circ_{1}\mathcal{R}=(s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1,1)\circ_{1}\mathcal{R}=(s_{1}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m-1},s_{m}+1,1),\] and \[(00)\circ_{1}(\mathcal{S}\circ_{1}\mathcal{R})=(00)\circ_{1}(s_{1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m-1},s_{m})=(s_{1}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1,s_{2}+1,s_{3},\cdots,s_{m-1},s_{m}+1,1);\] thus \(((00)\circ_{1}\mathcal{S})\circ_{1}\mathcal{R}=(00)\circ_{1}(\mathcal{S}\circ_{1}\mathcal{R})\). For \(j=m-1\) we have \[((00)\circ_{1}\mathcal{S})\circ_{m-1}\mathcal{R}=(s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1,1)\circ_{m-1}\mathcal{R}=(s_{1}+1,s_{2},\cdots,s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2,1),\] and moreover \[(00)\circ_{1}(\mathcal{S}\circ_{m-1}\mathcal{R})=(00)\circ_{1}(s_{1},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+1)=(s_{1}+1,\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2,1);\] this shows that \(((00)\circ_{1}\mathcal{S})\circ_{m-1}\mathcal{R}=(00)\circ_{1}(\mathcal{S}\circ_{m-1}\mathcal{R})\). Let us consider \(1<j<m-1\); then \[((00)\circ_{1}\mathcal{S})\circ_{j}\mathcal{R}=(s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1,1)\circ_{j}\mathcal{R}=(s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1,1),\] while on the other hand \[(00)\circ_{1}(\mathcal{S}\circ_{j}\mathcal{R})=(00)\circ_{1}(s_{1},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m})=(s_{1}+1,s_{2},\cdots,s_{j-1},s_{j}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{j+1}+1,s_{j+2},\cdots,s_{m-1},s_{m}+1,1).\] Hence, we have proved (30). To check (31) we must work in a similar manner. First, suppose \(j=m-1\); then \[((00)\circ_{2}\mathcal{S})\circ_{m}\mathcal{R}=(1,s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1)\circ_{m}(r_{1},\cdots,r_{k})=(1,s_{1}+1,s_{2},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2),\] and \[(00)\circ_{2}(\mathcal{S}\circ_{m-1}\mathcal{R})=(00)\circ_{2}(s_{1},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+1)=(1,s_{1}+1,s_{2},\cdots,s_{m-2},s_{m-1}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1,s_{m}+2);\] thus \(((00)\circ_{2}\mathcal{S})\circ_{m}\mathcal{R}=(00)\circ_{2}(\mathcal{S}\circ_{m-1}\mathcal{R})\). Consider \(j=1\): \[((00)\circ_{2}\mathcal{S})\circ_{2}\mathcal{R}=(1,s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+1)\circ_{2}(r_{1},\cdots,r_{k});\] the remaining cases of (31) are handled in the same way. The proof of (8) essentially follows from the fact that \(i<j\) where \(i,j\in[n]\), so it will be omitted.
For example, for \(1\leq i\leq n-1\) and \(j=i+1\) we have \[(\mathcal{S}\circ_{i}\mathcal{T})=(s_{1},\cdots,s_{i-1},s_{i}+t_{1}+1,t_{2},\cdots,t_{m}+1,s_{i+1}+1,s_{i+2},\cdots,s_{n}),\] and \[(\mathcal{S}\circ_{i+1}\mathcal{R})=(s_{1},\cdots,s_{i},s_{i+1}+r_{1}+1,r_{2},\cdots,r_{k}+1,s_{i+2}+1,s_{i+3},\cdots,s_{n}),\] hence, \[(\mathcal{S}\circ_{i}\mathcal{T})\circ_{i+m}\mathcal{R}=(\mathcal{S}\circ_{i+1}\mathcal{R})\circ_{i}\mathcal{T}=(s_{1},\cdots,s_{i-1},s_{i}+t_{1}+1,t_{2},\cdots,t_{m}+1,s_{i+1}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1,s_{i+2}+1,s_{i+3},\cdots,s_{n}).\]

**Remark 4**: _Note that in general, for \(\mathcal{T}=(t_{1},\ldots,t_{n})\in\mathcal{QS}(n)\), \(\mathcal{S}=(s_{1},\ldots,s_{m})\in\mathcal{QS}(m)\) and \(\mathcal{R}=(r_{1},\ldots,r_{k})\in\mathcal{QS}(k)\), the equation_ \[(\mathcal{T}\circ_{i}\mathcal{S})\circ_{i+m-1}\mathcal{R}=\mathcal{T}\circ_{i}(\mathcal{S}\circ_{m}\mathcal{R}),\] _does not hold in the context of quiddity sequences and triangulations of labeled polygons._

_In fact, for instance we have (in our example \(i=2\) and \(j=m=4\))_ \[((31221)\circ_{2}(2121))\circ_{5=2+4-1}(111)=(34122321)\circ_{5}(111)=(3412412421), \tag{32}\] _but from (11)_ \[(31221)\circ_{2}((2121)\circ_{4}(111))=(31221)\circ_{2}(312312)=(3512313321). \tag{33}\] _Thus, taking into account (32) and (33) one concludes that_ \[((31221)\circ_{2}(2121))\circ_{5}(111)\neq(31221)\circ_{2}((2121)\circ_{4}(111)).\]

The same phenomenon can be checked in general. On one hand, applying (11) twice, \[(\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+m-1}\mathcal{R}=(t_{1}+2,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+r_{1}+2,r_{2},\cdots,r_{k-1},r_{k}+1), \tag{39}\] on the other hand, \[\mathcal{T}\circ_{n}(\mathcal{S}\circ_{m}\mathcal{R})=(t_{1},\cdots,t_{n})\circ_{n}((s_{1},\cdots,s_{m})\circ_{m}(r_{1},\cdots,r_{k}))=(t_{1},\cdots,t_{n})\circ_{n}(s_{1}+1,s_{2},\cdots,s_{m-1},s_{m}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+1)=(t_{1}+1,t_{2},\cdots,t_{n-1},t_{n}+s_{1}+2,s_{2},\cdots,s_{m-1},s_{m}+r_{1}+1,r_{2},\cdots,r_{k-1},r_{k}+2), \tag{40}\] thus, from (39) and (40), we have that \((\mathcal{T}\circ_{n}\mathcal{S})\circ_{n+m-1}\mathcal{R}\neq\mathcal{T}\circ_{n}(\mathcal{S}\circ_{m}\mathcal{R})\).

### A comment about the previous subsection

In this subsection, we introduce a new product for quiddity sequences, very different from the one defined in the previous subsection. With this intention, we will maintain the notations, denominations and conventions introduced earlier. Let \(\mathcal{T}=(t_{1},\cdots,t_{n})\) and \(\mathcal{S}=(s_{1},\cdots,s_{m})\) be two quiddity sequences. Define \[\mathcal{T}\bullet_{i}\mathcal{S}=(t_{1},\cdots,t_{n})\bullet_{i}(s_{1},\cdots,s_{m})=(t_{1},\cdots,t_{i-2},t_{i-1}+1,s_{2}+1,s_{3},\cdots,s_{m},t_{i}+s_{1}+1,t_{i+1},\cdots,t_{n}), \tag{41}\] where for \(i=1\) we have \[\mathcal{T}\bullet_{1}\mathcal{S}=(t_{1},\cdots,t_{n})\bullet_{1}(s_{1},\cdots,s_{m})=(t_{1}+s_{1}+1,t_{2},\cdots,t_{n}+1,s_{2}+1,s_{3},\cdots,s_{m}), \tag{42}\] for which one considers that \(t_{0}\) coincides with \(t_{n}\).
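The following small sketch (our own illustration, making the index conventions of (41)-(42) explicit in code) implements \(\bullet_{i}\) and checks that its output is again a quiddity sequence, in the sense that the matrix product \(M_{n}(a_{1},\cdots,a_{n})\) of Eq. (45) below equals \(-Id\); the two compositions computed here are exactly those of Example 2 below:

```python
import numpy as np

def M_prod(seq):
    """M(a_n) ... M(a_1) with M(a) = [[a, -1], [1, 0]], cf. Eq. (45) below."""
    out = np.eye(2, dtype=int)
    for a in seq:
        out = np.array([[a, -1], [1, 0]]) @ out
    return out

def bullet(T, S, i):
    """The product T •_i S of Eqs. (41)-(42); i is 1-based."""
    if i == 1:  # Eq. (42)
        return [T[0] + S[0] + 1] + T[1:-1] + [T[-1] + 1] + [S[1] + 1] + S[2:]
    return (T[:i-2] + [T[i-2] + 1] + [S[1] + 1] + S[2:]
            + [T[i-1] + S[0] + 1] + T[i:])

T, S = [2, 2, 1, 3, 1], [3, 1, 3, 1, 3, 1]
for i in (1, 2):
    R = bullet(T, S, i)
    print(R, (M_prod(R) == -np.eye(2, dtype=int)).all())
# [6, 2, 1, 3, 2, 2, 3, 1, 3, 1] True
# [3, 2, 3, 1, 3, 1, 6, 1, 3, 1] True
```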
**Example 2**: \[(2,2,1,3,1)\bullet_{2}(3,1,3,1,3,1)=(3,2,3,1,3,1,6,1,3,1), \tag{43}\] \[(2,2,1,3,1)\bullet_{1}(3,1,3,1,3,1)=(6,2,1,3,2,2,3,1,3,1). \tag{44}\]

**Theorem 6**: _For all \(\mathcal{T}\in\mathcal{QS}(n)\) and any \(\mathcal{S}\in\mathcal{QS}(m)\), we have that \(\mathcal{T}\bullet_{i}\mathcal{S}\in\mathcal{QS}(n+m-1)\) for \(i=1,\cdots,n\), where \(n,m\geq 3\)._

**Proof.** In fact, \(\mathcal{T}\bullet_{i}\mathcal{S}\) is the quiddity sequence of a triangulation \((\mathcal{T}\bullet_{i}\mathcal{S})_{\triangle}\) of a polygon \(\mathcal{P}_{n+m-1}\) of \(n+m-1\) vertices. We know that \(\mathcal{T}\) is determined by a triangulation \(\mathcal{T}_{\triangle}\) of a polygon \(\mathcal{P}_{n}\) of \(n\) vertices, and in the same way \(\mathcal{S}\) corresponds to a triangulation \(\mathcal{S}_{\triangle}\) of a polygon \(\mathcal{P}_{m}\) of \(m\) vertices. We recall that the vertices of any \(s\)-gon are labeled counterclockwise by the set \(\{1,\cdots,s\}\). To create \((\mathcal{T}\bullet_{i}\mathcal{S})_{\triangle}\) we overlap the vertex \(i\) of \(\mathcal{P}_{n}\) with the vertex \(1\) of \(\mathcal{P}_{m}\) and we connect the vertex \(i-1\) of \(\mathcal{P}_{n}\) with the vertex \(2\) of \(\mathcal{P}_{m}\) with a segment, giving rise to a polygon \(\mathcal{P}_{n+m-1}\) with \(n+m-1\) vertices. As is customary, the vertex \(0\) of \(\mathcal{P}_{n}\) is identified with its vertex \(n\). The vertices of this new polygon are labeled starting from the vertex \(1\) of \(\mathcal{P}_{n}\). One can then see that the triangulation of \(\mathcal{P}_{n+m-1}\) is generated by \((\mathcal{T}\bullet_{i}\mathcal{S})_{\triangle}=\mathcal{T}_{\triangle}\cup\mathcal{S}_{\triangle}\cup(i-1,i)(\mathcal{P}_{n})\cup(1,2)(\mathcal{P}_{m})\). One can check that the quiddity sequence of this triangulation is precisely \(\mathcal{T}\bullet_{i}\mathcal{S}\).

## 3 Generalized quiddity sequences which decompose the matrices \(\pm Id\)

We start this section by summarizing the results which will be presented here: a) first, we extend the products introduced in the previous section for usual quiddity sequences of triangulations to the more general space \(\mathfrak{D}_{3d}\) of all \(3d\)-quiddity sequences corresponding to \(3d\)-dissections, which turns out to be closed under these products; b) on the other hand, in subsection 3.2 some products are introduced on the set of quiddity sequences (\(Id\)-quiddity sequences) associated with the identity matrix.

Given a sequence \((a_{1},\cdots,a_{n})\in\mathbb{C}^{n}\) with \(n\in\mathbb{N}\) arbitrary, one introduces, as usual, the matrix \(M_{n}(a_{1},\cdots,a_{n})\in SL_{2}(\mathbb{C})\) defined by the product \[M_{n}(a_{1},\cdots,a_{n})=\left(\begin{array}{cc}a_{n}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}a_{n-1}&-1\\ 1&0\end{array}\right)\cdots\left(\begin{array}{cc}a_{1}&-1\\ 1&0\end{array}\right)=M(a_{n})\cdots M(a_{1}). \tag{45}\]

**Definition 7**: _Let us suppose that \(2\leq n\); a finite sequence \((a_{1},\cdots,a_{n})\in\mathbb{C}^{n}\) will be called a **generalized quiddity sequence** of length \(n\) if \(M_{n}(a_{1},\cdots,a_{n})=-Id\), where \(M_{n}(a_{1},\cdots,a_{n})\) is given by (45). That is, the generalized quiddity sequences are, in the terminology of section 1, neither more nor less than what we previously called \((-I_{2})\)-quiddity sequences._

This section is dedicated to studying the following question: how can one construct a new generalized quiddity sequence from two given generalized quiddity sequences?
As we can see below, the products introduced in the previous section on triangulations may help to answer, in a certain sense, the question just asked.

### Products on the space of generalized \(3d\)-quiddity sequences

In [25], V. Ovsienko characterized the sequences of **positive integers** \((a_{1},\cdots,a_{n})\) such that \(M_{n}(a_{1},\cdots,a_{n})=-Id\) in terms of the \(3d\)-dissections introduced by him. This problem was studied in [6], [7]. As was already mentioned, a result of Conway and Coxeter in the last-mentioned papers establishes a one-to-one correspondence between the solutions of the equation \(M_{n}(a_{1},\cdots,a_{n})=-Id\) such that \(a_{1}+a_{2}+\cdots+a_{n}=3n-6\), and triangulations of \(n\)-gons.

**Definition 8**: _(See [25]) A \(3d\)-dissection is a partition of a convex \(n\)-gon into sub-polygons by means of pairwise non-crossing diagonals, such that the number of vertices of every sub-polygon is a multiple of \(3\). The \(3d\)-quiddity sequence of a \(3d\)-dissection of an \(n\)-gon is the (cyclically ordered) \(n\)-tuple of numbers \((a_{1},\cdots,a_{n})\) such that \(a_{i}\) is the number of sub-polygons adjacent to the \(i\)-th vertex of the \(n\)-gon. A triangulation of an \(n\)-gon is just a \(3d\)-dissection in which every sub-polygon of the partition is a triangle._

_A \(3d\)-dissection is said to be even if the total number of sub-polygons with an even number of vertices (\(6\)-gons, \(12\)-gons, ...) is even. Otherwise, we say that the \(3d\)-dissection is odd. Hence, triangulations can be considered even \(3d\)-dissections. In the same way, the \(3d\)-quiddity sequence \(\mathcal{S}\) corresponding to a \(3d\)-dissection \(\mathcal{S}_{\triangle}\) of an \(n\)-gon will be called even (resp. odd) if the \(3d\)-dissection \(\mathcal{S}_{\triangle}\) is even (resp. odd)._

V. Ovsienko proved the following assertion: **every \(3d\)-quiddity sequence \((a_{1},\cdots,a_{n})\) of an even \(3d\)-dissection of an \(n\)-gon is a solution of the equation \(M_{n}(a_{1},\cdots,a_{n})=-Id\). Conversely, every solution in positive integers \((a_{1},\cdots,a_{n})\) of the equation \(M_{n}(a_{1},\cdots,a_{n})=-Id\) is the \(3d\)-quiddity sequence of an even \(3d\)-dissection of an \(n\)-gon.** It implies that every even \(3d\)-quiddity sequence is a generalized quiddity sequence and, on the other hand, every generalized quiddity sequence of positive integers must be an (even) \(3d\)-quiddity sequence.

The \(3d\)-quiddity sequences are related to other topics. For example, consider the linear equation \[v_{i+1}=a_{i}v_{i}-v_{i-1}, \tag{46}\] with known coefficients \((a_{i})_{i\in\mathbb{Z}}\) and (indeterminate) sequence \((v_{i})_{i\in\mathbb{Z}}\). There is a one-to-one correspondence between even \(3d\)-quiddity sequences (that is, solutions of the equation \(M_{n}(a_{1},\cdots,a_{n})=-Id\) with \(a_{i}\in\mathbb{N}\) for \(i=1,\ldots,n\)) and equations (46) with positive integer \(n\)-periodic coefficients \(a_{i}\) such that every solution \((v_{i})_{i\in\mathbb{Z}}\) of (46) is \(n\)-antiperiodic (\(v_{i+n}=-v_{i}\) for all \(i\)).

Let \(S\) be a \(3d\)-dissection and denote by \(\mathrm{i}(S)\) the number of sub-polygons with an even number of vertices (\(6\)-gons, \(12\)-gons, ...) that are part of \(S\). It will be called the Ovsienko index.
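This correspondence is easy to probe numerically. The following minimal sketch (our own illustration, not from the paper) iterates (46) with the \(n\)-periodic coefficients given by the even \(3d\)-quiddity sequence of Example 3 below and confirms the \(n\)-antiperiodicity of the resulting solution:

```python
import numpy as np

# Even 3d-quiddity sequence (two hexagons glued through a triangle,
# cf. Example 3 below); here n = 11 and M_11(a_1,...,a_11) = -Id.
a = [3, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1]
n = len(a)

# Iterate v_{i+1} = a_i v_i - v_{i-1} with n-periodic coefficients
# and arbitrary initial data (v_0, v_1).
v = [0.0, 1.0]
for i in range(1, 3 * n):
    v.append(a[i % n] * v[i] - v[i - 1])

# Every solution is n-antiperiodic: v_{i+n} = -v_i.
print(all(np.isclose(v[i + n], -v[i]) for i in range(2 * n)))   # True
```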
We write \(\mathfrak{D}_{3d}\) for the set of all \(3d\)-dissections; then \(\mathfrak{D}_{3d}=\mathfrak{E}\mathfrak{D}_{3d}\sqcup\mathfrak{O}\mathfrak{D}_{3d}\), where \[\mathfrak{E}\mathfrak{D}_{3d}=\sqcup_{m\in\mathbb{N}\cup\{0\}}\ \mathfrak{E}\mathfrak{D}_{3d}^{m}=\{S\in\mathfrak{D}_{3d}|\ \mathrm{i}(S)=2m\},\] and \[\mathfrak{O}\mathfrak{D}_{3d}=\sqcup_{m\in\mathbb{N}}\ \mathfrak{O}\mathfrak{D}_{3d}^{m}=\{S\in\mathfrak{D}_{3d}|\ \mathrm{i}(S)=2m+1\};\] in other words, \(\mathfrak{E}\mathfrak{D}_{3d}\) (resp. \(\mathfrak{O}\mathfrak{D}_{3d}\)) is the class of even (resp. odd) \(3d\)-dissections. Note that the set \(\mathfrak{E}\mathfrak{D}_{3d}^{0}\) corresponds to the triangulations. The Ovsienko index can be extended to \(3d\)-quiddity sequences: if \(\mathcal{S}\) is a \(3d\)-quiddity sequence, we define \(\mathrm{i}(\mathcal{S})=\mathrm{i}(\mathcal{S}_{\triangle})\), where \(\mathcal{S}_{\triangle}\) is the \(3d\)-dissection giving rise to \(\mathcal{S}\). We also extend the notation: in this case, \(\mathcal{S}\in\mathfrak{E}\mathfrak{D}_{3d}\) (resp. \(\mathcal{S}\in\mathfrak{O}\mathfrak{D}_{3d}\)) means that \(\mathcal{S}_{\triangle}\in\mathfrak{E}\mathfrak{D}_{3d}\) (resp. \(\mathcal{S}_{\triangle}\in\mathfrak{O}\mathfrak{D}_{3d}\)).

**Clearly, we can extend the products (10)-(11) (the \(\circ_{k}\)) and (41)-(42) (the \(\bullet_{k}\)) to \(3d\)-dissections. In fact, these products can be described by glueing two \(3d\)-dissections through a triangle, the final result being a new \(3d\)-dissection. First, the vertices of each dissection are labeled counterclockwise; then we proceed exactly as in the previous section: overlapping a vertex of the first dissection with the vertex \(1\) of the second dissection, and joining with a segment a pair of vertices adjacent to those which were overlapped. Observe that the dissection obtained corresponds to a polygon whose number of vertices is equal to the sum of the numbers of vertices of the two dissections minus one.**

**Example 3**: _The \(3d\)-quiddity sequence \((1,1,1,1,1,1)\) represents a hexagon; then \((1,1,1,1,1,1)\circ_{1}(1,1,1,1,1,1)=(3,1,1,1,1,2,2,1,1,1,1)\) is an even \(3d\)-dissection made of two hexagons and one triangle. Now_ \[M_{11}(3,1,1,1,1,2,2,1,1,1,1)=M_{4}(1,1,1,1)M_{2}(2,2)M_{4}(1,1,1,1)M_{1}(3)=\left(\begin{array}{cc}-1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}3&-2\\ 2&-1\end{array}\right)\left(\begin{array}{cc}-1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}3&-1\\ 1&0\end{array}\right)=-Id,\] _which is in correspondence with the result reported by Ovsienko already mentioned above._

_On the other hand, \((1,1,1,1,1,1)\bullet_{2}(1,1,1,1,1,1)=(2,2,1,1,1,1,3,1,1,1,1)\). Hence, in agreement with the criterion of Ovsienko, we obtain_ \[M_{11}(2,2,1,1,1,1,3,1,1,1,1)=M_{4}(1,1,1,1)M_{1}(3)M_{4}(1,1,1,1)M_{2}(2,2)=\left(\begin{array}{cc}-1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}3&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}-1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}3&-2\\ 2&-1\end{array}\right)=-Id.\]

The following proposition is very useful because it shows that the Ovsienko index is additive with respect to the products (10)-(11) and (41)-(42) of \(3d\)-dissections.
**Proposition 9**: _Suppose that \(\mathcal{S}=(s_{1},\cdots,s_{n}),\mathcal{T}=(t_{1},\cdots,t_{m})\in\mathfrak{D}_{3d}\), where \(n,m\geq 3\). Then_ \[\mathrm{i}(\mathcal{S}\circ_{k}\mathcal{T})=\mathrm{i}(\mathcal{T})+\mathrm{i}(\mathcal{S}),\ \ k=1,2,\ldots,n,\] _and_ \[\mathrm{i}(\mathcal{S}\bullet_{k}\mathcal{T})=\mathrm{i}(\mathcal{T})+\mathrm{i}(\mathcal{S}),\ \ k=1,2,\ldots,n.\]

**Proof.** Straightforward.

Hence, we have

**Corollary 10**: _Assume that \(\mathcal{S}=(s_{1},\cdots,s_{n}),\mathcal{T}=(t_{1},\cdots,t_{m})\in\mathfrak{E}\mathfrak{D}_{3d}\); then \(\mathcal{S}\circ_{k}\mathcal{T}\) and \(\mathcal{S}\bullet_{k}\mathcal{T}\) belong to \(\mathfrak{E}\mathfrak{D}_{3d}\). Additionally, if \(\mathcal{S}=(s_{1},\cdots,s_{n}),\mathcal{T}=(t_{1},\cdots,t_{m})\in\mathfrak{O}\mathfrak{D}_{3d}\), we have \(\mathcal{S}\circ_{k}\mathcal{T},\mathcal{S}\bullet_{k}\mathcal{T}\in\mathfrak{E}\mathfrak{D}_{3d}\)._

As a consequence of all this, we have the following theorem.

**Theorem 11**: _Two even (resp. two odd) \(3d\)-dissections \(\mathcal{S}=(s_{1},\cdots,s_{n})\) and \(\mathcal{T}=(t_{1},\cdots,t_{m})\) give rise to \(2n\) new solutions in positive integers, \(\mathcal{S}\circ_{k}\mathcal{T}\) and \(\mathcal{S}\bullet_{k}\mathcal{T}\) for \(k=1,2,\ldots,n\), of the equation \(M_{n+m-1}(a_{1},\cdots,a_{n+m-1})=-Id\)._

At this point, we leave behind quiddity sequences of positive integers. We denote by \(\mathcal{A}_{n}\) the set of all generalized quiddity sequences of length \(n\). Observe that \(\mathcal{A}_{2}=\{(0,0)\}\) and \(\mathcal{A}_{3}=\{(1,1,1)\}\). Next, we will see that the products \(\circ_{k}\) and \(\bullet_{k}\) can be defined between elements of the class \(\mathfrak{A}=\sqcup_{n\geq 2}\mathcal{A}_{n}\). From now on, we write \(M(a_{1},\cdots,a_{n})\) instead of \(M_{n}(a_{1},\cdots,a_{n})\).

**Example 4**: _Let \(\lambda\neq 0\in\mathbb{C}\); then one can prove that \((1,\lambda+1,\frac{2}{\lambda},\lambda,\frac{2}{\lambda}+1)\in\mathcal{A}_{5}\). In fact, this \(5\)-sequence induces a complex-valued frieze pattern._

**Theorem 12**: _Let \(A=(a_{1},\cdots,a_{n})\in\mathcal{A}_{n}\) and \(B=(b_{1},\cdots,b_{m})\in\mathcal{A}_{m}\) be two generalized quiddity sequences. Then \(A\circ_{k}B,A\bullet_{k}B\in\mathcal{A}_{n+m-1}\) for \(k=1,\ldots,n\)._

**Proof.** First, we give the proof for the products \(\circ_{k}\).
Suppose that \(k\neq n\) (we recall that \((a_{1},\cdots,a_{n})\in\mathbb{C}^{n}\) and \((b_{1},\cdots,b_{m})\in\mathbb{C}^{m}\)); then \[M(A\circ_{k}B)=M(a_{1},\cdots,a_{k-1},a_{k}+b_{1}+1,b_{2},\cdots,b_{m}+1,a_{k+1}+1,a_{k+2},\cdots,a_{n})=M(a_{n})\cdots M(a_{k+2})M(a_{k+1}+1)M(b_{m}+1)\cdots M(b_{2})M(a_{k}+b_{1}+1)M(a_{k-1})\cdots M(a_{1}).\] Now, we know that \[M(a_{k+1}+1)M(b_{m}+1)=M(a_{k+1})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(b_{m}),\] thus \[M(A\circ_{k}B)=M(a_{n})\cdots M(a_{k+2})M(a_{k+1})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(b_{m})\cdots M(b_{2})M(a_{k}+b_{1}+1)M(a_{k-1})\cdots M(a_{1}).\] Moreover, \[M(a_{k}+b_{1}+1)=\left(\begin{array}{cc}a_{k}+b_{1}+1&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}b_{1}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)\left(\begin{array}{cc}a_{k}&-1\\ 1&0\end{array}\right)=M(b_{1})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(a_{k}).\] This shows that \[M(A\circ_{k}B)=M(a_{n})\cdots M(a_{k+2})M(a_{k+1})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(b_{m})\cdots M(b_{2})M(b_{1})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(a_{k})M(a_{k-1})\cdots M(a_{1}).\] However, by our assumptions, \(M(b_{1},\cdots,b_{m})=M(b_{m})\cdots M(b_{2})M(b_{1})=-Id\). This implies \[M(A\circ_{k}B)=M(a_{n})\cdots M(a_{k+2})M(a_{k+1})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(a_{k})M(a_{k-1})\cdots M(a_{1})=M(a_{n})\cdots M(a_{k+2})M(a_{k+1})M(a_{k})M(a_{k-1})\cdots M(a_{1})=M(a_{1},\cdots,a_{n})=-Id,\] because \(A=(a_{1},\cdots,a_{n})\in\mathcal{A}_{n}\) and \[\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right).\] Consider now the case \(k=n\); we have \[M(A\circ_{n}B)=M(a_{1}+1,\cdots,a_{n-1},a_{n}+b_{1}+1,b_{2},\cdots,b_{m}+1)=M(b_{m}+1)M(b_{m-1})\cdots M(b_{2})M(b_{1}+a_{n}+1)M(a_{n-1})\cdots M(a_{1}+1)=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)M(b_{m})M(b_{m-1})\cdots M(b_{2})M(b_{1}+a_{n}+1)M(a_{n-1})\cdots M(a_{1})\left(\begin{array}{cc}1&0\\ -1&1\end{array}\right)=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)M(b_{m})M(b_{m-1})\cdots M(b_{2})M(b_{1})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(a_{n})M(a_{n-1})\cdots M(a_{1})\left(\begin{array}{cc}1&0\\ -1&1\end{array}\right)=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)M(b_{1},\cdots,b_{m})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(a_{1},\cdots,a_{n})\left(\begin{array}{cc}1&0\\ -1&1\end{array}\right);\] so, taking into account that \(M(a_{1},\cdots,a_{n})=M(b_{1},\cdots,b_{m})=-Id\), we obtain \[M(A\circ_{n}B)=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}1&0\\ -1&1\end{array}\right)=-Id.\] Now, we focus our attention on the products \(\bullet_{k}\).
Let us assume that \(k\neq 1\); then \[M(A\bullet_{k}B) =M(a_{1},\cdots,a_{k-2},a_{k-1}+1,b_{2}+1,b_{3},\cdots,b_{m},a_{k}+b_{1}+1,a_{k+1},\cdots,a_{n})\] \[=M(a_{n})\cdots M(a_{k+1})M(a_{k}+b_{1}+1)M(b_{m})\cdots M(b_{3})M(b_{2}+1)M(a_{k-1}+1)M(a_{k-2})\cdots M(a_{1})\] \[=M(a_{n})\cdots M(a_{k+1})M(a_{k})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})M(b_{m})\cdots M(b_{3})M(b_{2}+1)\] \[\quad M(a_{k-1}+1)M(a_{k-2})\cdots M(a_{1})\] \[=M(a_{n})\cdots M(a_{k+1})M(a_{k})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})M(b_{m})\cdots M(b_{3})M(b_{2})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)\] \[\quad M(a_{k-1})M(a_{k-2})\cdots M(a_{1}).\] Clearly, we have \(M(b_{2},b_{3},\cdots,b_{m},b_{1})=M(b_{1})M(b_{m})\cdots M(b_{3})M(b_{2})=-Id\). Indeed, we have assumed that the equality \[M(b_{1},\cdots,b_{m})=M(b_{m})\cdots M(b_{1})=-Id\] holds; then \[M(b_{m})\cdots M(b_{2})=-(M(b_{1}))^{-1},\] thus \[M(b_{2},b_{3},\cdots,b_{m},b_{1})=M(b_{1})M(b_{m})\cdots M(b_{2})=-Id.\] This implies \[M(A\bullet_{k}B) =M(a_{n})\cdots M(a_{k+1})M(a_{k})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(a_{k-1})M(a_{k-2})\cdots M(a_{1})\] \[=M(a_{n})\cdots M(a_{k+1})M(a_{k})\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)M(a_{k-1})M(a_{k-2})\cdots M(a_{1})=-Id.\] For \(k=1\), \[M(A\bullet_{1}B) =M(a_{1}+b_{1}+1,a_{2},\cdots,a_{n}+1,b_{2}+1,b_{3},\cdots,b_{m})\] \[=M(b_{m})\cdots M(b_{3})M(b_{2}+1)M(a_{n}+1)M(a_{n-1})\cdots M(a_{2})M(a_{1}+b_{1}+1)\] \[=M(b_{m})\cdots M(b_{3})M(b_{2}+1)M(a_{n}+1)M(a_{n-1})\cdots M(a_{2})M(a_{1})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})\] \[=M(b_{m})\cdots M(b_{3})M(b_{2})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(a_{n})M(a_{n-1})\cdots M(a_{2})M(a_{1})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})\] \[=M(b_{m})\cdots M(b_{3})M(b_{2})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)M(a_{1},\cdots,a_{n})\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})\] \[=M(b_{m})\cdots M(b_{3})M(b_{2})\left(\begin{array}{cc}1&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&-1\end{array}\right)M(b_{1})\] \[=M(b_{1},\cdots,b_{m})=-Id,\] which concludes the proof of this theorem. We notice that there are other operations with similar properties that can be defined. For example, if \(\mathcal{S}\) and \(\mathcal{T}\) are two \(3d\)-quiddity sequences which correspond to the \(3d\)-dissections \(\mathcal{S}_{\triangle}\) and \(\mathcal{T}_{\triangle}\) respectively, then, when we identify the segment joining the vertices \(k\) and \(k+1\) of the \(3d\)-dissection \(\mathcal{S}_{\triangle}\) with the segment joining the first and last vertices of the \(3d\)-dissection \(\mathcal{T}_{\triangle}\), we obtain a \(3d\)-dissection \(\mathcal{R}_{\triangle}\) whose \(3d\)-quiddity sequence \(\mathcal{R}\) satisfies \(\mathrm{i}(\mathcal{R})=\mathrm{i}(\mathcal{S})+\mathrm{i}(\mathcal{T})\). This construction (and other similar ones) allows us to define further operations between generalized quiddity sequences. Next, we introduce new interesting operations between generalized quiddity sequences. 
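Before doing so, Theorem 12 can be sanity-checked numerically. The following is a minimal sketch of ours (not part of the original text), written in Python with NumPy; the helper names `M`, `monodromy` and `circ` are our own, and the formula for \(A\circ_{k}B\) is the one read off from the proof above.

```python
import numpy as np

def M(a):
    # Elementary matrix M(a) = [[a, -1], [1, 0]]
    return np.array([[a, -1.0], [1.0, 0.0]])

def monodromy(seq):
    # M(a_1, ..., a_n) = M(a_n) ... M(a_1); M(a_1) acts first (rightmost factor)
    total = np.eye(2)
    for a in seq:
        total = M(a) @ total
    return total

def circ(A, B, k):
    # A o_k B for 1 <= k <= n-1, as in the proof of Theorem 12:
    # (a_1,...,a_{k-1}, a_k+b_1+1, b_2,...,b_{m-1}, b_m+1, a_{k+1}+1, a_{k+2},...,a_n)
    A, B = list(A), list(B)
    return A[:k-1] + [A[k-1] + B[0] + 1] + B[1:-1] + [B[-1] + 1, A[k] + 1] + A[k+1:]

A = [1, 2, 2, 1, 3]   # quiddity sequence of a triangulated pentagon
B = [1, 1, 1]         # the unique quiddity sequence of length 3
assert np.allclose(monodromy(A), -np.eye(2))
assert np.allclose(monodromy(B), -np.eye(2))
for k in range(1, len(A)):          # k = 1, ..., n-1
    C = circ(A, B, k)
    assert len(C) == len(A) + len(B) - 1
    assert np.allclose(monodromy(C), -np.eye(2))
print("A o_k B is a quiddity sequence for every tested k")
```

The same loop, with the formula for \(A\bullet_{k}B\) read off from the proof, confirms the second half of the theorem.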
For \(A=(a_{1},\cdots,a_{n})\in\mathcal{A}_{n}\) and \(B=(b_{1},\cdots,b_{m})\in\mathcal{A}_{m}\) we define \[A\boxplus_{i}B=(a_{1},\cdots,a_{n})\boxplus_{i}(b_{1},\cdots,b_{m})=(a_{1},\cdots,a_{i-1},a_{i}+b_{1},b_{2},\cdots,b_{m-1},b_{m}+a_{i+1},a_{i+2},\cdots,a_{n}), \tag{47}\] for \(i=1,\cdots,n-1\), and \[A\boxplus_{n}B=(a_{1},\cdots,a_{n})\boxplus_{n}(b_{1},\cdots,b_{m})=(a_{1}+b_{m},a_{2},\cdots,a_{n-1},a_{n}+b_{1},b_{2},\cdots,b_{m-1}). \tag{48}\] **Theorem 13**: _If \(A=(a_{1},\cdots,a_{n})\in\mathcal{A}_{n}\) and \(B=(b_{1},\cdots,b_{m})\in\mathcal{A}_{m}\) then \(A\boxplus_{i}B\in\mathcal{A}_{n+m-2}\) for \(i=1,\cdots,n\)._ **Proof.** Suppose first that \(i\neq n\). Then \[M(A\boxplus_{i}B) =M(a_{1},\cdots,a_{i-1},a_{i}+b_{1},b_{2},\cdots,b_{m-1},b_{m}+a_{i+1},a_{i+2},\cdots,a_{n})\] \[=M(a_{n})\cdots M(a_{i+2})M(b_{m}+a_{i+1})M(b_{m-1})\cdots M(b_{2})M(a_{i}+b_{1})M(a_{i-1})\cdots M(a_{1})\] \[=M(a_{n})\cdots M(a_{i+2})M(a_{i+1})\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)M(b_{m})M(b_{m-1})\cdots M(b_{2})M(b_{1})\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)M(a_{i})\] \[\quad M(a_{i-1})\cdots M(a_{1})\] \[=M(a_{n})\cdots M(a_{i+2})M(a_{i+1})\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)M(a_{i})M(a_{i-1})\cdots M(a_{1})\] \[=-Id,\] because \[\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\left(\begin{array}{cc}-1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right).\] For \(i=n\) we obtain \[M(A\boxplus_{n}B) =M(a_{1}+b_{m},a_{2},\cdots,a_{n-1},a_{n}+b_{1},b_{2},\cdots,b_{m-1})\] \[=M(b_{m-1})\cdots M(b_{2})M(a_{n}+b_{1})M(a_{n-1})\cdots M(a_{2})M(a_{1}+b_{m})\] \[=M(b_{m-1})\cdots M(b_{2})M(b_{1})\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)M(a_{n})M(a_{n-1})\cdots M(a_{2})M(a_{1})\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)M(b_{m})\] \[=M(b_{m-1})\cdots M(b_{2})M(b_{1})M(b_{m})=-Id,\] where the last equality holds because \((b_{m},b_{1},\cdots,b_{m-1})\) is a cyclic permutation of the quiddity sequence \((b_{1},\cdots,b_{m})\), and cyclic permutations of quiddity sequences are again quiddity sequences.

### Products of quiddity sequences associated to the identity

In this subsection we consider vectors \((a_{1},\cdots,a_{n})\in\mathbb{C}^{n}\) such that \[M_{n}(a_{1},\cdots,a_{n})=\left(\begin{array}{cc}a_{n}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}a_{n-1}&-1\\ 1&0\end{array}\right)\cdots\left(\begin{array}{cc}a_{1}&-1\\ 1&0\end{array}\right)=M(a_{n})\cdots M(a_{1})=Id, \tag{49}\] and solution vectors of (49) will be called \(Id\)**-quiddity sequences**. We want to indicate that the equation (49) is related to the recognition problem (or Deligne-Simpson problem), which can be formulated as follows: given conjugacy classes \(\mathbf{C}_{1},\cdots,\mathbf{C}_{n}\), determine whether or not one can solve the equation \[\mathbf{A}_{1}\cdots\mathbf{A}_{n}=I, \tag{50}\] with \(\mathbf{A}_{i}\in\mathbf{C}_{i}\) (see [23], [8] and the references therein). We consider the case in which \(\mathbf{C}_{i}=Cl(M(a_{i}))=\{AM(a_{i})A^{-1}\,|\,A\in GL_{2}\}\). Let \(\mathbf{a}=(a_{1},\cdots,a_{n})\) and \(\mathbf{b}=(b_{1},\cdots,b_{m})\) be two \(Id\)-quiddity sequences. 
We propose products of the form \[\mathbf{a}\circ_{k}\mathbf{b}=(a_{1},\cdots,a_{n})\circ_{k}(b_{1},\cdots,b_{m})=(a_{1},\cdots,a_{k-1},\mathbf{x}(a_{k},b_{1}),b_{2},\cdots,b_{m-1},\mathbf{y}(b_{m}),\mathbf{w}(a_{k+1}),a_{k+2},\cdots,a_{n}), \tag{51}\] for \(k=1,\cdots,n-1\), where \(\mathbf{x}(a_{k},b_{1})\), \(\mathbf{y}(b_{m})\) and \(\mathbf{w}(a_{k+1})\) are transformations that should be chosen in such a way that \(\mathbf{a}\circ_{k}\mathbf{b}\) is a new \(Id\)-quiddity sequence. To this end, we use two simple facts: * there is a matrix \(Z\) such that \[\left(\begin{array}{cc}a_{k+1}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}z_{11}&z_{12}\\ z_{21}&z_{22}\end{array}\right)\left(\begin{array}{cc}b_{m}&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}{\bf w}(a_{k+1})&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}{\bf y}(b_{m})&-1\\ 1&0\end{array}\right),\] (52) * it is possible to find a matrix \(T\) satisfying \[\left(\begin{array}{cc}{\bf x}(a_{k},b_{1})&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}b_{1}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}t_{11}&t_{12}\\ t_{21}&t_{22}\end{array}\right)\left(\begin{array}{cc}a_{k}&-1\\ 1&0\end{array}\right),\] (53) From (52) we obtain \(z_{11}=1\), \(z_{22}=z_{12}z_{21}+1\), \({\bf y}(b_{m})=b_{m}+z_{12}\) and \({\bf w}(a_{k+1})=a_{k+1}-z_{21}\). On the other hand, from (53) it follows that \(t_{11}=0\), \(t_{12}=1\), \(t_{21}=-1\) and \({\bf x}(a_{k},b_{1})=a_{k}+b_{1}-t_{22}\). Now, to achieve our goal it is necessary to impose the condition \[\left(\begin{array}{cc}1&z_{12}\\ z_{21}&z_{12}z_{21}+1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&t_{22}\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right); \tag{54}\] then \(z_{12}=-1\), \(z_{21}=1\) and \(t_{22}=1\), so \[{\bf x}(a_{k},b_{1})=a_{k}+b_{1}-1,\quad{\bf y}(b_{m})=b_{m}-1,\quad{\bf w}(a_{k+1})=a_{k+1}-1,\] and thus (51) takes the simple form \[{\bf a}\circ_{k}{\bf b}=(a_{1},\cdots,a_{n})\circ_{k}(b_{1},\cdots,b_{m})=(a_{1},\cdots,a_{k-1},a_{k}+b_{1}-1,b_{2},\cdots,b_{m-1},b_{m}-1,a_{k+1}-1,a_{k+2},\cdots,a_{n}),\] where \(k=1,\cdots,n-1\). We are able to prove the following result **Theorem 14**: _Let \({\bf a}=(a_{1},\cdots,a_{n})\) and \({\bf b}=(b_{1},\cdots,b_{m})\) be two \(Id\)-quiddity sequences; then \({\bf a}\circ_{k}{\bf b}\) is an \(Id\)-quiddity sequence for \(k=1,\cdots,n\) if we define_ \[{\bf a}\circ_{k}{\bf b}=(a_{1},\cdots,a_{n})\circ_{k}(b_{1},\cdots,b_{m})=(a_{1},\cdots,a_{k-1},a_{k}+b_{1}-1,b_{2},\cdots,b_{m-1},b_{m}-1,a_{k+1}-1,a_{k+2},\cdots,a_{n}), \tag{55}\] _for \(k\neq n\) and_ \[{\bf a}\circ_{n}{\bf b}=(a_{1}-1,a_{2},\cdots,a_{n-1},a_{n}+b_{1}-1,b_{2},\cdots,b_{m-1},b_{m}-1). \tag{56}\] **Proof.** It remains to prove the case \(k=n\). 
For this purpose, note that \[\left(\begin{array}{cc}b_{m}-1&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}1&-1\\ 0&1\end{array}\right)\left(\begin{array}{cc}b_{m}&-1\\ 1&0\end{array}\right),\quad\left(\begin{array}{cc}a_{1}-1&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}a_{1}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}1&0\\ 1&1\end{array}\right),\] and, on the other hand, we already know that \[\left(\begin{array}{cc}a_{n}+b_{1}-1&-1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}b_{1}&-1\\ 1&0\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&1\end{array}\right)\left(\begin{array}{cc}a_{n}&-1\\ 1&0\end{array}\right).\] Now, \[\left(\begin{array}{cc}1&-1\\ 0&1\end{array}\right)\left(\begin{array}{cc}0&1\\ -1&1\end{array}\right)\left(\begin{array}{cc}1&0\\ 1&1\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\] which concludes the proof of the theorem. ## 4 Decomposition of the monodromy for block matrices. Left and right matrix quiddity sequences ### General results In this section, our purpose is to find matrix vectors \((\mathfrak{a}_{1},\mathfrak{a}_{2},\cdots,\mathfrak{a}_{n})\) which are solutions of the equation \[M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=\left(\begin{array}{cc}\mathfrak{a}_{n}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{n-1}&-I\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{a}_{1}&-I\\ I&O\end{array}\right)=M(\mathfrak{a}_{n})\cdots M(\mathfrak{a}_{1})=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right)=-Id, \tag{57}\] where the \(\mathfrak{a}_{k}\) are square \(l\times l\) matrices for \(k=\overline{1,n}\), and \(I\) and \(O\) denote the \(l\times l\) identity matrix and null matrix respectively. It is important to observe that we do not suppose that the entries of each \(\mathfrak{a}_{k}\) are positive integers, nor do we assume that \(|\mathfrak{a}_{k}|\neq 0\) for all \(k\). Such sequences will be called **left matrix quiddity sequences**. We recall the following result, which is known as the **Schur determinant lemma** (see [30] for more details) **Lemma 15**: _Let \(P,Q,S,R\) denote \(n\times n\) matrices and suppose that \(P\) and \(R\) commute. Then the determinant \(|N|\) of the \(2n\times 2n\) matrix_ \[N=\left(\begin{array}{cc}P&Q\\ R&S\end{array}\right),\] _is equal to the determinant of the matrix \(PS-RQ\)._ There exists a generalization, in a certain sense, of the previous result, which can also be found in [30] for any square matrix \(N\). Consider now that \(N\) is partitioned as above, where \(P,Q,S,R\) do not necessarily have the same dimensions. Suppose \(P\) is nonsingular, denote the matrix \(S-RP^{-1}Q\) by \(N/P\), and call it the Schur complement of \(P\) in \(N\), or the Schur complement of \(N\) relative to \(P\). The following result is well known **Theorem 16**: _(Schur's Formula) Let \(N\) be a partitioned square matrix. If \(P\) is nonsingular, then_ \[\det(N/P)=\frac{\det N}{\det P}. \tag{58}\] **Remark 17**: _The equality (57) is well posed; this is due to \(|M(\mathfrak{a}_{k})|=1\) for \(k=1,\cdots,n\). 
In fact, from the Schur determinant lemma we obtain_ \[\left|\begin{array}{cc}\mathfrak{a}_{k}&-I\\ I&O\end{array}\right|=|\mathfrak{a}_{k}O-I(-I)|=1,\] _for all \(k\)._ **Example 5**: _If \(|\mathfrak{a}|\neq 0\), then the sequences \((\mathfrak{a},2\mathfrak{a}^{-1},\mathfrak{a},2\mathfrak{a}^{-1})\) and \((\mathfrak{a},2\mathfrak{a}^{-1}+I,I,\mathfrak{a}+I,2\mathfrak{a}^{-1})\) are left matrix quiddity sequences._ **Remark 18**: _Suppose that \((\mathfrak{a}_{1},\mathfrak{a}_{2},\cdots,\mathfrak{a}_{n})\) is a left matrix quiddity sequence; then \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{i}+I,I,\mathfrak{a}_{i+1}+I,\cdots,\mathfrak{a}_{n})\) are also left matrix quiddity sequences for \(i=1,\ldots,n-1\). Indeed, for two arbitrary matrices \(\mathfrak{x}\) and \(\mathfrak{y}\) we have_ \[\left(\begin{array}{cc}\mathfrak{x}+I&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}I&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{y}+I&-I\\ I&O\end{array}\right) =\left(\begin{array}{cc}\mathfrak{x}&-(\mathfrak{x}+I)\\ I&-I\end{array}\right)\left(\begin{array}{cc}\mathfrak{y}+I&-I\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{x}\mathfrak{y}-I&-\mathfrak{x}\\ \mathfrak{y}&-I\end{array}\right)=\left(\begin{array}{cc}\mathfrak{x}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{y}&-I\\ I&O\end{array}\right).\] **Lemma 19**: _Let \((\mathfrak{a}_{1},\mathfrak{a}_{2},\cdots,\mathfrak{a}_{n})\) be a left matrix quiddity sequence, that is, \(M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=-Id\); then_ \[M(\mathfrak{a}_{k+1},\cdots,\mathfrak{a}_{n},\mathfrak{a}_{1},\cdots,\mathfrak{a}_{k})=-Id, \tag{59}\] _for \(k=1,\cdots,n-1\)._ **Proof.** The proof of this lemma follows simply by conjugating both sides of the equality \(M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=-Id\) a finite number of times. As in the previous section, there is a close relation between equation (57) and the left matrix-recurrent relation \[\mathfrak{u}_{k+1}=\mathfrak{c}_{k}\mathfrak{u}_{k}-\mathfrak{u}_{k-1},\ \ \ \ \ k\in\mathbb{Z}, \tag{60}\] where \((\mathfrak{c}_{k})\) is a known \(n\)-periodic sequence of matrices of order \(l\times l\) and \((\mathfrak{u}_{k})_{k\in\mathbb{Z}}\subset M_{l}(\mathbb{C})\) is the unknown sequence. **Proposition 20**: _There is a one-to-one correspondence between left matrix quiddity sequences (that is, matrix sequences \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})\) for which \(M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=-Id\)) and left matrix-recurrent relations (60) with \(\mathfrak{c}_{k}=\mathfrak{a}_{k}\) an \(n\)-periodic sequence that have only \(n\)-antiperiodic solutions._ **Proof.** Let \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})\) be a left matrix quiddity sequence, \(\mathfrak{u}_{1}\) and \(\mathfrak{u}_{0}\) arbitrary matrices of order \(l\), and let \((\mathfrak{u}_{k})_{k\in\mathbb{Z}}\) be the sequence induced by these initial matrices through the left matrix-recurrent relation (60). 
Then, from the previous lemma \[\left(\begin{array}{c}\mathfrak{u}_{k+n}\\ \mathfrak{u}_{k-1+n}\end{array}\right) =\left(\begin{array}{cc}\mathfrak{a}_{n+k-1}&-I\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{a}_{k}&-I\\ I&O\end{array}\right)\left(\begin{array}{c}\mathfrak{u}_{k}\\ \mathfrak{u}_{k-1}\end{array}\right)\] \[=\left(\begin{array}{cc}-I&O\\ O&-I\end{array}\right)\left(\begin{array}{c}\mathfrak{u}_{k}\\ \mathfrak{u}_{k-1}\end{array}\right),\] which implies that \((\mathfrak{u}_{k})_{k\in\mathbb{Z}}\) is \(n\)-antiperiodic, that is, \(\mathfrak{u}_{k+n}=-\mathfrak{u}_{k}\) for all \(k\in\mathbb{Z}\). Thus, every solution of (60) is \(n\)-antiperiodic. Conversely, suppose that every solution of (60), where \((\mathfrak{c}_{k})\) is \(n\)-periodic, is \(n\)-antiperiodic. Then \[\left(\left(\begin{array}{cc}\mathfrak{a}_{n}&-I\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{a}_{1}&-I\\ I&O\end{array}\right)-\left(\begin{array}{cc}-I&O\\ O&-I\end{array}\right)\right)\left(\begin{array}{c}\mathfrak{u}_{1}\\ \mathfrak{u}_{0}\end{array}\right)=\left(\begin{array}{c}O\\ O\end{array}\right),\] where \(\mathfrak{a}_{k}=\mathfrak{c}_{k}\) for \(k=1,\cdots,n\). Now, since \(\mathfrak{u}_{1}\) and \(\mathfrak{u}_{0}\) are arbitrary, we obtain \(M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=-Id\). Consider now the equation \[N(\mathfrak{b}_{1},\cdots,\mathfrak{b}_{n})=\left(\begin{array}{cc}O&-I\\ I&\mathfrak{b}_{1}\end{array}\right)\left(\begin{array}{cc}O&-I\\ I&\mathfrak{b}_{2}\end{array}\right)\cdots\left(\begin{array}{cc}O&-I\\ I&\mathfrak{b}_{n}\end{array}\right)=N(\mathfrak{b}_{1})\cdots N(\mathfrak{b}_{n})=-Id; \tag{61}\] the solutions of (61) will be called **right matrix quiddity sequences**. **Remark 21**: _An alternative to (61) would be the equality_ \[L(\mathfrak{b}_{1},\cdots,\mathfrak{b}_{n})=\left(\begin{array}{cc}O&I\\ -I&\mathfrak{b}_{1}\end{array}\right)\left(\begin{array}{cc}O&I\\ -I&\mathfrak{b}_{2}\end{array}\right)\cdots\left(\begin{array}{cc}O&I\\ -I&\mathfrak{b}_{n}\end{array}\right)=L(\mathfrak{b}_{1})\cdots L(\mathfrak{b}_{n})=-Id; \tag{62}\] _now, since for every matrix \(\mathfrak{a}\) of order \(l\times l\)_ \[\left(\begin{array}{cc}\mathfrak{a}&-I\\ I&O\end{array}\right)^{-1}=\left(\begin{array}{cc}O&I\\ -I&\mathfrak{a}\end{array}\right),\] _this type of equality can be obtained from (57) by applying the inverse to each matrix on the left. However, we must recognize that (62) makes more sense from the point of view of moving frame theory._ **Theorem 22**: _If \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})\) is a left matrix quiddity sequence then \((\mathfrak{a}^{*}{}_{1},\cdots,\mathfrak{a}^{*}{}_{n})\) is a right matrix quiddity sequence. 
Conversely, if \((\mathfrak{b}_{1},\cdots,\mathfrak{b}_{n})\) is a right matrix quiddity sequence then \((\mathfrak{b}^{*}{}_{1},\cdots,\mathfrak{b}^{*}{}_{n})\) is a left matrix quiddity sequence._ **Proof.** For any block matrix \(\left(\begin{array}{cc}\mathfrak{x}&\mathfrak{y}\\ \mathfrak{s}&\mathfrak{w}\end{array}\right)\), its conjugate block matrix with respect to the anti-diagonal is obtained in the following form \[\left(\begin{array}{cc}\mathfrak{w}^{*}&\mathfrak{y}^{*}\\ \mathfrak{s}^{*}&\mathfrak{x}^{*}\end{array}\right)=\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{x}&\mathfrak{y}\\ \mathfrak{s}&\mathfrak{w}\end{array}\right)^{*}\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)=\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{x}^{*}&\mathfrak{s}^{*}\\ \mathfrak{y}^{*}&\mathfrak{w}^{*}\end{array}\right)\left(\begin{array}{cc}O&I\\ I&O\end{array}\right),\] where, for an arbitrary matrix \(H\) of order \(l\), the notation \(H^{*}\) denotes its conjugate transpose. Note also that the matrix \(J=\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\) satisfies \(J^{*}=J\) and \(J^{2}=Id\). Let us suppose that \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})\) is a left matrix quiddity sequence, that is, \[M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=\left(\begin{array}{cc}\mathfrak{a}_{n}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{n-1}&-I\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{a}_{1}&-I\\ I&O\end{array}\right)=M(\mathfrak{a}_{n})\cdots M(\mathfrak{a}_{1})=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right)=-Id,\] and, taking the conjugate transpose of this equality, it follows that \[M^{*}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})=M^{*}(\mathfrak{a}_{1})\cdots M^{*}(\mathfrak{a}_{n})=\left(\begin{array}{cc}\mathfrak{a}_{1}^{*}&I\\ -I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{a}_{n}^{*}&I\\ -I&O\end{array}\right)=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right)=-Id,\] thus \[-Id=-J^{2}=JM^{*}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n})J =JM^{*}(\mathfrak{a}_{1})J^{2}M^{*}(\mathfrak{a}_{2})\cdots M^{*}(\mathfrak{a}_{n-1})J^{2}M^{*}(\mathfrak{a}_{n})J\] \[=\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{1}^{*}&I\\ -I&O\end{array}\right)\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{n}^{*}&I\\ -I&O\end{array}\right)\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}O&-I\\ I&\mathfrak{a}_{1}^{*}\end{array}\right)\cdots\left(\begin{array}{cc}O&-I\\ I&\mathfrak{a}_{n}^{*}\end{array}\right)=N(\mathfrak{a}_{1}^{*})\cdots N(\mathfrak{a}_{n}^{*})=N(\mathfrak{a}_{1}^{*},\cdots,\mathfrak{a}_{n}^{*}),\] which shows that \((\mathfrak{a}_{1}^{*},\cdots,\mathfrak{a}_{n}^{*})\) is a right matrix quiddity sequence. The converse is proved in a similar form. The following proposition is evident **Proposition 23**: _Consider the right matrix-recurrent equation_ \[\mathfrak{u}_{k+1}=\mathfrak{u}_{k}\mathfrak{c}_{k}-\mathfrak{u}_{k-1},\qquad k\in\mathbb{Z}, \tag{63}\] _where \((\mathfrak{c}_{k})\) is an \(n\)-periodic sequence; then every solution of (63) is \(n\)-antiperiodic if and only if \((\mathfrak{b}_{1},\cdots,\mathfrak{b}_{n})\) is a right matrix quiddity sequence, where \(\mathfrak{b}_{k}=\mathfrak{c}_{k}\) for \(k=1,\cdots,n\)._ Next, we will make some calculations to find left matrix quiddity sequences in low dimensions. 
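Before doing so, Theorem 22 and Example 5 can be tested numerically. The sketch below is ours (not from the paper) and assumes NumPy; it interprets \(H^{*}\) as the conjugate transpose, as in the proof above, and builds the block matrices of (57) and (61) for a random complex matrix \(\mathfrak{a}\).

```python
import numpy as np

rng = np.random.default_rng(0)
l = 2
I, O = np.eye(l), np.zeros((l, l))

def M_left(a):
    # M(a) = [[a, -I], [I, O]], the block matrix of equation (57)
    return np.block([[a, -I], [I, O]])

def N_right(b):
    # N(b) = [[O, -I], [I, b]], the block matrix of equation (61)
    return np.block([[O, -I], [I, b]])

# left matrix quiddity sequence of Example 5: (a, 2a^{-1}, a, 2a^{-1})
a = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
seq = [a, 2 * np.linalg.inv(a), a, 2 * np.linalg.inv(a)]

total = np.eye(2 * l)
for x in seq:                       # M(a_n) ... M(a_1); a_1 applied first
    total = M_left(x) @ total
assert np.allclose(total, -np.eye(2 * l))

# Theorem 22: (a_1^*, ..., a_n^*) is then a right matrix quiddity sequence
total = np.eye(2 * l)
for x in seq:                       # N(b_1) N(b_2) ... N(b_n), left to right
    total = total @ N_right(x.conj().T)
assert np.allclose(total, -np.eye(2 * l))
print("Example 5 and Theorem 22 verified on a random complex matrix")
```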
Since \[\left(\begin{array}{cc}\mathfrak{a}_{5}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{4}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{3}&-I\\ I&O\end{array}\right)=\left(\begin{array}{cc}(\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5}&-(\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\\ (\mathfrak{a}_{4}\mathfrak{a}_{3}-I)&-\mathfrak{a}_{4}\end{array}\right),\] it follows that the only left matrix quiddity sequence \((\mathfrak{a}_{5},\mathfrak{a}_{4},\mathfrak{a}_{3})\) of length \(3\) is precisely \((I,I,I)\). We take one more step: \[M(\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5}) =\left(\begin{array}{cc}\mathfrak{a}_{5}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{4}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{3}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{2}&-I\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}(\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5}&-(\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\\ (\mathfrak{a}_{4}\mathfrak{a}_{3}-I)&-\mathfrak{a}_{4}\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{2}&-I\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5})\mathfrak{a}_{2}-(\mathfrak{a}_{5}\mathfrak{a}_{4}-I)&-((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5})\\ ((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\mathfrak{a}_{2}-\mathfrak{a}_{4})&-(\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\end{array}\right);\] hence, for the equation \(M(\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5})=-Id\) to be true one must have \(\mathfrak{a}_{4}\mathfrak{a}_{3}-I=I\), that is, \(\mathfrak{a}_{4}\mathfrak{a}_{3}=2I\), meaning that \(\mathfrak{a}_{4}\) and \(\mathfrak{a}_{3}\) are invertible matrices; even more, \(\mathfrak{a}_{4}=2\mathfrak{a}_{3}^{-1}\). From the form of the remaining entries of the previous matrix it follows that \(\mathfrak{a}_{3}=\mathfrak{a}_{5}=\mathfrak{m}\) and \(\mathfrak{a}_{2}=\mathfrak{a}_{4}=2\mathfrak{a}_{3}^{-1}=2\mathfrak{m}^{-1}\). Thus \((2\mathfrak{m}^{-1},\mathfrak{m},2\mathfrak{m}^{-1},\mathfrak{m})\) and its cyclic permutations are the only left matrix quiddity sequences of length \(4\). **The last step is interesting because it leads to a possible theory of matrix-valued frieze patterns, as will be seen later.** 
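Length \(5\) already shows much richer behaviour. The following sketch (ours, assuming NumPy) verifies the second, length-\(5\) sequence of Example 5, together with another length-\(5\) sequence built from a commuting pair \((\mathfrak{c},\mathfrak{d})\) of exactly the shape isolated by the theorem that follows.

```python
import numpy as np

rng = np.random.default_rng(1)
l = 3
I, O = np.eye(l), np.zeros((l, l))
inv = np.linalg.inv

def is_left_quiddity(seq):
    total = np.eye(2 * l)
    for x in seq:   # M(a_n) ... M(a_1) = -Id, as in equation (57)
        total = np.block([[x, -I], [I, O]]) @ total
    return np.allclose(total, -np.eye(2 * l))

a = rng.normal(size=(l, l))
# second sequence of Example 5, of length 5
assert is_left_quiddity([a, 2 * inv(a) + I, I, a + I, 2 * inv(a)])

# a length-5 sequence built from a commuting pair c, d (d is a polynomial in c)
c = a
d = c @ c + 3 * I
assert is_left_quiddity([c, inv(c) @ (I + (c + I) @ inv(d)), d,
                         (c + I) @ inv(d),
                         (inv(c) + I) @ inv(c + I) @ (d + I)])
print("length-5 left matrix quiddity sequences verified")
```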
We have **Theorem 24**: _Each left matrix quiddity sequence \((\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5})\) with \(|\mathfrak{a}_{1}|\neq 0\) and \(|\mathfrak{a}_{3}|\neq 0\) is of the form_ \[(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5})=(\mathfrak{c},\mathfrak{c}^{-1}(I+(\mathfrak{c}+I)\mathfrak{d}^{-1}),\mathfrak{d},(\mathfrak{c}+I)\mathfrak{d}^{-1},(\mathfrak{c}^{-1}+I)(\mathfrak{c}+I)^{-1}(\mathfrak{d}+I)), \tag{64}\] _for which \([\mathfrak{c},\mathfrak{d}]=O\)._ **Proof.** Indeed, \[M(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5}) =\left(\begin{array}{cc}((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)(\mathfrak{a}_{3}\mathfrak{a}_{2}-I)-\mathfrak{a}_{5}\mathfrak{a}_{2})&-((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5})\\ ((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\mathfrak{a}_{2}-\mathfrak{a}_{4})&-(\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\end{array}\right)\left(\begin{array}{cc}\mathfrak{a}_{1}&-I\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)(\mathfrak{a}_{3}\mathfrak{a}_{2}-I)-\mathfrak{a}_{5}\mathfrak{a}_{2})\mathfrak{a}_{1}-((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5})&-((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)(\mathfrak{a}_{3}\mathfrak{a}_{2}-I)-\mathfrak{a}_{5}\mathfrak{a}_{2})\\ ((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\mathfrak{a}_{2}-\mathfrak{a}_{4})\mathfrak{a}_{1}-(\mathfrak{a}_{4}\mathfrak{a}_{3}-I)&-((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\mathfrak{a}_{2}-\mathfrak{a}_{4})\end{array}\right).\] The equality \(M(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3},\mathfrak{a}_{4},\mathfrak{a}_{5})=-Id\) results in four equations: 1. \((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)\mathfrak{a}_{2}-\mathfrak{a}_{4}=I\), 2. \((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5}=I\), 3. \((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)(\mathfrak{a}_{3}\mathfrak{a}_{2}-I)-\mathfrak{a}_{5}\mathfrak{a}_{2}=O\), 4. \((\mathfrak{a}_{4}\mathfrak{a}_{3}-I)(\mathfrak{a}_{2}\mathfrak{a}_{1}-I)-\mathfrak{a}_{4}\mathfrak{a}_{1}=O\). Now, from 3. and 2. it follows that \(((\mathfrak{a}_{5}\mathfrak{a}_{4}-I)\mathfrak{a}_{3}-\mathfrak{a}_{5})\mathfrak{a}_{2}=\mathfrak{a}_{5}\mathfrak{a}_{4}-I\), that is (using 2.), \(\mathfrak{a}_{2}=\mathfrak{a}_{5}\mathfrak{a}_{4}-I\) and so \(\mathfrak{a}_{5}=(\mathfrak{a}_{2}+I)\mathfrak{a}_{4}^{-1}\). In the same way, using 4. and 1., we obtain \(\mathfrak{a}_{4}=(\mathfrak{a}_{1}+I)\mathfrak{a}_{3}^{-1}\), which implies that \(\mathfrak{a}_{5}=(\mathfrak{a}_{2}+I)\mathfrak{a}_{3}(\mathfrak{a}_{1}+I)^{-1}\). On the other hand, the first equation gives us \(\mathfrak{a}_{1}\mathfrak{a}_{2}=I+\mathfrak{a}_{4}=I+(\mathfrak{a}_{1}+I)\mathfrak{a}_{3}^{-1}\), thus \(\mathfrak{a}_{2}=\mathfrak{a}_{1}^{-1}(I+(\mathfrak{a}_{1}+I)\mathfrak{a}_{3}^{-1})\). We claim that \([\mathfrak{a}_{1},\mathfrak{a}_{3}]=O\); to see this we use the second equation. In fact, equation 2. implies that \(\mathfrak{a}_{2}\mathfrak{a}_{3}=I+\mathfrak{a}_{5}\), hence \(\mathfrak{a}_{2}\mathfrak{a}_{3}(\mathfrak{a}_{1}+I)-\mathfrak{a}_{2}\mathfrak{a}_{3}-\mathfrak{a}_{3}=\mathfrak{a}_{1}+I\), thus \((\mathfrak{a}_{2}\mathfrak{a}_{3}-I)\mathfrak{a}_{1}=I+\mathfrak{a}_{3}\). 
Then \((\mathfrak{a}_{1}^{-1}(I+(\mathfrak{a}_{1}+I)\mathfrak{a}_{3}^{-1})\mathfrak{a}_{3}-I)\mathfrak{a}_{1}=\mathfrak{a}_{3}+I\), which shows that \((\mathfrak{a}_{1}^{-1}(\mathfrak{a}_{3}+\mathfrak{a}_{1}+I)-I)\mathfrak{a}_{1}=I+\mathfrak{a}_{3}\), that is, \(\mathfrak{a}_{1}^{-1}\mathfrak{a}_{3}\mathfrak{a}_{1}+I=I+\mathfrak{a}_{3}\); hence \(\mathfrak{a}_{1}^{-1}\mathfrak{a}_{3}\mathfrak{a}_{1}=\mathfrak{a}_{3}\), that is, \([\mathfrak{a}_{1},\mathfrak{a}_{3}]=O\). Writing \(\mathfrak{c}=\mathfrak{a}_{1}\) and \(\mathfrak{d}=\mathfrak{a}_{3}\), the expressions obtained above for \(\mathfrak{a}_{2}\), \(\mathfrak{a}_{4}\) and \(\mathfrak{a}_{5}\) become exactly (64), which proves the theorem. **Proposition 25**: _Let us assume that \(\mathfrak{A},\mathfrak{B}\in GL_{l}(\mathbb{K})\) are such that \([\mathfrak{A},\mathfrak{B}]=O\) and suppose that \(I+\mathfrak{A}\), \(I+\mathfrak{B}\) and \(I+\mathfrak{A}+\mathfrak{B}\) are invertible (for example, if \(\|\mathfrak{A}\|<\frac{1}{2}\), \(\|\mathfrak{B}\|<\frac{1}{2}\)). Define the Gauss map_ \[\mathcal{G}\ :\ \ (\mathfrak{A},\mathfrak{B})\longrightarrow(\mathfrak{B},\mathfrak{A}^{-1}(\mathfrak{B}+I)); \tag{65}\] _then we have \(\mathcal{G}^{5}(\mathfrak{A},\mathfrak{B})=(\mathfrak{A},\mathfrak{B})\)._ **Proof.** The proof is similar to the scalar case and so it will be omitted. The connection of this result with a possible theory of matrix-valued frieze patterns will be made clear later in the paper.

### The previous subsection revisited for \(n\)-antiperiodic sequences in the projective space \(\mathbb{P}_{1}(M_{l}(\mathbb{R}))\)

We begin this subsection by showing a general process of decomposition for the monodromy. Let \(G\) be a group and let \(1_{G}\) be the identity of \(G\). A left action of \(G\) on a set \(\Omega\) is a map \(G\times\Omega\longrightarrow\Omega\), \((g,\omega)\longrightarrow g\cdot\omega\), such that \((hg)\cdot\omega=h\cdot(g\cdot\omega)\) for all \(\omega\in\Omega\) and \(h,g\in G\), and \(1_{G}\cdot\omega=\omega\) for all \(\omega\in\Omega\). We will always assume, unless we say otherwise, that the action is transitive, that is, \(\Omega\) is a \(G\)-space. Suppose that \(G\) is a finite group and let \(V\) be a vector space over \(\mathbb{C}\). Denote by \(GL(V)\) the linear group consisting of all invertible linear maps \(A:V\longrightarrow V\). A representation of \(G\) over \(V\) is an action \(G\times V\longrightarrow V:(g,v)\longrightarrow\rho(g)v\) where \(\rho(g)\in GL(V)\) for all \(g\in G\). This means that \(\rho:G\longrightarrow GL(V)\) satisfies \(\rho(hg)=\rho(h)\rho(g)\) and \(\rho(1_{G})=1_{V}\), where \(1_{V}\) is the identity element of \(GL(V)\). The dimension of \(V\) is denoted by \(l\). We shall denote the representation by the pair \((\rho,V)\). From now on, unless stated otherwise, we work in this section with a fixed representation \((\rho,V)\). We say that a sequence \(\mathfrak{v}=\{v_{k}\}_{k\in\mathbb{Z}}\subset V\) is a twisted \(n\)-gon with respect to the representation \(\rho\) if there exists \(m_{\mathfrak{v}}\in G\) such that \(v_{k+n}=m_{\mathfrak{v}}\cdot v_{k}=\rho(m_{\mathfrak{v}})v_{k}\) for every \(k\in\mathbb{Z}\). The element \(m_{\mathfrak{v}}\in G\) is called the monodromy of \(\mathfrak{v}\). A twisted \(n\)-gon \(\mathfrak{v}=\{v_{k}\}_{k\in\mathbb{Z}}\) is called regular if for all \(k\) the vectors \(v_{k},v_{k+1},\cdots,v_{k+l-1}\) constitute a basis of \(V\). We denote by \(\mathcal{P}_{n}\) the set of all twisted \(n\)-gons. Observe that \(G\) acts on \(\mathcal{P}_{n}\) in the following form: \(g\cdot\mathfrak{v}=\{g\cdot v_{k}\}_{k\in\mathbb{Z}}\), where the monodromy of \(g\cdot\mathfrak{v}\) is \(gm_{\mathfrak{v}}g^{-1}\). Now, we will consider recurrent equations of the form \[v_{k+1}=g_{k}\cdot v_{k}=\rho(g_{k})v_{k}, \tag{66}\] where \(\{g_{k}\}_{k\in\mathbb{Z}}\subset G\) is an \(n\)-periodic sequence and \(v_{0}\in V\) is arbitrary. 
We say that a twisted \(n\)-gon is of recurrent type if it satisfies a recurrent equation (66) for some \(n\)-periodic sequence \(\{g_{k}\}_{k\in\mathbb{Z}}\subset G\). More concretely, we will say that \(\mathfrak{v}\) is a twisted \(n\)-gon with respect to \(\{g_{k}\}_{k\in\mathbb{Z}}\). The set of all the pairs \((\mathfrak{v}=\{v_{k}\}_{k\in\mathbb{Z}},\{g_{k}\}_{k\in\mathbb{Z}})\), where \(\mathfrak{v}\) is a twisted \(n\)-gon of recurrent type with respect to \(\{g_{k}\}_{k\in\mathbb{Z}}\), is denoted by \(\mathcal{RP}_{n}\). We have **Theorem 26**: _Suppose that all the solutions of (66) satisfy the condition \(v_{k+n}=m\cdot v_{k}=\rho(m)v_{k}\) for every \(k\in\mathbb{Z}\) and some \(m\in G\) fixed; then_ \[\rho(m)=\rho(g_{n-1})\rho(g_{n-2})\cdots\rho(g_{1})\rho(g_{0}). \tag{67}\] _Reciprocally, let us assume that (67) is true and \(mg_{k}=g_{k}m\) for \(k=0,1,\ldots,n-1\). Then, every solution of (66) satisfies the condition \(v_{k+n}=m\cdot v_{k}=\rho(m)v_{k}\) for every \(k\in\mathbb{Z}\)._ **Proof.** Suppose that each solution of (66) satisfies the condition \(v_{k+n}=m\cdot v_{k}=\rho(m)v_{k}\) for every \(k\in\mathbb{Z}\) and some \(m\in G\) fixed. Then, \[\rho(m)v_{0}=m\cdot v_{0}=v_{n}=g_{n-1}\cdot v_{n-1}=(g_{n-1}g_{n-2})\cdot v_{n-2}=(g_{n-1}g_{n-2}\cdots g_{1})\cdot v_{1}=(g_{n-1}g_{n-2}\cdots g_{1}g_{0})\cdot v_{0},\] hence \[\rho(m)v_{0}=\rho(g_{n-1}g_{n-2}\cdots g_{1}g_{0})v_{0}=\rho(g_{n-1})\rho(g_{n-2})\cdots\rho(g_{1})\rho(g_{0})v_{0}.\] Now, \(v_{0}\) is an arbitrary vector, which shows that (67) holds. Conversely, assume that (67) is true; thus \(v_{n}=\rho(m)v_{0}=m\cdot v_{0}\). It should be proven that \(v_{k+n}=\rho(m)v_{k}=m\cdot v_{k}\) for every \(k\in\mathbb{Z}\). Recall that \(\{g_{k}\}_{k\in\mathbb{Z}}\) is an \(n\)-periodic sequence. Hence, \[v_{n+1}=g_{n}\cdot v_{n}=g_{0}\cdot v_{n}=g_{0}\cdot(m\cdot v_{0})=(g_{0}m)\cdot v_{0}=(mg_{0})\cdot v_{0}=m\cdot(g_{0}\cdot v_{0})=m\cdot v_{1}.\] On the other hand, from the recurrent equation (66), \(v_{0}=g_{n-1}\cdot v_{-1}\). So, \[g_{n-1}\cdot v_{n-1}=v_{n}=m\cdot v_{0}=m\cdot(g_{n-1}\cdot v_{-1})=g_{n-1}\cdot(m\cdot v_{-1}),\] which shows that \(v_{n-1}=m\cdot v_{-1}\). Therefore, the equality \(v_{k+n}=\rho(m)v_{k}=m\cdot v_{k}\) holds when \(|k|\leq 1\). Suppose now that this equality holds for \(|k|\leq r\) where \(r\geq 1\). From the induction hypothesis, we obtain \[v_{(r+1)+n}=g_{r+n}\cdot v_{r+n}=g_{r+n}\cdot(m\cdot v_{r})=(g_{r+n}m)\cdot v_{r}=(g_{r}m)\cdot v_{r}=(mg_{r})\cdot v_{r}=m\cdot(g_{r}\cdot v_{r})=m\cdot v_{r+1},\] and \[v_{-(r+1)+n}=g_{-(r+1)+n}^{-1}\cdot v_{-r+n}=g_{-(r+1)}^{-1}\cdot(m\cdot v_{-r})=(g_{-(r+1)}^{-1}m)\cdot v_{-r}=(mg_{-(r+1)}^{-1})\cdot v_{-r}=m\cdot(g_{-(r+1)}^{-1}\cdot v_{-r})=m\cdot v_{-(r+1)}.\] Two twisted \(n\)-gons, \(\mathfrak{v}=\{v_{k}\}_{k\in\mathbb{Z}}\subset V\) and \(\mathfrak{w}=\{w_{k}\}_{k\in\mathbb{Z}}\subset V\), are called equivalent, and we write \(\mathfrak{v}\sim\mathfrak{w}\), if there is \(h\in G\) such that \(v_{k}=h\cdot w_{k}=\rho(h)w_{k}\) for all \(k\in\mathbb{Z}\). One can see that if \(\mathfrak{v}\sim\mathfrak{w}\) then \(m_{\mathfrak{w}}=h^{-1}m_{\mathfrak{v}}h\), where \(m_{\mathfrak{v}}\) and \(m_{\mathfrak{w}}\) are the monodromies of \(\mathfrak{v}\) and \(\mathfrak{w}\) respectively. 
Moreover, if \(\mathfrak{v}\) is a solution of a recurrent equation (66) with respect to an \(n\)-periodic sequence \(\{g_{k}\}_{k\in\mathbb{Z}}\subset G\), then the sequence \(\mathfrak{w}\) satisfies the recurrent equation \(w_{k+1}=\hat{g}_{k}\cdot w_{k}=\rho(\hat{g}_{k})w_{k}\), in which \(\hat{g}_{k}=h^{-1}g_{k}h\) for all \(k\in\mathbb{Z}\); note that the sequence \(\{\hat{g}_{k}\}_{k\in\mathbb{Z}}\) is \(n\)-periodic. The expression (67) is called a monodromy decomposition, and we say that \((\rho(g_{0}),\rho(g_{1}),\cdots,\rho(g_{n-2}),\rho(g_{n-1}))\) is a \(GL(V)\)-quiddity sequence if \(\rho(m)=-1_{V}=-\rho(1_{G})\). Recall that we are working with a fixed representation \((\rho,V)\). It is well known that \(G\) acts on \(G^{n}\) by means of the diagonal action. Define a map \(\varrho:\mathcal{RP}_{n}\longrightarrow G^{n}\) in the following manner: for each pair \((\mathfrak{v}=\{v_{k}\}_{k\in\mathbb{Z}},\{g_{k}\}_{k\in\mathbb{Z}})\in\mathcal{RP}_{n}\) we put \(\varrho(\mathfrak{v},\{g_{k}\}_{k\in\mathbb{Z}})=(g_{1},\cdots,g_{n})=(\varrho_{1}(\mathfrak{v}),\cdots,\varrho_{i}(\mathfrak{v}),\cdots,\varrho_{n}(\mathfrak{v}))\). By way of motivation we will consider \(n\)-antiperiodic sequences in the projective space \(\mathbb{P}_{1}(M_{l}(\mathbb{R}))\). Briefly, we now review the \((s-1)\)-dimensional right-projective spaces over the real \(l\times l\) matrices [27]. Real matrices of order \(sl\times tl\), with \(t,s\geq 1\) and \(t\neq s\), or with \(t=s\geq 2\), are denoted by calligraphic capital letters. One writes the \(sl\times l\) matrix \(\mathcal{Y}\) in block form: \(\mathcal{Y}=\left(\begin{array}{c}Y_{1}\\ \vdots\\ Y_{s}\end{array}\right)\), in which each \(Y_{i}\) is an \(l\times l\) matrix. \(R_{0}(sl^{2})\) will be the set of real or complex matrices \(\mathcal{Y}\) of rank equal to \(l\). \(R_{0}(sl^{2})\) is a connected topological space and its topology is defined by means of any generalized matrix norm. Two matrices \(\mathcal{Y}=\left(\begin{array}{c}Y_{1}\\ \vdots\\ Y_{s}\end{array}\right)\) and \(\mathcal{U}=\left(\begin{array}{c}U_{1}\\ \vdots\\ U_{s}\end{array}\right)\) of \(R_{0}(sl^{2})\) are right- or column-equivalent if there exists an \(l\times l\) invertible matrix \(S\) such that \[\mathcal{U}=\left(\begin{array}{c}U_{1}\\ \vdots\\ U_{s}\end{array}\right)=\left(\begin{array}{c}Y_{1}\\ \vdots\\ Y_{s}\end{array}\right)S=\mathcal{Y}S,\quad|S|\neq 0. \tag{68}\] This relation partitions \(R_{0}(sl^{2})\) into equivalence classes of column-equivalent matrices. These equivalence classes are the points of the \((s-1)\)-dimensional right-projective space over the real or complex \(l\times l\) matrices \(\mathbb{P}_{(s-1)}(M_{l}(\mathbb{R}))\). The projective mappings \(\mathcal{C}\) of this right-projective space are given by means of constant invertible \(sl\times sl\) matrices. \(\mathcal{C}\) is written in block form \[\mathcal{C}=\left(\begin{array}{ccc}C_{11}&\cdots&C_{1s}\\ \vdots&&\vdots\\ C_{s1}&\cdots&C_{ss}\end{array}\right),\quad|\mathcal{C}|\neq 0, \tag{69}\] where each block \(C_{ij}\), \(i,j=1,\cdots,s\), is an \(l\times l\) matrix. We denote the space of all projective mappings by \(PGL_{sl}(M_{l}(\mathbb{R}))\) or simply \(PGL_{sl}\). 
For \(\mathcal{C}\) fixed, one defines \[\widetilde{\mathcal{Y}}=\left(\begin{array}{c}\widetilde{Y}_{1}\\ \vdots\\ \widetilde{Y}_{s}\end{array}\right)=\mathcal{C}(\mathcal{Y})=\mathcal{C}\mathcal{Y}=\left(\begin{array}{ccc}C_{11}&\cdots&C_{1s}\\ \vdots&&\vdots\\ C_{s1}&\cdots&C_{ss}\end{array}\right)\left(\begin{array}{c}Y_{1}\\ \vdots\\ Y_{s}\end{array}\right), \tag{70}\] for all \(\mathcal{Y}\in\mathbb{P}_{(s-1)}(M_{l}(\mathbb{R}))\); then \(\mathcal{C}(\mathcal{Y})\in\mathbb{P}_{(s-1)}(M_{l}(\mathbb{R}))\). If \(\mathcal{U}=\mathcal{Y}S\) where \(|S|\neq 0\), then \(\widetilde{\mathcal{U}}=\mathcal{C}\mathcal{U}=\mathcal{C}\mathcal{Y}S=\widetilde{\mathcal{Y}}S\); hence, column-equivalent matrices have column-equivalent transformations. Thus, the transformation (70) induces a transformation of \(\mathbb{P}_{(s-1)}(M_{l}(\mathbb{R}))\) onto itself. In this part, we will work in the projective space \(\mathbb{P}_{1}(M_{l}(\mathbb{R}))\). **Definition 27**: _A sequence \(\left\{\Phi_{k}=\left(\begin{array}{c}Y_{1}^{k}\\ Y_{2}^{k}\end{array}\right)\right\}_{k\in\mathbb{Z}}\subset R_{0}(2l^{2})\) is said to be \(n\)-antiperiodic if \(\Phi_{k+n}=-\Phi_{k}\) for all \(k\in\mathbb{Z}\). An \(n\)-antiperiodic sequence \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\) of \(R_{0}(2l^{2})\) is called regular if \(\left|\left(\Phi_{k}\ \Phi_{k+1}\right)\right|\neq 0\) for any \(k\in\mathbb{Z}\)._ We can introduce an equivalence relation \(\sim\) in the set \(AP_{n}\) of \(n\)-antiperiodic sequences of matrices belonging to \(R_{0}(2l^{2})\). We say that two sequences \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\), \(\{\Lambda_{k}\}_{k\in\mathbb{Z}}\in AP_{n}\) are related if there exists \(G\in PGL_{2l}\) such that \(\Lambda_{k}=G\Phi_{k}\) for every \(k\in\mathbb{Z}\). Below, the notation \(CAP_{n}=AP_{n}\diagup\sim\) will be used. The following remark is trivial **Remark 28**: _Suppose that \(\mathcal{L}\in CAP_{n}\) is such that there is a regular \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\in\mathcal{L}\). Then every sequence belonging to \(\mathcal{L}\) is regular. In this case, we say that the class \(\mathcal{L}\) is regular; otherwise \(\mathcal{L}\) is called non-regular._ Let \(\mathcal{L}\) be a regular class and \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\in\mathcal{L}\); then there are two \(n\)-periodic sequences of matrices of order \(l\), say \(\{\mathfrak{l}_{k}\}_{k\in\mathbb{Z}}\) and \(\{\mathfrak{s}_{k}\}_{k\in\mathbb{Z}}\), such that \[\Phi_{k+1}=\Phi_{k}\mathfrak{l}_{k}+\Phi_{k-1}\mathfrak{s}_{k}, \tag{71}\] for all \(k\in\mathbb{Z}\); thus, for any \(k\in\mathbb{Z}\), \[(\Phi_{k}\ \Phi_{k+1})=(\Phi_{k-1}\ \Phi_{k})\left(\begin{array}{cc}O&\mathfrak{s}_{k}\\ I&\mathfrak{l}_{k}\end{array}\right). \tag{72}\] From (72) and the Schur determinant lemma it follows that \(|\mathfrak{s}_{k}|\neq 0\) for \(k\in\mathbb{Z}\). Define \[N(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})=\left(\begin{array}{cc}O&\mathfrak{s}_{1}\\ I&\mathfrak{l}_{1}\end{array}\right)\cdot\cdots\cdot\left(\begin{array}{cc}O&\mathfrak{s}_{n}\\ I&\mathfrak{l}_{n}\end{array}\right)=N(\mathfrak{l}_{1};\mathfrak{s}_{1})\cdot\cdots\cdot N(\mathfrak{l}_{n};\mathfrak{s}_{n}),\] and observe that \(|N(\mathfrak{l}_{k};\mathfrak{s}_{k})|=(-1)^{l}|\mathfrak{s}_{k}|\) for \(k=1,\cdots,n\); hence \(|N(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})|=(-1)^{ln}|\mathfrak{s}_{1}|\cdots|\mathfrak{s}_{n}|\). 
Since \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\) is regular and \(n\)-antiperiodic, we conclude that \(|N(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})|=(-1)^{ln}|\mathfrak{s}_{1}|\cdots|\mathfrak{s}_{n}|=1\). It is not difficult to see that every regular solution \(\{\Phi_{k}\}_{k\in\mathbb{Z}}\subset\mathbb{P}_{1}(M_{l}(\mathbb{R}))\) of the equation (71), with coefficients \(\{\mathfrak{l}_{k}\}\) and \(\{\mathfrak{s}_{k}\}\) which are \(n\)-periodic sequences, is \(n\)-antiperiodic if and only if \[\left(\begin{array}{cc}O&\mathfrak{s}_{1}\\ I&\mathfrak{l}_{1}\end{array}\right)\cdot\cdots\cdot\left(\begin{array}{cc}O&\mathfrak{s}_{n}\\ I&\mathfrak{l}_{n}\end{array}\right)=-Id.\] We arrive at the following definition **Definition 29**: _A bi-vector \((\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{n}=(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})\) is called a **right matrix quiddity bi-sequence** if_ \[N((\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{n})=N(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})=N(\mathfrak{l}_{1};\mathfrak{s}_{1})\cdot\cdots\cdot N(\mathfrak{l}_{n};\mathfrak{s}_{n})=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right)=-Id; \tag{73}\] _in this case, \(n\) is called the length of \((\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{n}\)._ It follows from (73) that \(JN^{*}(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{n};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{n})J=(JN^{*}(\mathfrak{l}_{n};\mathfrak{s}_{n})J)\cdot\cdots\cdot(JN^{*}(\mathfrak{l}_{1};\mathfrak{s}_{1})J)=-Id\), where as before \(J=\left(\begin{array}{cc}O&I\\ I&O\end{array}\right)\); thus \[\left(\begin{array}{cc}\mathfrak{l}_{n}^{*}&\mathfrak{s}_{n}^{*}\\ I&O\end{array}\right)\cdot\cdots\cdot\left(\begin{array}{cc}\mathfrak{l}_{1}^{*}&\mathfrak{s}_{1}^{*}\\ I&O\end{array}\right)=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right)=-Id.\] This leads to the next definition **Definition 30**: _A bi-vector \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}=(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n})\) is called a **left matrix quiddity bi-sequence** of length \(n\) if_ \[M((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n})=M(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n})=\left(\begin{array}{cc}\mathfrak{p}_{n}&\mathfrak{q}_{n}\\ I&O\end{array}\right)\cdot\cdots\cdot\left(\begin{array}{cc}\mathfrak{p}_{1}&\mathfrak{q}_{1}\\ I&O\end{array}\right)=M(\mathfrak{p}_{n};\mathfrak{q}_{n})\cdot\cdots\cdot M(\mathfrak{p}_{1};\mathfrak{q}_{1})=-Id. \tag{74}\] It is clear that there is a one-to-one correspondence between the set \(\mathcal{RMQB}_{n}\) of all the right matrix quiddity bi-sequences of length \(n\) and the set \(\mathcal{LMQB}_{n}\) of all the left matrix quiddity bi-sequences of the same length. So, in this part we work in the class of left matrix quiddity bi-sequences of any length, \(\mathcal{LMQB}=\sqcup_{n}\mathcal{LMQB}_{n}\). 
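Definition 30 is easy to test numerically. The sketch below (ours, assuming NumPy) checks (74) for the length-\(3\) family derived in Example 6 right after, with random invertible \(\mathfrak{q}_{1},\mathfrak{q}_{2}\).

```python
import numpy as np

rng = np.random.default_rng(2)
l = 2
I, O = np.eye(l), np.zeros((l, l))
inv = np.linalg.inv

def Mb(p, q):
    # M(p; q) = [[p, q], [I, O]], the block matrix of Definition 30
    return np.block([[p, q], [I, O]])

def is_left_bi_sequence(ps, qs):
    total = np.eye(2 * l)
    for p, q in zip(ps, qs):        # M(p_n;q_n) ... M(p_1;q_1) = -Id
        total = Mb(p, q) @ total
    return np.allclose(total, -np.eye(2 * l))

q1 = rng.normal(size=(l, l))
q2 = rng.normal(size=(l, l))
q3 = -inv(q1 @ q2)                  # enforces q3 q1 q2 = -I
ps = [q1 @ q2, -inv(q1), q3 @ q1]   # the solution found in Example 6 below
qs = [q1, q2, q3]
assert is_left_bi_sequence(ps, qs)
print("left matrix quiddity bi-sequence of length 3 verified")
```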
**Example 6**: _Let us study the equation_ \[M((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{3})=\left(\begin{array}{ cc}\mathfrak{p}_{3}&\mathfrak{q}_{3}\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{2}&\mathfrak{q}_{2} \\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{1}&\mathfrak{q}_{1} \\ I&O\end{array}\right)=-\left(\begin{array}{cc}I&O\\ O&I\end{array}\right), \tag{75}\] _now_ \[M((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{3})=\left(\begin{array}{ cc}\mathfrak{p}_{3}\mathfrak{p}_{2}+\mathfrak{q}_{3}&\mathfrak{p}_{3}\mathfrak{q}_{2}\\ \mathfrak{p}_{2}&\mathfrak{q}_{2}\end{array}\right)\left(\begin{array}{cc} \mathfrak{p}_{1}&\mathfrak{q}_{1}\\ I&O\end{array}\right)=\left(\begin{array}{cc}(\mathfrak{p}_{3}\mathfrak{p}_{2}+ \mathfrak{q}_{3})\mathfrak{p}_{1}+\mathfrak{p}_{3}\mathfrak{q}_{2}&(\mathfrak{p}_{3} \mathfrak{p}_{2}+\mathfrak{q}_{3})\mathfrak{q}_{1}\\ \mathfrak{p}_{2}\mathfrak{p}_{1}+\mathfrak{q}_{2}&\mathfrak{p}_{2}\mathfrak{q}_{1} \end{array}\right),\] _hence \(\mathfrak{p}_{1}=\mathfrak{q}_{1}\mathfrak{q}_{2}\), \(\mathfrak{p}_{2}=-\mathfrak{q}_{1}^{-1}\) and \(\mathfrak{p}_{3}=\mathfrak{q}_{3}\mathfrak{q}_{1}\) such that \(\mathfrak{q}_{3}\mathfrak{q}_{1}\mathfrak{q}_{2}=-I\). It follows that \((\mathfrak{q}_{1}\mathfrak{q}_{2},-\mathfrak{q}_{1}^{-1},\mathfrak{q}_{3} \mathfrak{q}_{1};\mathfrak{q}_{1},\mathfrak{q}_{2},-\mathfrak{q}_{2}^{-1} \mathfrak{q}_{1}^{-1})\) is a left matrix quiddity bi-sequence of length \(3\)._ **Proposition 31**: _Let \((\mathfrak{p}_{1},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n})\) be an element of \(\mathcal{LMQB}_{n}\) then_ \[(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{k-1},\mathfrak{p}_{k}+I,I,\mathfrak{p}_{ k+1}-\mathfrak{q}_{k+1},\mathfrak{p}_{k+2},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1}, \cdots,\mathfrak{q}_{k-1},\mathfrak{q}_{k},-I,\mathfrak{q}_{k+1},\mathfrak{q }_{k+2},\cdots,\mathfrak{q}_{n})\in\mathcal{LMQB}_{n+1}, \tag{76}\] _for \(k=1,\cdots,n-1\)._ **Proof.** In order to prove the proposition it is sufficient to observe that for \(k=1,\cdots,n-1\) \[M(\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1};\mathfrak{q}_{k+1})M(I; -I)M(\mathfrak{p}_{k}+I;\mathfrak{q}_{k}) =\left(\begin{array}{cc}\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1}& \mathfrak{q}_{k+1}\\ I&O\end{array}\right)\left(\begin{array}{cc}I&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{k}+I&\mathfrak{ q}_{k}\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{p}_{k+1}&\mathfrak{q}_{k+1}- \mathfrak{p}_{k+1}\\ I&-I\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{k}+I&\mathfrak{ q}_{k}\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{p}_{k+1}\mathfrak{p}_{k}+ \mathfrak{q}_{k+1}&\mathfrak{p}_{k+1}\mathfrak{q}_{k}\\ \mathfrak{p}_{k}&\mathfrak{q}_{k}\end{array}\right)=M(\mathfrak{p}_{k+1}; \mathfrak{q}_{k+1})M(\mathfrak{p}_{k};\mathfrak{q}_{k}).\] We also have **Theorem 32**: _Suppose that \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}=(\mathfrak{p}_{1}, \cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n})\in\mathcal{ LMQB}_{n}\) and \((\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}=(\mathfrak{l}_{1}, \cdots,\mathfrak{l}_{m};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{m})\in\mathcal{ LMQB}_{m}\). 
Introduce the following products_ \[(\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{k}\left(\overline{\mathfrak{l}};\overline{\mathfrak{s}}\right)_{m}= \tag{77}\] \[(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{k-1},\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{k}+I),\mathfrak{l}_{2},\cdots,\mathfrak{l}_{m-1},\mathfrak{l}_{m}+I,\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1},\mathfrak{p}_{k+2},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{k-1},-\mathfrak{s}_{1}\mathfrak{q}_{k},\mathfrak{s}_{2},\cdots,\mathfrak{s}_{m},\mathfrak{q}_{k+1},\cdots,\mathfrak{q}_{n}),\] _for \(k=1,\cdots,n-1\) (of course \(|\mathfrak{s}_{1}\mathfrak{q}_{k}|\neq 0\)), and_ \[(\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{n}\left(\overline{\mathfrak{l}};\overline{\mathfrak{s}}\right)_{m}=(\mathfrak{p}_{1}-\mathfrak{q}_{1},\mathfrak{p}_{2},\cdots,\mathfrak{p}_{n-1},\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{n}+I),\mathfrak{l}_{2},\cdots,\mathfrak{l}_{m-1},\mathfrak{l}_{m}+I;\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n-1},-\mathfrak{s}_{1}\mathfrak{q}_{n},\mathfrak{s}_{2},\cdots,\mathfrak{s}_{m}). \tag{78}\] _Then \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{k}\left(\overline{\mathfrak{l}};\overline{\mathfrak{s}}\right)_{m}\in\mathcal{LMQB}_{n+m-1}\) for \(k=1,\cdots,n\)._ **Proof.** In fact, let \(1\leq k\leq n-1\); then \[M\big((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{k}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}\big) =M(\mathfrak{p}_{n};\mathfrak{q}_{n})\cdots M(\mathfrak{p}_{k+2};\mathfrak{q}_{k+2})M(\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1};\mathfrak{q}_{k+1})M(\mathfrak{l}_{m}+I;\mathfrak{s}_{m})M(\mathfrak{l}_{m-1};\mathfrak{s}_{m-1})\cdots\] \[M(\mathfrak{l}_{2};\mathfrak{s}_{2})M(\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{k}+I);-\mathfrak{s}_{1}\mathfrak{q}_{k})M(\mathfrak{p}_{k-1};\mathfrak{q}_{k-1})\cdots M(\mathfrak{p}_{1};\mathfrak{q}_{1}).\] Now let us note that \[M(\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{k}+I);-\mathfrak{s}_{1}\mathfrak{q}_{k}) =\left(\begin{array}{cc}\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{k}+I)&-\mathfrak{s}_{1}\mathfrak{q}_{k}\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{l}_{1}&\mathfrak{s}_{1}\\ I&O\end{array}\right)\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{k}&\mathfrak{q}_{k}\\ I&O\end{array}\right)=M(\mathfrak{l}_{1};\mathfrak{s}_{1})\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)M(\mathfrak{p}_{k};\mathfrak{q}_{k}),\] and \[M(\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1};\mathfrak{q}_{k+1})M(\mathfrak{l}_{m}+I;\mathfrak{s}_{m}) =\left(\begin{array}{cc}\mathfrak{p}_{k+1}-\mathfrak{q}_{k+1}&\mathfrak{q}_{k+1}\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{l}_{m}+I&\mathfrak{s}_{m}\\ I&O\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{p}_{k+1}&\mathfrak{q}_{k+1}\\ I&O\end{array}\right)\left(\begin{array}{cc}I&I\\ -I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{l}_{m}&\mathfrak{s}_{m}\\ I&O\end{array}\right)=M(\mathfrak{p}_{k+1};\mathfrak{q}_{k+1})\left(\begin{array}{cc}I&I\\ -I&O\end{array}\right)M(\mathfrak{l}_{m};\mathfrak{s}_{m});\] then, combining the hypotheses of the theorem with the last two equalities, it turns out that \[M\big((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{k}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}\big) =M(\mathfrak{p}_{n};\mathfrak{q}_{n})\cdots M(\mathfrak{p}_{k+2};\mathfrak{q}_{k+2})M(\mathfrak{p}_{k+1};\mathfrak{q}_{k+1})\left(\begin{array}{cc}I&I\\ -I&O\end{array}\right)M(\mathfrak{l}_{m};\mathfrak{s}_{m})M(\mathfrak{l}_{m-1};\mathfrak{s}_{m-1})\cdots\] \[M(\mathfrak{l}_{2};\mathfrak{s}_{2})M(\mathfrak{l}_{1};\mathfrak{s}_{1})\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)M(\mathfrak{p}_{k};\mathfrak{q}_{k})M(\mathfrak{p}_{k-1};\mathfrak{q}_{k-1})\cdots M(\mathfrak{p}_{1};\mathfrak{q}_{1})\] \[=M(\mathfrak{p}_{n};\mathfrak{q}_{n})\cdots M(\mathfrak{p}_{k+1};\mathfrak{q}_{k+1})\left(\begin{array}{cc}I&I\\ -I&O\end{array}\right)\left(\begin{array}{cc}O&-I\\ I&I\end{array}\right)M(\mathfrak{p}_{k};\mathfrak{q}_{k})\cdots M(\mathfrak{p}_{1};\mathfrak{q}_{1})\] \[=M(\mathfrak{p}_{n};\mathfrak{q}_{n})\cdots M(\mathfrak{p}_{k+1};\mathfrak{q}_{k+1})M(\mathfrak{p}_{k};\mathfrak{q}_{k})\cdots M(\mathfrak{p}_{1};\mathfrak{q}_{1})=-Id,\] since \(M(\mathfrak{l}_{m};\mathfrak{s}_{m})\cdots M(\mathfrak{l}_{1};\mathfrak{s}_{1})=-Id\), \[\left(\begin{array}{cc}O&-I\\ I&I\end{array}\right)=\left(\begin{array}{cc}-I&O\\ O&-I\end{array}\right)\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)\quad\text{and}\quad\left(\begin{array}{cc}I&I\\ -I&O\end{array}\right)\left(\begin{array}{cc}O&-I\\ I&I\end{array}\right)=\left(\begin{array}{cc}I&O\\ O&I\end{array}\right).\] Here, we have taken into account that \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\in\mathcal{LMQB}_{n}\). Now, \[M\big((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\circ_{n}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}\big) =M(\mathfrak{l}_{m}+I;\mathfrak{s}_{m})M(\mathfrak{l}_{m-1};\mathfrak{s}_{m-1})\cdots M(\mathfrak{l}_{2};\mathfrak{s}_{2})M(\mathfrak{l}_{1}-\mathfrak{s}_{1}(\mathfrak{p}_{n}+I);-\mathfrak{s}_{1}\mathfrak{q}_{n})M(\mathfrak{p}_{n-1};\mathfrak{q}_{n-1})\] \[\quad\cdots M(\mathfrak{p}_{2};\mathfrak{q}_{2})M(\mathfrak{p}_{1}-\mathfrak{q}_{1};\mathfrak{q}_{1})\] \[=\left(\begin{array}{cc}I&I\\ O&I\end{array}\right)M(\mathfrak{l}_{m};\mathfrak{s}_{m})M(\mathfrak{l}_{m-1};\mathfrak{s}_{m-1})\cdots M(\mathfrak{l}_{2};\mathfrak{s}_{2})M(\mathfrak{l}_{1};\mathfrak{s}_{1})\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)M(\mathfrak{p}_{n};\mathfrak{q}_{n})\] \[\quad M(\mathfrak{p}_{n-1};\mathfrak{q}_{n-1})\cdots M(\mathfrak{p}_{2};\mathfrak{q}_{2})M(\mathfrak{p}_{1};\mathfrak{q}_{1})\left(\begin{array}{cc}I&O\\ -I&I\end{array}\right)\] \[=\left(\begin{array}{cc}I&I\\ O&I\end{array}\right)\left(\begin{array}{cc}-I&O\\ O&-I\end{array}\right)\left(\begin{array}{cc}O&I\\ -I&-I\end{array}\right)\left(\begin{array}{cc}-I&O\\ O&-I\end{array}\right)\left(\begin{array}{cc}I&O\\ -I&I\end{array}\right)=-Id.\] This finishes the proof of the theorem. **Theorem 33**: _Let \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}=(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{n})\in\mathcal{LMQB}_{n}\) and \((\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}=(\mathfrak{l}_{1},\cdots,\mathfrak{l}_{m};\mathfrak{s}_{1},\cdots,\mathfrak{s}_{m})\in\mathcal{LMQB}_{m}\). 
Define_ \[(\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\bullet_{k}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}= \tag{79}\] \[(\mathfrak{p}_{1},\cdots,\mathfrak{p}_{k-1}+I,\mathfrak{l}_{2}-\mathfrak{s}_{2},\mathfrak{l}_{3},\cdots,\mathfrak{l}_{m},(\mathfrak{p}_{k}-\mathfrak{q}_{k})-\mathfrak{q}_{k}\mathfrak{l}_{1},\mathfrak{p}_{k+1},\cdots,\mathfrak{p}_{n};\mathfrak{q}_{1},\cdots,\mathfrak{q}_{k-1},\mathfrak{s}_{2},\cdots,\mathfrak{s}_{m},-\mathfrak{q}_{k}\mathfrak{s}_{1},\mathfrak{q}_{k+1},\cdots,\mathfrak{q}_{n}),\] _for \(k=2,\cdots,n\) (note that \(|\mathfrak{q}_{k}\mathfrak{s}_{1}|\neq 0\)), and_ \[(\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\bullet_{1}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}=((\mathfrak{p}_{1}-\mathfrak{q}_{1})-\mathfrak{q}_{1}\mathfrak{l}_{1},\mathfrak{p}_{2},\cdots,\mathfrak{p}_{n-1},\mathfrak{p}_{n}+I,\mathfrak{l}_{2}-\mathfrak{s}_{2},\mathfrak{l}_{3},\cdots,\mathfrak{l}_{m};-\mathfrak{q}_{1}\mathfrak{s}_{1},\mathfrak{q}_{2},\cdots,\mathfrak{q}_{n},\mathfrak{s}_{2},\cdots,\mathfrak{s}_{m}). \tag{80}\] _Then \((\overline{\mathfrak{p}};\overline{\mathfrak{q}})_{n}\bullet_{k}(\overline{\mathfrak{l}};\overline{\mathfrak{s}})_{m}\in\mathcal{LMQB}_{n+m-1}\) for \(k=1,\cdots,n\)._ **Proof.** The proof of this theorem is similar to that of the previous theorem, hence it will be omitted. ### The moving frame theory revisited in relation to the two previous subsections An alternative way of studying the fixed points under the action of the group of rigid motions on Euclidean space is through the notion of moving frames, which leads to the construction of invariant functions; see [4]. For our purposes, in this part, we will be interested in the theory of discrete moving frames, which was founded in [18]. Let \(\mathfrak{M}_{n}=(M_{l}(\mathbb{C}))^{n}\) be equipped with the following norm \[\|\overline{X}\|=\sqrt{\sum_{k=1}^{n}\|\mathfrak{r}_{k}\|^{2}}, \tag{81}\] for all \(\overline{X}=(\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n})\in\mathfrak{M}_{n}\), where \(\|\mathfrak{r}_{k}\|=\sup_{0\neq v\in\mathbb{C}^{l}}\frac{\|\mathfrak{r}_{k}v\|}{\|v\|}\) and \(k=1,\ldots,n\). Let us suppose that a group \(G\) acts from the left on \(M_{l}(\mathbb{C})\), and so on \(\mathfrak{M}_{n}\) by means of the product action, that is, \(g\cdot\overline{X}=(g\cdot\mathfrak{r}_{1},\cdots,g\cdot\mathfrak{r}_{n})\). We will assume that the action of \(G\) on \(M_{l}(\mathbb{C})\) is free, that is, for all \(\mathfrak{r}\in M_{l}(\mathbb{C})\) the corresponding isotropy group \(G_{\mathfrak{r}}=\{g\in G\,|\,g\cdot\mathfrak{r}=\mathfrak{r}\}\) is equal to \(\{e\}\), where \(e\) is the identity of \(G\). **Definition 34**: _A \(M_{l}(\mathbb{C})\)-valued function \(I:\mathfrak{M}_{n}\longrightarrow M_{l}(\mathbb{C})\) is called \(G\)-invariant if \(I(g\cdot\overline{X})=I(\overline{X})\) for all \(\overline{X}\in\mathfrak{M}_{n}\) and every \(g\in G\). Note that an invariant function in our context is a matrix-valued function._ It is easy to see that the set of all \(G\)-invariant \(M_{l}(\mathbb{C})\)-valued functions is an algebra over \(\mathbb{C}\). Indeed, denote this set by \(I(G;M_{l}(\mathbb{C}))\). 
Let \(I\) and \(\widetilde{I}\) be two \(G\)-invariant \(M_{l}(\mathbb{C})\)-valued functions and define \((I+\widetilde{I})(\overline{X})=I(\overline{X})+\widetilde{I}(\overline{X})\) for all \(\overline{X}\in\mathfrak{M}_{n}\); then \(I+\widetilde{I}\in I(G;M_{l}(\mathbb{C}))\) because, for every \(g\in G\), we obtain \((I+\widetilde{I})(g\cdot\overline{X})=I(g\cdot\overline{X})+\widetilde{I}(g\cdot\overline{X})=I(\overline{X})+\widetilde{I}(\overline{X})=(I+\widetilde{I})(\overline{X})\). For \(\alpha\in\mathbb{C}\), in the same way, we have \((\alpha I)(\overline{X})=\alpha I(\overline{X})\in I(G;M_{l}(\mathbb{C}))\) and \((I\cdot\widetilde{I})(\overline{X})=I(\overline{X})\widetilde{I}(\overline{X})\in I(G;M_{l}(\mathbb{C}))\). Now, since \(M_{l}(\mathbb{C})\) is a complex algebra, it follows that \(I(G;M_{l}(\mathbb{C}))\) is a complex algebra. **Definition 35**: _A right moving frame \(\rho\) is a function \(\rho:\Omega\subset\mathfrak{M}_{n}\longrightarrow G\) such that \(\rho(g\cdot\overline{X})=\rho(\overline{X})g^{-1}\) for every \(\overline{X}\in\Omega\) and any \(g\in G\). On the other hand, \(\rho:\Omega\subset\mathfrak{M}_{n}\longrightarrow G\) is a left moving frame if \(\rho(g\cdot\overline{X})=g\rho(\overline{X})\) for arbitrary \(\overline{X}\in\Omega\) and \(g\in G\). The set \(\Omega\) is called the domain of \(\rho\)._ Suppose that \(\rho\) is a fixed right moving frame. Then, for \(k=1,\ldots,n\), the function \(I_{k}(\overline{X})=\rho(\overline{X})\cdot\mathfrak{r}_{k}\) from \(\mathfrak{M}_{n}\) into \(M_{l}(\mathbb{C})\) is \(G\)-invariant, in other words \(I_{k}\in I(G;M_{l}(\mathbb{C}))\). Indeed, observe that \(I_{k}(g\cdot\overline{X})=\rho(g\cdot\overline{X})\cdot(g\cdot\mathfrak{r}_{k})=(\rho(\overline{X})g^{-1})\cdot(g\cdot\mathfrak{r}_{k})=\rho(\overline{X})\cdot\mathfrak{r}_{k}=I_{k}(\overline{X})\). The \(I_{k}\) are called the normalized \(G\)-invariants and of course they depend on \(\rho\). On the other hand, any other \(G\)-invariant \(M_{l}(\mathbb{C})\)-valued function is a function of these normalized \(G\)-invariants, because if we take \(I\in I(G;M_{l}(\mathbb{C}))\) arbitrary, then, since \(I(g\cdot\overline{X})=I(\overline{X})\) for all \(\overline{X}\in\mathfrak{M}_{n}\) and every \(g\in G\), in particular \(I(\overline{X})=I(\rho(\overline{X})\cdot\overline{X})=I(\rho(\overline{X})\cdot\mathfrak{r}_{1},\cdots,\rho(\overline{X})\cdot\mathfrak{r}_{n})=I(I_{1}(\overline{X}),\ldots,I_{n}(\overline{X}))\). Next, we will examine an example. Define \(G_{\times}=GL_{l}(\mathbb{C})\times M_{l}(\mathbb{C})=\{(\mathfrak{z},\mathfrak{w})\,|\,\mathfrak{z}\in GL_{l}(\mathbb{C}),\mathfrak{w}\in M_{l}(\mathbb{C})\}\); then \(G_{\times}\) is a group with respect to the following product \((\mathfrak{z}_{1},\mathfrak{w}_{1})(\mathfrak{z}_{2},\mathfrak{w}_{2})=(\mathfrak{z}_{1}\mathfrak{z}_{2},\mathfrak{z}_{1}\mathfrak{w}_{2}+\mathfrak{w}_{1})\). Observe that the unit of \(G_{\times}\) is the pair \((I,O)\) and moreover \((\mathfrak{z},\mathfrak{w})^{-1}=(\mathfrak{z}^{-1},-\mathfrak{z}^{-1}\mathfrak{w})\). Consider the action of \(G_{\times}\) on \(M_{l}(\mathbb{C})\) defined in the following form: \(\mathfrak{r}\longrightarrow\mathfrak{z}\mathfrak{r}+\mathfrak{w}\), that is, \((\mathfrak{z},\mathfrak{w})\cdot\mathfrak{r}=\mathfrak{z}\mathfrak{r}+\mathfrak{w}\). 
It can be seen as a left action because \[\left(\begin{array}{c}\mathfrak{z}\mathfrak{r}+\mathfrak{w}\\ I\end{array}\right)=\left(\begin{array}{cc}\mathfrak{z}&\mathfrak{w}\\ O&I\end{array}\right)\left(\begin{array}{c}\mathfrak{r}\\ I\end{array}\right), \tag{82}\] hence, this left action extend its action to \(\mathfrak{M}_{n}\) by means to the product action \[\left(\begin{array}{ccc}\mathfrak{z}\mathfrak{r}_{1}+\mathfrak{w}&\cdots& \mathfrak{z}\mathfrak{r}_{n}+\mathfrak{w}\\ I&\cdots&I\end{array}\right)=\left(\begin{array}{ccc}\mathfrak{z}&\mathfrak{ w}\\ O&I\end{array}\right)\left(\begin{array}{ccc}\mathfrak{r}_{1}&\cdots& \mathfrak{r}_{n}\\ I&\cdots&I\end{array}\right). \tag{83}\] We have **Lemma 36**: _Denote \(\Omega=\{\overline{X}=(\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n})\in \mathfrak{M}_{n}\,|\,\mathfrak{r}_{2}-\mathfrak{r}_{1}\in GL_{l}\}\), then the function_ \[\rho(\overline{X})=\rho(\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n})=(( \mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1},-(\mathfrak{r}_{2}-\mathfrak{r}_{1})^ {-1}\mathfrak{r}_{1}), \tag{84}\] _from \(\Omega\) into \(G_{\times}\) is a right moving frame._ **Proof.** Observe that for \(g=(\mathfrak{z},\mathfrak{w})\in G_{\times}\) arbitrary \[\rho(g\cdot\overline{X})=\rho(g\cdot\mathfrak{r}_{1},\cdots,g\cdot\mathfrak{ r}_{n})=((g\cdot\mathfrak{r}_{2}-g\cdot\mathfrak{r}_{1})^{-1},-(g\cdot\mathfrak{r}_{2}-g \cdot\mathfrak{r}_{1})^{-1}g\cdot\mathfrak{r}_{1}),\] now we obtain \[(g\cdot\mathfrak{r}_{2}-g\cdot\mathfrak{r}_{1})^{-1}=(\mathfrak{r}_{2}- \mathfrak{r}_{1})^{-1}\mathfrak{z}^{-1},\] and \[-(g\cdot\mathfrak{r}_{2}-g\cdot\mathfrak{r}_{1})^{-1}g\cdot\mathfrak{r}_{1}=- (\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{z}^{-1}(\mathfrak{r}_{1}+ \mathfrak{z}^{-1}\mathfrak{w}).\] Hence, \[\rho(g\cdot\overline{X})=((\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{ z}^{-1},-(\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}(\mathfrak{r}_{1}+\mathfrak{z}^{-1} \mathfrak{w})).\] On the other hand \[\rho(\overline{X})g^{-1}=((\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1},-(\mathfrak{ r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{r}_{1})(\mathfrak{z}^{-1},-\mathfrak{z}^{-1} \mathfrak{w})=((\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{z}^{-1},-( \mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{z}^{-1}\mathfrak{w}-( \mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{r}_{1})),\] thus \(\rho(g\cdot\overline{X})=\rho(\overline{X})g^{-1}\). From this lemma follows that the normalized invariants are in this example \[I_{k}(\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n})=((\mathfrak{r}_{2}-\mathfrak{ r}_{1})^{-1},-(\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}\mathfrak{r}_{1})\cdot \mathfrak{r}_{k}=(\mathfrak{r}_{2}-\mathfrak{r}_{1})^{-1}(\mathfrak{r}_{k}- \mathfrak{r}_{1}), \tag{85}\] for \(k=1,2,\ldots,n\), thus \(I_{1}=O\), \(I_{2}=I\), etc. We return to the general case. A twisted \(n\)-gon in \(M_{l}(\mathbb{C})\) is a sequence which is constructed of the following form (we recall that \(G\) is a left action over \(M_{l}(\mathbb{C})\) and \(\mathfrak{M}_{n}\)) \[(\mathfrak{q}_{k})_{k\in\mathbb{Z}}=\cdots,m^{-2}\cdot\mathfrak{r}_{1},\cdots,m^ {-2}\cdot\mathfrak{r}_{n},m^{-1}\cdot\mathfrak{r}_{1},\cdots,m^{-1}\cdot \mathfrak{r}_{n},\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n},m\cdot\mathfrak{r}_{1}, \cdots,m\cdot\mathfrak{r}_{n},m^{2}\cdot\mathfrak{r}_{1},\cdots,m^{2}\cdot \mathfrak{r}_{n},\cdots, \tag{86}\] where \(\overline{X}=(\mathfrak{r}_{1},\cdots,\mathfrak{r}_{n})\in\mathfrak{M}_{n}\) and \(m\in G\) are fixed. 
In this case \(m\) is called the monodromy for the twisted \(n\)-gon \((\mathfrak{q}_{k})_{k\in\mathbb{Z}}\). Observe that if \((\mathfrak{q}_{k})_{k\in\mathbb{Z}}\) is a twisted \(n\)-gon then \(\mathfrak{q}_{k+n}=m\cdot\mathfrak{q}_{k}\) for all \(k\). Moreover, \(G\) acts over the set of all twisted \(n\)-gon in the form \(g\cdot(\mathfrak{q}_{k})_{k\in\mathbb{Z}}=(g\cdot\mathfrak{q}_{k})_{k\in\mathbb{Z}}\) being it a left action. The space of all twisted \(n\)-gons can be identified with \(\mathfrak{M}_{n}\). If \(\rho\) is a right moving frame (resp. left moving frame) and \((\mathfrak{q}_{k})_{k\in\mathbb{Z}}\) is a twisted \(n\)-gon such that \((\mathfrak{q}_{k},\cdots,\mathfrak{q}_{k+n-1})\in\Omega\) for all \(k\in\mathbb{Z}\), we can construct the sequence \((\rho_{k}=\rho(\mathfrak{q}_{k},\cdots,\mathfrak{q}_{k+n-1}))_{k\in\mathbb{Z}}\subset G\). **Definition 37**: _For a right moving frame given (resp. left moving frame) the element of \(G\), \(\mathfrak{K}_{k}=\rho_{k+1}\rho_{k}^{-1}\) (resp. \(\mathfrak{K}_{k}=\rho_{k}^{-1}\rho_{k+1}\)) is called the right \(k\)-Maurer-Cartan element (resp. left \(k\)-Maurer-Cartan element). The equation \(\mathfrak{K}_{k}\rho_{k}=\rho_{k+1}\) (resp. \(\rho_{k}\mathfrak{K}_{k} Clearly, the elements \(\mathfrak{K}_{k}\) are invariants under the action of \(G\). For our previous example \[\rho_{k}=((\mathfrak{q}_{k+1}-\mathfrak{q}_{k})^{-1},-(\mathfrak{q}_{k+1}- \mathfrak{q}_{k})^{-1}\mathfrak{q}_{k}),\] and so \(\mathfrak{K}_{k}=((\mathfrak{q}_{k+2}-\mathfrak{q}_{k+1})^{-1}(\mathfrak{q}_{k +1}-\mathfrak{q}_{k}),-(\mathfrak{q}_{k+2}-\mathfrak{q}_{k+1})^{-1}(\mathfrak{ q}_{k+1}-\mathfrak{q}_{k}))\), or also \[\rho_{k}=\left(\begin{array}{cc}(\mathfrak{q}_{k+1}-\mathfrak{q}_{k})^{-1}&- (\mathfrak{q}_{k+1}-\mathfrak{q}_{k})^{-1}\mathfrak{q}_{k}\\ O&I\end{array}\right),\] hence \[\mathfrak{K}_{k} =\left(\begin{array}{cc}(\mathfrak{q}_{k+2}-\mathfrak{q}_{k+ 1})^{-1}&-(\mathfrak{q}_{k+2}-\mathfrak{q}_{k+1})^{-1}\mathfrak{q}_{k+1}\\ O&I\end{array}\right)\left(\begin{array}{cc}(\mathfrak{q}_{k+1}-\mathfrak{ q}_{k})&\mathfrak{q}_{k}\\ O&I\end{array}\right)\] \[=\left(\begin{array}{cc}(\mathfrak{q}_{k+2}-\mathfrak{q}_{k+ 1})^{-1}(\mathfrak{q}_{k+1}-\mathfrak{q}_{k})&-(\mathfrak{q}_{k+2}-\mathfrak{ q}_{k+1})^{-1}(\mathfrak{q}_{k+1}-\mathfrak{q}_{k})\\ O&I\end{array}\right).\] **Definition 38**: _Let \(H:\mathfrak{M}_{n}\longrightarrow M_{l}(\mathbb{C})\) be a function defined on \(n\)-gons. We say that \(H\) is a discrete \(G\)-invariant function if for every twisted \(n\)-gon \((\mathfrak{q}_{k})_{k\in\mathbb{Z}}\), we have \(H(g\cdot(\mathfrak{q}_{k},\cdots,\mathfrak{q}_{k+n-1}))=H(\mathfrak{q}_{k}, \cdots,\mathfrak{q}_{k+n-1})\) for all \(k\in\mathbb{Z}\) and any \(g\in G\)._ For instance, the quantities \(H_{k;j}=\rho_{k}\cdot\mathfrak{q}_{j}\) are discrete \(G\)-invariant functions for all \(k,j\in\mathbb{Z}\) and any other \(G\)-invariant function is a function of these \(H_{k;j}\). In fact, \(H(\mathfrak{q}_{j},\cdots,\mathfrak{q}_{j+n-1})=H(g_{k}\cdot(\mathfrak{q}_{j},\cdots,\mathfrak{q}_{j+n-1}))=H(g_{k}\cdot\mathfrak{q}_{j},\cdots,g_{k}\cdot \mathfrak{q}_{j+n-1})=H(H_{k;j},\cdots,H_{k;j+n-1})\). ### Elements to a theory of matrix-valued frieze patterns In this subsection, we explore a notion of matrix frieze pattern. 
A **left matrix-valued frieze**\(\mathcal{F}_{m}\) with \(p-3\) non-trivial rows and \(p\) periodic will be seen in the form \[\begin{array}{ccccccccc}&&O&&O&&O&&\\ &I&&I&&I&&I&&\\ &&\mathfrak{M}_{-1,-1}&&\mathfrak{M}_{0,0}&&\mathfrak{M}_{1,1}&&\mathfrak{M}_{ 2,2}&&\\ &\cdots&&\ddots&&\ddots&&\ddots&&\ddots&&\cdots\\ &&&\mathfrak{M}_{-1,p-5}&&\mathfrak{M}_{0,p-4}&&\mathfrak{M}_{1,p-3}&&\mathfrak{ M}_{2,p-2}&&\\ &&I&&I&&I&&I&&I\end{array},\] where \(M_{i,j}\in GL_{l}(\mathbb{R})\) for any \(i,j\in\mathbb{Z}\) and such that the following diamond matrix rule \[\mathfrak{M}_{i,j}\mathfrak{M}_{i+1,j+1}-\mathfrak{M}_{i,j+1}\mathfrak{M}_{i+1,j}=I, \tag{87}\] holds for all \(i,j\in\mathbb{Z}\). If we replace (87) by \[\mathfrak{M}_{i+1,j+1}\mathfrak{M}_{i,j}-\mathfrak{M}_{i+1,j}\mathfrak{M}_{i,j +1}=I, \tag{88}\] then \(\mathcal{F}_{m}\) is called **right matrix-valued frieze**. It is clear that we can define other types of matrix-valued friezes depending on the order in which the matrices are located in the two terms of (87). However in this section, we prefer to concentrate only in the two types previously defined of matrix-valued frieze. A matrix-valued finite frieze which is both left and right matrix-valued finite frieze will be call a **two-sided matrix-valued frieze**. **Definition 39**: _Let \(\mathcal{F}_{m}\) be a matrix-valued finite frieze (left or right). The finite sequence \((\mathfrak{M}_{0,0},\cdots,\mathfrak{M}_{p-1,p-1})\) is called the **matrix frieze quiddity sequence** of \(\mathcal{F}_{m}\)._ From now on, we identify a matrix-valued frieze with its quiddity sequence. If \(\mathcal{F}_{m}\) is a matrix-valued finite frieze its nontrivial rows are those that are located between the two rows composed only of the identity matrix. In this point, we recall the definition the **joint spectrum**. For an \(s\)-tuple \(T=(T_{1},\cdots,T_{s})\) of complex \(l\times l\)-matrices, we define the joint spectrum \(\sigma(T)\) as the set of all points \(\lambda=(\lambda_{1},\cdots,\lambda_{s})\in\mathbb{C}^{s}\) for which there exists a nonzero vector \(x\in\mathbb{C}^{l}\) (called the joint eigenvector) satisfying \[T_{k}x=\lambda_{k}x,\] for \(k=1,\ldots,s\). If the \((T_{k})\)'s are commuting then \(\sigma(T)\neq\emptyset\). **Proposition 40**: _We give two simple properties of the matrix-valued frieze patterns_ 1. _If we transpose the non-trivial rows of a matrix-valued frieze pattern_ \({\cal F}_{m}\)_, we obtain a new matrix-valued frieze pattern denoted by_ \({\cal F}_{m}^{*}\)_. Suppose that_ \({\cal F}_{m}\) _is a right matrix-valued frieze then_ \({\cal F}_{m}^{*}\) _is a left matrix-valued frieze. Hence, if_ \({\cal F}_{m}\) _is a two-sided matrix-valued frieze then_ \({\cal F}_{m}^{*}\) _is a two-sided matrix-valued frieze._ 2. _Let us assume that_ \({\cal F}_{m}\) _is matrix-valued frieze pattern such that all its matrices commute and let_ \(x\in{\mathbb{C}}^{l}\) _be a common eigenvector to all matrices of_ \({\cal F}_{m}\) _with_ \(\|x\|=1\)_. Then, we can construct a scalar frieze pattern_ \({\cal F}_{s}(x)\) _with complex entries of the following form: if the matrix_ \(A\) _is an entry of_ \({\cal F}_{m}\) _then the corresponding entry of_ \({\cal F}_{s}(x)\) _is_ \((Ax,x)_{\mathbb{C}^{l}}\)_._ **Proof.** 1. follows of (87) and (88). To prove 2. observe that there exist \(\lambda_{i,j}\in{\mathbb{C}}\) such that \({\mathfrak{M}}_{i,j}x=\lambda_{i,j}x\) where \(x\) is a joint eigenvector, therefore again the assertion follows for using (87) and (88). 
The proof of the following proposition is a trivial calculate. **Proposition 41**: _For all \({\mathfrak{M}}\in GL_{l}({\mathbb{R}})\)_ \[\begin{array}{ccccccccccccc}&O&O&&O&&O&&&&&\\ &I&&I&&I&&I&&I&&I&&\\ &\cdots&&{\mathfrak{M}}&&2{\mathfrak{M}}^{-1}&&{\mathfrak{M}}&&2{\mathfrak{M} }^{-1}&&\cdots&2{\mathfrak{M}}^{-1}&&\cdots&,\\ &&&&I&&I&&I&&I&&I&&\\ &&&&O&&O&&O&&O&&O\end{array} \tag{89}\] _is a two-sided matrix-valued finite frieze of period \(4\), its matrix frieze quiddity sequence is \(({\mathfrak{M}},2{\mathfrak{M}}^{-1},{\mathfrak{M}},2{\mathfrak{M}}^{-1})\) called for us here the **basic matrix frieze quiddity sequence**. Moreover, we already know that the basic matrix frieze quiddity sequence is a left matrix quiddity sequence._ Starting from the basic matrix frieze quiddity sequence \(({\mathfrak{M}},2{\mathfrak{M}}^{-1},{\mathfrak{M}},2{\mathfrak{M}}^{-1})\) we can obtain other matrix frieze quiddity sequence. In this sense, we have **Theorem 42**: _The following matrix vector \({\cal MQS}=(I,{\mathfrak{M}}+I,2{\mathfrak{M}}^{-1},{\mathfrak{M}},2{ \mathfrak{M}}^{-1}+I)\) is a matrix frieze quiddity sequence (and therefore so are its cyclical permutations) corresponding to \(5\)-periodic matrix-valued frieze pattern._ **Proof.** We just show the matrix-valued frieze pattern corresponding to this matrix vector: \[\begin{array}{ccccccccccccc}&&O&&O&&O&&O&&O&&O\\ &I&&I&&I&&I&&I&&I&&I\\ &&I&&{\mathfrak{M}}+I&&2{\mathfrak{M}}^{-1}&&{\mathfrak{M}}&&2{\mathfrak{M}}^{- 1}&&I&&2{\mathfrak{M}}^{-1}+I&&I&&I\\ &\cdots&&{\mathfrak{M}}&&2{\mathfrak{M}}^{-1}+I&&I&&{\mathfrak{M}}+I&&2{ \mathfrak{M}}^{-1}&&{\mathfrak{M}}&&\ldots\\ &&&&&I&&I&&I&&I&&I&&I&&I\\ &&&&O&&O&&O&&O&&O&&O\end{array}.\] A more general result is the following **Theorem 43**: _Suppose that \({\mathfrak{A}},{\mathfrak{B}}\in GL_{l}({\mathbb{K}})\) such that \([{\mathfrak{A}},{\mathfrak{B}}]=0\) then_ \[{\mathfrak{A}}=({\mathfrak{A}},{\mathfrak{A}}^{-1}(I+{\mathfrak{B}}),(I+{ \mathfrak{A}}){\mathfrak{B}}^{-1},{\mathfrak{B}},{\mathfrak{B}}^{-1}(I+{ \mathfrak{A}}+{\mathfrak{B}}){\mathfrak{A}}^{-1}), \tag{90}\] _is a matrix frieze quiddity sequence of a two-sided matrix-valued frieze of period \(5\)._ **Proof.** Indeed, the corresponding matrix-valued frieze pattern for the row matrix vector (90) is the following \[\begin{array}{ccccccccccccc}&O&&O&&O&&O&&O&&O&&O\\ &I&&I&&I&&I&&I&&I&&\\ \cdots&{\mathfrak{A}}&&{\mathfrak{A}}^{-1}(I+{\mathfrak{B}})&&{\mathfrak{A}}^{ -1}&(I+{\mathfrak{A}}){\mathfrak{B}}^{-1}&&(I+{\mathfrak{A}}){\mathfrak{B}}^{ -1}&&{\mathfrak{B}}^{-1}(I+{\mathfrak{A}}+{\mathfrak{B}}){\mathfrak{A}}^{-1}&& {\mathfrak{A}}&\ldots\\ &&{\mathfrak{B}}&&{\mathfrak{B}}^{-1}(I+{\mathfrak{A}}+{\mathfrak{B}}){ \mathfrak{A}}^{-1}&&I&&{\mathfrak{A}}^{-1}(I+{\mathfrak{B}})&(I+{\mathfrak{A}}) {\mathfrak{B}}^{-1}&&I&{\mathfrak{B}}\\ &&I&&I&&I&&I&&I&&I&&I&\\ &&O&&O&&O&&O&&O&&O\end{array}.\] ### Noncommutative signed Chebyshev polynomials In this subsection, different types of matrix Chebyshev polynomials are introduced. Also, we show the relation of these polynomials with the matrix periodic difference equations and their monodromy matrices. In the classic case (for scalars), the interested reader can consult [5]. Let \((\mathfrak{a}_{k})_{k\geq 1}\subset GL_{l}(\mathbb{C})\) be a sequence of square matrices of certain order. 
The left matrix signed Chebyshev polynomials are defined of the following recurrent form: \(\mathfrak{p}_{-1}=O\), \(\mathfrak{p}_{0}=I\) and \[\mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m})=\mathfrak{a}_{m} \mathfrak{p}_{m-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m-1})-\mathfrak{p}_{ m-2}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m-2}), \tag{91}\] for instance, \(\mathfrak{p}_{1}(\mathfrak{a}_{1})=\mathfrak{a}_{1}\), \(\mathfrak{p}_{2}(\mathfrak{a}_{1},\mathfrak{a}_{2})=\mathfrak{a}_{2}\mathfrak{ a}_{1}-I\), \(\mathfrak{p}_{3}(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3})= \mathfrak{a}_{3}(\mathfrak{a}_{2}\mathfrak{a}_{1}-I)-\mathfrak{a}_{1}\), etc. **Lemma 44**: _For all \(m\geq 1\), we have_ \[M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m})=\left(\begin{array}{cc} \mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m})&-\mathfrak{p}_{m-1 }(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{m})\\ \mathfrak{p}_{m-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m-1})&-\mathfrak{p}_ {m-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{m-1})\end{array}\right), \tag{92}\] _we remember that \(|M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m})|=1\)._ **Proof.** We prove the lemma by induction. For \(m=1\) we have \[M(\mathfrak{a}_{1})=\left(\begin{array}{cc}\mathfrak{a}_{1}&-I\\ I&O\end{array}\right)=\left(\begin{array}{cc}\mathfrak{p}_{1}(\mathfrak{a} _{1})&-\mathfrak{p}_{0}\\ \mathfrak{p}_{0}&-\mathfrak{p}_{-1}\end{array}\right),\] thus the result is true for \(m=1\). Let us suppose that (92) is hold for \(m=s\), then \[M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s+1}) =\left(\begin{array}{cc}\mathfrak{a}_{s+1}&-I\\ I&O\end{array}\right)M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})=\left( \begin{array}{cc}\mathfrak{a}_{s+1}&-I\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{s}(\mathfrak{a} _{1},\cdots,\mathfrak{a}_{s})&-\mathfrak{p}_{s-1}(\mathfrak{a}_{2},\cdots, \mathfrak{a}_{s})\\ \mathfrak{p}_{s-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s-1})&-\mathfrak{p}_ {s-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{s-2})\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{a}_{s+1}\mathfrak{p}_{s}( \mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})-\mathfrak{p}_{s-1}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{s-1})&-\mathfrak{a}_{s+1}\mathfrak{p}_{s-1}(\mathfrak{a}_ {2},\cdots,\mathfrak{a}_{s})+\mathfrak{p}_{s-2}(\mathfrak{a}_{2},\cdots, \mathfrak{a}_{s-1})\\ \mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})&-\mathfrak{p}_{s-1 }(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{s})\end{array}\right)\] \[=\left(\begin{array}{cc}\mathfrak{p}_{s+1}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{s},\mathfrak{a}_{s+1})&-\mathfrak{p}_{s}(\mathfrak{a}_{2}, \cdots,\mathfrak{a}_{s+1})\\ \mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})&-\mathfrak{p}_{s-1 }(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{s})\end{array}\right).\] Define \[\mathfrak{Q}_{m}=\left(\begin{array}{ccccc}\mathfrak{a}_{m}&I&O&\cdots&O\\ I&\mathfrak{a}_{m-1}&\ddots&\ddots&\vdots\\ O&\ddots&\ddots&\ddots&O\\ \vdots&\ddots&\ddots&\ddots&I\\ O&\cdots&O&I&\mathfrak{a}_{1}\end{array}\right), \tag{93}\] for \(m\geq 2\) and \(\mathfrak{Q}_{1}=\mathfrak{a}_{1}\), where \(\mathfrak{a}_{k}\in GL_{l}\) for \(k=\overline{1,m}\), then we have **Proposition 45**: _For all \(m\geq 1\)_ \[|\mathfrak{Q}_{m}|=|\mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m})|. \tag{94}\] **Proof.** We prove the proposition first for \(m=1\), \(m=2\) and \(m=3\). It is clear that \(|\mathfrak{Q}_{1}|=|\mathfrak{a}_{1}|=|\mathfrak{p}_{1}(\mathfrak{a}_{1})|\). 
From the Schur determinant lemma, we obtain \[|\mathfrak{Q}_{2}|=\left|\begin{array}{cc}\mathfrak{a}_{2}&I\\ I&\mathfrak{a}_{1}\end{array}\right|=|\mathfrak{a}_{2}\mathfrak{a}_{1}-I|=| \mathfrak{p}_{2}(\mathfrak{a}_{1},\mathfrak{a}_{2})|.\] Next, we compute \(|\mathfrak{Q}_{3}|\) through the Schur's formula (also we use the Schur determinant lemma). We have \[|\mathfrak{Q}_{3}|=\left|\begin{array}{cc}\mathfrak{a}_{3}&I&O\\ I&\mathfrak{a}_{2}&I\\ O&I&\mathfrak{a}_{1}\end{array}\right|=|\mathfrak{a}_{3}|\left|\begin{array}[] {cc}\mathfrak{a}_{2}-\mathfrak{a}_{3}^{-1}&I\\ I&\mathfrak{a}_{1}\end{array}\right|=|\mathfrak{a}_{3}|\left|(\mathfrak{a}_{ 2}-\mathfrak{a}_{3}^{-1})\mathfrak{a}_{1}-I|=|\mathfrak{a}_{3}|\left|\mathfrak{ a}_{3}^{-1}|\left|\mathfrak{a}_{3}\mathfrak{a}_{2}\mathfrak{a}_{1}-\mathfrak{a}_{3}- \mathfrak{a}_{1}\right|=|\mathfrak{p}_{3}(\mathfrak{a}_{1},\mathfrak{a}_{2}, \mathfrak{a}_{3})|.\] Hence, the result is true for \(m=1\), \(m=2\) and \(m=3\). Next, we will proceed by induction. Suppose that \(|\mathfrak{L}_{s}|=|\mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})|\). Then \[|\mathfrak{L}_{s+1}|=|\mathfrak{a}_{s+1}|\left|\begin{array}{ccccc}\mathfrak{ a}_{s}-\mathfrak{a}_{s+1}^{-1}&I&O&\cdots&O\\ I&\mathfrak{a}_{s-1}&\ddots&\ddots&\vdots\\ O&\ddots&\ddots&\ddots&O\\ \vdots&\ddots&\ddots&\ddots&I\\ O&\cdots&O&I&\mathfrak{a}_{1}\end{array}\right|=|\mathfrak{a}_{s+1}|\,| \mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s}-\mathfrak{a}_{s+1}^ {-1})|. \tag{95}\] Nevertheless, observe that \[\mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s}- \mathfrak{a}_{s+1}^{-1}) =(\mathfrak{a}_{s}-\mathfrak{a}_{s+1}^{-1})\mathfrak{p}_{s-1}( \mathfrak{a}_{1},\cdots,\mathfrak{a}_{s-1})-\mathfrak{p}_{s-2}(\mathfrak{a}_ {1},\cdots,\mathfrak{a}_{s-2})\] \[=\mathfrak{p}_{s}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{s})- \mathfrak{a}_{s+1}^{-1}\mathfrak{p}_{s-1}(\mathfrak{a}_{1},\cdots,\mathfrak{ a}_{s-1})=\mathfrak{a}_{s+1}^{-1}(\mathfrak{a}_{s+1}\mathfrak{p}_{s}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{s})-\mathfrak{p}_{s-1}(\mathfrak{a}_{1},\cdots,\mathfrak{ a}_{s-1}))\] \[=\mathfrak{a}_{s+1}^{-1}\mathfrak{p}_{s+1}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{s+1}), \tag{96}\] so combining (95) and (96) we obtain (94). Let us denote by \(\mathfrak{R}_{m}\) the formal inverse of \(\mathfrak{L}_{m}\) for \(1\leq m\) and define \(\mathfrak{P}_{m}=(-1)^{m-1}\left((\mathfrak{R}_{m})_{m1}\right)^{-1}\) where \((\mathfrak{R}_{m})_{m1}\) is the \(l\times l\) matrix located in the position \(m1\) of the matrix \(\mathfrak{R}_{m}\). **Proposition 46**: _For all \(m\geq 1\), we have_ \[\mathfrak{P}_{m}=\mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{m}). \tag{97}\] **Proof.** It will be useful to do the calculation of \(\mathfrak{P}_{1},\mathfrak{P}_{2}\) and \(\mathfrak{P}_{3}\). For \(m=1\), \(\mathfrak{R}_{1}=\mathfrak{a}_{1}^{-1}\) hence \(\mathfrak{P}_{1}=\left(\mathfrak{a}_{1}^{-1}\right)^{-1}=\mathfrak{a}_{1}= \mathfrak{p}_{1}(\mathfrak{a}_{1})\). Lets us suppose that \(m=2\), then from a straightforward calculation, we can see that \((\mathfrak{R}_{2})_{21}=-(\mathfrak{a}_{2}\mathfrak{a}_{1}-I)^{-1}\), thus (97) holds. 
If \(m=3\) the entries of \(\mathfrak{R}_{3}\), \((\mathfrak{R}_{3})_{11}\), \((\mathfrak{R}_{3})_{21}\) and \((\mathfrak{R}_{3})_{31}\) can be calculated of the following equations \[\mathfrak{a}_{3}(\mathfrak{R}_{3})_{11}+(\mathfrak{R}_{3})_{21}=I,(\mathfrak{ R}_{3})_{11}+\mathfrak{a}_{2}(\mathfrak{R}_{3})_{21}+(\mathfrak{R}_{3})_{31}=O,( \mathfrak{R}_{3})_{21}+\mathfrak{a}_{1}(\mathfrak{R}_{3})_{31}=O,\] thus \((\mathfrak{R}_{3})_{21}=-\mathfrak{a}_{1}(\mathfrak{R}_{3})_{31}\) and \((\mathfrak{R}_{3})_{11}=(\mathfrak{a}_{2}\mathfrak{a}_{1}-I)(\mathfrak{R}_{3 })_{31}\). It shows that \[(\mathfrak{R}_{3})_{31}=\left(\mathfrak{a}_{3}(\mathfrak{a}_{2}\mathfrak{a}_ {1}-I)-\mathfrak{a}_{1}\right)^{-1},\] hence \(\mathfrak{P}_{3}=\left(\mathfrak{a}_{3}(\mathfrak{a}_{2}\mathfrak{a}_{1}-I)- \mathfrak{a}_{1}\right)=\mathfrak{p}(\mathfrak{a}_{1},\mathfrak{a}_{2}, \mathfrak{a}_{3})\). In the general case, in order to find the first column of \(\mathfrak{R}_{m}\), we must solve the linear system : \[\left(\begin{array}{ccccc}\mathfrak{a}_{m}&I&O&\cdots&O\\ I&\mathfrak{a}_{m-1}&\ddots&\ddots&\vdots\\ O&\ddots&\ddots&\ddots&O\\ \vdots&\ddots&\ddots&\ddots&I\\ O&\cdots&O&I&\mathfrak{a}_{1}\end{array}\right)\left(\begin{array}{c}( \mathfrak{R}_{s})_{11}\\ (\mathfrak{R}_{s})_{21}\\ \vdots\\ (\mathfrak{R}_{s})_{s(s-1)1}\\ (\mathfrak{R}_{s})_{s1}\end{array}\right)=\left(\begin{array}{c}I\\ O\\ \vdots\\ \vdots\\ O\end{array}\right),\] now, using the last \((m-1)\) equations of this system from the bottom to the top, we obtain \[(\mathfrak{R}_{m})_{(m-1)1} =-\mathfrak{p}_{1}(\mathfrak{a}_{1})(\mathfrak{R}_{m})_{m1},\] \[(\mathfrak{R}_{m})_{(m-2)1} =-\mathfrak{a}_{2}(\mathfrak{R}_{m})_{(m-1)1}-(\mathfrak{R}_{m})_ {m1}=(\mathfrak{a}_{2}\mathfrak{p}_{1}(\mathfrak{a}_{1})-I)(\mathfrak{R}_{m}) _{m1}=\mathfrak{p}_{2}(\mathfrak{a}_{1},\mathfrak{a}_{2})(\mathfrak{R}_{m})_{m1},\] \[\ldots\qquad\cdots\qquad\cdots\qquad\cdots\qquad\cdots\qquad \cdots\] \[(\mathfrak{R}_{m})_{21} =(\mathfrak{R}_{m})_{(m-(m-2))1}=(-1)^{(m-2)}\mathfrak{p}_{m-2}( \mathfrak{a}_{1},\cdots,\mathfrak{a}_{m-2})(\mathfrak{R}_{m})_{m1},\] \[(\mathfrak{R}_{m})_{11} =(\mathfrak{R}_{m})_{(m-(m-1))1}=(-1)^{(m-1)}\mathfrak{p}_{m-1}( \mathfrak{a}_{1},\cdots,\mathfrak{a}_{m-1})(\mathfrak{R}_{m})_{m1}.\] The first equation is \(\mathfrak{a}_{m}(\mathfrak{R}_{s})_{11}+(\mathfrak{R}_{s})_{21}=I\), then \[I =\mathfrak{a}_{m}\left((-1)^{(m-1)}\mathfrak{p}_{m-1}(\mathfrak{a} _{1},\cdots,\mathfrak{a}_{m-1})+(-1)^{(m-2)}\mathfrak{p}_{m-2}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{m-2})\right)(\mathfrak{R}_{m})_{m1}\] \[=(-1)^{(m-1)}(\mathfrak{a}_{m}\mathfrak{p}_{m-1}(\mathfrak{a}_{1}, \cdots,\mathfrak{a}_{m-1})-\mathfrak{p}_{m-2}(\mathfrak{a}_{1},\cdots,\mathfrak{a }_{m-2}))\,(\mathfrak{R}_{m})_{m1}\] \[=(-1)^{(m-1)}\mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots,\mathfrak{a }_{m})(\mathfrak{R}_{m})_{m1},\] it shows that \((\mathfrak{R}_{m})_{m1}=(-1)^{(m-1)}(\mathfrak{p}_{m}(\mathfrak{a}_{1},\cdots, \mathfrak{a}_{m}))^{-1}\). Therefore (97) holds. In this moment, we will show the relation between left matrix Chebyshev polynomials and matrix difference equations. Consider the recurrent equation \[\mathfrak{y}_{n+1}=\mathfrak{a}_{n}\mathfrak{y}_{n}-\mathfrak{y}_{n-1},\qquad \quad 1\leq n, \tag{98}\] where \((\mathfrak{a}_{n})\subset M_{l}(\mathbb{C})\). 
We put \(\mathfrak{a}_{0}=I\), then **Lemma 47**: _For \(\mathfrak{y}_{0},\mathfrak{y}_{1}\) given and \(1\leq n\), we have_ \[\mathfrak{y}_{n+1}=\mathfrak{p}_{n}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{n} )\mathfrak{y}_{1}-\mathfrak{p}_{n-1}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{ n})\mathfrak{y}_{0}. \tag{99}\] **Proof.** We do the proof by complete induction. Clearly, the lemma holds for \(n=1\). Now, if \(n=2\) then \[\mathfrak{y}_{3}=\mathfrak{a}_{2}\mathfrak{y}_{2}-\mathfrak{y}_{1}=\mathfrak{ a}_{2}(\mathfrak{a}_{1}\mathfrak{y}_{1}-\mathfrak{y}_{0})-\mathfrak{y}_{1}=( \mathfrak{a}_{2}\mathfrak{a}_{1}-I)\mathfrak{y}_{1}-\mathfrak{a}_{2} \mathfrak{y}_{0}=\mathfrak{p}_{2}(\mathfrak{a}_{1},\mathfrak{a}_{2}) \mathfrak{y}_{1}-\mathfrak{p}_{1}(\mathfrak{a}_{2})\mathfrak{y}_{0}.\] Suppose the result holds for \(n\leq k\), then \[\mathfrak{y}_{k+1}=\mathfrak{p}_{k}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{k} )\mathfrak{y}_{1}-\mathfrak{p}_{k-1}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{k })\mathfrak{y}_{0}, \tag{100}\] and \[\mathfrak{y}_{k}=\mathfrak{p}_{k-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{k-1 })\mathfrak{y}_{1}-\mathfrak{p}_{k-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{ k-1})\mathfrak{y}_{0}, \tag{101}\] thus, if we multiply to the left of (100) by \(\mathfrak{a}_{k+1}\) and to the resulting equality one subtracts (101) we obtain \[\mathfrak{y}_{k+2}=\mathfrak{p}_{k+1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{ k+1})\mathfrak{y}_{1}-\mathfrak{p}_{k}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{ k+1})\mathfrak{y}_{0},\] hence the result is also true for \(n=k+1\). Suppose now that the sequence \((\mathfrak{a}_{n})\) is \(N\)-periodic, then from the previous lemma follows that in order to any solution \((\mathfrak{y}_{n})\) of (98) constitutes an \(N\)-antiperiodic sequence is necessary and sufficient that \[\mathfrak{p}_{N-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N-1})=0,\qquad \mathfrak{p}_{N-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N-1})=I, \tag{102}\] and \[\mathfrak{p}_{N}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})=-I,\qquad \mathfrak{p}_{N-1}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N})=0, \tag{103}\] indeed, the conditions in (102) and (103) imply that \(\mathfrak{y}_{N}=-\mathfrak{y}_{0}\), \(\mathfrak{y}_{N+1}=-\mathfrak{y}_{1}\). We present a more general result **Proposition 48**: _Consider a recurrent relation (98) for which the sequence \((\mathfrak{a}_{n})_{1\leq n}\) is \(N\)-periodic. Suppose that the matrix \(\mathfrak{m}\) satisfies \([\mathfrak{m},\mathfrak{a}_{k}]=0\) for \(k=1,2,\cdots,N\). Then, \(\mathfrak{m}\) is the monodromy matrix of all solution of (98) (that is, if \((\mathfrak{y}_{k})_{0\leq k}\) is any solution of (98), then \(\mathfrak{y}_{k+N}=\mathfrak{m}\mathfrak{y}_{k}\) for all \(0\leq k\)) if and only if_ \[\mathfrak{p}_{N-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N-1})=0,\qquad \mathfrak{p}_{N-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N-1})=-\mathfrak{m}, \tag{104}\] _and_ \[\mathfrak{p}_{N}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})=\mathfrak{m},\qquad \mathfrak{p}_{N-1}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N})=0. \tag{105}\] _Under the hypothesis of the proposition_ \[M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})=\left(\begin{array}{cc}\mathfrak{ m}&O\\ O&\mathfrak{m}\end{array}\right). \tag{106}\] **Proof.** Let \((\mathfrak{y}_{k})_{0\leq k}\) be an arbitrary solution. From (99) and (104) follow that \(\mathfrak{y}_{N}=\mathfrak{m}\mathfrak{y}_{0}\), and taking into account (99) and (105) is easy to obtain that \(\mathfrak{y}_{N+1}=\mathfrak{m}\mathfrak{y}_{1}\). 
Now using complete induction it is immediate to see that for all \(0\leq k\) we have \(\mathfrak{y}_{k+N}=\mathfrak{m}\mathfrak{y}_{k}\). In fact, let us assume that this is hold for \(k\leq r\) (we already know that the statement is true for \(k=0\) and \(k=1\)) then \[\mathfrak{y}_{k+1+N}=\mathfrak{a}_{k+N}\mathfrak{y}_{k+N}-\mathfrak{y}_{k-1+N}= \mathfrak{a}_{k}\mathfrak{m}\mathfrak{y}_{k}-\mathfrak{m}\mathfrak{y}_{k-1}= \mathfrak{m}(\mathfrak{a}_{k}\mathfrak{y}_{k}-\mathfrak{y}_{k-1})=\mathfrak{m} \mathfrak{y}_{k+1},\] since \(\mathfrak{y}_{0}\) and \(\mathfrak{y}_{1}\) are arbitrary it implies that if (104) and (105) are satisfied, then all solution of (98) has to \(\mathfrak{m}\) as monodromy matrix. The proof of the necessity follows from (99). Finally, observe that (106) can be obtained from (92). **Remark 49**: _Under the conditions of the previous proposition any solution \((\mathfrak{h}_{k})_{0\leq k}\) of (98) where \((\mathfrak{a}_{n})_{1\leq n}\) is \(N\)-periodic can be extended to the left obtaining a solution \((\widehat{\mathfrak{h}}_{k})_{k\in\mathbb{Z}}\) of (98) which maintains the property that \(\widehat{\mathfrak{h}}_{k+N}=\mathfrak{m}\widehat{\mathfrak{h}}_{k}\) for all \(k\in\mathbb{Z}\). On the other hand, if \([\mathfrak{m},\mathfrak{a}_{k}]=0\) for \(k=1,2,\cdots,N\) then for any solution \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})\) of the equation (106) each of its cyclic permutations is also a solution (106). Indeed, suppose that \([\mathfrak{m},\mathfrak{b}]=0\) then_ \[\left[\left(\begin{array}{cc}\mathfrak{m}&O\\ O&\mathfrak{m}\end{array}\right),\left(\begin{array}{cc}\mathfrak{b}&-I\\ I&O\end{array}\right)\right]=\left(\begin{array}{cc}O&O\\ O&O\end{array}\right),\] _and this implies the assertion._ Consider again the recurrent relation (98) where the sequence \((\mathfrak{a}_{n})_{1\leq n}\) is \(N\)-periodic such that for \(0\leq k\) \[\left(\begin{array}{c}\mathfrak{y}_{k+1+N}\\ \mathfrak{y}_{k+N}\end{array}\right)=\left(\begin{array}{cc}\mathfrak{m}_{1 1}&\mathfrak{m}_{12}\\ \mathfrak{m}_{21}&\mathfrak{m}_{22}\end{array}\right)\left(\begin{array}{ c}\mathfrak{y}_{k+1}\\ \mathfrak{y}_{k}\end{array}\right), \tag{107}\] for every solution \((\mathfrak{y}_{k})_{0\leq k}\). Then, we obtain (taking \(k=0\)) \[\left(\begin{array}{cc}\mathfrak{p}_{N}(\mathfrak{a}_{1},\cdots,\mathfrak{ a}_{N})&-\mathfrak{p}_{N-1}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N})\\ \mathfrak{p}_{N-1}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N-1})&-\mathfrak{p}_ {N-2}(\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N-1})\end{array}\right)=M( \mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})=\left(\begin{array}{cc}\mathfrak{ m}_{11}&\mathfrak{m}_{12}\\ \mathfrak{m}_{21}&\mathfrak{m}_{22}\end{array}\right)=\mathfrak{M}, \tag{108}\] even more, the matrix sequence \((\mathfrak{a}_{2},\cdots,\mathfrak{a}_{N})\) is a cyclic solution of the equation \(M(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})=\mathfrak{M}\). But then, necessarily each block matrix \(M(\mathfrak{a}_{k})\) must commute with \(\mathfrak{M}\) and this leads to the following facts \(\mathfrak{m}_{11}=\mathfrak{m}_{22}=\mathfrak{m}\), \(\mathfrak{m}_{12}=\mathfrak{m}_{21}=O\) and \([\mathfrak{m},\mathfrak{a}_{k}]=0\) where \(k=1,2,\cdots,N\). Let \((\mathfrak{l}_{n})_{1\leq n}\) and \((\mathfrak{s}_{n})_{1\leq n}\) be two sequences of \(l\times l\) matrices. 
We define a pair of left signed Chebychev polynomials \(((\mathfrak{p}_{n})_{1\leq n},(\mathfrak{q}_{n})_{1\leq n})\) of the following form \[\mathfrak{p}_{n}=\mathfrak{l}_{n}\mathfrak{p}_{n-1}+\mathfrak{s}_{n}\mathfrak{ p}_{n-2},\qquad\qquad\mathfrak{p}_{0}=I,\mathfrak{p}_{-1}=O, \tag{109}\] and \[\mathfrak{q}_{n}=\mathfrak{l}_{n}\mathfrak{q}_{n-1}+\mathfrak{s}_{n}\mathfrak{ q}_{n-2},\qquad\qquad\mathfrak{q}_{0}=O,\mathfrak{q}_{-1}=I, \tag{110}\] for instance \(\mathfrak{p}_{1}=\mathfrak{l}_{1}\), \(\mathfrak{q}_{1}=\mathfrak{s}_{1}\), \(\mathfrak{p}_{2}=\mathfrak{l}_{2}\mathfrak{l}_{1}+\mathfrak{s}_{2}\), \(\mathfrak{q}_{2}=\mathfrak{l}_{2}\mathfrak{s}_{1}\), etc. We should call to \(((\mathfrak{p}_{n})_{1\leq n},(\mathfrak{q}_{n})_{1\leq n})\) a left Chebychev polynomial pair. Consider the recurrent equation \[\mathfrak{Y}_{k+1}=\mathfrak{l}_{k}\mathfrak{Y}_{k}+\mathfrak{s}_{k}\mathfrak{ Y}_{k-1}, \tag{111}\] where for all \(1\leq k\), \(\mathfrak{Y}_{k}=(\mathfrak{p}_{k}^{1}\;\;\mathfrak{y}_{k}^{2})\in M_{l\times 2l}( \mathbb{C})\). We have \[\mathfrak{Y}_{k+1}=\mathfrak{p}_{k}\mathfrak{Y}_{1}+\mathfrak{q}_{k}\mathfrak{ Y}_{0}, \tag{112}\] indeed, it is clear that \(\mathfrak{Y}_{2}=\mathfrak{l}_{1}\mathfrak{Y}_{1}+\mathfrak{s}_{1}\mathfrak{Y}_{0}= \mathfrak{p}_{1}\mathfrak{Y}_{1}+\mathfrak{q}_{1}\mathfrak{Y}_{0}\) and \(\mathfrak{Y}_{3}=\mathfrak{l}_{2}\mathfrak{Y}_{2}+\mathfrak{s}_{2}\mathfrak{Y }_{1}=\mathfrak{l}_{2}(\mathfrak{p}_{1}\mathfrak{Y}_{1}+\mathfrak{q}_{1} \mathfrak{Y}_{0})+\mathfrak{s}_{2}\mathfrak{Y}_{1}=\mathfrak{l}_{2}(\mathfrak{ l}_{1}\mathfrak{Y}_{1}+\mathfrak{s}_{1}\mathfrak{Y}_{0})+\mathfrak{s}_{2} \mathfrak{Y}_{1}=\mathfrak{p}_{2}\mathfrak{Y}_{1}+\mathfrak{q}_{2}\mathfrak{Y }_{0}\), thus (112) holds for \(k=1\) and \(k=2\). Let us suppose that (112) is true for \(k\leq r\) then combining (111)-(112) conveniently, we obtain \[\mathfrak{Y}_{r+1}=\mathfrak{l}_{r}\mathfrak{Y}_{r}+\mathfrak{s}_{r}\mathfrak{ Y}_{r-1}=\mathfrak{l}_{r}(\mathfrak{p}_{r-1}\mathfrak{Y}_{1}+\mathfrak{q}_{r-1} \mathfrak{Y}_{0})+\mathfrak{s}_{r}(\mathfrak{p}_{r-2}\mathfrak{Y}_{1}+ \mathfrak{q}_{r-2}\mathfrak{Y}_{0})=\mathfrak{p}_{r}\mathfrak{Y}_{1}+ \mathfrak{q}_{r}\mathfrak{Y}_{0}.\] Observe that for \(1\leq n\) \[\left(\begin{array}{cc}\mathfrak{p}_{n}&\mathfrak{q}_{n}\\ \mathfrak{p}_{n-1}&\mathfrak{q}_{n-1}\end{array}\right)=\left(\begin{array}{ cc}\mathfrak{l}_{n}&\mathfrak{s}_{n}\\ I&O\end{array}\right)\left(\begin{array}{cc}\mathfrak{p}_{n-1}&\mathfrak{q}_{ n-1}\\ \mathfrak{p}_{n-2}&\mathfrak{q}_{n-2}\end{array}\right),\qquad\qquad\left( \begin{array}{cc}\mathfrak{p}_{0}&\mathfrak{q}_{0}\\ \mathfrak{p}_{-1}&\mathfrak{q}_{-1}\end{array}\right)=\left(\begin{array}{ cc}I&O\\ O&I\end{array}\right), \tag{113}\] and so from (113) follows that \[M((\bar{\mathfrak{t}};\overline{\mathfrak{s}})_{n})=\left(\begin{array}{cc} \mathfrak{l}_{n}&\mathfrak{s}_{n}\\ I&O\end{array}\right)\cdots\left(\begin{array}{cc}\mathfrak{l}_{1}&\mathfrak{ s}_{1}\\ I&O\end{array}\right)=\left(\begin{array}{cc}\mathfrak{p}_{n}&\mathfrak{q}_{n}\\ \mathfrak{p}_{n-1}&\mathfrak{q}_{n-1}\end{array}\right). \tag{114}\] ## Acknowledgment The author was supported by CONAHCYT project 45886.
2305.15493
Square-free values of random polynomials
The question of whether or not a given integral polynomial takes infinitely many square-free values has only been addressed unconditionally for polynomials of degree at most 3. We address this question, on average, for polynomials of arbitrary degree.
Tim Browning, Igor Shparlinski
2023-05-24T18:32:08Z
http://arxiv.org/abs/2305.15493v1
# Square-free values of random polynomials ###### Abstract. The question of whether or not a given integral polynomial takes infinitely many square-free values has only been addressed unconditionally for polynomials of degree at most \(3\). We address this question, on average, for polynomials of arbitrary degree. 2010 Mathematics Subject Classification: 11N32 (11D79) ###### Contents * 1 Introduction * 2 Preliminaries * 3 Solutions to families of polynomial congruences * 4 Proofs of main results ## 1. Introduction Let \(f\in\mathbb{Z}[X]\) be an irreducible polynomial of degree \(k\), without a fixed square divisor. We denote by \(S_{f}(N)\) the number of positive integers \(n\leqslant N\) such that \(f(n)\) is square-free. It is expected that \[S_{f}(N)=c_{f}N+o(N), \tag{1.1}\] as \(N\to\infty\), where \[c_{f}=\prod_{p\text{ prime}}\left(1-\frac{\rho_{f}(p^{2})}{p^{2}}\right)\] and \(\rho_{f}(m)=\#\{n\in\mathbb{Z}/m\mathbb{Z}:f(n)\equiv 0\bmod m\}\), for any positive integer \(m\). When \(k\leqslant 3\) this expectation follows from pioneering work of Hooley [15]. (In fact, when \(k=3\), Reuss [22] has produced an asymptotic formula for \(S_{f}(N)\) with a power saving error term.) However for polynomials of degree \(k\geqslant 4\) we only have a conditional treatment under the \(abc\)-conjecture, thanks to work of Granville [13]. In this paper, for \(k\geqslant 4\), we lend support to the expectation (1.1) by showing that it holds for almost all polynomials of degree \(k\), when they are ordered by naive height. This sits in the framework of a great deal of recent work aimed at understanding the average size of various arithmetic functions over the values of Introduction Let \(A\geqslant 1\) and let \(\varepsilon>0\) be a positive integer. Let \(\lambda\) be a positive integer. Let \(\lambda\) be a positive integer. Let \(\varepsilon>0\) be a positive integer. Let \(\lambda_{1},\ldots,\lambda_{k}\) be the smallest integer such that \(\lambda_{1},\ldots,\lambda_{k}\) are positive integers. Inspired by recent work on Vinogradov's mean value theorem we can also treat a related problem in which we only vary one coefficient. Let \[g(X)=b_{1}X+\cdots+b_{k}X^{k}\in\mathbb{Z}[X]\] be given and note that \(g(0)=0\). Then we may consider the set \[\mathcal{G}_{g}(H)=\{a+g(X)\in\mathbb{Z}[X]:\ a\in\mathcal{I}_{g}(H)\},\] where \[\mathcal{I}_{g}(H)=\{a\in\mathbb{Z}:\ \gcd(a,b_{1},\ldots,b_{k})=1,\ |a|\leqslant H\}.\] While the approach of [24] also applies to polynomials from \(\mathcal{G}_{g}(H)\), we supplement it with some new bounds for residues of polynomials falling in short intervals, which complement those of [3, 8, 9, 12, 18]. To formulate the result, we define \[\eta(k)=\begin{cases}2^{-k+1},&\text{if }2\leqslant k\leqslant 5,\\ 1/(k(k-1)),&\text{if }k\geqslant 6,\end{cases} \tag{1.3}\] with which notation we have following result. **Theorem 1.2**.: _Let \(k\geqslant 2\) be fixed and assume that_ \[H\geqslant N^{(k-1)/2+\eta(k)+\varepsilon},\] _for \(\varepsilon>0\). Then, for a fixed polynomial \(g\in\mathbb{Z}[X]\) of degree \(k\), there exists \(\delta>0\) depending only on \(\varepsilon\) such that_ \[\frac{1}{\#\mathcal{G}_{g}(H)}\sum_{f\in\mathcal{G}_{g}(H)}|S_{f}(N)-c_{f}N| \ll N^{1-\delta},\] _with the implied constant depending only on \(A\), \(k\) and \(\varepsilon\)._ Again, in Theorem 4.2 we prove a version with an explicit upper bound for the left hand side, without making any assumptions about the relative sizes of \(H\) and \(N\). 
We note that the range for \(H\) in Theorem 1.2 is significantly broader than in Theorem 1.1, which seems counterintuitive, since we have less averaging in Theorem 1.2. However, the problem stems from the fact that the values of polynomials \(f(n)\) with \(f\in\mathcal{F}_{k}(H)\) and \(1\leqslant n\leqslant N\) could be of order \(HN^{k}\), while for \(f\in\mathcal{G}_{g}(H)\) they are of much smaller order \(H+N^{k}\), which has a strong effect on the set of moduli for which we have to sieve. ### Acknowledgements This work started during a very enjoyable visit by the second author to IST Austria whose hospitality and support are very much appreciated. The first author was supported by FWF grant P 36278 and the second author by ARC grant DP230100534. ## 2. Preliminaries ### Notation and conventions We adopt the Vinogradov notation \(\ll\), that is, \[C\ll D\iff C=O(D)\iff|C|\leqslant cD\] for some constant \(c>0\) which is allowed to depend on the integer parameter \(k\geqslant 1\) and the real parameters \(A,\varepsilon>0\). For a finite set \(\mathcal{S}\) we use \(\#\mathcal{S}\) to denote its cardinality. We also write \(\mathbf{e}(z)=\exp(2\pi iz)\) and \(\mathbf{e}_{m}(z)=\mathbf{e}(z/p)\). In what follows we make frequent use of the bound \[\tau(r)\leqslant|r|^{o(1)},\qquad\text{for }r\in\mathbb{Z},\ r\neq 0, \tag{2.1}\] for the divisor function \(\tau\), and its cousins, as explained in [16, Equation (1.81)], for example. Finally, as usual, \(\mu(r)\) denotes the Mobius function. ### Lattice points in boxes We use some tools from the geometry of numbers, as explained in Cassels [7]. Let \[\Lambda=\{u_{1}\mathbf{b}_{1}+\ldots+u_{s}\mathbf{b}_{s}:\ (u_{1},\ldots,u_{s}) \in\mathbb{Z}^{s}\}\] be an \(s\)-dimensional lattice defined by \(s\) linearly independent vectors \(\mathbf{b}_{1},\ldots,\mathbf{b}_{s}\in\mathbb{Z}^{s}\). We denote by \(\lambda_{1}\leqslant\ldots\leqslant\lambda_{s}\) the successive minima of \(\Lambda\), which for \(j=1,\ldots,d\) is defined to be \[\lambda_{j}=\inf\{\lambda>0:\ \lambda\mathfrak{B}_{s}\text{ contains }j\text{ linearly independent elements of }\Lambda\},\] where \(\lambda\mathfrak{B}_{s}\) is the homothetic image of \(\mathfrak{B}_{s}\) of the unit ball \(\mathfrak{B}_{s}\subseteq\mathbb{R}^{s}\) at the origin with the coefficient \(\lambda\). We also recall that the discriminant \(\Delta\) of \(\Lambda\) is an invariant that is independent of the choice of basis for \(\Lambda\). We have \[\Delta\leqslant\lambda_{1}\ldots\lambda_{s}\ll\Delta, \tag{2.2}\] where the implied constant only depends on \(s\). Next, we need the following consequence of the classical result of Schmidt [23, Lemma 2] on counting lattice points in boxes. **Lemma 2.1**.: _Let \(\lambda_{1}\leqslant\ldots\leqslant\lambda_{s}\) be the successive minima of a full rank lattice \(\Lambda\subseteq\mathbb{Z}^{s}\). Then_ \[\#\left(\Lambda\cap[-H,H]^{s}\right)\ll\frac{H^{s}}{\Delta}+\left(\frac{H}{ \lambda_{1}}\right)^{s-1}+1,\] _where the implied constant depends only on \(s\)._ Proof.: By [23, Lemma 2], we have the following asymptotic formula \[\left|\#\left(\Lambda\cap[-H,H]^{s}\right)-\frac{\left(2H+1\right)^{s}}{ \Delta}\right|\ll\sum_{j=0}^{s-1}\frac{H^{j}}{\lambda_{1}\ldots\lambda_{j}}.\] It follows that \[\#\left(\Lambda\cap[-H,H]^{s}\right)\ll\frac{H^{s}}{\Delta}+\sum_{j=0}^{s-1} \left(\frac{H}{\lambda_{1}}\right)^{j},\] which yields the result. 
### Univariate polynomial congruences For \(f\in\mathbb{Z}[X]\), we define \[\rho_{f}(m)=\#\{n\in\mathbb{Z}/m\mathbb{Z}:\ f(n)\equiv 0\bmod m\}. \tag{2.3}\] To estimate \(\rho_{f}(m)\) we may use the following result. **Lemma 2.2**.: _Let \(p\) be a prime and let \(k\in\mathbb{N}\). Let \(f\in\mathbb{Z}[X]\) be a polynomial of degree \(k\). Assume that \(\Delta_{f}\neq 0\) and \(f\) has content coprime to \(p\). Then_ \[\rho_{f}(p^{j})\leqslant k\min\left\{p^{j(1-\frac{1}{k})},p^{j-1}\right\}.\] _Additionally, if \(p\nmid\Delta_{f}\), then \(\rho_{f}(p^{k})\leqslant d\)._ Proof.: The final bound is a straightforward consequence of Lagrange's theorem and Hensel's lemma. For the remaining bounds, the first bound follows from Corollary 2 and Equation (44) of Stewart [26] and the second bound is a consequence of Lagrange's theorem. The next bound is an easy consequence of Lemma 2.2 and the Chinese remainder theorem. **Corollary 2.3**.: _Let \(f\in\mathbb{Z}[X]\) be a polynomial of degree \(k\) with content \(1\). For any square-free positive integer \(q\) we have_ \[\rho_{f}(q^{2})\leqslant q^{o(1)}\gcd\left(\Delta_{f},q\right).\] Note that the bound of Lemma 2.3 also holds for \(\Delta_{f}=0\). We also need to look at averages of \(\rho_{f}(m)\), for which we require the following useful result. **Lemma 2.4**.: _Let \(f\in\mathbb{Z}[X]\) be a polynomial of degree \(k\) with \(\Delta_{f}\neq 0\) and content \(1\). Then_ \[\sum_{m\leqslant M}\rho_{f}(m)\leqslant M^{1+o(1)},\] _uniformly over \(f\)._ Proof.: Any \(m\in\mathbb{N}\) admits a factorisation \(m=e_{1}e_{2}^{2}\ldots e_{k-1}^{k-1}h\) where \[\mu^{2}(e_{1}\ldots e_{k-1})=\gcd(e_{1}\ldots e_{k-1},h)=1\] and \(h\) is \(k\)-full. Combining Lemma 2.2 with the Chinese remainder theorem, we deduce that \[\rho_{f}(m)=\rho_{f}(e_{1})\rho_{f}(e_{2}^{2})\cdots\rho_{f}(e_{k-1}^{k-1}) \rho_{f}(h)\leqslant m^{o(1)}\cdot e_{2}e_{3}^{2}\cdots e_{k-1}^{k-2}h^{1-1/k}.\] Hence we deduce that \[\sum_{m\leqslant M}\rho_{f}(m) \leqslant M^{o(1)}\sum_{h}h^{1-1/k}\sum_{e_{k-1}}e_{k-1}^{k-2} \cdots\sum_{e_{2}}e_{2}\cdot\frac{M}{e_{2}^{2}\cdots e_{k-1}^{k-1}h}\] \[\leqslant M^{1+o(1)}\sum_{h}\frac{1}{h^{1/k}}\sum_{e_{k-1}}\frac{1 }{e_{k-1}}\cdots\sum_{e_{2}}\frac{1}{e_{2}}.\] The sums over \(e_{2},\ldots,e_{k-1}\) contribute \(O((\log M)^{k-2})\). Moreover, the sum over \(h\) contributes \(O(\log M)\), since there are \(O(B^{1/k})\)\(k\)-full positive integers in the dyadic interval \((B/2,B]\), for any \(B\geqslant 1\) and the result follows. ### Polynomial values with a large square divisor For a given polynomial \(f\in\mathbb{Z}[X]\) of degree \(k\), we denote by \(\Delta_{f}\in\mathbb{Z}\) its discriminant. This is a homogeneous polynomial of degree \(k(k-1)/2\) in the coefficients of \(f\). In this section we are interested in the size of \[Q_{f}(S,N)=\#\left\{(n,r,s)\in\mathbb{Z}^{3}:1\leqslant n\leqslant N,\ 1 \leqslant s\leqslant S,\ f(n)=sr^{2}\right\},\] for given \(N,S\geqslant 1\). For a polynomial \(f\in\mathcal{F}_{k}(H)\) with \(\Delta_{f}\neq 0\), this quantity has been estimated in [21, Theorem 1.3], with the outcome that \[Q_{f}(S,N)\leqslant N^{1/2}S^{3/4}(HN)^{o(1)}.\] The following result improves on this via the determinant method. **Lemma 2.5**.: _Let \(f\in\mathcal{F}_{k}(H)\) such that \(f\) is irreducible. Then_ \[Q_{f}(S,N)\leqslant\left(N^{1/2}S^{1/2}+S\right)(HN)^{o(1)}.\] Proof.: Note that the bound is trivial if \(S>N\) and so we may proceed under the assumption that \(S\leqslant N\). 
We fix a choice of \(s\) and begin by breaking into residue classes modulo \(s\), giving \[Q_{f}(S,N)\leqslant\sum_{0<s\leqslant S}\sum_{\begin{subarray}{c}\nu=0\\ f(\nu)\equiv 0\bmod s\end{subarray}}^{s-1}N(s,\nu),\] where \[N(s,\nu) =\#\left\{(n,r)\in\mathbb{Z}^{2}:\ 1\leqslant n\leqslant N,\ n \equiv\nu\bmod s,\ f(n)=sr^{2}\right\}\] \[\leqslant\#\left\{(u,r)\in\mathbb{Z}^{2}:\ u\leqslant N/s+1,\ f( \nu+su)=sr^{2}\right\}.\] At this point we call upon work of Heath-Brown [14, Theorem 15]. Given \(\varepsilon>0\) and an absolutely irreducible polynomial \(F\in\mathbb{Z}[u,v]\) of degree \(D\), this shows that there are at most \[(UV)^{o(1)}\exp\left(\frac{\log U\log V}{\log T}\right)\] choices of \((u,v)\in\mathbb{Z}^{2}\) such that \(|u|\leqslant U\), \(|v|\leqslant V\) and \(F(u,v)=0\). Here \(T\) is defined to be the maximum of \(U^{e_{1}}V^{e_{2}}\), taken over all monomials \(u^{e_{1}}v^{e_{2}}\) which appear in \(F(u,v)\) with non-zero coefficient. Moreover, this is uniform over all absolutely irreducible polynomials \(F\) of a given degree \(D\). We apply this bound with \(U=N/s+1\) and \(F(u,v)=f(\nu+su)-sv^{2}\), noting that the absolute irreducibility of \(F\) follows from the irreducibility of \(f\). In particular we may take \(T\geqslant V^{2}\) and it follows that \[Q_{f}(S,N)\leqslant(NH)^{o(1)}\sum_{0<s\leqslant S}\rho_{f}(s)\left(\frac{N}{s }+1\right)^{1/2},\] where \(\rho_{f}(s)\) is defined in (2.3). We now appeal to Lemma 2.4, which we combine with partial summation, to obtain the desired upper bound on recalling that \(S\leqslant N\). ### Exponential sums and discrepancy Given a sequence \(\xi_{n}\in[0,1)\) for \(n\in\mathbb{N}\), we denote by \(\Delta(N)\) its _discrepancy_ \[\Delta(N)=\sup_{\alpha\in[0,1)}\left|\#\{n\leqslant N:\ \xi_{n}\leqslant \alpha\}-\alpha N\right|.\] As explained in [20, Theorem 2.5], for example, the celebrated _Erdos-Turan inequality_, allows us to give an upper bound on the discrepancy \(\Delta(N)\) in terms of exponential sums. **Lemma 2.6**.: _Let \(\xi_{n}\), \(n\in\mathbb{N}\), be a sequence in \([0,1)\). Then for any integer \(L\geqslant 1\),its discrepancy \(\Delta(N)\) satisfies_ \[\Delta(N)\ll\frac{N}{L}+\sum_{h=1}^{L}\frac{1}{h}\left|\sum_{n=1}^{N}\mathbf{e }(h\xi_{n})\right|.\] We proceed by recalling some bounds of exponential sums with polynomial arguments. We make use of a bound which follows from the recent spectacular results of Bourgain, Demeter and Guth [5] (for \(k\geqslant 4\)) and Wooley [27, 28] (for \(k=3\)), towards the optimal form of the _Vinogradov mean value theorem_. The current state-of-the-art bounds for _Weyl sums_ has been conveniently summarised by Bourgain [4]. We need the following special case covered by [4, Theorems 4 and 5], for which we do not assume anything about the arithmetic structure of the modulus. **Lemma 2.7**.: _For any fixed polynomial \(g\in\mathbb{Z}[X]\) of degree \(k\geqslant 2\) and any integers \(m,N\geqslant 1\) we have_ \[\left|\sum_{n=1}^{N}\mathbf{e}_{m}\left(hg(n)\right)\right|\leqslant N^{1+o(1 )}\left(\frac{\gcd(h,m)}{m}+\frac{1}{N}+\frac{m}{\gcd(h,m)N^{k}}\right)^{ \eta(k)},\] _where \(\eta(k)\) is given by (1.3)._ ## 3. 
Solutions to families of polynomial congruences ### Preliminaries Let \(U_{k}(m,H,N)\) be the number of solution to the congruence \[a_{0}+a_{1}n+\cdots+a_{k}n^{k}\equiv 0\pmod{m},\] in the variables \[(a_{0},\ldots,a_{k})\in\mathcal{B}_{k}(H)\qquad\text{and}\qquad 1\leqslant n \leqslant N.\] Similarly, for given \(g\in\mathbb{Z}[X]\), let \(W_{g}(m,H,N)\) be the number of solution to the congruence \[a+g(n)\equiv 0\pmod{m},\] in the variables \[a\in\mathcal{I}_{g}(H)\qquad\text{and}\qquad 1\leqslant n\leqslant N.\] It is observed in [24, Equation (3.2)] that we have the trivial upper bounds \[U_{k}(m,H,N)\ll H^{k}(H/m+1)N\] and \[W_{g}(m,H,N)\ll(H/m+1)N. \tag{3.1}\] Our aim is to improve on these bounds in appropriate ranges of \(H\) and \(N\). ### Using exponential sum bounds and discrepancy Our next result is based on treating the question of estimating \(W_{g}(m,H,N)\) as a question of uniformity of distribution and hence we use the tools from Section 2.5. **Lemma 3.1**.: _Let \(g\in\mathbb{Z}[T]\) be of degree \(k\geqslant 2\). For any positive integers \(H\leqslant m\) and \(N\), we have_ \[W_{g}(m,H,N)=\frac{HN}{m}+O\left(N\left(\frac{1}{m}+\frac{1}{N}+\frac{m}{N^{k} }\right)^{\eta(k)}(mN)^{o(1)}\right)\] _where \(\eta(k)\) is given by (1.3)._ Proof.: We observe that \(W_{g}(m,H,N)\) is the number of fractional parts \(\{g(n)/m\}\) which fall in the interval \([1-H/m,1]\). Hence, on taking \(L=N\) in Lemma 2.6, we obtain \[W_{g}(m,H,N)=\frac{HN}{m}+O(\Delta), \tag{3.2}\] where \[\Delta\ll 1+\sum_{h=1}^{N}\frac{1}{h}\left|\sum_{n=1}^{N}\mathbf{e}_{m}\left( \frac{hg(n)}{m}\right)\right|. \tag{3.3}\] Lemma 2.7 yields \[\sum_{h=1}^{N}\frac{1}{h}\left|\sum_{n=1}^{N}\mathbf{e}_{m}\left( \frac{hg(n)}{m}\right)\right|\] \[\qquad\leqslant N^{1+o(1)}\sum_{h=1}^{N}\frac{1}{h}\left(\frac{ \gcd(h,m)}{m}+\frac{1}{N}+\frac{m}{\gcd(h,m)N^{k}}\right)^{\eta(k)}\] \[\qquad\leqslant N^{1+o(1)}\sum_{h=1}^{N}\frac{1}{h}\left(\frac{ \gcd(h,m)}{m}+\frac{1}{N}+\frac{m}{N^{k}}\right)^{\eta(k)}\] \[\qquad\leqslant N^{1+o(1)}\sum_{h=1}^{N}\frac{\gcd(h,m)^{\eta(k)} }{hm^{\eta(k)}}+N^{1+o(1)}\left(\frac{1}{N}+\frac{m}{N^{k}}\right)^{\eta(k)} \sum_{h=1}^{N}\frac{1}{h}\] \[\qquad=\frac{N^{1+o(1)}}{m^{\eta(k)}}\sum_{h=1}^{N}\frac{\gcd(h,m )^{\eta(k)}}{h}+N^{1+o(1)}\left(\frac{1}{N}+\frac{m}{N^{k}}\right)^{\eta(k)}.\] Furthermore, \[\sum_{h=1}^{N}\frac{\gcd(h,m)^{\eta(k)}}{h}\leqslant\sum_{r|m}r^ {\eta(k)}\sum_{\begin{subarray}{c}h=1\\ \gcd(h,m)=r\end{subarray}}^{N}\frac{1}{h} \leqslant\sum_{r|m}r^{\eta(k)}\sum_{1\leqslant h\leqslant N/r} \frac{1}{hr}\] \[\ll\tau(m)\log N.\] Recalling (2.1), the lemma follows from (3.2) and (3.3). ### Using the geometry of numbers We now use Lemma 2.1 to estimate \(U_{k}(q^{2},H,N)\) on average over square-free integers \(q\) in a dyadic interval. **Lemma 3.2**.: _For \(Q\geqslant 1\), we have_ \[\sum_{q\sim Q}\mu^{2}(q)U_{k}(q^{2},H,N)\ll Z(NQ)^{o(1)},\] _where_ \[Z=\frac{H^{k+1}N}{Q}+NQ+H^{k}Q+H^{k}NQ^{2/(k+1)}.\] Proof.: For \(m,n\in\mathbb{N}\) we define the lattice \[\Lambda_{m,n}=\left\{\mathbf{a}\in\mathbb{Z}^{k+1}:\ \mathbf{a}\cdot\mathbf{n} \equiv 0\pmod{m}\right\},\] where \(\mathbf{n}=\left(1,n,\ldots,n^{k}\right)\in\mathbb{N}^{k+1}\) and \(\mathbf{a}\cdot\mathbf{n}\) is the scalar product. Note that \(\Lambda_{m,n}\) has full rank and discriminant \[\Delta_{m,n}=\frac{m}{\gcd(m,1,n,\ldots,n^{k})}=m. 
\tag{3.4}\] By Lemma 2.1, we have \[U_{k}(q^{2},H,N)\ll\sum_{1\leqslant n\leqslant N}\left(\frac{H^{k+1}}{q^{2}}+ \frac{H^{k}}{s(q^{2},n)^{k}}+1\right)\] where \(s(q^{2},n)\) is the smallest successive minima of \(\Lambda_{q^{2},n}\). Therefore, \[U_{k}(q^{2},H,N)\ll\frac{H^{k+1}N}{q^{2}}+N+H^{k}S_{k}(q,N), \tag{3.5}\] where \[S_{k}(q,N)=\sum_{1\leqslant n\leqslant N}\frac{1}{s(q^{2},n)^{k}}.\] Since \(s(q^{2},n)\) is the smallest successive minimum of \(\Lambda_{q^{2},n}\), it follows from (2.2) and (3.4) that \(s(q^{2},n)^{k+1}\ll\Delta_{q^{2},n}=q^{2}\), whence \[s(q^{2},n)\leqslant q^{2/(k+1)}.\] We now define an integer \(I\ll\log Q\) by the inequalities \[2^{I-1}<Q^{2/(k+1)}\leqslant 2^{I}\] and write \[S_{k}(q,N)\leqslant\sum_{i=0}^{I}\sum_{\begin{subarray}{c}1\leqslant n \leqslant N\\ s(q^{2},n)\sim 2^{i}\end{subarray}}\frac{1}{s(q^{2},n)^{k}}\ll\sum_{i=0}^{I}2^{- ik}\sum_{\begin{subarray}{c}1\leqslant n\leqslant N\\ s(q^{2},n)\sim 2^{i}\end{subarray}}1. \tag{3.6}\] Note that if \(s(q^{2},n)\leqslant t\) then there is a non-zero vector \(\mathbf{c}\in\Lambda_{q^{2},n}\) such that \(\|\mathbf{c}\|_{2}\leqslant t\). Therefore \[\sum_{\begin{subarray}{c}1\leqslant n\leqslant N\\ s(q^{2},n)\sim 2^{i}\end{subarray}}1\leqslant\sum_{\begin{subarray}{c} \mathbf{c}=(c_{0},\ldots,c_{k})\in\mathbb{Z}^{k+1}\\ 0<\|\mathbf{c}\|_{2}\leqslant 2^{i}\end{subarray}}\rho_{f_{\mathbf{c}}}(q^{2},N), \tag{3.7}\] where \[f_{\mathbf{c}}(X)=c_{0}+c_{1}X+\cdots+c_{k}X^{k}\] and \[\rho_{f}(m,N)=\#\{n\in[1,N]:\ f(n)\equiv 0\bmod m\} \tag{3.8}\] Using (3.6) and then changing the order of summation in (3.7), we obtain \[\sum_{q\sim Q}\mu^{2}(q)S_{k}(q,N)\ll\sum_{i=0}^{I}2^{-ik}R\left(Q,N,2^{i} \right), \tag{3.9}\] where \[R(Q,N,t)=\sum_{\begin{subarray}{c}\mathbf{c}\in\mathbb{Z}^{k+1}\\ 0<\|\mathbf{c}\|_{2}\leqslant t\end{subarray}}\sum_{q\sim Q}\mu^{2}(q)\rho_{f_ {\mathbf{c}}}(q^{2},N).\] But clearly \[R(Q,N,t) \leqslant\sum_{\begin{subarray}{c}\mathbf{c}\in\mathbb{Z}^{k+1}\\ 0<\|\mathbf{c}\|_{2}\leqslant t\end{subarray}}\sum_{q\leqslant Q}\sum_{ \begin{subarray}{c}n\leqslant N\\ q^{2}|f_{\mathbf{c}}(n)\end{subarray}}1\] \[\leqslant\sum_{n\leqslant N}\sum_{\begin{subarray}{c}\mathbf{c} \in\mathbb{Z}^{k+1}\\ \|\mathbf{c}\|_{2}\leqslant t\end{subarray}}\#\left\{q\leqslant Q:\ q^{2}\mid f _{\mathbf{c}}(n)\right\}.\] If \(f_{\mathbf{c}}(n)=0\) then there are \(Q\) choices for \(q\). Let \(\mathbf{c}\) be a non-zero vector such that \(f_{\mathbf{c}}\) has a root over \(\mathbb{Z}\). Then \(f_{\mathbf{c}}\) must be reducible over \(\mathbb{Z}\). There are at most \(t^{k+o(1)}\) choices of non-zero vectors \(\mathbf{c}\) for which \(f_{\mathbf{c}}\) is reducible over \(\mathbb{Z}\), on appealing to work of Kuba [19]. (See also [10] and the references therein.) Moreover, each such vector \(\mathbf{c}\) yields at most \(k\) choices for \(n\). If \(f_{\mathbf{c}}(n)\neq 0\), on the other hand, then we have at most \((tN)^{o(1)}\) choices for \(q\) by the divisor bound (2.1). Hence we arrive at the bound \[R(Q,N,t)\leqslant\left(Qt^{k}+Nt^{k+1}\right)(Nt)^{o(1)}.\] On returning to (3.9), we therefore obtain \[\sum_{q\sim Q}\mu^{2}(q)S_{k}(q,N) \leqslant\sum_{i=0}^{I}\left(Q+N2^{i}\right)(2^{i}Q)^{o(1)}\] \[\leqslant\left(Q+NQ^{2/(k+1)}\right)Q^{o(1)}.\] We substitute this into (3.5) and sum over \(q\). This yields \[\sum_{q\sim Q}\mu^{2}(q)U_{k}(q^{2},H,N)\leqslant\frac{H^{k+1}N}{Q}+NQ+\left( Q+NQ^{2/(k+1)}\right)H^{k}(NQ)^{o(1)},\] and the result now follows. ## 4. 
Proofs of main results ### Proof of Theorem 1.1 Fix a choice of \(A\geqslant 1\). We proceed under the assumption that \(N^{1/k}\leqslant H\leqslant N^{A}\), and allow all of our implied constants to depend on \(A\). There are \(O(H^{k+o(1)})\) choices of polynomials \(f\in\mathcal{F}_{k}(H)\) for which \(f\) fails to be irreducible, the latter bound following from work of Kuba [19], as in the proof of Lemma 3.2. The overall contribution from such \(f\) is therefore \(O(H^{k+o(1)}N)\). We may henceforth restrict to the set \(\mathcal{F}_{k}^{*}(H)\) of irreducible polynomials \(f\in\mathcal{F}_{k}(H)\). Our argument proceeds along standard lines, beginning with an application of Mobius inversion to interpret \[\mu^{2}(n)=\sum_{d^{2}|n}\mu(d)\] as a sum over divisors. This leads to the expression \[S_{f}(H)=\sum_{d\ll\sqrt{HN^{k}}}\mu(d)\rho_{f}(d^{2},N).\] where \(\rho_{f}(m,N)\) is given by (3.8). Hence, for arbitrary \(E\geqslant D\geqslant 1\), we have \[S_{f}(H)=M_{f}(H)+O(R_{f}^{(1)}(H))+O(R_{f}^{(2)}(H)),\] where \[M_{f}(H)=\sum_{d\leqslant D}\mu(d)\rho_{f}(d^{2},N)\] and \[R_{f}^{(1)}(H)=\sum_{D<d\leqslant E}\mu^{2}(d)\rho_{f}(d^{2},N),\quad R_{f}^{( 2)}(H)=\sum_{E<d\ll\sqrt{HN^{k}}}\mu^{2}(d)\rho_{f}(d^{2},N).\] To begin with, it is shown in [24, Equation (4.8)] that \[M_{f}(H)=c_{f}N+O\left(DH^{o(1)}+ND^{-1}H^{o(1)}\right), \tag{4.1}\] where the implied constant in this estimate depends only on \(k\). Next, on recalling the notation \(Q_{f}(S,N)\) that has been defined in Section 2.4, we may write \[R_{f}^{(2)}(H)\leqslant Q_{f}(S,N)+Q_{-f}(S,N),\] for some \(S\ll HN^{k}/E^{2}\). Thus, Lemma 2.5 implies that \[R_{f}^{(2)}(H)\leqslant\left(N^{1/2}\left(\frac{HN^{k}}{E^{2}}\right)^{1/2}+ \frac{HN^{k}}{E^{2}}\right)(HN)^{o(1)}. \tag{4.2}\] Turning to the remaining error term \(R_{f}^{(1)}(H)\), we are only able to estimate it well on average over \(f\in\mathcal{F}_{k}^{*}(H)\). Recall the definition of \(U_{k}(d^{2},H,N)\) from Section 3.1. On changing the order of summation, we obtain \[\sum_{f\in\mathcal{F}_{k}^{*}(H)}R_{f}^{(1)}(H)\leqslant\sum_{D<d\leqslant E }\mu^{2}(d)U_{k}(d^{2},H,N).\] To estimate the right hand side, we use Lemma 3.2. After splitting the summation range in dyadic intervals, we derive \[\sum_{f\in\mathcal{F}_{k}^{*}(H)}R_{f}^{(1)}(H)\leqslant\left(\frac{H^{k+1}N }{D}+NE+H^{k}E+H^{k}NE^{2/(k+1)}\right)H^{o(1)}.\] Since we are assuming \(N\leqslant H^{k}\), we may drop the second term in this estimate. Hence \[\sum_{f\in\mathcal{F}_{k}^{*}(H)}R_{f}^{(1)}(H)\leqslant\left(\frac{H^{k+1}N }{D}+H^{k}E+H^{k}NE^{2/(k+1)}\right)H^{o(1)}. 
\tag{4.3}\] On accounting for the \(O(H^{k+o(1)})\) choices of \(f\in\mathcal{F}_{k}(H)\setminus\mathcal{F}_{k}^{*}(H)\), it therefore follows from (4.1)-(4.3) that \[\sum_{f\in\mathcal{F}_{k}(H)}|S_{f}(N)-c_{f}N| \leqslant H^{k+o(1)}N+\left(D+ND^{-1}\right)H^{k+1+o(1)}\] \[\quad+\left(N^{1/2}\left(\frac{HN^{k}}{E^{2}}\right)^{1/2}+\frac{HN^{k}}{E^{2}}\right)H^{k+1+o(1)}\] \[\quad+\left(\frac{H^{k+1}N}{D}+H^{k}E+H^{k}NE^{2/(k+1)}\right)H^{o(1)}.\] Hence, on noting that \(\#\mathcal{F}_{k}(H)\gg H^{k+1}\), it follows that \[\frac{1}{\#\mathcal{F}_{k}(H)}\sum_{f\in\mathcal{F}_{k}(H)}|S_{f}(N)-c_{f}N|\leqslant\Delta H^{o(1)},\] where \[\Delta=D+\frac{N}{D}+\frac{E}{H}+\frac{NE^{2/(k+1)}}{H}+\frac{H^{1/2}N^{(k+1)/2}}{E}+\frac{HN^{k}}{E^{2}}.\] We take \(D=N^{1/2}\), leading to \[\frac{1}{\#\mathcal{F}_{k}(H)}\sum_{f\in\mathcal{F}_{k}(H)}|S_{f}(N)-c_{f}N|\leqslant\Delta_{0}H^{o(1)},\] where \[\Delta_{0}=\inf_{\sqrt{N}\leqslant E\ll\sqrt{HN^{k}}}\Delta_{0}(E)\] and \[\Delta_{0}(E)=N^{1/2}+\frac{E}{H}+\frac{NE^{2/(k+1)}}{H}+\frac{H^{1/2}N^{(k+1)/2}}{E}+\frac{HN^{k}}{E^{2}}.\] We expect the dominant contribution to come from the second and fourth terms and so we choose \[E=\min\left\{H^{3/4}N^{(k+1)/4},\sqrt{HN^{k}}\right\},\] in order to minimise their contribution. Note that \(E\geqslant\sqrt{N}\) with this choice. This therefore leads to the bound \[\Delta_{0}\ll N^{1/2}+\frac{N^{(k+1)/4}}{H^{1/4}}+\frac{N^{3/2}}{H^{1-3/(2k+2)}}+\frac{N^{(k-1)/2}}{H^{1/2}},\] which thereby concludes the proof of the following result. **Theorem 4.1**.: _Let \(A\geqslant 1\) and \(k\geqslant 2\) be fixed and assume that \(H,N\to\infty\) in such a way that \(N^{1/k}\leqslant H\leqslant N^{A}\). Then we have_ \[\frac{1}{\#\mathcal{F}_{k}(H)}\sum_{f\in\mathcal{F}_{k}(H)}\left|S_{f}(N)-c_{f}N\right|\leqslant\left(N^{1/2}+\frac{N^{(k+1)/4}}{H^{1/4}}+\frac{N^{3/2}}{H^{1-3/(2k+2)}}+\frac{N^{(k-1)/2}}{H^{1/2}}\right)H^{o(1)}.\] To deduce Theorem 1.1 we assume that \(k\geqslant 4\) and proceed to assess when each of the terms is \(O(N^{1-\delta})\) for some \(\delta>0\). The first term is obviously satisfactory. One sees that the second and fourth terms are satisfactory if \(H\geqslant N^{k-3+\varepsilon}\) for some \(\varepsilon>0\). Finally, the third term is only satisfactory if \(H\geqslant N^{1/2+3/(4k-2)+\varepsilon}\), but this is implied by the preceding condition. This completes the proof of Theorem 1.1. ### Proof of Theorem 1.2 The aim of this section is to estimate the quantity \[\Sigma=\sum_{f\in\mathcal{G}_{g}(H)}\left|S_{f}(N)-c_{f}N\right|.\] It is convenient to define \[M=\max\{H,N^{k}\},\] so that \(f(n)=a+g(n)=O(M)\) if \(f\in\mathcal{G}_{g}(H)\) and \(1\leqslant n\leqslant N\). Mimicking the previous argument and using (4.1), we obtain \[\Sigma\leqslant DH^{1+o(1)}+\frac{NH^{1+o(1)}}{D}+\sum_{D<d\leqslant c\sqrt{M}}W_{g}(d^{2},H,N), \tag{4.4}\] where \(c>0\) is a constant depending only on the polynomial \(g\) and \(W_{g}(d^{2},H,N)\) is defined in Section 3.1. Suppose first that \(H\geqslant N^{k}\). Then we simply apply (3.1) and get \[\Sigma \ll DH^{1+o(1)}+\frac{NH^{1+o(1)}}{D}+\sum_{D<d\leqslant c\sqrt{H}}(H/d^{2}+1)N \leqslant\left(DH+\frac{HN}{D}+H^{1/2}N\right)H^{o(1)}.\] Taking \(D=N^{1/2}\), we derive \[\Sigma\leqslant\left(HN^{1/2}+H^{1/2}N\right)H^{o(1)}\leqslant H^{1+o(1)}N^{1/2}, \tag{4.5}\] if \(H\geqslant N^{k}\). We may henceforth assume that \(H\leqslant N^{k}\) and thus \(M=N^{k}\).
We now choose \(D=N^{1/2}\) and two more parameters \(F\) and \(E\) with \(F\geqslant E\geqslant N^{1/2}\). Then we may write \[\sum_{N^{1/2}<d\leqslant c\sqrt{M}}W_{g}(d^{2},H,N)=\mathfrak{W}_{1}+\mathfrak{W}_{2}+\mathfrak{W}_{3} \tag{4.6}\] where \[\mathfrak{W}_{1} =\sum_{N^{1/2}<d\leqslant E}W_{g}(d^{2},H,N),\] \[\mathfrak{W}_{2} =\sum_{E<d\leqslant F}W_{g}(d^{2},H,N),\] \[\mathfrak{W}_{3} =\sum_{F<d\leqslant cN^{k/2}}W_{g}(d^{2},H,N).\] To begin with, we appeal to (3.1) to estimate \[\mathfrak{W}_{1}\ll\sum_{N^{1/2}<d\leqslant E}(H/d^{2}+1)N\ll HN^{1/2}+EN. \tag{4.7}\] It is convenient to choose \(E=\max\{H^{1/2},N^{1/2}\}\), so that \[\mathfrak{W}_{1}\ll HN^{1/2}+H^{1/2}N. \tag{4.8}\] Indeed, for \(H\leqslant N\) we have \(E=N^{1/2}\) and thus \(\mathfrak{W}_{1}=0\), while for \(H>N\) we have \(E=H^{1/2}\) and (4.8) follows from (4.7). Therefore, combining (4.4), (4.6) and (4.8), we obtain \[\Sigma\ll H^{1+o(1)}N^{1/2}+H^{1/2}N+\mathfrak{W}_{2}+\mathfrak{W}_{3}. \tag{4.9}\] It remains to estimate \(\mathfrak{W}_{2}\) and \(\mathfrak{W}_{3}\). To estimate \(\mathfrak{W}_{2}\) we appeal to Lemma 3.1 to derive \[\mathfrak{W}_{2}\ll\sum_{E<d\leqslant F}\left(\frac{HN}{d^{2}}+N^{1+o(1)}\left(\frac{1}{d^{2}}+\frac{1}{N}+\frac{d^{2}}{N^{k}}\right)^{\eta(k)}\right),\] where \(\eta(k)\) is given by (1.3). Therefore, noticing that we have \[\frac{1}{d^{2}}<\frac{1}{N}\] for \(d>E\geqslant N^{1/2}\), we obtain \[\begin{split}\mathfrak{W}_{2}&\ll HN/E+FN^{1-\eta(k)+o(1)}+F^{1+2\eta(k)}N^{1-k\eta(k)+o(1)}\\ &\leqslant HN^{1/2}+FN^{1-\eta(k)+o(1)}+F^{1+2\eta(k)}N^{1-k\eta(k)+o(1)}.\end{split} \tag{4.10}\] Finally, as in the proof of Theorem 4.1, we treat \(\mathfrak{W}_{3}\) via Lemma 2.5. Recalling our assumption \(H\leqslant N^{k}\) and observing that there are \(O(1)\) choices of \(f\in\mathcal{G}_{g}(H)\) that fail to be irreducible, we derive \[\begin{split}\mathfrak{W}_{3}&\leqslant\left(N+HN^{1/2}\left(N^{k}/F^{2}\right)^{1/2}+HN^{k}/F^{2}\right)N^{o(1)}\\ &\leqslant\left(N+HN^{(k+1)/2}/F+HN^{k}/F^{2}\right)N^{o(1)}.\end{split} \tag{4.11}\] We now observe that if \(N^{(k+1)/2}/F\geqslant N\), which is equivalent to \(F\leqslant N^{(k-1)/2}\), then the bound becomes trivial. Thus we always assume that \[F\geqslant N^{(k-1)/2},\] in which case we see that the third term in (4.11) is dominated by the second term. Substituting the bounds (4.10) and (4.11) in (4.9), we are led to the upper bound \[\Sigma\leqslant\big{(}HN^{1/2}+H^{1/2}N+FN^{1-\eta(k)}+F^{1+2\eta(k)}N^{1-k\eta(k)}+HN^{(k+1)/2}/F\big{)}H^{o(1)}.\] Since \(F\geqslant N^{(k-1)/2}\), we see that \(FN^{1-\eta(k)}\leqslant F^{1+2\eta(k)}N^{1-k\eta(k)}\) and so \[\Sigma\leqslant\big{(}HN^{1/2}+H^{1/2}N+F^{1+2\eta(k)}N^{1-k\eta(k)}+HN^{(k+1)/2}/F\big{)}H^{o(1)}. \tag{4.12}\] To optimise (4.12), we choose \[F=\max\Big{\{}\big{(}HN^{(k-1)/2+k\eta(k)}\big{)}^{1/(2+2\eta(k))},N^{(k-1)/2}\Big{\}}\] for which \[F^{1+2\eta(k)}N^{1-k\eta(k)} =HN^{(k+1)/2}/F =H^{1-\frac{1}{2+2\eta(k)}}N^{(k+3)/4-\frac{(k+1)\eta(k)}{4+4\eta(k)}}.\] After substitution in (4.12), this completes our treatment of the case \(H\leqslant N^{k}\). Taken together with (4.5), and observing that \(\#\mathcal{G}_{g}(H)\gg H\), this therefore concludes the proof of the following theorem.
**Theorem 4.2**.: _For a fixed polynomial \(g\in\mathbb{Z}[X]\) of degree \(k\geqslant 2\), we have_ \[\frac{1}{\#\mathcal{G}_{g}(H)}\sum_{f\in\mathcal{G}_{g}(H)}|S_{f}(N)-c_{f}N|\leqslant\left(N^{1/2}+\frac{N}{H^{1/2}}+\frac{N^{(k+1)/2-\eta(k)}}{H}+\frac{N^{(k+3)/4-\frac{(k+1)\eta(k)}{4+4\eta(k)}}}{H^{\frac{1}{2+2\eta(k)}}}\right)H^{o(1)},\] _where \(\eta(k)\) is given by (1.3)._ Finally, to deduce Theorem 1.2 we need to determine when each of the terms is \(O(N^{1-\delta})\) for some \(\delta>0\). The first term is obviously satisfactory. One sees that the second term is satisfactory if \(H\geqslant N^{\varepsilon}\) for any \(\varepsilon>0\). The fourth term is only satisfactory if \(H\geqslant N^{(k-1)/2-\eta(k)+\varepsilon}\), under which assumption the third term is also satisfactory. This therefore completes the proof of Theorem 1.2.
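To make the key counting functions in the argument above concrete, the following brute-force sketch (ours, not part of the paper) computes \(S_f(N)\), the number of \(1\leqslant n\leqslant N\) with \(f(n)\) squarefree, and \(\rho_f(m,N)\) from (3.8). The quartic below is an arbitrary illustrative choice, and the printed ratio should approach the density \(c_f\) as \(N\) grows.

```python
def is_squarefree(x: int) -> bool:
    """Trial-division squarefreeness test; fine for the small values used here."""
    x = abs(x)
    if x == 0:
        return False  # 0 is divisible by every square
    d = 2
    while d * d <= x:
        if x % (d * d) == 0:
            return False
        while x % d == 0:
            x //= d
        d += 1
    return True

def S(f, N: int) -> int:
    """S_f(N): number of 1 <= n <= N with f(n) squarefree."""
    return sum(1 for n in range(1, N + 1) if is_squarefree(f(n)))

def rho(f, m: int, N: int) -> int:
    """rho_f(m, N) = #{n in [1, N] : f(n) == 0 (mod m)}, cf. (3.8)."""
    return sum(1 for n in range(1, N + 1) if f(n) % m == 0)

f = lambda n: n**4 + 2   # an illustrative irreducible quartic
N = 300
print(S(f, N) / N)       # empirical density of squarefree values, ~ c_f
print(rho(f, 9, N))      # how often the square 9 divides f(n) for n <= N
```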
2301.07560
Extended FastSLAM Using Cellular Multipath Component Delays and Angular Information
Opportunistic navigation using cellular signals is appealing for scenarios where other navigation technologies face challenges. In this paper, long-term evolution (LTE) downlink signals from two neighboring commercial base stations (BS) are received by a massive antenna array mounted on a passenger vehicle. Multipath component (MPC) delays and angle-of-arrival (AOA) extracted from the received signals are used to jointly estimate the positions of the vehicle, transmitters, and virtual transmitters (VT) with an extended fast simultaneous localization and mapping (FastSLAM) algorithm. The results show that the algorithm can accurately estimate the positions of the vehicle and the transmitters (and virtual transmitters). The vehicle's horizontal position error of SLAM fused with proprioception is less than 6 meters after a traversed distance of 530 meters, whereas un-aided proprioception results in a horizontal error of 15 meters.
Junshi Chen, Russ Whiton, Fredrik Tufvesson
2023-01-18T14:26:55Z
http://arxiv.org/abs/2301.07560v2
# Extended FastSLAM Using Cellular Multipath Component Delays and Angular Information ###### Abstract Opportunistic navigation using cellular signals is appealing for scenarios where other navigation technologies face challenges. In this paper, long-term evolution (LTE) downlink signals from two neighboring commercial base stations (BS) are received by a massive antenna array mounted on a passenger vehicle. Multipath component (MPC) delays and angle-of-arrival (AOA) extracted from the received signals are used to jointly estimate the positions of the vehicle, transmitters, and virtual transmitters (VT) with an extended fast simultaneous localization and mapping (FastSLAM) algorithm. The results show that the algorithm can accurately estimate the positions of the vehicle and the transmitters (and virtual transmitters). The vehicle's horizontal position error of SLAM fused with proprioception is less than 6 meters after a traversed distance of 530 meters, whereas un-aided proprioception results in a horizontal error of 15 meters. **Index Terms:** MPC delay, AOA, LTE, massive antenna array, positioning, localization, SLAM, FastSLAM. ## I Introduction Intelligent transportation systems hold promise for traffic safety and efficiency. Localization performance is important for such systems, with use cases ranging from traffic optimization [1] to autonomous driving [2]. A broad sensor suite has been developed for the localization problem in these challenging use cases, but the limitations of existing sensors have motivated efforts toward finding additional sensors to provide localization information. Cellular communication also has a long history of use for localization as an alternative or complement to satellite-based navigation [3], and cellular technologies are expected to further develop towards joint communication and sensing [4]. Not only does this offer benefits for optimizing communication system performance [5], but it can also play a role in addressing even the most demanding localization use cases such as autonomous driving [6]. The manner in which the cellular signals are generated and utilized for positioning can take many forms, and various classification schemes have been suggested; e.g., in [7], positioning approaches are categorized into three types: triangulation and multilateration, machine learning based positioning, and simultaneous localization and mapping (SLAM) [8]. SLAM has multiple uses, including tracking [9] and augmenting proprioception sensors on vehicles [7], but successful implementation requires associating measurements from different snapshots, which can be difficult depending on the type of sensor used and the data quality. Significant effort has been spent on achieving accurate data association, and many advanced algorithms have been developed, e.g., joint probability data association (JPDA) [10] and belief propagation (BP) [11]. On the other hand, FastSLAM [12] takes another approach to simplify the data association problem. It uses a particle filter mechanism, and for each particle, only a simple maximum likelihood data association is applied independently. During the FastSLAM update, only the particles with the most likely data associations survive, so effective data associations are generally maintained. In this paper, the FastSLAM algorithm is applied to the parameters extracted from the cellular signals, including multipath component (MPC) delays, azimuth angle-of-arrival (AOA), and elevation AOA.
The parameters are extracted from the signals of multiple commercial long-term evolution (LTE) base stations (BS) received by a massive antenna array mounted on a passenger vehicle in an urban environment. FastSLAM is extended to work with observations from multiple antenna ports and BSs. This extended FastSLAM simplifies the data association problem further by factoring the posterior model and processing each port independently. Results from field measurements show that the extended FastSLAM algorithm works well in complicated urban environments, and the vehicle's positioning error is less than 6 meters after a traversed distance of 530 meters. The structure of the paper is as follows. Sec. II introduces the wireless signal system model. Sec. III describes the extended FastSLAM model using the estimated MPC parameters. Sec. IV describes the iterative update of the extended FastSLAM algorithm applied to localize the positions of the vehicle, the transmitters and the virtual transmitters. Sec. V presents the measurement setup and analysis of the results from the measurement data and the proposed algorithm. Finally, Sec. VI summarizes the paper. _Notation_: Matrices and vectors are denoted as uppercase and lowercase boldface letters, e.g., \(\mathbf{A}\) and \(\mathbf{a}\). The identity matrix is denoted as \(\mathbf{I}\). The matrix transpose and matrix inverse are denoted as superscripts \((\cdot)^{T}\) and \((\cdot)^{-1}\) respectively. The Euclidean norm is denoted as \(\left\lVert\cdot\right\rVert\). The speed of light is \(c\simeq 3\cdot 10^{8}\) m/s. ## II System model In LTE systems, orthogonal frequency division multiplexing (OFDM) is used and the baseband signal transmitted from one antenna port of one BS is described as [13] \[\begin{split} s^{j,k}(t)&=\sum_{n=-N_{\text{sc}}/2}^{n=-1}x^{j,k}[n+N_{\text{sc}}/2]e^{i2\pi n\Delta ft}\\ &+\sum_{n=0}^{n=N_{\text{sc}}/2-1}x^{j,k}[n+N_{\text{sc}}/2]e^{i2\pi(n+1)\Delta ft}\end{split} \tag{1}\] here \(x^{j,k}[n]\), \(n\in\{0,\ldots,N_{\text{sc}}-1\}\), is the transmitted signal at the \(n\)-th subcarrier from the \(j\)-th antenna port of the \(k\)-th BS, \(j\in\{1,\ldots,4\}\) is the antenna port number of the cell-specific reference symbol (CRS), \(k\in\{1,\ldots,K\}\) indexes the \(K\) BSs transmitting signals, and \(N_{\text{sc}}\) is the number of subcarriers in the OFDM symbol. Further, \(t\in[-T_{\text{CP}},T_{\text{s}}]\) denotes continuous time, \(T_{\text{CP}}\) is the duration of the cyclic prefix (CP), and \(T_{\text{s}}=1/\Delta f\) is the duration of one OFDM symbol with \(\Delta f\) being the subcarrier spacing. CRS \(x_{CRS}^{j,k}\) are transmitted on specific subcarriers and symbols depending on the cell ID, antenna port number, CP type, and bandwidth of the LTE system [14]. CRS are appealing for positioning because, unlike synchronization signals, they span the full channel bandwidth (giving time resolution) and are transmitted more frequently. In this paper, they are used exclusively to estimate the position of the vehicle and those of the transmitters and virtual transmitters. A 128-port stacked uniform circular antenna array is used at the receiver. The antennas are switched in a fixed sequence with a switching interval of 0.5 ms, and all 128 ports are sampled for a complete snapshot every 75 ms, including 11 ms for automatic gain control. The receiver moves at a relatively low average speed of 1.0 m/s because of the switched nature of the measurement system [13].
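As a quick sanity check of the baseband model in (1), the following numpy sketch (ours; all parameters are illustrative toy values rather than the LTE configuration used in the measurements) synthesizes one OFDM symbol with the DC subcarrier skipped, exactly as the two sums in (1) prescribe.

```python
import numpy as np

def ofdm_baseband(x, df, t):
    """Evaluate s(t) of eq. (1): symbol x[n] rides on tone n*df for n < 0
    and on tone (n+1)*df for n >= 0, so the DC subcarrier is skipped."""
    n = np.arange(len(x)) - len(x) // 2        # n = -N_sc/2 ... N_sc/2 - 1
    tones = np.where(n < 0, n, n + 1) * df     # skip DC, cf. the two sums
    return x @ np.exp(2j * np.pi * np.outer(tones, t))

# Toy example (hypothetical parameters): 12 QPSK symbols, 15 kHz spacing.
rng = np.random.default_rng(0)
x = (1 - 2 * rng.integers(0, 2, 12)) + 1j * (1 - 2 * rng.integers(0, 2, 12))
df = 15e3
t = np.linspace(0, 1 / df, 128, endpoint=False)
s = ofdm_baseband(x, df, t)                    # one OFDM symbol, no CP
```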
The channel frequency response from the \(j\)-th port of the \(k\)-th BS is modeled as a summation of \(M\) MPCs parameterized by their delay \(\tau^{m,j,k}\), direction-of-arrival (DOA) \(\Omega^{m,j,k}\), and Doppler shift \(\nu^{m,j,k}\). The DOA is further divided into azimuth AOA \(\varphi^{m,j,k}\) and elevation AOA \(\theta^{m,j,k}\). The time-varying directional transfer function at the \(n\)-th subcarrier is represented as \[\mathbf{h}^{j,k}[n]=\sum_{m=1}^{M}\mathbf{b}_{R}(\Omega^{m,j,k})\mathbf{\Gamma}^{m,j,k}\mathbf{b}_{T}^{j,k}e^{-i2\pi(n\Delta f\tau^{m,j,k}-\nu^{m,j,k}t)} \tag{2}\] where \(\mathbf{b}_{R}(\Omega^{m,j,k})\in\mathbb{C}^{128\times 2}\) is the receive antenna array pattern, \(\mathbf{b}_{T}^{j,k}\in\mathbb{C}^{2\times 1}\) is the \(j\)-th port of the \(k\)-th BS antenna response. \(\mathbf{\Gamma}^{m,j,k}\) is the polarimetric path weight matrix defined as \[\mathbf{\Gamma}^{m,j,k}=\begin{bmatrix}\gamma_{\text{HH}}^{m,j,k}&\gamma_{\text{VH}}^{m,j,k}\\ \gamma_{\text{HV}}^{m,j,k}&\gamma_{\text{VV}}^{m,j,k}\end{bmatrix}. \tag{3}\] The matrix elements represent different polarization combinations of the transmitter and the receiver, e.g., HV is horizontal-to-vertical. The aggregate received CRS in the frequency domain at the \(n\)-th subcarrier is given as follows \[\mathbf{y}[n]=\sum_{k=1}^{K}\sum_{j=1}^{J}\mathbf{h}^{j,k}[n]\cdot x_{CRS}^{j,k}\left[n\right]. \tag{4}\] Using the channel parameter estimation and interference cancellation methods described in [13], signals from different antenna ports of different BSs are separated and the MPC parameters delay, azimuth AOA, elevation AOA and signal-to-noise ratio (SNR) are estimated by the improved RIMAX algorithm. The estimated MPC parameters can be used for positioning, and for this purpose, the MPC delays are converted into the distance domain by adding the unknown and fixed clock offset between the \(k\)-th BS and the vehicle \(t_{\text{offset}}^{k}\) and then multiplying with the speed of light \[d^{m,j,k}(t)=\left(\tau^{m,j,k}(t)+t_{\text{offset}}^{k}\right)\cdot c. \tag{5}\] The estimated parameters of all the MPCs at time index \(t\) can be represented as \[\mathbf{Z}_{t} =\left[\mathbf{z}^{1,1,1}(t),\ldots,\mathbf{z}^{M,J,K}(t)\right] \tag{6}\] \[\mathbf{z}^{m,j,k}(t) =\left[d^{m,j,k}(t),\varphi^{m,j,k}(t),\theta^{m,j,k}(t)\right]^{T} \tag{7}\] and the SNR of all MPCs at time index \(t\) can be represented as \[\boldsymbol{\lambda}_{t}=\left[\lambda^{1,1,1}(t),\ldots,\lambda^{M,J,K}(t)\right]. \tag{8}\] ## III Extended FastSLAM model using the estimated MPC parameters MPCs from BSs with direct line-of-sight (LOS), or MPCs from reflectors and scatterers in the environment with non-line-of-sight (NLOS), are considered as synchronized and independent transmitters and virtual transmitters (VT), respectively [15]. For convenience of representation, the term VT is used to refer to all transmitters. The problem to be solved is to use all the parameters extracted from the wireless signals, fuse them with the velocity information from the vehicle to estimate the positions of the vehicle and the VTs accurately, and also associate VTs across measurements at different times.
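Before setting up the estimation problem, the following small sketch (ours) illustrates how an estimated delay is turned into a pseudorange via (5) and packed with the angles and SNR into a measurement of the form (7); the clock offset argument is the unknown per-BS constant \(t_{\text{offset}}^{k}\), here supplied as a hypothetical fixed value.

```python
from dataclasses import dataclass

C = 3e8  # speed of light in m/s, as in the paper's notation

@dataclass
class MPCMeasurement:
    """One measurement vector z^{m,j,k}(t) = [d, azimuth, elevation]^T, cf. (7)."""
    d: float          # pseudorange in meters, from (5)
    azimuth: float    # azimuth AOA in radians
    elevation: float  # elevation AOA in radians
    snr: float        # SNR lambda^{m,j,k}(t), later used to scale Q_t in (34)

def to_measurement(tau, azimuth, elevation, snr, t_offset):
    """Convert an estimated MPC delay tau (seconds) into a pseudorange via
    d = (tau + t_offset) * c, cf. (5); t_offset is the unknown per-BS clock
    offset (a hypothetical value must be supplied or estimated)."""
    return MPCMeasurement(d=(tau + t_offset) * C, azimuth=azimuth,
                          elevation=elevation, snr=snr)
```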
The posterior can be represented as \[p\left(\mathbf{V},\mathbf{c}_{1:t},\mathbf{r}_{1:t}\mid\mathbf{Z}_{1:t}, \mathbf{u}_{1:t}\right) \tag{9}\] here \(\mathbf{V}\) represents positions of the VTs, \(\mathbf{r}_{1:t}\) is the time series of the vehicle state vector, \(\mathbf{Z}_{1:t}\) are the measurements, \(\mathbf{c}_{1:t}\) is the association between measurements and VTs, and \(\mathbf{u}_{1:t}\) is the input velocity from the vehicle. The index \(1:t\) represents the time from time index \(1\) to \(t\). The positions of the VTs are given as \[\mathbf{V} =\left[\mathbf{v}^{1},\ldots,\mathbf{v}^{L,J,K}\right] \tag{10}\] \[\mathbf{v}^{l,j,k} =\left[v_{x}^{l,j,k},v_{y}^{l,j,k},v_{z}^{l,j,k}\right]^{T} \tag{11}\] here \(\mathbf{v}^{l,j,k}\) is the position of the VT with the index \((l,j,k)\) in Cartesian coordinates. The number of VTs is not necessarily equal to the number of measurements due to the existence of spurious measurements (false alarms that are not coming from any VTs) and the absence of measurements (missed detections that should have come from VTs). The association between the measurement and VT \(c_{t}^{l,j,k}=m\) means that the VT \(\mathbf{v}^{l,j,k}\) is associated with the measurement \(\mathbf{z}^{m,j,k}(t)\). The state vector of the vehicle can be represented as \[\mathbf{r}_{1:t} =\left[\mathbf{r}_{1},\ldots,\mathbf{r}_{t}\right] \tag{12}\] \[\mathbf{r}_{t} =\left[r_{x}(t),r_{y}(t),r_{z}(t),r_{\psi}(t),r_{\theta}(t),r_{ \phi}(t)\right]^{T} \tag{13}\] here \(\mathbf{r}_{p}(t)=[r_{x}(t),r_{y}(t),r_{z}(t)]^{T}\) is the position of the vehicle in Cartesian coordinates at time index \(t\), and \([r_{\psi}(t),r_{\theta}(t),r_{\phi}(t)]\) are the yaw, pitch, and roll of the vehicle. The vehicle's velocity \(\mathbf{u}_{t}\) includes longitudinal, lateral, vertical, yaw, pitch, and roll velocities. Rotational velocities are observed by the inertial measurement unit (IMU), and longitudinal speed can also be observed with wheel odometry. The velocity can be represented as \[\mathbf{u}_{t}=\left[u_{x},u_{y},u_{z},u_{\psi},u_{\theta},u_{\phi}\right]^{T}. \tag{14}\] The FastSLAM algorithm in [12] is adopted to solve the posterior problem. If the data association is known (the method to acquire the data association is described at the end of Sec. IV), FastSLAM can decompose the posterior into a factored form of \[p\left(\mathbf{V},\mathbf{r}_{1:t}\mid\mathbf{Z}_{1:t},\mathbf{ c}_{1:t},\mathbf{u}_{1:t}\right) =p\left(\mathbf{r}_{1:t}\mid\mathbf{Z}_{1:t},\mathbf{c}_{1:t},\mathbf{u}_{1:t}\right)\] \[\prod_{n\in\left\{K,J,L\right\}}p\left(\mathbf{v}_{n}\mid \mathbf{r}_{1:t},\mathbf{Z}_{1:t},\mathbf{c}_{1:t}\right). 
\tag{15}\] Since MPCs from different antenna ports and different BSs can be separated by cell ID, they are independent and should not be associated, so the FastSLAM model is extended here and the posterior can be further factored as \[p\left(\mathbf{V},\mathbf{r}_{1:t}\mid\mathbf{Z}_{1:t},\mathbf{ u}_{1:t},\mathbf{c}_{1:t}\right)\] \[=p\left(\mathbf{r}_{1:t}\mid\{\mathbf{Z}_{1:t}^{j,k},\mathbf{c}_ {1:t}^{j,k}\}_{j\in J,k\in K},\mathbf{u}_{1:t}\right) \tag{16}\] \[\prod_{k=1}^{K}\prod_{j=1}^{J}\prod_{l=1}^{L}p\left(\mathbf{v}^{ l,j,k}\mid\mathbf{r}_{1:t},\mathbf{Z}_{1:t}^{j,k},\mathbf{c}_{1:t}^{l,j,k}\right)\] here \(\mathbf{v}^{l,j,k}\) represents the \(l\)-th VT from the \(j\)-th antenna port of the \(k\)-th BS, and \(\mathbf{Z}_{1:t}^{j,k}\) and \(\mathbf{c}_{1:t}^{j,k}\) represent the measurements and association of the MPCs from the \(j\)-th antenna port of the \(k\)-th BS. \(\{\mathbf{Z}_{1:t}^{j,k},\mathbf{c}_{1:t}^{j,k}\}_{j\in J,k\in K}\) represents the combination of the measurements and the correspondence that is constrained to the MPCs from the same antenna port and BS. This method can process the position estimation of VTs from different antenna ports and BSs separately. It reduces computational complexity and provides flexibility to add or remove BSs. ## IV Extended FastSLAM update with the estimated MPC parameters Vehicle pose evolves as a function of control inputs and physical motion constraints, and it is defined as the motion model \[p(\mathbf{r}_{t}\mid\mathbf{r}_{t-1},\mathbf{u}_{t}) \tag{17}\] here \(\mathbf{r}_{t}\) is a probabilistic function of the vehicle's control input \(\mathbf{u}_{t}\) and the previous pose state \(\mathbf{r}_{t-1}\). The FastSLAM algorithm employs a particle filter [16] to estimate the vehicle pose posterior. At each time index, it preserves a set of particles representing the posterior \(p(\mathbf{r}_{1:t}\mid\{\mathbf{Z}_{1:t}^{j,k},\mathbf{c}_{1:t}^{j,k}\}_{j \in J,k\in K},\mathbf{u}_{1:t})\), and the set is denoted as \(\mathbf{R}_{1:t}\). Each particle \(\mathbf{r}_{i,1:t}\) represents the \(i\)-th hypothesis of the vehicle's path, i.e., \[\mathbf{R}_{1:t}=\{\mathbf{r}_{i,1:t}\}_{i}=\{\mathbf{r}_{i,1}, \ldots,\mathbf{r}_{i,t}\}_{i}. \tag{18}\] The particle \(\mathbf{r}_{i,t-1}\) at time index \(t-1\) is used to generate a probabilistic hypothesis of the vehicle's pose \(\mathbf{r}_{i,t}\) at time index \(t\) by sampling from the probabilistic motion model \[\mathbf{r}_{i,t}\sim p\left(\mathbf{r}_{t}\mid\mathbf{r}_{i,t-1},\mathbf{u}_{ t}\right). \tag{19}\] After each particle is generated, the FastSLAM algorithm updates the posterior over the VT estimates associated with each particle. For the VT connected to the \(i\)-th particle of vehicle state, if there is no clearly associated observation, then it will keep the status unchanged, otherwise, the posterior at the time index \(t\) will be updated as follows \[p\left(\mathbf{v}_{i}^{l,j,k}\mid\mathbf{r}_{i,1:t},\mathbf{Z}_ {1:t}^{j,k},\mathbf{c}_{i,1:t}^{l,j,k}\right)=\] \[\eta p\left(\mathbf{z}_{t}^{l,j,k}\mid\mathbf{r}_{i,t},\mathbf{ v}_{i}^{l,j,k},\mathbf{c}_{i,t}^{l,j,k}\right) \tag{20}\] \[p\left(\mathbf{v}_{i}^{l,j,k}\mid\mathbf{r}_{i,1:t-1},\mathbf{Z}_ {1:t-1}^{j,k},\mathbf{c}_{i,1:t-1}^{l,j,k}\right)\] here \(\eta\) is the normalization factor, and the posterior of \(\mathbf{v}_{i}^{l,j,k}\) at the moment \(t-1\) is assumed to be Gaussian with the following mean and variance. 
\[p\left(\mathbf{v}_{i}^{l,j,k}\mid\mathbf{r}_{i,1:t-1},\mathbf{Z}_{1:t-1}^{j,k},\mathbf{c}_{i,1:t-1}^{l,j,k}\right)\sim\mathcal{N}\left(\mathbf{v}_{i}^{l,j,k};\mathbf{\mu}_{i,t-1}^{l,j,k},\mathbf{\Sigma}_{i,t-1}^{l,j,k}\right). \tag{21}\] To ensure that the estimate of the VT at time index \(t\) is Gaussian, FastSLAM linearizes the perceptual model \(p\left(\mathbf{z}_{t}^{l,j,k}\mid\mathbf{r}_{i,t},\mathbf{v}_{i}^{l,j,k},\mathbf{c}_{i,t}^{l,j,k}\right)\), and the measurement function can be approximated by Taylor expansion as \[h\left(\mathbf{v}_{i}^{l,j,k},\mathbf{r}_{i,t}\right)=\hat{\mathbf{z}}_{i,t}^{l,j,k}+\mathbf{H}_{i,t}^{l,j,k}\left(\mathbf{v}_{i}^{l,j,k}-\mathbf{\mu}_{i,t-1}^{l,j,k}\right) \tag{22}\] \[\hat{\mathbf{z}}_{i,t}^{l,j,k}=h\left(\mathbf{\mu}_{i,t-1}^{l,j,k},\mathbf{r}_{i,t}\right) \tag{23}\] here the function \(h\) is defined to estimate the distance, azimuth AOA, and elevation AOA from the positions of the vehicle and the VT, and \(\mathbf{H}_{i,t}^{l,j,k}\) is the Jacobian of \(h\). The function \(h\) is defined as follows \[\tilde{d}_{i}^{l,j,k}(t)=\left\|\mathbf{\mu}_{i,t-1}^{l,j,k}-\mathbf{r}_{i,p}(t)\right\| \tag{24}\] \[\hat{\varphi}_{i}^{l,j,k}(t)=\text{atan}\left(\frac{\hat{y}}{\hat{x}}\right) \tag{25}\] \[\tilde{\theta}_{i}^{l,j,k}(t)=\text{asin}\left(\frac{\sqrt{\hat{x}^{2}+\hat{y}^{2}}}{\hat{z}}\right) \tag{26}\] here \([\hat{x},\hat{y},\hat{z}]^{T}\) is acquired by applying Euler's rotation theorem [17] with the rotation matrix \(\mathbf{R}\left(r_{i,\psi}(t),r_{i,\theta}(t),r_{i,\phi}(t)\right)\) \[[\hat{x},\hat{y},\hat{z}]^{T}=\mathbf{R}\left(r_{i,\psi}(t),r_{i,\theta}(t),r_{i,\phi}(t)\right)\left(\mathbf{\mu}_{i,t-1}^{l,j,k}-\mathbf{r}_{i,p}(t)\right). \tag{27}\] With the approximation, the mean and covariance of the VT at time index \(t\) can be updated with the standard EKF [18] as follows \[\mathbf{K}_{i,t}^{l,j,k}=\mathbf{\Sigma}_{i,t-1}^{l,j,k}\left(\mathbf{H}_{i,t}^{l,j,k}\right)^{T}\left(\mathbf{H}_{i,t}^{l,j,k}\mathbf{\Sigma}_{i,t-1}^{l,j,k}\left(\mathbf{H}_{i,t}^{l,j,k}\right)^{T}+\mathbf{Q}_{t}\right)^{-1} \tag{28}\] \[\mathbf{\mu}_{i,t}^{l,j,k}=\mathbf{\mu}_{i,t-1}^{l,j,k}+\mathbf{K}_{i,t}^{l,j,k}\left(\mathbf{z}_{t}^{c_{i,t}^{l,j,k},j,k}-\hat{\mathbf{z}}_{i,t}^{l,j,k}\right) \tag{29}\] \[\mathbf{\Sigma}_{i,t}^{l,j,k}=\left(\mathbf{I}-\mathbf{K}_{i,t}^{l,j,k}\mathbf{H}_{i,t}^{l,j,k}\right)\mathbf{\Sigma}_{i,t-1}^{l,j,k}. \tag{30}\] After the posterior of the VTs is updated, the importance factors of all the particles are calculated and used to resample the particles proportionally. The calculation of the importance factor of the \(i\)-th particle is given as follows \[w_{i,t}=\frac{\text{target distribution}}{\text{proposal distribution}} \tag{31}\] \[=\frac{p(\mathbf{r}_{i,1:t}\mid\{\mathbf{Z}_{1:t}^{j,k},\mathbf{c}_{i,1:t}^{j,k}\}_{j\in J,k\in K},\mathbf{u}_{1:t})}{p(\mathbf{r}_{i,1:t}\mid\{\mathbf{Z}_{1:t-1}^{j,k},\mathbf{c}_{i,1:t-1}^{j,k}\}_{j\in J,k\in K},\mathbf{u}_{1:t})}\] \[\propto\prod_{k\in K}\prod_{j\in J}\prod_{l\in L}\int p\left(\mathbf{z}_{t}^{c_{i,t}^{l,j,k},j,k}\mid\mathbf{r}_{i,t},\mathbf{v}_{i}^{l,j,k},c_{i,t}^{l,j,k}\right)p\left(\mathbf{v}_{i}^{l,j,k}\mid\mathbf{r}_{i,1:t-1},\mathbf{Z}_{1:t-1}^{j,k},\mathbf{c}_{i,1:t-1}^{l,j,k}\right)d\mathbf{v}_{i}^{l,j,k}.\] The last part in the equation is already defined in eq. (21). With the same linearization as in eq.
(22), the importance factor can be calculated as \[w_{i,t}\approx\eta\prod_{k\in K}\prod_{j\in J}\prod_{l\in L}\left|2\pi\mathbf{Q}_{i,t}^{l,j,k}\right|^{-\frac{1}{2}}e^{-\frac{1}{2}\left(\mathbf{z}_{t}^{c_{i,t}^{l,j,k},j,k}-\hat{\mathbf{z}}_{i,t}^{l,j,k}\right)^{T}\left(\mathbf{Q}_{i,t}^{l,j,k}\right)^{-1}\left(\mathbf{z}_{t}^{c_{i,t}^{l,j,k},j,k}-\hat{\mathbf{z}}_{i,t}^{l,j,k}\right)} \tag{32}\] and the covariance is \[\mathbf{Q}_{i,t}^{l,j,k}=\left(\mathbf{H}_{i,t}^{l,j,k}\right)^{T}\mathbf{\Sigma}_{i,t-1}^{l,j,k}\mathbf{H}_{i,t}^{l,j,k}+\mathbf{Q}_{t} \tag{33}\] here \(\mathbf{Q}_{t}\) is the covariance matrix of the measurement. Since the SNR is related to the accuracy of the estimated parameters, it is used to update the covariance matrix of the measurement, and \(\mathbf{Q}_{t}\) can be written as \[\mathbf{Q}_{t}=\mathbf{Q}\cdot\text{diag}(\boldsymbol{\lambda}_{t}) \tag{34}\] here \(\text{diag}(\boldsymbol{\lambda}_{t})\) is the diagonal matrix formed from the elements of the SNR vector \(\boldsymbol{\lambda}_{t}\). In FastSLAM with maximum likelihood data association, the association \(c_{i,t}^{l,j,k}\) is determined by maximizing the following likelihood \[c_{i,t}^{l,j,k}=\mathop{\arg\max}_{l^{\prime}}p\left(\mathbf{z}_{t}^{l^{\prime},j,k}\mid l^{\prime},\mathbf{c}_{i,1:t-1}^{l,j,k},\mathbf{r}_{i,1:t},\mathbf{Z}_{1:t-1},\mathbf{u}_{1:t}\right) \tag{35}\] \[=\mathop{\arg\min}_{l^{\prime}}\left(\left(\mathbf{z}_{t}^{l^{\prime},j,k}-\hat{\mathbf{z}}_{i,t}^{l,j,k}\right)^{T}\left(\mathbf{Q}_{i,t}^{l,j,k}\right)^{-1}\left(\mathbf{z}_{t}^{l^{\prime},j,k}-\hat{\mathbf{z}}_{i,t}^{l,j,k}\right)\right).\] For multiple VTs and multiple measurements, the Hungarian algorithm [19] is applied to find the maximum likelihood data association among them. ## V Measurement setup and SLAM results analysis A measurement system with a USRP controlling the 128-port stacked uniform circular antenna array mounted on the roof of a vehicle is shown in Fig. 1. The system was used to receive and log CRS symbols from commercial LTE BSs in the city of Lund, Sweden. A rubidium standard disciplined by GPS beforehand was used as a stable frequency reference for the USRP to minimize clock drift, and the clock offsets between different BSs and the vehicle were assumed to be unknown constants. An OXTS RT3003G [20] was used for ground truth position and orientation of the vehicle and the antenna array. The GPS receiver inside the USRP was used for time alignment between ground truth and data logging. Yaw velocity observations from the IMU and the longitudinal speed observations from wheel odometry were used as the input velocity of the extended FastSLAM algorithm, and the vertical, pitch, and roll velocities were assumed to be zero owing to the flat terrain and constrained vehicle dynamics. The parameters for the measurement system are listed in Table I. The measurement trajectory is shown in Fig. 2. The outer figure gives an overview of the relative positions of the BSs and trajectory, and the inset plot gives a more detailed view of the trajectories of the ground truth, SLAM estimation fusing cellular signals with IMU and wheel odometry, and proprioception using the IMU and wheel odometry alone. The estimated positions of the virtual transmitters from SLAM are mapped to physical reflectors with an assumption of first-order reflection, and the physical reflectors are also shown in the figure as dots. A particularly noteworthy long-lived NLOS MPC is shown inside the red ellipse in Fig. 4a.
This MPC is mapped to the physical environment as a blue dot shown in Fig. 2, and the associated building is plotted with darker colors. The reflections come from a wall 230 meters away from the BS, and they are 170-350 meters away from the vehicle as it drives apart. It shows the potential of using NLOS MPCs for positioning in complicated urban environments.

| Parameter Name | Value |
| --- | --- |
| Center frequency | 2.66 GHz |
| System bandwidth | 20 MHz |
| BS number | 2 |
| Cell IDs of BS A | 375, 376, 377 |
| Cell IDs of BS B | 177, 178, 179 |
| Tx antenna port number | 2 |
| Rx antenna port number | 128 |
| Snapshot interval | 75 ms |
| Total snapshot number | 6850 |
| Total test time | 85 minutes |
| Traversed distance | 530 meters |

TABLE I: Measurement system information
Fig. 1: The massive antenna array on top of the measurement vehicle [13].

The absolute errors of the estimated vehicle trajectory from SLAM and from proprioception only are shown as a function of time in Fig. 3. It can be observed that the extended FastSLAM can greatly improve positioning performance. It has a maximum absolute horizontal error of 6 meters after 290 seconds and 3 meters of horizontal error after a total traversed distance of 530 meters, while proprioception using the IMU and wheel odometry alone has a maximum horizontal error of 20 meters and an error of 15 meters at the end of the measurement. The MPC delay estimates from RIMAX for sector 376 of BS A and sector 178 of BS B are shown in Fig. 4a and Fig. 4b, respectively. The associated MPC delays from one particle of the SLAM are also shown in the corresponding figures. It can be observed that the extended FastSLAM can associate the estimated MPC delays accurately, while effectively suppressing spurious measurements. ## VI Conclusion In this paper, an extended FastSLAM algorithm using multipath component delays and angular information is developed, which simplifies the data association problem and processes the multipath components from different antenna ports and base stations independently. The multipath component delays and angular information extracted from the commercial LTE signals received by the 128-port antenna array are processed by the extended FastSLAM algorithm, and the results validate the algorithm and demonstrate the capability of using cellular signals for high accuracy positioning in complicated urban environments. ## Acknowledgement This work was financed in part by the Swedish Innovation Agency VINNOVA through the MIMO-PAD Project (Reference number 2018-05000). Computational resources were provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.

Fig. 2: The ground truth trajectory together with the SLAM estimation and proprioception-only, and the positions of reflectors from one sector at one time index. The building that provides the long-lived VT is emphasized with a darker color and outline.
Fig. 3: The absolute error of SLAM and proprioception.
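For readers who want to prototype the per-VT update of Section IV, the following minimal numpy sketch (ours, not the authors' implementation) mirrors the EKF equations (28)-(30) and one Gaussian factor of the importance weight (32). The vehicle attitude rotation of (27) is omitted, the elevation convention asin(z/range) is our assumption, and the Jacobian is obtained numerically.

```python
import numpy as np

def h(mu, r_p):
    """Predicted [range, azimuth, elevation] of a VT at mu seen from the
    vehicle position r_p, a simplified stand-in for (24)-(26)."""
    d = mu - r_p
    rng = np.linalg.norm(d)
    return np.array([rng, np.arctan2(d[1], d[0]), np.arcsin(d[2] / rng)])

def num_jacobian(fun, x, eps=1e-6):
    """Finite-difference Jacobian of fun at x."""
    y0 = fun(x)
    J = np.zeros((y0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (fun(x + dx) - y0) / eps
    return J

def ekf_vt_update(mu, Sigma, z, r_p, Qt):
    """One EKF update of a VT estimate, cf. (28)-(30), returning the new
    mean, covariance, and a Gaussian likelihood factor as in (32)."""
    z_hat = h(mu, r_p)
    H = num_jacobian(lambda m: h(m, r_p), mu)
    S = H @ Sigma @ H.T + Qt                        # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)              # Kalman gain, (28)
    nu = z - z_hat
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi   # wrap azimuth residual
    mu_new = mu + K @ nu                            # (29)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma   # (30)
    w = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / \
        np.sqrt(np.linalg.det(2 * np.pi * S))       # one factor of (32)
    return mu_new, Sigma_new, w
```

In the full filter this update runs once per particle for each associated VT, and the product of the returned likelihood factors over all associations gives the particle's resampling weight.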
2307.03488
Line-Constrained $k$-Semi-Obnoxious Facility Location
Suppose we are given a set $\cal B$ of blue points and a set $\cal R$ of red points, all lying above a horizontal line $\ell$, in the plane. Let the weight of a given point $p_i\in {\cal B}\cup{\cal R}$ be $w_i>0$ if $p_i\in {\cal B}$ and $w_i<0$ if $p_i\in {\cal R}$, $|{\cal B}\cup{\cal R}|=n$, and $d^0$($=d\setminus\partial d$) be the interior of any geometric object $d$. We wish to pack $k$ non-overlapping congruent disks $d_1$, $d_2$, \ldots, $d_k$ of minimum radius, centered on $\ell$ such that $\sum\limits_{j=1}^k\sum\limits_{\{i:\exists p_i\in{\cal R}, p_i\in d_j^0\}}w_i+\sum\limits_{j=1}^k\sum\limits_{\{i:\exists p_i\in{\cal B}, p_i\in d_j\}}w_i$ is maximized, i.e., the sum of the weights of the points covered by $\bigcup\limits_{j=1}^kd_j$ is maximized. Here, the disks are the obnoxious or undesirable facilities generating nuisance or damage (with quantity equal to $w_i$) to every demand point (e.g., population center) $p_i\in {\cal R}$ lying in their interior. In contrast, they are the desirable facilities giving service (equal to $w_i$) to every demand point $p_i\in {\cal B}$ covered by them. The line $\ell$ represents a straight highway or railway line. These $k$ semi-obnoxious facilities need to be established on $\ell$ to receive the largest possible overall service for the nearby attractive demand points while causing minimum damage to the nearby repelling demand points. We show that the problem can be solved optimally in $O(n^4k^2)$ time. Subsequently, we improve the running time to $O(n^3k \cdot\max{(\log n, k)})$. The above-weighted variation of locating $k$ semi-obnoxious facilities may generalize the problem that Bereg et al. (2015) studied where $k=1$ i.e., the smallest radius maximum weight circle is to be centered on a line. Furthermore, we addressed two special cases of the problem where points do not have arbitrary weights.
Vishwanath R. Singireddy, Manjanna Basappa, N. R. Aravind
2023-07-07T09:54:56Z
http://arxiv.org/abs/2307.03488v2
# Line-Constrained \(k\)-Semi-Obnoxious Facility Location ###### Abstract Suppose we are given a set \(\mathcal{B}\) of blue points and a set \(\mathcal{R}\) of red points, all lying above a horizontal line \(\ell\), in the plane. Let the weight of a given point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) be \(w_{i}>0\) if \(p_{i}\in\mathcal{B}\) and \(w_{i}<0\) if \(p_{i}\in\mathcal{R}\), \(|\mathcal{B}\cup\mathcal{R}|=n\), and \(d^{0}(=d\setminus\partial d)\) be the interior of any geometric object \(d\). We wish to pack \(k\) non-overlapping congruent disks \(d_{1}\), \(d_{2}\),..., \(d_{k}\) of minimum radius, centered on \(\ell\) such that \(\sum\limits_{j=1}^{k}\sum\limits_{\{i:\exists p_{i}\in\mathcal{R},p_{i}\in d_{j}^{0}\}}w_{i}+\sum\limits_{j=1}^{k}\sum\limits_{\{i:\exists p_{i}\in\mathcal{B},p_{i}\in d_{j}\}}w_{i}\) is maximized, i.e., the sum of the weights of the points covered by \(\bigcup\limits_{j=1}^{k}d_{j}\) is maximized. Here, the disks are the obnoxious or undesirable facilities generating nuisance or damage (with quantity equal to \(w_{i}\)) to every demand point (e.g., population center) \(p_{i}\in\mathcal{R}\) lying in their interior. In contrast, they are the desirable facilities giving service (equal to \(w_{i}\)) to every demand point \(p_{i}\in\mathcal{B}\) covered by them. The line \(\ell\) represents a straight highway or railway line. These \(k\) semi-obnoxious facilities need to be established on \(\ell\) to receive the largest possible overall service for the nearby attractive demand points while causing minimum damage to the nearby repelling demand points. We show that the problem can be solved optimally in \(O(n^{4}k^{2})\) time. Subsequently, we improve the running time to \(O(n^{3}k\cdot\max{(n,k)})\). Furthermore, we address two special cases of the problem where points do not have arbitrary weights. In the first case, the objective is to encompass the maximum number of blue points while avoiding red points. The second case aims to encompass all the blue points with the minimum number of red points covered. We show that these two special cases can be solved in \(O(n^{3}k\cdot\max{(\log{n},k)})\) time. For the first case, when \(k=1\), we also provide an algorithm that solves the problem in \(O(n^{3})\) time, and subsequently, we improve this result to \(O(n^{2}\log{n})\). For the latter case, we give an \(O(n\log{n})\) time algorithm that uses the farthest point Voronoi diagram. The above weighted variation of locating \(k\) semi-obnoxious facilities may generalize the problem that Bereg et al. (2015) studied, where \(k=1\), i.e., the smallest-radius maximum-weight circle is to be centered on a line. Furthermore, we consider a generalization of the weighted problem where we are given \(t\) horizontal lines instead of one line. We give an \(O(n^{4}k^{2}t^{5})\) time algorithm for this problem. Finally, we consider a discrete variant where a set of \(s\) candidate sites (in convex position) for placing \(k\) facilities is given in advance \((k<s)\). We propose an algorithm that runs in \(O(n^{2}s^{2}+ns^{5}k^{2})\) time for this discrete variant.
**Keywords:** Semi-obnoxious Facility Location, Complete Weighted Directed Acyclic Graph, Minimum-weight \(k\)-link Path, Concave Monge Property, Farthest-point Voronoi Diagram, Delaunay Triangulation, Dynamic Programming ## 1 Introduction Given a set \(\mathcal{B}\) of blue points and a set \(\mathcal{R}\) of red points above a horizontal line \(\ell\), each point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) has a weight \(w_{i}>0\) if \(p_{i}\in\mathcal{B}\) and \(w_{i}<0\) if \(p_{i}\in\mathcal{R}\). Let \(|\mathcal{B}\cup\mathcal{R}|=n\), and let \(d^{0}\) denote the interior of any geometric object \(d\), i.e., \(d^{0}=d\backslash\partial d\) where \(\partial d\) denotes the boundary of \(d\). The objective is to pack \(k\) non-overlapping congruent disks \(d_{1},d_{2},\ldots,d_{k}\) of minimum radius, centered on \(\ell\), such that \(\sum\limits_{j=1}^{k}\sum\limits_{i\in[n]:\exists p_{i}\in\mathcal{R},p_{i}\in d_{j}^{0}}w_{i}+\sum\limits_{j=1}^{k}\sum\limits_{i\in[n]:\exists p_{i}\in\mathcal{B},p_{i}\in d_{j}}w_{i}\) is maximized, i.e., the sum of the weights of the points covered by \(\bigcup_{j=1}^{k}d_{j}\) is maximized. We name this problem a Constrained Semi-Obnoxious Facility Location (CSOFL) problem on a Line. Typically, facility location problems involve two types of facilities: desirable ones like hospitals, fire stations, and post offices that should be located as close as possible to demand points (population centers), and undesirable ones like chemical factories, nuclear plants, and dumping yards that should be located as far away as possible from demand points to minimize their negative impact. However, the semi-obnoxious facility location (SOFL) problems have the unified objective of optimizing both negative and positive impacts on the repelling and attractive demand sites, respectively. In SOFL problems, the aim is to locate facilities at an optimal distance from both attractive and repulsive demand points. This creates a bi-objective problem where two objectives must be balanced. For example, when building an airport, it should be located far enough from the city to avoid noise pollution but close enough to customers to minimize transportation costs. In [14], the semi-obnoxious facility location problem is modeled as Weber's problem, where the repulsive points are assigned negative weights. This problem is solved by designing a branch and bound method with the help of rectangular subdivisions [14]. The problem of locating multiple capacitated semi-obnoxious facilities is solved using a bi-objective evolutionary strategy algorithm where the objective is to minimize both non-social and social costs [20]. The problem of locating a single semi-obnoxious facility within a bounded region is studied by constructing efficient sets (the endpoints of the efficient segments) [15]. A bi-objective mixed integer linear programming formulation was introduced and applied to this semi-obnoxious facility location problem [8]. Golpayegani et al. [12] introduced a semi-obnoxious median line problem in the plane under the Euclidean norm and proposed a particle swarm optimization algorithm. Later, Golpayegani et al. [13] proposed a particle swarm optimization method that solved the rectilinear case of the semi-obnoxious median line problem. Recently, Gholami and Fathali [11] solved the circular semi-obnoxious facility location problem in the Euclidean plane using the cuckoo optimization algorithm, a metaheuristic method.
Wagner [21] gave duality results in his thesis for a non-convex single semi-obnoxious facility location problem in the Euclidean space. Singireddy and Basappa [18, 19] studied the \(k\) obnoxious facility location problem restricted to a line segment. They initially proposed a \((1-\epsilon)\)-approximation algorithm [18], and then two exact algorithms based on two different approaches that run in \(O((nk)^{2})\) and \(O((n+k)^{2})\) time, respectively, for any \(k>0\), and finally an \(O(n\log n)\) time algorithm for \(k=2\) [19]. In [18], they also examined the weighted variant of the problem (where the influencing range of obnoxious facilities is fixed and demand points are weighted). They gave a dynamic programming-based solution that runs in \(O(n^{3}k)\) time for this variant. Subsequently, Zhang [22] refined this result to \(O(nk\alpha(nk)\log^{3}nk)\) by reducing the problem to the \(k\)-link shortest path problem on a complete, weighted directed acyclic graph whose edge weights satisfy the convex Monge property, where \(\alpha(\cdot)\) refers to the inverse Ackermann function. Section 4 of this paper follows a similar strategy of reducing the weighted CSOFL to the \(k\)-link path problem, but the edge weights satisfy the concave Monge property. The SOFL problem is also close to the class of geometric separability problems, where we need to separate the two given sets of points with a linear or non-linear boundary or surface in a high-dimensional space. Geometric separability is an important concept in machine learning and pattern recognition. It is used to determine whether a set of data can be classified into distinct categories (say, good and bad points) using a specific algorithm or model. Then, given an arbitrary data point, we can predict whether this point is good or bad depending on which side of the separating boundary it falls on. O'Rourke et al. [17] gave a linear time algorithm based on linear programming to check whether a circular separation exists. They also showed that the smallest disk and the largest separating circle can be found in \(O(n)\) and \(O(n\log n)\) time, respectively. If separation by a convex polygon with \(k\) sides exists, then for \(k=\Theta(n)\) the lower bound for computing the minimum enclosing convex polygon with \(k\) sides is \(\Omega(n\log n)\) [9], and the problem can be solved in \(O(nk)\) time. While the separability problem using a simple polygon [10] was shown to be NP-hard, Mitchell [16] gave a \((\log n)\)-approximation algorithm for an arbitrary simple polygon. Recently, Abidha and Ashok [1] have explored geometric separability problems by examining rectangular annuli with fixed (axis-aligned) and arbitrary orientations, square annuli with a fixed orientation, and an orthogonal convex polygon. For rectangular annuli with a fixed orientation, they gave an \(O(n\log n)\) time algorithm. They gave an \(O(n^{2}\log n)\) time algorithm for cases with arbitrary orientation. For the fixed-orientation square case, the running time of their algorithm is \(O(n\log^{2}n)\), while for the orthogonal convex polygon case it is \(O(n\log n)\). ## 2 Preliminaries This section briefly introduces various notations and definitions that will be used in further sections. Let \(\mathcal{L}_{\textsc{can}}\) denote the set of candidate radii and \(r_{\textsc{can}}\in\mathcal{L}_{\textsc{can}}\) a candidate radius.
The optimal radius is denoted as \(r_{opt}\). Let \(dist(u,v)\) denote the Euclidean distance between two points \(u\) and \(v\). Given a graph \(G(V,E)\), the weight of an edge is denoted as \(w(i,j)\) where \(\vv{ij}\in E\). The path between any two vertices \(v,u\in V\) is denoted as \(\Pi(v,u)\). **Definition 1**.: **Configurations**: An arrangement of the \(k\) disks in any feasible solution to the CSOFL problem with different placements of red and blue points on the boundaries of the disks (critical regions). A configuration is said to be critical if it corresponds to some candidate radius \(r_{\textsc{can}}\in\mathcal{L}_{\textsc{can}}\). **Definition 2**.: **DAG:** It stands for Directed Acyclic Graph. It is a directed graph that has no directed cycles. In other words, it is a graph consisting of a set of nodes connected by directed edges, where the edges have a specific direction. There is no way to start at any node and follow a sequence of edges that eventually loops back to that node. **Definition 3**.: **The minimum weight \(k\)-link path:** Given a complete weighted DAG \(G(V,E)\), and two vertices \(s\) (source) and \(t\) (target), the minimum weight \(k\)-link path problem seeks to find a minimum weight path from \(s\) to \(t\) such that the path has exactly \(k\) edges (links) in it. **Definition 4**.: **Concave Monge property:** The weight function \(w\) for a given weighted, complete DAG \(G(V,E)\) satisfies the concave Monge property if for all \(i,j\in V\) we have the inequality \(w(i,j)+w(i+1,j+1)\leq w(i,j+1)+w(i+1,j)\) satisfied, where \(1<i+1<j<n\).

The outline of the algorithm for the CSOFL problem is as follows:

1. First, all possible configurations of the \(k\) disks and red and blue points in any feasible solution to the CSOFL problem are identified. We show that a finite number of distinct configurations exist, and there are specifically \(O(1)\) distinct critical-configuration types.
2. The next step entails computing all possible candidate radii, \(\mathcal{L}_{\textsc{can}}\), where we have one \(r_{\textsc{can}}\) corresponding to each of the configurations identified in the previous step.
3. After obtaining a candidate radius \(r_{\textsc{can}}\), the given instance of the CSOFL problem will be transformed into an instance of the problem of computing a minimum weight \(k\)-link path on a complete weighted DAG \(G\).
4. The semi-obnoxious facilities (disks) should then be positioned (i.e., the centers of these disks are to be positioned) at the points on \(\ell\) corresponding to vertices of the aforementioned minimum weight \(k\)-link path \(\Pi_{k}^{*}(s,t)\) in \(G\). The total weight of the points covered by these facilities can be computed using the weight of \(\Pi_{k}^{*}(s,t)\).
5. To determine the set of all radii \(\mathcal{L}_{\textsc{can}}=\{\lambda_{1},\lambda_{2},\dots\}\) for which the total weight of the covered points is the largest, the above process must be repeated for every candidate radius \(r_{\textsc{can}}\).
6. Finally, the locations of the \(k\) semi-obnoxious facilities placed with the smallest \(\lambda\in\mathcal{L}_{\textsc{can}}\) and covering the points with the largest total weight are returned as the output.

## 3 Computing the candidate radii In this section, we find all the candidate radii by considering all configurations involving disks as well as blue and red points.
**Configuration-0**: Suppose that all the red points lie closer to \(\ell\) than the blue points and have significantly more negative weights than the blue points (see Figure 1). In this scenario, covering any blue points would also cover some red points since the disks must be centered on \(\ell\). This, in turn, would result in a negative total weight. As a result, we can opt to keep zero-radius disks that do not cover any points rather than covering any of the blue points. This way, the maximum weight will be zero. It also implies the following observation. **Observation 1**.: _An optimal (feasible) solution always exists for any given problem instance._ **Configuration-1**: We consider a specific configuration in which the radius of the disks in the optimal solution is determined by only one blue point. As shown in Figure 2, we can observe that the disks \(d_{i}\) and \(d_{i+1}\), which have one blue point on each of their boundaries, will have a smaller (optimal) radius compared to the dotted disk \(d_{i}^{\prime}\), which also covers the same blue points. This is because our problem is to find the minimum radius disks that cover the maximum weight. Here, at least one of the blue points lies on the boundary of at least one of the \(k\) disks and determines the radius of the disks in an optimal solution with radius greater than zero. Observe that the radius of such a disk is the \(y\)-coordinate of the point that lies on the disk boundary. Hence, we add \(O(n)\) candidate radii to \(\mathcal{L}_{\textsc{can}}\), namely \(r_{\textsc{can}}=y_{p_{i}}\), where \(y_{p_{i}}\) denotes the \(y\)-coordinate of the point \(p_{i}\), for each \(p_{i}\in\mathcal{B}\), \(i=1,2,\ldots,n\). **Configuration-2**: In this scenario, we consider the case where the optimal solution is determined by two points on the boundary of at least one of the \(k\) disks, which can either be two blue points or one blue and one red point. We notice that no two red points on any disk's boundary will determine the disk's radius, as we can further reduce the disk's radius until its boundary touches at least one of the blue points. To calculate the candidate radii for the disks, we proceed as follows: Consider a point \(p_{i}\in\mathcal{B}\) and a point \(p_{j}\in\mathcal{R}\). If they determine the minimum radius of the disks in a solution to the CSOFL problem, then the candidate radius \(r_{\textsc{can}}\) can be computed by drawing the bisector line \(\ell_{i,j}\) of \(p_{i}\) and \(p_{j}\) until \(\ell_{i,j}\) cuts across \(\ell\), where \(p_{i}\) is in the counter-clockwise direction from \(\ell_{i,j}\) and \(p_{j}\) in the clockwise direction from \(\ell_{i,j}\) (see Figure 3). Let \((x_{p_{i},p_{j}},y_{p_{i},p_{j}})\) be the center of the disk, and let \((x_{p_{i}},y_{p_{i}})\) and \((x_{p_{j}},y_{p_{j}})\) be the coordinates of the points \(p_{i}\) and \(p_{j}\), respectively. Then, we have \[(x_{p_{i}}-x_{p_{i},p_{j}})^{2}+(y_{p_{i}}-y_{p_{i},p_{j}})^{2}=(x_{p_{j}}-x_{p_{i},p_{j}})^{2}+(y_{p_{j}}-y_{p_{i},p_{j}})^{2}.\] After simplification, we have \(r_{\textsc{can}}=\sqrt{(x_{p_{i}}-x_{p_{i},p_{j}})^{2}+y_{p_{i}}^{2}}\) for the two cases:
* \(p_{i}\in\mathcal{B}\) and \(p_{j}\in\mathcal{R}\);
* \(p_{i},p_{j}\in\mathcal{B}\);
where \(x_{p_{i},p_{j}}=\frac{(y_{p_{j}}-y_{p_{i}})(y_{p_{j}}+y_{p_{i}})}{2(x_{p_{j}}-x_{p_{i}})}+\frac{(x_{p_{j}}+x_{p_{i}})}{2}\) and \(y_{p_{i},p_{j}}=0\).
Here, as we consider a candidate radius for every pair of points except the red-red pairs, we add \(O(n^{2})\) candidate radii to \(\mathcal{L}_{\textsc{can}}\) for this critical-configuration type. **Observation 2**.: _There are only \(O(1)\) critical configuration types._ Proof.: In any of the optimal placements of the disks, either one blue point, a pair of blue points, or a pair of red and blue points will determine the disk's radius. Even though there may be more than two points lying on the boundary of the disks in an optimal packing, only two points among them will determine the radius of the disk, since there exists a unique disk that passes through two points and is centered on \(\ell\). Hence, any configuration of points lying on the boundary of the disk can be transformed into one of the above-mentioned configurations by perturbing the center or reducing the radius of the disks. Therefore, we have a constant number of critical configuration types, namely four types, including Configuration-0. **Lemma 1**.: \(|\mathcal{L}_{\textsc{can}}|=O(n^{2})\)_._ Proof.: It follows from Observation 2, since a constant number of critical configuration types (viz. no points, one blue point, a pair of blue points, and a pair of blue and red points) contribute to the candidate radii. In any of the configurations, at most two points will determine the radius of the disks. Hence we have \(O(n^{2})\) candidate radii corresponding to these configurations. Thus, the lemma follows. ## 4 Transformation to the minimum weight \(k\)-link path problem In this section, we demonstrate that the CSOFL problem can be reduced to the problem of computing a minimum weight \(k\)-link path between a pair of vertices in a weighted DAG \(G(V,E)\). Each edge \(\overline{ij}\in E\) in \(G\) is assigned a weight \(w(i,j)\in\mathbb{R}\) that is either a positive or negative real number. The minimum weight \(k\)-link path \(\Pi(s,t)\) is a path from the source \(s\) to the target vertex \(t\), consisting of exactly \(k\) edges, that has the minimum total weight among all \(k\)-link paths between \(s\) and \(t\), where the weight of a \(k\)-link path is the sum of the weights of the edges in the path, i.e., \[w(\Pi(s\to i_{1}\to i_{2}\rightarrow\cdots\to i_{k-1}\to t))=\sum_{j=1}^{k-2}w(i_{j},i_{j+1})+w(s,i_{1})+w(i_{k-1},t).\] Let \(\lambda=r_{\textsc{can}}\). Next, we transform an instance of the CSOFL problem to an instance of the \(k\)-link path problem on a DAG \(G(V,E)\) as follows: Let us call the maximal interval \(f_{i}^{+}=[l_{i},r_{i}]\) on \(\ell\) the influence interval for the point \(p_{i}\in\mathcal{B}\) (within which a facility, i.e., a disk of radius \(r_{\textsc{can}}\) centered there, will influence or cover the point \(p_{i}\)) if the distance between any point on \([l_{i},r_{i}]\) and \(p_{i}\) is at most \(r_{\textsc{can}}\). Similarly, \(f_{i}^{-}=[l_{i},r_{i}]\) is the influence interval on \(\ell\) for \(p_{i}\in\mathcal{R}\). Let the set of all influence intervals be \(F=\{f_{i}^{+}\mid i\in[n],p_{i}\in\mathcal{B}\}\cup\{f_{i}^{-}\mid i\in[n],p_{i}\in\mathcal{R}\}\). Let the vertex set \(V=\{l_{1},r_{1},l_{2},r_{2},\ldots,l_{n},r_{n}\}\) be the endpoints of the intervals in \(F\). For each \(l_{i},i\in[n]\), we also add \(2(k-1)\) extra vertices to \(V\), corresponding to points on \(\ell\) at distances \(l_{i}+2\lambda,l_{i}+4\lambda,\ldots,l_{i}+2(k-1)\lambda,l_{i}-2\lambda,l_{i}-4\lambda,\ldots,l_{i}-2(k-1)\lambda\). Similarly, we add \(2(k-1)\) vertices for every \(r_{i},i\in[n]\), placed at points at distances \(r_{i}-2\lambda,r_{i}-4\lambda,\ldots,r_{i}-2(k-1)\lambda,r_{i}+2\lambda,r_{i}+4\lambda,\ldots,r_{i}+2(k-1)\lambda\). The addition
For each \(l_{i},i\in[n]\), we also add to \(V\) \(2(k-1)\) extra vertices corresponding to points on \(\ell\) at distances \(l_{i}+2\lambda,l_{i}+4\lambda,\ldots,l_{i}+2(k-1)\lambda\) and \(l_{i}-2\lambda,l_{i}-4\lambda,\ldots,l_{i}-2(k-1)\lambda\). Similarly, we add \(2(k-1)\) vertices for every \(r_{i},i\in[n]\), placed at distances \(r_{i}-2\lambda,r_{i}-4\lambda,\ldots,r_{i}-2(k-1)\lambda\) and \(r_{i}+2\lambda,r_{i}+4\lambda,\ldots,r_{i}+2(k-1)\lambda\). Figure 3: Illustration of calculating \(r_{\textsc{can}}\). The addition of these extra \(2(k-1)\) points on both sides of each endpoint of every influence interval is necessary because an optimal solution may have disks centered at points on \(\ell\) other than the endpoints of influence intervals (see Figure 4 for an illustration); however, at least one disk must be centered at an endpoint in any optimal solution. In Figure 4, we can observe that the disks \(d_{i-1}\) and \(d_{i+1}\) are not centered at any endpoint of the intervals in \(F\), since none of the points in \(\mathcal{B}\cup\mathcal{R}\) lie on their boundaries. Without loss of generality, the vertices may be relabeled as \(V=\{v_{1},v_{2},\ldots,v_{m}\}\) in increasing order of the \(x\)-coordinates of all the \(l_{i}\) and \(r_{i}\) for \(i\in[n]\) together with the extra added points, where \(m=O(kn)\). Furthermore, we can update \(V\) so that all the corresponding points in \(V\) have distinct \(x\)-coordinates. Let \(s\) and \(t\) be the points placed on \(\ell\) at a distance of \(2k\lambda\) to the left of the leftmost endpoint \(l_{1}\) and to the right of the rightmost endpoint \(r_{\mathrm{L}}\), respectively, where \([l_{\mathrm{L}},r_{\mathrm{L}}]\) denotes the rightmost influence interval (see Figure 5). Note that \(s\) lies to the left of all the points in \(V\), and \(t\) lies to the right of all the points in \(V\). Let \(w(d_{i})\) denote the total weight of the points covered by the disk with radius \(\lambda\) centered at \(c_{i}\in V\); we calculate \(w(d_{i})\) for the disk centered at each point \(c_{i}\in V\). Now, the weight of an edge \(\overline{ij}\in E\) is determined as follows: * \(w(i,j)=+\infty\) if \(dist(i,j)<2\lambda\). * \(w(i,j)=-(w(d_{i})+w(d_{j}))\) if \(dist(i,j)\geq 2\lambda\), for all \(i,j\in V\); i.e., we add a directed edge between every pair of vertices \(i,j\in V\) (\(i<j\)) whose distance is at least \(2\lambda\), and we assign it the negated sum of the corresponding disk weights (see Figure 6). * Observe that \(w(d_{s})\) and \(w(d_{t})\) are zero. The disk weights are also zero for the first (at most) \(k-1\) and last (at most) \(k-1\) points, since these are the points on \(\ell\) that are separated by a distance of at least \(2\lambda\) to the left of \(l_{1}\) and to the right of \(r_{\mathrm{L}}\), respectively. Figure 4: An optimal packing of \(3\) disks for a candidate radius \(\lambda\); \(d_{i}\) is centered at an endpoint of the influence interval due to \(p_{i}\). Figure 5: Addition of extra points including \(s\) and \(t\). Figure 6: Calculation of the weight of an edge. Without loss of generality, let \(G^{\prime}(V^{\prime},E^{\prime})\) be the graph obtained by the above transformation. There will be \(O(nk)\) vertices in \(V^{\prime}\), and a directed edge from \(i\) to \(j\) for all \(i,j\in V^{\prime}\) such that \(i<j\).
Then, \(G^{\prime}\) is a complete DAG with \(|V^{\prime}|=O(nk)\) vertices and \(|E^{\prime}|=O(n^{2}k^{2})\) edges, and every edge \((i,j)\in E^{\prime}\) is assigned a weight as discussed above. Hence, we have the following lemma. **Lemma 2**.: _The graph \(G^{\prime}\) can be constructed in \(O(n^{2}k^{2})\) time._ Proof.: We start by considering the way we constructed \(G^{\prime}\). Every demand point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) contributes at most the two endpoints of an influence interval on \(\ell\); if we center a disk at a point of that interval, the demand point lies on the boundary or in the interior of the disk. Next, we add \(2(k-1)\) points on both sides of each endpoint on \(\ell\), with a separation distance of \(2\lambda\) between any two consecutive ones. Thus, we have a total of \(O(nk)\) points on \(\ell\), including \(s\) and \(t\), which are added to \(V^{\prime}\). Now, from every point in \(V^{\prime}\), we add a weighted directed edge to all the points of \(V^{\prime}\) that lie to its right on \(\ell\). This results in a total of \(O(n^{2}k^{2})\) edges, where each edge is assigned a corresponding weight, as discussed earlier. Therefore, the resulting graph \(G^{\prime}(V^{\prime},E^{\prime})\) has \(O(nk)\) vertices and \(O(n^{2}k^{2})\) edges, and can be constructed in \(O(n^{2}k^{2})\) time. The edge weights of \(G^{\prime}\) satisfy the concave Monge property: for any four vertices \(i,i+1,j,j+1\) of \(G^{\prime}\) with \(i<i+1<j<j+1\), the total weight of the directed edges from \(i\) to \(j\) and from \(i+1\) to \(j+1\) is not greater than the total weight of the directed edges from \(i\) to \(j+1\) and from \(i+1\) to \(j\). **Observation 3**.: _The edge weights of \(G^{\prime}\) satisfy the concave Monge property._ Proof.: Recall the definition of the concave Monge property, i.e., \(w(i,j)+w(i+1,j+1)\leq w(i,j+1)+w(i+1,j)\) (see Figure 7). The weights of the edges of \(G^{\prime}\) are assigned based on the following two rules: 1. \(w(i,j)=+\infty\) if \(dist(i,j)<2\lambda\). 2. \(w(i,j)=-(w(d_{i})+w(d_{j}))\) if \(dist(i,j)\geq 2\lambda\), for all \(i,j\in V\). Suppose we select any four vertices \(i<i+1<j<j+1\) in \(G^{\prime}\) such that the distance between the two closest of them, namely \(i+1\) and \(j\), is at least \(2\lambda\). Then all four relevant edges fall under rule 2, and the corresponding weights satisfy \(w(i,j)+w(i+1,j+1)=w(i,j+1)+w(i+1,j)\), since both sides sum the same four disk weights. Now, consider four vertices \(i<i+1<j<j+1\) in \(G^{\prime}\) such that the distance between the two closest points among them is less than \(2\lambda\), i.e., \(dist(i+1,j)<2\lambda\). According to rule 1, \(w(i+1,j)=+\infty\), so the right-hand side is \(+\infty\) and \(w(i,j)+w(i+1,j+1)\leq w(i,j+1)+w(i+1,j)\) holds. Finally, suppose all four selected vertices \(i<i+1<j<j+1\) in \(G^{\prime}\) are such that the distance between any two consecutive of them is less than \(2\lambda\). In this case, according to rule 1, the weights assigned to the corresponding edges satisfy \(w(i,j)+w(i+1,j+1)=w(i,j+1)+w(i+1,j)\). Therefore, for any four vertices \(i<i+1<j<j+1\) in \(G^{\prime}\), the edge weights satisfy the concave Monge property. Thus, the observation follows.
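The two weighting rules translate directly into code; a minimal sketch, assuming the per-center disk weights \(w(d_{i})\) have already been precomputed into a mapping (an assumption of this sketch):

```python
INF = float("inf")

def edge_weight(ci, cj, disk_weight, lam):
    """Weight of the directed edge between centers ci < cj on ell.

    disk_weight maps a center to the total weight w(d) of the demand
    points covered by the disk of radius lam centered there.
    """
    if cj - ci < 2 * lam:      # rule 1: the two disks would overlap
        return INF
    return -(disk_weight[ci] + disk_weight[cj])   # rule 2
```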
Since the constructed graph \(G^{\prime}\) is a weighted complete DAG and its edge weights satisfy the concave Monge property, we have the following theorem for finding the minimum weight \((k+1)\)-link path between a pair of vertices of \(G^{\prime}\). Figure 7: Any four vertices of \(G^{\prime}\) that satisfy the concave Monge property. **Theorem 1**.: _[_2_]_ _The minimum weight \((k+1)\)-link path \(\Pi_{(k+1)}^{*}(s,t)\) in \(G^{\prime}\) can be computed in \(O(nk\sqrt{k\log{(nk)}})\) time._ **Theorem 2**.: _We can solve the CSofl problem in polynomial time._ Proof.: It follows from Lemma 1, Lemma 2 and Theorem 1. The running time of the algorithm is \(n^{2}\cdot(O(n^{2}k^{2})+O(nk\sqrt{k\log{(nk)}}))=O(n^{4}k^{2})\). The algorithm returns the minimum \(r_{\textsc{can}}=r_{opt}\) for which the computed \((k+1)\)-link path between \(s\) and \(t\) has the minimum total weight \(w(\Pi_{(k+1)}^{*}(s,t))\). The disks with radius \(r_{opt}\) can be centered at the \(k\) internal vertices of the \((k+1)\)-link path (excluding the terminal vertices \(s\) and \(t\)). The total weight is \(\sum_{i\in Sol}w(d_{i})=-w(\Pi_{(k+1)}^{*}(s,t))/2\), where \(d_{i}\) is a disk of radius \(r_{opt}\) centered at \(i\), and \(Sol=V(\Pi_{(k+1)}^{*}(s,t))\setminus\{s,t\}\) is the set of vertices (the corresponding points on \(\ell\)) of the \((k+1)\)-link path except \(s\) and \(t\). **Improvement:** Recall that the points corresponding to the vertices in \(V^{\prime}\) are labeled \(1,2,\ldots,m\), where \(m=O(nk)\). Here, we show that we can improve the runtime of Theorem 2 (by an almost linear factor) by not explicitly constructing the complete graph \(G^{\prime}\). As we have seen in the proof of Lemma 2, this construction requires \(O(n^{2}k^{2})\) time for each of the \(O(n^{2})\) candidate radii. However, for every candidate radius \(r_{\textsc{can}}\), we need to precompute two arrays \(w[]\) and \(p[]\), each of size \(O(nk)\). Here, \(w[i]\) stores the sum of the weights of the demand points covered by the disk of radius \(r_{\textsc{can}}\) centered at the point labeled \(i\in V^{\prime}\) on \(\ell\), for each \(i\in[m]\). The element \(p[i]\) stores the index \(i^{\prime}\in V^{\prime}\) of the rightmost point at a distance of at least \(2\lambda\) from \(i\) such that \(i^{\prime}<i\). We now give a dynamic programming algorithm to compute the maximum weight of a \((k+1)\)-link path from point \(1\) to point \(m\) (here, the points labeled \(1\) and \(m\) are the vertices \(s\) and \(t\), respectively, in \(G^{\prime}\)). For a pair of points \(i,j\) (\(i<j\)) on \(\ell\), we redefine the weight of the edge \((i,j)\) to be equal to \(w(j)\), as we then need not negate the sums of weights nor construct the whole directed graph. We define the subproblem \(\phi(i,j)\) as the problem of finding the maximum weight \(j\)-link path in the subgraph \(G^{\prime}_{i}\) induced by the vertices \(1,2,\ldots,i\). That is, \(\phi(i,j)=\max\limits_{i^{\prime}}\{w(\Pi_{j}(1,i^{\prime}))\}\), where \((j+1)\leq i^{\prime}\leq i\). Then we have the following recurrence: \[\phi(i,j)=\max\{\phi(i-1,j),\phi(p[i],j-1)+w[i]\} \tag{1}\] As \(i=1,2,\ldots,O(nk)\) and \(j=1,2,\ldots,k+1\), there are \(nk\cdot k=O(nk^{2})\) entries in the DP table \(\phi(i,j)\), each requiring \(O(1)\) time to compute.
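A minimal bottom-up sketch of recurrence (1) follows. It counts disks directly rather than path links, uses 1-indexed arrays as in the text, and adopts the convention \(p[i]=0\) (with \(\phi(0,\cdot)=0\)) when no valid predecessor exists; this convention is justified because the \(2k\lambda\) padding around \(s\) and \(t\) always leaves room to place the remaining disks where they cover nothing:

```python
def max_weight_k_disks(w, p, m, k):
    """Bottom-up evaluation of recurrence (1).

    w[i] -- total weight covered by the lambda-disk centered at point i
    p[i] -- rightmost index i' < i with dist(i, i') >= 2*lambda (0 if none)
    Arrays are 1-indexed; index 0 is a sentinel.  phi[i][j] is the best
    weight of j pairwise-compatible disks centered at points <= i.
    """
    phi = [[0] * (k + 1) for _ in range(m + 1)]
    for j in range(1, k + 1):
        for i in range(1, m + 1):
            skip = phi[i - 1][j]             # do not center a disk at i
            take = phi[p[i]][j - 1] + w[i]   # rightmost disk centered at i
            phi[i][j] = max(skip, take)
    return phi[m][k]                         # optimal weight of k disks
```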
Hence, for a given \(r_{\textsc{can}}\), the bottom-up implementation of the above dynamic programming algorithm takes \(O(nk^{2})\) time, provided that the entries \(w[i]\) and \(p[i]\) are precomputed for each \(i\in[m]\). **Theorem 3**.: _The above-improved algorithm for the CSofl problem has the time complexity of \(O(n^{3}k\cdot\max{(n,k)})\)._ Proof.: The proof is as follows: * There are \(O(n^{2})\) candidate radii in \(\mathcal{L}_{\textsc{can}}\). * For each candidate radius \(\lambda\in\mathcal{L}_{\textsc{can}}\), we find \(O(nk)\) points on \(\ell\) as discussed above. * To compute the weight \(w[i]\) of each point \(i\in V^{\prime}\) on \(\ell\), we answer a circular range reporting query [6], which takes \(O(\log{n}+\kappa)\) time for a query circle centered at each of the \(O(nk)\) points on \(\ell\), where \(\kappa\) is the number of points reported. This requires \(O(n\log{n})\) preprocessing time and \(O(n)\) space. * For each \(\lambda\in\mathcal{L}_{\textsc{can}}\), the above dynamic programming algorithm takes \(O(nk^{2})\) time. However, the bottleneck in computing the optimal value \(\phi(m,k+1)\) is in computing the arrays \(w[]\) for each of the \(O(n^{2})\) candidate radii. The total time for computing these arrays is \(n\log{n}+\sum\limits_{j=1}^{|\mathcal{L}_{\textsc{can}}|}(\sum\limits_{i=1}^{m}(\log{n}+\kappa_{i}))\), where \(\kappa_{i}\) is the number of points reported by the query algorithm for the query disk centered at \(i\in[m]\), and \(m=O(nk)\). We see that \(\sum\limits_{j=1}^{|\mathcal{L}_{\textsc{can}}|}(\sum\limits_{i=1}^{m}(\log{n}+\kappa_{i}))=O(n^{3}k\log{n})+\sum\limits_{j=1}^{|\mathcal{L}_{\textsc{can}}|}nk\chi\), where \(\chi=\max\{\chi_{1},\chi_{2},\ldots,\chi_{O(n^{2})}\}\) and \(\chi_{j}\) is the ply (the maximum number of the query disks containing a common demand point) of the point set \(\mathcal{B}\cup\mathcal{R}\) with respect to the set of all disks \(i\in[m]\) for the candidate radius \(\lambda_{j}\in\mathcal{L}_{\textsc{can}}\). Observe that the ply of \(\mathcal{B}\cup\mathcal{R}\) for a given set of \(O(nk)\) disks is only \(O(n)\), since a point of \(\mathcal{B}\cup\mathcal{R}\) lying in a disk centered at an endpoint \(i\) of an influence interval cannot be contained inside the \(2(k-1)\) disks centered at distances \(l_{i}+2\lambda,l_{i}+4\lambda,\ldots,l_{i}+2(k-1)\lambda,l_{i}-2\lambda,l_{i}-4\lambda,\ldots,l_{i}-2(k-1)\lambda\). Therefore, \(\sum\limits_{j=1}^{|\mathcal{L}_{\textsc{can}}|}(\sum\limits_{i=1}^{m}\kappa_{i})\leq\sum\limits_{j=1}^{|\mathcal{L}_{\textsc{can}}|}(nk\chi)=O(n^{4}k)\) as \(\chi=O(n)\). * Hence, the total running time is \(n^{2}\cdot(O(nk^{2})+O(nk\log n)+O(n^{2}k))=O(n^{3}k\cdot\max{(n,k)})\). Now, we prove the correctness of the dynamic programming recurrence relation (1) by induction on the number of disks placed. **Correctness:** Fix a radius \(\lambda\in\mathcal{L}_{\textsc{can}}\). By induction on \(i+j\), we can prove that the recurrence relation (1) is correct, as follows. For the base case \(j=1\), \(i\geq 2\), we have \(\phi(i,1)=w(\Pi_{1}(1,i))=\max\limits_{2\leq q\leq i}{(w[q])}\), which is the maximum weight of a 1-link path originating at vertex 1 in the subgraph \(G^{\prime}_{i}\) induced by the vertices \(1,2,\ldots,i\); this is the optimal solution for the subproblem \(\phi(i,1)\). For the base case \(j\geq 2\), \(i=2\), we have \(\phi(i,j)=\phi(i,2)\), as the weight contributed by the remaining \(j-2\) disks centered on \(\ell\) is zero.
Let \(p[s]=\max\{q\mid dist(s,q)\geq 2\lambda,\ 2\leq q<s\}\) for \(2\leq s\leq i\). Assume that the recurrence relation holds for all subproblems \(\phi(i^{\prime},j^{\prime})\) with \((i^{\prime}+j^{\prime})<(i+j)\). Consider the subproblem \(\phi(i,j)\); for solving it, we consider two cases for the vertex \(i\): either \(\Pi_{j}(1,i)\) uses vertex \(i\) or it does not. Case 1: \(\Pi_{j}(1,i)\) does not use vertex \(i\). In this case, \(\Pi_{j}(1,i)\) is also an optimal \(j\)-link path in \(G^{\prime}_{i-1}\), by the induction hypothesis. Therefore, the optimal solution for the subproblem \(\phi(i,j)\) is the same as for the subproblem \(\phi(i-1,j)\). Case 2: \(\Pi_{j}(1,i)\) uses vertex \(i\). Let \(i^{\prime}=p[i]\) be the predecessor of \(i\) on \(\Pi_{j}(1,i)\). Then, \(\Pi_{j}(1,i)\) can be decomposed into two parts: an optimal \((j-1)\)-link path in \(G^{\prime}_{i^{\prime}}\) (by the induction hypothesis), denoted by \(\Pi_{j-1}(1,i^{\prime})\), and the edge \((i^{\prime},i)\) with weight \(w[i]\). Since \(\Pi_{j}(1,i)\) is an optimal path ending at \(i\) in the subgraph \(G^{\prime}_{i}\), the weight of \(\Pi_{j}(1,i)\) is equal to the sum of the weights of \(\Pi_{j-1}(1,i^{\prime})\) and \((i^{\prime},i)\), i.e., \(w(\Pi_{j}(1,i))=w(\Pi_{j-1}(1,i^{\prime}))+w[i]\). Therefore, the optimal solution for the subproblem \(\phi(i,j)\) is the maximum weight of all \(j\)-link paths in \(G^{\prime}_{i}\); it is achieved by either taking the optimal solution for the subproblem \(\phi(i-1,j)\), or taking the optimal solution for the subproblem \(\phi(p[i],j-1)\) and adding the weight of the edge \((i^{\prime},i)\), i.e., \(\phi(i,j)=\max{(\phi(i-1,j),\phi(p[i],j-1)+w[i])}\). The values \(\phi(i-1,j)\) and \(\phi(p[i],j-1)\) are themselves computed by the same recurrence on smaller subproblems; therefore, we can use the recurrence relation to obtain the optimal solution for \(\phi(i,j)\). By the principle of mathematical induction, the recurrence relation holds for all subproblems \(\phi(i,j)\) with \(1\leq j\leq k\). Given that we have precomputed all the values in the arrays \(w[]\) and \(p[]\), it takes constant time to compute the optimal solution to each subproblem by combining the optimal solutions to smaller subproblems. Further, we have \(O(nk^{2})\) distinct subproblems in total for the recurrence. Hence, the overall time complexity of the algorithm is \(O(nk^{2})\). ## 5 Special cases of CSofl In this section, we consider the following two special cases of the CSofl problem, each with a specific application. **Problem 4**.: **AllBlue-MinRed:** The problem aims to cover all blue points while covering the minimum number of red points. To solve this problem, we modify the weights of the demand points as follows: for every point \(p_{i}\in\mathcal{R}\), let \(w_{i}=\delta\), and for every point \(p_{i}\in\mathcal{B}\), let the weight be \(w_{i}>-|\mathcal{R}|\delta\), where \(\delta\in\mathbb{R}\) is an arbitrary negative real value (\(\delta<0\)). This problem has a specific application in defense: assume a scenario where there are two groups of points along a horizontal line, one represented by blue points (enemy forces) and the other by red points (civilians); the goal is to determine the center locations and blast radius required for a fixed number of explosives to target all enemy forces while sparing the civilians as much as possible.
Alternatively, suppose the scenario is such that the red points represent enemy forces, and the objective is to establish wireless communication among our own forces (represented by blue points). In that case, the goal is to place \(k\) base stations to cover all blue forces while minimizing the transmissions intercepted by the red (enemy) forces. **Problem 5**.: **MaxBlue-NoRed:** In this problem, we need to cover the maximum number of blue points while covering none of the red points. To solve this problem, we modify the weights of the demand points as follows: for every point \(p_{i}\in\mathcal{B}\), let \(w_{i}=\delta\), and for every point \(p_{i}\in\mathcal{R}\), let the weight be \(w_{i}<-|\mathcal{B}|\delta\), where \(\delta\in\mathbb{R}\) is an arbitrary positive real value (\(\delta>0\)). This problem has the following specific applications: place a set of \(k\) sensors on a horizontal line to cover as many blue points as possible while avoiding red ones. This scenario can arise, for example, in battlefield surveillance, where the red points represent friendly forces and the blue points represent enemy forces; the goal is to deploy sensors to monitor the enemy forces while avoiding the friendly forces. Similarly, the problem can arise in wildlife conservation, where the blue points represent areas of high animal activity, and the red points represent protected or private residential areas; the goal is to deploy sensors to monitor animal activity while avoiding private or protected areas. **Claim 1**.: _The algorithm of Theorem 2 will eventually find an optimal solution (i.e., select at most \(k\) facility locations on \(\ell\) that cover all the blue points) for the AllBlue-MinRed problem._ Proof.: Consider an instance of CSofl with \(w_{i}=\delta\) for every \(p_{i}\in\mathcal{R}\) and weight \(w_{i}>-|\mathcal{R}|\delta\) for every \(p_{i}\in\mathcal{B}\), where \(\delta\in\mathbb{R}\) is an arbitrary negative real value. **Feasibility:** The weight assignment \(w_{i}>-|\mathcal{R}|\delta\) guarantees that all blue points will be covered. In the worst-case scenario, a single disk can cover all the points, both blue and red, and its total weight is positive due to the weight assignment; the remaining \(k-1\) disks can be centered on \(\ell\) so as to cover none of the points. This ensures that a feasible solution exists for the AllBlue-MinRed problem. **Optimality:** Suppose there is a feasible solution for the CSofl problem that places at most \(k\) facility centers on \(\ell\). Let these facilities cover all points in \(\mathcal{B}\) and some points in \(\mathcal{R}\), with total weight equal to \(\rho\). Now observe that it is impossible to improve the weight \(\rho\) to \(\rho^{\prime}\) (\(>\rho\)) by relocating one of the center locations so that it uncovers \(m^{\prime}\) red points and one blue point (whose weight is, say, \(-|\mathcal{R}|\delta+\epsilon\) for some \(\epsilon>0\)). If we did so, the updated weight would be \(\rho^{\prime}=\rho-m^{\prime}\delta+|\mathcal{R}|\delta-\epsilon\). But \(\rho^{\prime}\) is no better than the earlier weight \(\rho\), since \(m^{\prime}\leq|\mathcal{R}|\), \(\delta<0\), and hence \((-m^{\prime}\delta+|\mathcal{R}|\delta-\epsilon)<0\). Further, the optimal solution with total weight \(\rho\) for the CSofl problem (computed by the algorithm of Theorem 2) is also optimal for this particular variant, since \(\rho\) cannot be improved by uncovering only red points.
Hence, the above proposed algorithm for the CSofl problem will also correctly solve the AllBlue-MinRed problem. **Claim 2**.: _The algorithm of Theorem 2 solves the MaxBlue-NoRed problem optimally._ Proof.: Consider an instance of the CSofl problem in which every \(p_{i}\in\mathcal{B}\) is associated with the weight \(w_{i}=\delta\), and every \(p_{i}\in\mathcal{R}\) is associated with the weight \(w_{i}<-|\mathcal{B}|\delta\), where \(\delta\in\mathbb{R}\) and \(\delta>0\). **Feasibility:** Consider the following trivial feasible solution for the CSofl problem. Let us place \(k\) facility center locations on \(\ell\) so that they cover no blue points; further, we reduce their radius so that no red points lie in the interior. Note that the total weight of the demand points covered by these facilities is zero. Hence, these \(k\) center locations form a feasible solution for the MaxBlue-NoRed problem, since none of the red points is covered and the total weight is zero. **Optimality:** Suppose we have a feasible solution with total weight \(\rho\) for the CSofl problem. We try to increase this weight by relocating one of the centers so that it covers \(n^{\prime}\) additional blue points and (at least) one red point. The updated weight would be \(\rho^{\prime}=\rho+n^{\prime}\delta-|\mathcal{B}|\delta\), which is smaller than \(\rho\) since \(n^{\prime}\leq|\mathcal{B}|\) and \(\delta>0\). Hence, we cannot improve the total weight by perturbing some centers to cover one more red point in the hope that it may allow us to cover some more (or even all) blue points. When we have an optimal solution with total weight \(\rho\) for an instance of the CSofl problem, the weight contributed by the covered blue points cannot be increased, due to its optimality. On the other hand, this solution also cannot cover any red point, because we could then get a better weight \(\rho^{\prime}=\rho-n^{\prime}\delta+|\mathcal{B}|\delta\) by reducing the radius to uncover these red points (while doing so, we possibly uncover some blue points, say \(n^{\prime}\)); this would contradict the optimality of \(\rho\). Hence, the above proposed algorithm (of Theorem 2) for the CSofl problem will also correctly solve the MaxBlue-NoRed problem. **Corollary 1**.: _The AllBlue-MinRed and MaxBlue-NoRed problems can be solved in \(O(n^{3}k\cdot\max{(\log{n},k)})\) time._ Proof.: Since there are only two types of weights (namely, \(\delta\) and \(|\mathcal{B}|\delta\) or \(-|\mathcal{R}|\delta\)), instead of answering circular range reporting queries, we answer range counting queries [6], viz. a blue count and a red count for each of the \(O(nk)\) query circles of every candidate radius. Hence, the total running time is \(n^{2}\cdot(O(nk^{2})+O(nk\log{n}))=O(n^{3}k\cdot\max{(\log{n},k)})\). The corollary then follows from Theorem 2 and Claims 1 and 2. ### The MaxBlue-NoRed problem for \(k=1\) In this section, we address the problem of determining the minimum enclosing disk, with center on \(\ell\), that encloses the maximum number of blue points without enclosing any red points. Recall that in the MaxBlue-NoRed problem we are given two sets of points, blue points \(\mathcal{B}\) and red points \(\mathcal{R}\), lying above a horizontal line \(\ell\), with \(|\mathcal{B}|+|\mathcal{R}|=n\); the goal is to compute a minimum enclosing disk that maximizes the count of blue points enclosed in the disk while ensuring that no red point is enclosed.
**Observation 4**.: _If the perpendicular bisector of any two points \(p_{i}\) and \(p_{j}\) intersects \(\ell\) at \(c_{i}\), then there exists a disk centered at \(c_{i}\) which has \(p_{i}\) and \(p_{j}\) on its boundary._ The method for solving the problem is as follows: * For each pair of points in the set \(\mathcal{B}\cup\mathcal{R}\), compute the perpendicular bisector of the line segment connecting them. Store the intersection points of these perpendicular bisectors with the line \(\ell\) in a set \(I\). Also add to the set \(I\) all intersection points of \(\ell\) with a vertical line through each blue point, since the optimal disk may have only a single blue point on its boundary. * For each \(p_{i}\in I\), construct a disk centered at \(p_{i}\) that passes through the pair of points for which the perpendicular bisector was computed in the previous step. * For each disk centered at \(p_{i}\in I\), determine whether it contains any point from the set \(\mathcal{R}\). If so, remove \(p_{i}\) from the set \(I\); otherwise, compute the number of blue points contained in the disk. * If \(|I|=0\), then there exists no feasible solution. Otherwise, among the disks centered at points in \(I\), select the one that contains the maximum number of blue points. **Theorem 6**.: _The MaxBlue-NoRed problem for \(k=1\) can be solved in \(O(n^{3})\) time._ Proof.: Computing \(I\) requires \(O(n^{2})\) time. If all points in the set \(\mathcal{B}\cup\mathcal{R}\) have distinct \(x\)-coordinates, then \(|I|=O(n^{2})\). Next, for each point \(p_{i}\in I\), the time required to check the interiority of the points is \(O(n)\). Therefore, the total time complexity of the algorithm is \(O(n^{2})+O(n^{3})=O(n^{3})\). **Improved algorithm:** Here, we improve the running time of the algorithm of Theorem 6 by almost a linear factor. Let us recall the notation: for a point \(p\), we denote its coordinates by \((x_{p},y_{p})\); for a pair of points \(p,q\), we let \(C_{p,q}\) denote the circle whose center lies on the line \(y=0\) and whose boundary passes through \(p\) and \(q\), and we let \((x_{p,q},0)\) denote the center of \(C_{p,q}\). Let \(C_{p}\) be the circle centered at \((x_{p},0)\) on \(\ell\) with radius \(y_{p}\), so that its boundary passes through the point \(p\). **Claim 3**.: _Given three points \(p,q,r\), the point \(r\) lies on or inside \(C_{p,q}\) if and only if one of the following is true:_ _(i) \(x_{p}<x_{r}\) and \(x_{p,q}\geq x_{p,r}\);_ _(OR)_ _(ii) \(x_{p}>x_{r}\) and \(x_{p,q}\leq x_{p,r}\)._ Figure 8: The optimal solution with only one blue point for \(k=1\). **Proposition 7**.: _There is an algorithm that accepts two sets \(\mathcal{B},\mathcal{R}\) of \(n\) (\(=|\mathcal{B}|+|\mathcal{R}|\)) points in the plane and finds, for every pair \(p,q\) of points in \(\mathcal{B}\), the number of points of \(\mathcal{R}\) that lie on or inside the circle \(C_{p,q}\). Further, this algorithm runs in time \(O(n^{2}\log n)\)._ Proof.: We first describe the algorithm. 1. Sort the points of \(\mathcal{B}\cup\mathcal{R}\) by their \(x\)-coordinates, from left to right. 2. For each \(p\in\mathcal{B}\), compute three lists: \(C_{p,\mathcal{B}}=\{x_{p,q}\mid q\in\mathcal{B}\setminus\{p\}\}\), \(C_{p,\mathcal{R},1}=\{x_{p,r}\mid r\in\mathcal{R}\text{ and }x_{p}<x_{r}\}\), \(C_{p,\mathcal{R},2}=\{x_{p,r}\mid r\in\mathcal{R}\text{ and }x_{p}>x_{r}\}\). 3.
For each \(p\), sort the lists \(L_{p,1}=C_{p,\mathcal{B}}\cup C_{p,\mathcal{R},1}\) and \(L_{p,2}=C_{p,\mathcal{B}}\cup C_{p,\mathcal{R},2}\). 4. For each \(p\), do the following: by making a single pass over \(L_{p,1}\), compute for every \(q\in\mathcal{B}\setminus\{p\}\) the value \(N_{p,q,1}\), defined as the number of elements of \(C_{p,\mathcal{R},1}\) that appear before \(x_{p,q}\) in \(L_{p,1}\). 5. For each \(p\), compute for every \(q\in\mathcal{B}\setminus\{p\}\) the value \(N_{p,q,2}\), defined as the number of elements of \(C_{p,\mathcal{R},2}\) that appear after \(x_{p,q}\) in \(L_{p,2}\). 6. For each pair \(p,q\), the desired count (i.e., the number of red points covered by the disk \(C_{p,q}\)) is \(N_{p,q,1}+N_{p,q,2}\). 7. To handle the scenario where only a single blue point resides on the circle, we construct a list in the following manner: * For each \(p\in\mathcal{B}\), we select an arbitrary point \(p_{temp}\) that lies on the circle \(C_{p}\). * Let \(C_{\mathcal{B}}=\{(p,p_{temp})\mid p\in\mathcal{B}\text{ and }p_{temp}\text{ is an arbitrary point lying on }C_{p}\}\) be a list. * Now, assign \(C_{p,\mathcal{B}}=\{x_{p,q}\mid(p,q)\in C_{\mathcal{B}}\}\) in step 2, compute the lists \(C_{p,\mathcal{R},1}\) and \(C_{p,\mathcal{R},2}\), and then repeat the remaining steps up to step 6. 8. Lastly, we determine the circle that encloses the maximum number of blue points and none of the red points by answering circular range counting queries for every circle \(C_{p,q}\) with red count \(N_{p,q,1}+N_{p,q,2}=0\). **Analysis:** The correctness follows from Claim 3. The running time is dominated by steps 3, 4, 5, and 8. Step 3 takes \(O(n\log n)\) time for a single point \(p\), and hence \(O(n^{2}\log n)\) total time; steps 4 and 5 take \(O(n^{2})\) time each. The time complexity of step 8 is \(O(n^{2}\log n)\): we repeat the counting with the blue points in order to determine, among all circles \(C_{p,q}\) whose red count \(N_{p,q,1}+N_{p,q,2}\) is zero, one that encloses the maximum number of blue points. **Theorem 8**.: _The MaxBlue-NoRed problem for \(k=1\) can be solved in \(O(n^{2}\log n)\) time._ ### The AllBlue-MinRed problem for \(k=1\) In [5], the problem of finding the largest and smallest disk that covers all blue points and as few red points as possible is studied in its unrestricted version, and [7] gives a linear (expected) time algorithm based on linear programming. On the other hand, in [4], the problem is studied when the center of the disk is restricted to a line segment; that algorithm has the same time complexity as our farthest-point Voronoi diagram based algorithm. Many bichromatic variants of this problem are studied in the Ph.D. thesis [3]. Next, we describe our algorithm, which is based on the farthest-point Voronoi diagram. We first construct the farthest-point Voronoi diagram \(\mathcal{FVD}\) of the set of blue points. Then, we find all the intersection points of the Voronoi edges with \(\ell\). Since every Voronoi edge corresponds to a farthest pair of points, we can determine a disk centered at an intersection point of a Voronoi edge and \(\ell\) such that the disk's boundary passes through the respective two points. We find all such disks for each intersection point; these are the candidate locations for a facility in any feasible solution to the AllBlue-MinRed problem for \(k=1\). Among these disks, we pick one that covers the minimum number of red points.
To this end, we again employ the range searching algorithms of [6]. **Theorem 9**.: _We can solve the AllBlue-MinRed problem for \(k=1\) in \(O(n\log n)\) time._ Proof.: The construction of the \(\mathcal{FVD}\) of the blue points takes \(O(n\log n)\) time. At most \(n\) Voronoi edges can intersect the line \(\ell\). Among the disks centered at all the candidate locations identified above, we need to find one that covers the minimum number of red points. To do this, we first preprocess the red points in \(O(n\log n)\) time [6]. Each query corresponding to a disk centered at a candidate location then takes \(O(\log n)\) time [6]. Hence, the total running time is \(O(n\log n)\). The algorithm's correctness follows because the Voronoi edge on which we center the disk corresponds to the farthest pair of blue points, and the disk whose boundary passes through this pair covers all the blue points. ## 6 Sofl on \(t\)-lines Let us consider a given set \(\mathcal{B}\) of blue points and a set \(\mathcal{R}\) of red points, positioned around \(t\) parallel lines \(\ell_{1},\ell_{2},\ldots,\ell_{t}\) in the plane; these lines may have arbitrary vertical displacements. Each point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) is assigned a weight \(w_{i}\), where \(w_{i}>0\) if \(p_{i}\in\mathcal{B}\) and \(w_{i}<0\) if \(p_{i}\in\mathcal{R}\). The cardinality of the set \(\mathcal{B}\cup\mathcal{R}\) is \(n\), and the interior of any geometric object \(d\) is denoted by \(d^{0}\) (excluding its boundary \(\partial d\)). The objective is to pack \(k\) non-overlapping congruent disks \(d_{1},d_{2},\ldots,d_{k}\) with the smallest possible radius, centered on the given parallel lines, so as to maximize the sum of the weights of the points covered by the disks. This sum is \(\sum\limits_{j=1}^{k}\sum\limits_{\{i\,:\,p_{i}\in\mathcal{R},\,p_{i}\in d^{0}_{j}\}}w_{i}+\sum\limits_{j=1}^{k}\sum\limits_{\{i\,:\,p_{i}\in\mathcal{B},\,p_{i}\in d_{j}\}}w_{i}\). We may consider this as a generalization of the Sofl problem in which \(t\) horizontal lines are present and facilities can be centered on any of these lines. Following a similar approach as in Section 3, we obtain all the candidate radii independently for each of the \(t\) lines, and we denote the resulting set by \(\mathcal{L}_{\textsc{tcan}}\). Note that the cardinality of \(\mathcal{L}_{\textsc{tcan}}\) is \(O(tn^{2})\). Hence, we have the following lemma. **Lemma 3**.: \(|\mathcal{L}_{\textsc{tcan}}|=O(tn^{2})\) Figure 9: The pink disk \(d_{1}\) covers all blue points with fewer red points. Next, we fix a radius \(r_{\textsc{can}}\in\mathcal{L}_{\textsc{tcan}}\). We can transform the problem into finding the minimum weight \(k\)-link path in a directed acyclic graph (DAG) \(G(V^{\prime},E^{\prime})\), as discussed in Section 4. However, the cardinality of the set \(V^{\prime}\) is \(O(nkt^{2})\), since each point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) can create an influence interval on each of the \(t\) lines, resulting in \(O(nt)\) endpoints of the influence intervals, with \(O(kt)\) additional points added for each endpoint (see Figure 10). Figure 10 depicts the candidate locations on \(\ell_{i+1}\) and \(\ell_{i-1}\) located at a distance of \(2\lambda\) to the right of \(p_{\ell_{i}}\).
Similarly, the mirror case can be considered for the point situated at a distance of \(2\lambda\) to the left of \(p_{\ell_{i}}\) on \(\ell_{i+1}\) and \(\ell_{i-1}\). Without loss of generality, we assume that all the points in \(V^{\prime}\) have distinct \(x\)-coordinates. We can construct \(G^{\prime}\) in \(O(n^{2}k^{2}t^{4})\) time by employing the sweeping technique, specifically sweeping from left to right. Therefore, the following lemma holds. **Lemma 4**.: _The DAG \(G^{\prime}\) on \(t\)-lines can be constructed in \(O(n^{2}k^{2}t^{4})\) time._ Proof.: Follows from Lemma 2, as the cardinalities are \(|V^{\prime}|=O(nkt^{2})\) and \(|E^{\prime}|=O(n^{2}k^{2}t^{4})\). **Theorem 10**.: _The Sofl problem on \(t\)-lines can be solved exactly in \(O(n^{4}k^{2}t^{5})\) time._ Proof.: Follows from Lemma 3 and Lemma 4, since there are \(O(n^{2}t)\) candidate radii and the total time is \(O(n^{2}t)\times O(n^{2}k^{2}t^{4})=O(n^{4}k^{2}t^{5})\). ## 7 Discrete Sofl with all facility sites in convex position Suppose we are given a set \(\mathcal{B}\) of blue points, a set \(\mathcal{R}\) of red points, and a set \(\mathcal{F}\) of \(s\) candidate locations in convex position; all three sets lie in the plane. Let the weight of a given point \(p_{i}\in\mathcal{B}\cup\mathcal{R}\) be \(w_{i}>0\) if \(p_{i}\in\mathcal{B}\) and \(w_{i}<0\) if \(p_{i}\in\mathcal{R}\), let \(|\mathcal{B}\cup\mathcal{R}|=n\), and let \(d^{0}(=d\setminus\partial d)\) be the interior of any geometric object \(d\). We wish to pack \(k\) non-overlapping congruent disks \(d_{1}\), \(d_{2}\),..., \(d_{k}\) of minimum radius, centered at points in \(\mathcal{F}\), such that \(\sum\limits_{j=1}^{k}\sum\limits_{\{i\,:\,p_{i}\in\mathcal{R},\,p_{i}\in d_{j}^{0}\}}w_{i}+\sum\limits_{j=1}^{k}\sum\limits_{\{i\,:\,p_{i}\in\mathcal{B},\,p_{i}\in d_{j}\}}w_{i}\) is maximized, i.e., the sum of the weights of the points covered by \(\bigcup\limits_{j=1}^{k}d_{j}\) is maximized. Figure 10: Candidate locations corresponding to the endpoint of the infeasible region \(p_{\ell_{i}}\) and candidate radius \(\lambda\). The above problem is a discrete variation of the Sofl problem (Dsofl), because a finite number of candidate facility sites (in convex position) are given in advance. Even though it is the discrete version of the Sofl problem, similarly to the continuous line case, there exists only a constant number of critical configuration types for the points in \(\mathcal{R}\cup\mathcal{B}\) and the candidate facilities in \(\mathcal{F}\). It follows from the latter that we also have a finite number of candidate radii here. Let \(\mathcal{L}_{\textsc{Dcan}}\) denote the set of all candidate radii. **Lemma 5**.: \(|\mathcal{L}_{\textsc{Dcan}}|=O(ns)\)_._ Proof.: Since there is only a constant number of critical configuration types concerning the points \(\mathcal{B}\cup\mathcal{R}\) and the candidate facilities \(\mathcal{F}\), we can consider the situation where the candidate radius is determined by a point in \(\mathcal{B}\cup\mathcal{R}\) lying on the boundary of a disk centered at a candidate facility location in \(\mathcal{F}\). The cardinality of the set of radii arising from this situation is \(O(ns)\) (see Figure 11). The radius of the disks in the optimal packing cannot be determined solely by the distance between the candidate sites (see Figure 12).
In Figure 12, we can observe that the closest pair of disks \(d_{i-1}\) and \(d_{i}\) will never touch in any optimal packing (i.e., the distance between them does not determine the radii of the disks in the optimal packing). If they touched in an optimal packing, then we could reduce the radii of the disks until one of the blue points lies on the boundary of one of the disks (see Figure 12, where \(p_{i}\) lies on the boundary of the disk \(d_{i-1}\)). Figure 11: The blue point \(p_{i}\) lying on the boundary of \(d_{3}\) determines the radius. Figure 12: Illustration that the candidate facilities alone do not determine the radius of the disks in the optimal packing. ### Dynamic programming algorithm In this section, we first show a relationship between the Voronoi diagram of the points in an optimal solution and the cost of an optimal solution to the Dsofl problem. We then present a dynamic programming based solution for the discrete Sofl problem utilizing this property of the Voronoi diagram (\(\mathcal{VD}\)) of the \(k\) sites in an optimal solution. This process is repeated for each \(\lambda\in\mathcal{L}_{\textsc{Dcan}}\). The Voronoi diagram of points in convex position forms a tree-like structure, except for its infinite edges; if we overlay the diagram with a sufficiently big bounding rectangle, we have the following observation. **Observation 5**.: _The Voronoi diagram of points in convex position is a tree._ Since the points in \(\mathcal{F}\) are in convex position, Observation 5 implies that the \(\mathcal{VD}\) of any subset of points in \(\mathcal{F}\) is also a tree; hence, the \(\mathcal{VD}\) of the optimal \(k\) facility sites is a tree. This tree structure allows us to employ dynamic programming. To find this tree, or a subtree of it, from its rightmost node, we define a subproblem that explores all possible edges from the rightmost node and then furthers this exploration recursively. This leads to recursively constructing an optimal solution once we guess the rightmost node of the tree. Furthermore, there are no circular dependencies between subproblems. Without loss of generality, let us assume that the points in \(\mathcal{F}=\{p_{1},p_{2},\ldots,p_{s}\}\) are ordered clockwise. It is known that the Delaunay triangulation is the dual of the Voronoi diagram; denote by \(\mathcal{DT}\) the Delaunay triangulation formed by the points corresponding to the Voronoi centers in the Voronoi diagram \(\mathcal{VD}\). Observe that the smallest edge length of the \(\mathcal{DT}\) of the points in an optimal solution to Dsofl is at least twice the radius of the disks in the optimal solution. For a given \(\lambda\in\mathcal{L}_{\textsc{Dcan}}\), we precalculate the weight of the points covered by a disk of radius \(\lambda\) centered at each facility site \(f_{i}\in\mathcal{F}\) and denote this weight by \(w(f_{i})\). Then, our dynamic programming based algorithm is as follows. First, we guess the Delaunay triangle corresponding to the rightmost Voronoi node, i.e., the three rightmost facility centers in the optimal solution, say \(p_{i},p_{\ell},p_{j}\) (where \(p_{\ell}\) is the rightmost and \(p_{i}\) is below \(p_{j}\)). We make all possible \(\binom{s}{3}\) guesses to find these three optimal centers.
Then, we define a subproblem \(\Gamma(p_{i},p_{\ell},p_{j},\mathcal{F}^{\prime};\mathcal{K})\), which corresponds to the maximum (optimal) weight of the points covered by \(\mathcal{K}\) facilities of radius \(\lambda\) located at some points in \(\mathcal{F}\), where \(p_{i}\), \(p_{\ell}\) and \(p_{j}\) are the rightmost ordered points in the optimal solution. Initially, we set \(\mathcal{K}=k-3\) and \(\mathcal{F}^{\prime}=\mathcal{F}\setminus\{p_{i},\ldots,p_{\ell},\ldots,p_{j}\}\), where the indices \(i,j,\ell,\ell^{\prime}\) are to be read modulo \(s\). Now, consider reconstructing the \(\mathcal{VD}\) with the three points \(p_{i}\), \(p_{j}\), \(p_{\ell}\) fixed on the right. We do this by determining the corresponding \(\mathcal{DT}\) triangle with corner points \(p_{i}\), \(p_{j}\) and \(p_{\ell^{\prime}}\): we extend the \(\mathcal{VD}\) by choosing the next point \(p_{\ell^{\prime}}\) that lies to the left of \(\overline{p_{j}p_{i}}\) and is at distance at least \(2\lambda\) from each of \(p_{i}\), \(p_{j}\) and \(p_{\ell}\). Observe that \(p_{\ell^{\prime}}\) is outside the circumcircle of \(p_{i}\), \(p_{j}\) and \(p_{\ell}\). Then we have the following recurrence: \[\Gamma(p_{i},p_{\ell},p_{j},\mathcal{F}^{\prime};\mathcal{K})=\max_{\begin{subarray}{c}0\leq\mathcal{K}^{\prime}\leq\mathcal{K}-1,\\ p_{\ell^{\prime}}\in\mathcal{F}^{\prime}\text{ and}\\ \zeta(p_{\ell^{\prime}},\{p_{i},p_{j},p_{\ell}\})\geq 2\lambda\end{subarray}}\begin{cases}w(p_{\ell^{\prime}})+\Gamma(p_{\ell^{\prime}},p_{i},p_{j},\mathcal{F}^{\prime\prime};\mathcal{K}^{\prime})+\Gamma(p_{\ell^{\prime}},p_{j},p_{i},\mathcal{F}^{\prime\prime\prime};\mathcal{K}-1-\mathcal{K}^{\prime})&\text{ if }|\mathcal{F}^{\prime}|\geq 1\\ 0&\text{ otherwise}\end{cases}\] where \(\zeta(p_{\ell^{\prime}},\{p_{i},p_{j},p_{\ell}\})=\min\{dist(p_{\ell^{\prime}},p_{i}),dist(p_{\ell^{\prime}},p_{j}),dist(p_{\ell^{\prime}},p_{\ell})\}\), \(w(p_{\ell^{\prime}})\) denotes the total weight of the points covered by a disk of radius \(\lambda\) centered at \(p_{\ell^{\prime}}\), \(\mathcal{F}^{\prime\prime}=\mathcal{F}^{\prime}\setminus\{p_{j},\ldots,p_{\ell^{\prime}}\}\) and \(\mathcal{F}^{\prime\prime\prime}=\mathcal{F}^{\prime}\setminus\{p_{\ell^{\prime}},\ldots,p_{i}\}\). The base cases are \(\Gamma(p_{i},p_{\ell},p_{j},\mathcal{F}^{\prime};0)=w(p_{i})+w(p_{\ell})+w(p_{j})\) and \(\Gamma(p_{i},p_{\ell},p_{j},\emptyset;\mathcal{K})=0\). ### Proof of Correctness: The correctness of the dynamic programming algorithm can be established based on the following observations: * The minimum edge length of the \(\mathcal{DT}\) formed by the \(k\) sites in the optimal solution is at least \(2\lambda\). This ensures that the disks in the optimal solution do not overlap, as the distance between any two points in the solution is greater than or equal to \(2\lambda\) (a direct checker for this condition is sketched after this list). * There always exists a solution for a given set of points \(\mathcal{B}\cup\mathcal{R}\) and \(\mathcal{F}\). If it is impossible to place \(k\) disks of radius \(\lambda\), the algorithm returns a zero weight, indicating that no solution exists for the given \(\lambda\). * By assuming that \(p_{i}\), \(p_{\ell}\), and \(p_{j}\) are the rightmost points in the optimal solution, we have \(O(s^{3})\) choices for these points. This assumption ensures that a \(\mathcal{DT}\) with \(k\) vertices corresponding to the optimal solution always exists if a solution exists for a given \(\lambda\).
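Independently of the recurrence, any candidate placement can be checked for the non-overlap condition and scored against the objective. A minimal verifier sketch (helper and variable names are illustrative, not from the paper):

```python
import math

def placement_weight(centers, lam, blue, red):
    """Score a placement of disks of radius lam centered at `centers`.

    blue/red are lists of ((x, y), w) demand points, with w > 0 for
    blue and w < 0 for red.  Blue points count when covered by the
    closed disk, red points only when strictly inside, as in the
    objective.  Returns None if two disks overlap (distance < 2*lam).
    """
    for a in range(len(centers)):
        for b in range(a + 1, len(centers)):
            if math.dist(centers[a], centers[b]) < 2 * lam:
                return None                  # disks must not overlap
    total = 0.0
    for (pt, wgt) in blue:
        if any(math.dist(pt, c) <= lam for c in centers):
            total += wgt
    for (pt, wgt) in red:
        if any(math.dist(pt, c) < lam for c in centers):
            total += wgt
    return total
```

Touching disks (distance exactly \(2\lambda\)) are allowed, since only the interiors must be disjoint.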
Based on these observations, we can conclude that the dynamic programming algorithm correctly determines the optimal solution for the given sets of points \(\mathcal{B}\cup\mathcal{R}\) and \(\mathcal{F}\), considering the assumptions made and the properties of the \(\mathcal{DT}\) formed by the candidate sites. **Theorem 11**.: _Discrete Sofl with candidate facility sites in convex position can be solved in polynomial time._ Proof.: The running time of the algorithm is calculated as follows: * From Lemma 5, we have \(|\mathcal{L}_{\textsc{Dcan}}|=O(ns)\). * For each \(\lambda\in\mathcal{L}_{\textsc{Dcan}}\), we call the dynamic programming algorithm. * Dynamic programming algorithm for a given \(\lambda\): * For each \(f_{i}\in\mathcal{F}\), calculating the weight of the points covered by a disk of radius \(\lambda\) centered at \(f_{i}\) takes \(O(ns)\) time in total. * There are \(O(s^{3}k)\) subproblems, and each subproblem takes \(O(sk)\) time. * The total time complexity of the algorithm is \(O(n^{2}s^{2}+ns^{5}k^{2})\). Finally, we report as the optimal solution the vertices of the \(\mathcal{DT}\) that yields the maximum weight over all invocations of the dynamic programming algorithm with the three rightmost points \(p_{i}\), \(p_{j}\), \(p_{\ell}\). ## 8 Conclusion This paper studied the problem of locating \(k\) semi-obnoxious facilities constrained to a line (CSofl) when the given demand points have positive and negative weights. Specifically, we solved the problem of locating \(k\) semi-obnoxious facilities on a line so as to maximize the total weight of the covered demand points in \(O(n^{4}k^{2})\) time; subsequently, we improved the running time to \(O(n^{3}k\cdot\max{(n,k)})\). Furthermore, we addressed two special cases of the problem where the points do not have arbitrary weights, and showed that these two special cases can be solved in \(O(n^{3}k\cdot\max{(\log{n},k)})\) time. For the MaxBlue-NoRed case with \(k=1\), we also provided an algorithm that solves the problem in \(O(n^{3})\) time, and subsequently improved this result to \(O(n^{2}\log{n})\); for the AllBlue-MinRed case with \(k=1\), we gave an \(O(n\log{n})\) time algorithm that uses the farthest-point Voronoi diagram. We also studied Sofl on \(t\)-lines and showed that it can be solved in polynomial time, albeit with a high-degree polynomial dependence on \(t\). Further, we investigated the complexity of the discrete semi-obnoxious facility location problem (Dsofl) with the candidate locations in convex position, and we showed that this problem can also be solved in polynomial time. The following open problems are worth considering as future work: * The continuous unrestricted variant of the semi-obnoxious facility location problem: given two sets (red and blue) of demand points with negative and positive weights (respectively) in the plane and an integer \(k\), the objective is to maximize the sum of the weights of the points covered by the union of \(k\) congruent non-overlapping disks of minimum radius centered anywhere in the plane (i.e., the disks (facilities) may be centered anywhere in the plane). * The discrete unrestricted variant of the semi-obnoxious facility location problem: given two sets (red and blue) of points with negative and positive weights (respectively), a set of candidate facility locations in the plane, and an integer \(k\).
The objective is to maximize the sum of the weights of the points covered by the union of \(k\) congruent non-overlapping disks of minimum radius centered at some of the candidate facility locations. * To investigate the scenario where the disks are centered on the boundary of a convex polygon, instead of a horizontal line, or at the vertices of a convex polygon. * To investigate the scenario where the disks are restricted to be centered at the grid points of a \(t\times t\) grid in the plane. * Finding a better-than-\(O(n^{2}\log{n})\) time algorithm for the MaxBlue-NoRed problem for \(k=1\).
2303.09334
Depth-Aware Image Compositing Model for Parallax Camera Motion Blur
Camera motion introduces spatially varying blur due to the depth changes in the 3D world. This work investigates scene configurations where such blur is produced under parallax camera motion. We present a simple, yet accurate, Image Compositing Blur (ICB) model for depth-dependent spatially varying blur. The (forward) model produces realistic motion blur from a single image, depth map, and camera trajectory. Furthermore, we utilize the ICB model, combined with a coordinate-based MLP, to learn a sharp neural representation from the blurred input. Experimental results are reported for synthetic and real examples. The results verify that the ICB forward model is computationally efficient and produces realistic blur, despite the lack of occlusion information. Additionally, our method for restoring a sharp representation proves to be a competitive approach for the deblurring task.
German F. Torres, Joni-Kristian Kämäräinen
2023-03-16T14:15:32Z
http://arxiv.org/abs/2303.09334v2
# Depth-Aware Image Compositing Model for Parallax Camera Motion Blur ###### Abstract Camera motion introduces spatially varying blur due to the depth changes in the 3D world. This work investigates scene configurations where such blur is produced under parallax camera motion. We present a simple, yet accurate, Image Compositing Blur (ICB) model for depth-dependent spatially varying blur. The (forward) model produces realistic motion blur from a single image, depth map, and camera trajectory. Furthermore, we utilize the ICB model, combined with a coordinate-based MLP, to learn a sharp neural representation from the blurred input. Experimental results are reported for synthetic and real examples. The results verify that the ICB forward model is computationally efficient and produces realistic blur, despite the lack of occlusion information. Additionally, our method for restoring a sharp representation proves to be a competitive approach for the deblurring task. Keywords: blur formation, image compositing blur, neural representations, deblurring ## 1 Introduction Motion blur is a common problem in photography and in certain computer vision tasks such as feature matching [22] or object detection [16]. In essence, motion blur occurs when either the camera or the scene objects, or both, are in motion during exposure. Recovering the edges and textures of the latent sharp image, _i.e._ deblurring, remains an open problem, since there are infinitely many latent sharp sequences consistent with the generated blur. In conventional deblurring approaches, there is a model that describes the formation of the blur, coupled with an image prior that regularizes the solution space of the optimization problem. A major part of the research has been conducted on suitable image priors that characterize natural images [3, 14, 15, 41, 24, 20]. However, the applicability in real scenarios also depends on the accuracy of the assumed blur formation model. Pioneering works assume that the blur results from the shift-invariant convolution of the sharp image with an unknown Point Spread Function (PSF) [7, 18, 19, 39]. For this to be precise, apart from the scene being static, one of two scenarios must hold: 1) the camera shake only involves in-plane translation while the scene is either planar or sufficiently far from the camera, or 2) the focal length of the camera is large and there is no in-plane rotation [38]. Otherwise, camera shake generally induces non-uniform (spatially varying) blur. Several deblurring algorithms have been proposed to deal with spatially-varying blur produced by more realistic 6D camera motion [30, 8, 10, 38, 41]. Nevertheless, these works fail at modeling the induced blur in 3D scenes, especially at depth discontinuities. With the advances in deep learning, several network architectures have been proposed to handle multiple types of blur by learning from data [23, 17, 32, 42, 44, 4, 34, 5, 43, 35]. They benefit from not requiring an explicit description of the blur formation process. In such works, a neural network is trained over large-scale datasets to restore the sharp image. Deep deblurring represents the state of the art on multiple benchmarks, but its performance depends on the type of blur that is present in the training set. Due to the parallax effect, objects positioned at different depths from the camera produce spatially varying blur when the camera moves during capture.
Following this line, a number of works have incorporated the depth in their deblurring methods as an extra auxiliary input [25, 27, 21] or by a joint estimation process [40, 11, 47], but they do not provide a concrete blur model. In this work, we study the impact of the depth variation on the motion blur, focusing on _parallax camera motion_, _i.e._ when the camera moves parallel to the image plane. By analyzing the geometry of this type of camera motion, we identify two realistic scene types where depth plays a significant role in the produced blur: 1) _Macro_ Photography and 2) _Trucking_ Photography. For such configurations, we propose a tractable Image Compositing Blur (ICB) model for parallax motion, assuming that the depth and camera trajectory are available. This model accurately approximates the camera blur under parallax motion (Fig. 1). In addition, we provide evidence that our ICB model, in conjunction with coordinate-based Multi-Layer Perceptron (MLP) models, can be used to extract a sharp neural representation from a single blurry image. Figure 1: Our blur formation model accurately describes parallax motion blur. By providing the depth and camera trajectory, we can fit a set of motion kernels \(k_{l}\) and alpha-matte terms \(\mathcal{A}_{l}\), which are used to blend the blur from different layers. In summary, the main contributions are: **1)** an insight analysis of the scene configurations and capture settings for which depth becomes meaningful in the blur formation; **2)** a simplified, yet accurate enough, Image Compositing Blur (ICB) model for parallax depth-induced camera motion blur; **3)** an alternative approach to restore sharp images without the need for training over large datasets; and **4)** one synthetic and one real dataset of realistic scenes that include pairs of blurry and sharp images with depth maps and camera trajectories. ## 2 Related work Blur formation models. Blur formation models have been studied in the context of image deblurring. Arguably, the simplest model assumes uniform behavior over the whole image. In this case, the blurred image is presumed to be the result of shift-invariant convolution with a Point Spread Function (PSF) [7]. However, this model only holds for very limited practical scenarios. For the more general case of spatially-varying blur, some works are based on the projective motion path of the camera shake [30]. Gupta _et al._ [8] assume that the blur can be accurately modeled by in-plane camera translation and rotation. Whyte _et al._ [38] focus on the blur produced by 3D rotations. Furthermore, Hirsch _et al._ [10] model the blur as the linear combination of a patch-based blur kernel basis. Nevertheless, none of these models precisely determine the blur generation in 3D scenes, especially around abrupt changes in depth. Image deblurring. Conventional methods for image deblurring are optimization frameworks that tackle the blur produced by the camera motion. To handle the well-known ill-posed nature of the problem, previous works enforced different image priors in their solutions, such as Total-Variation [3], the normalized sparsity prior [15], \(L_{0}\)-norm regularization [41], the dark channel prior [24], or a discriminative prior [20]. With the advances in deep learning, several Convolutional Neural Network (CNN) architectures have been proposed. These architectures only take the blurred image as input and produce the estimated sharp image. Su et al. [29] used an encoder-decoder architecture for video deblurring. Nah et al.
[23] incorporated the multi-scale processing approach in their deep network. Following the multi-scale principle, numerous CNN-based methods have been introduced, including components such as Generative Adversarial Networks (GAN) [16, 17], Long Short-Term Memory (LSTM) [32], a scale-iterative upscaling scheme [42], half instance normalization [4], multi-scale inputs and outputs [5], blur-aware attention [34], and multi-stage progressive restoration [44]. More recently, progress on Transformer [37] and MLP models demonstrates the ability to handle global-local representations for image restoration tasks [43, 35]. The problem with conventional methods is that they do not use depth in deblurring and they are computationally expensive. On the other hand, deep deblurring performance strongly depends on the training data, which, in the case of the above works, does not contain spatial blur induced by depth variations. Depth-aware deblurring. The involvement of the depth cue in the motion blur, although not widely studied, is not new. Xu and Jia [40] proposed the first work on this track. They used a stereopsis setup to estimate the depth information and subsequently perform layer-wise deblurring. Optimization-based solutions have been introduced for the joint estimation of the scene depth and the sharp image, employing either expectation-maximization [11] or energy-minimization [25] methods. Sheng _et al._ [27] proposed an algorithm that iteratively refines the depth and estimates the latent sharp image from an initial depth map and a blurry image, using belief propagation and the Richardson-Lucy algorithm, respectively. Park and Lee [26] proposed an alternating energy-minimization algorithm for joint dense-depth reconstruction, camera pose estimation, super-resolution, and deblurring; however, their method requires an image sequence instead of a single image. On the deep learning side, Zhou _et al._ [47] proposed a stereo deblurring network that internally estimates bi-directional disparity maps to convey information about the spatially-varying blur that is caused by the depth variation. Moreover, Li _et al._ [21] introduced a depth-guided network architecture for single-image deblurring, which both refines an initial depth map and restores the sharp image. The above depth-aware deblurring methods properly acknowledge that depth changes produce spatially-varying blur, but it is not clear in which cases this holds: the depth is used as an additional cue for deblurring, but the spatial blur due to scene depth variation is not explicitly modeled. In contrast, we first identify practical scenarios where depth variations certainly yield non-uniform blur. We then characterize how depth and camera motion result in regions with different blur behavior. ## 3 Geometry of camera motion blur ### Fundamentals Projective motion path blur model. For static scenes, image blur comes from the motion of the camera during the exposure time. More precisely, the captured blurry image \(\mathbf{y}\) is the summation of the transient sharp images \(\{\mathbf{x}_{m}\}_{m=1}^{M}\) seen by the camera in the poses \(\{\vartheta_{m}\}_{m=1}^{M}\) that follow its trajectory.
Assuming there is a linear transformation \(\mathcal{T}_{\vartheta_{m}}\) that warps the latent sharp image \(\mathbf{x}\) to any transient image \(\mathbf{x}_{m}\), the blurred image \(\mathbf{y}\) can be expressed as: \[\mathbf{y}=\sum_{m=1}^{M}w_{m}\mathcal{T}_{\vartheta_{m}}(\mathbf{x})+\eta\enspace, \tag{1}\] where the weight \(w_{m}\) indicates the time the camera stays at pose \(\vartheta_{m}\) and \(\eta\) denotes the model error. The transformation \(\mathcal{T}_{\vartheta_{m}}\) is induced by a homography \(H_{m}\) such that a pixel \(\mathbf{p}\) from the latent image \(\mathbf{x}\) is mapped to the pixel \(\mathbf{p}_{m}^{\prime}\) in the transient image \(\mathbf{x}_{m}\). In homogeneous coordinates, \([\mathbf{p}_{m}^{\prime}]_{h}=H_{m}[\mathbf{p}]_{h}\), where \([\cdot]_{h}\) denotes the conversion from Cartesian to homogeneous coordinates. For a camera following a 6D motion trajectory, the homography \(H_{m}\) that relates pixels from the latent image \(\mathbf{x}\) to the transient image \(\mathbf{x}_{m}\), which are captured from a planar scene at depth \(D\), has the form: \[H_{m}=C(R_{m}+\frac{1}{D}T_{m}[0,0,1])C^{-1}\enspace, \tag{2}\] where \(R_{m}\) and \(T_{m}\) stand for the rotation and translation components, and \(C\) is the intrinsic camera matrix. Eq. (2) reveals that there is non-uniform blur caused by the depth-dependence of the translation component, as well as when rotations are introduced. Notwithstanding, the homography model only holds for fronto-parallel scenes, since the warping operator would require an estimation of the occluded areas that become visible, particularly at the depth discontinuities.

Pixel-Wise Blur (PWB) model. In general, image blur has a spatially-varying nature. To take this into account, the blurred image \(\mathbf{y}\) can be modeled via convolutions with pixel-wise kernels \(\mathbf{k}(\mathbf{p},\mathbf{u})\): \[\mathbf{y}(\mathbf{p})=\mathbf{x}(\mathbf{p})*\mathbf{k}(\mathbf{p},\mathbf{u })+\eta\enspace, \tag{3}\] where \(*\) denotes the convolution operator, \(\mathbf{p}=(i,j)\) are pixel coordinates and \(\mathbf{u}=(u,v)\) the kernel coordinates. One can blur an image by computing the Empirical Probability Density Function (EPDF) of the pixel displacements \(\Delta\mathbf{p}^{\prime}_{m}=\mathbf{p}^{\prime}_{m}-\mathbf{p}\). This model is used as a baseline in our experiments, and its limitations against the proposed blur formation model are demonstrated. In the remainder of this section, we take a closer look at the influence of depth on the blur generation for in-plane camera translations. Here, we provide insight into the scenarios in which, and the extent to which, depth should be considered in the deblurring problem.
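To make the PWB baseline concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's implementation), assuming the per-pose integer displacements have been precomputed from the depth map and camera trajectory (cf. Eq. (5)):

```python
# A minimal sketch of the PWB baseline in Eq. (3): convolving each pixel
# with the EPDF of its displacements is equivalent to averaging samples
# of the latent image shifted by the per-pose displacements Delta p'_m.
import numpy as np

def pwb_blur(x, displacements):
    """x: (H, W) sharp image; displacements: (M, H, W, 2) integer pixel
    displacements Delta p'_m for each of the M camera poses."""
    H, W = x.shape
    M = displacements.shape[0]
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.zeros_like(x, dtype=float)
    for m in range(M):
        # sample x at p - Delta p'_m (clamped at the image borders)
        si = np.clip(ii - displacements[m, ..., 0], 0, H - 1)
        sj = np.clip(jj - displacements[m, ..., 1], 0, W - 1)
        y += x[si, sj] / M  # uniform pose weights, w_m = 1/M
    return y
```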
### In-plane camera motion

Let us first consider a pin-hole camera with a uniform in-plane motion in the horizontal axis of length \(s\) during the exposure time, and two 3D points \(\mathbf{P}^{(1)}\) and \(\mathbf{P}^{(2)}\), such that the former represents the closest point to the camera in the depth direction and the latter is the farthest, as depicted in Fig. 2.

Figure 2: Blur induced by camera translation of length \(s\) for two 3D points \(\mathbf{P}^{(1)}\) and \(\mathbf{P}^{(2)}\) with their depth difference of \(\Delta D\).

On the one hand, \(\mathbf{P}^{(1)}\) and \(\mathbf{P}^{(2)}\) are respectively mapped to the points \(\mathbf{p}^{(1)}\) and \(\mathbf{p}^{(2)}\) in the latent image \(\mathbf{x}\). On the other hand, the camera at pose \(\vartheta_{m}\) sees them at \(\mathbf{p^{\prime}}^{(1)}\) and \(\mathbf{p^{\prime}}^{(2)}\). In this case, the induced homography \(H_{m}(\mathbf{p})\) is given by \[H_{m}(\mathbf{p})=\begin{bmatrix}1&0&T_{m}(\mathbf{p})\\ 0&1&0\\ 0&0&1\end{bmatrix}\ \, \tag{4}\] where \(T_{m}(\mathbf{p})\) is the image plane translation component that depends on the pixel depth \(D(\mathbf{p})\) as \[T_{m}(\mathbf{p})=\frac{sF}{D(\mathbf{p})}\ \, \tag{5}\] where \(F\) denotes the focal length of the camera. Due to the simplicity of the motion, the _blur extent_ of an arbitrary 3D point \(\mathbf{P}\) in the blurry image \(\mathbf{y}\) is given by \(T_{m}(\mathbf{p})=\mathbf{p}_{x}^{\prime}-\mathbf{p}_{x}\), where \(x\) denotes the horizontal component. Notably, this is equivalent to the disparity in stereo vision.

Blur variation. Since there is a difference in depth \(\Delta D=D(\mathbf{p}^{(2)})-D(\mathbf{p}^{(1)})\), there must be a difference in the blur extent for \(\mathbf{P}^{(1)}\) and \(\mathbf{P}^{(2)}\), as illustrated in Fig. 2. Thus, we define the _blur variation_ \(\Delta T\) as _the difference in blur extent between two points at different depths_. Explicitly, \(\Delta T=T_{m}(\mathbf{p}^{(1)})-T_{m}(\mathbf{p}^{(2)})\). \(\Delta T\) measures the non-uniform behavior of the blur caused by depth under in-plane camera movements. By replacing terms, we get \[\Delta T=\frac{sF}{D(\mathbf{p}^{(1)})\Big{[}\frac{D(\mathbf{p}^{(1)})}{ \Delta D}+1\Big{]}}\ . \tag{6}\] To gain intuition about the _blur variation_ in practical scenarios, we describe its behavior in Fig. 3, assuming \(F\)=2.8[mm] and a pixel size of 4[\(\mu\)m].

Figure 3: _Blur variation_ determined by Eq. 6, at different depths of the closest point \(D(\mathbf{p}^{(1)})\) (in meters): (a) _blur variation_ in pixels as a function of the depth difference (fixed camera displacement baseline of \(s=3[mm]\)); (b) the camera displacement as a function of the depth difference (fixed blur variation \(\Delta T=10\) pixels [px]). The camera focal length is \(F\)=2.8[mm] and the pixel size 4[\(\mu\)m], which correspond to settings that can be found in mobile phone cameras (ultrawide lenses).

Macro photography scenes. Fig. 3(a) illustrates the _blur variation_ \(\Delta T\) as a function of the depth difference \(\Delta D\), at different depths of the closest point \(D(\mathbf{p}^{(1)})\), while keeping a fixed camera displacement \(s\)=3[mm] (a reasonable choice for natural hand shake). It can be seen that whereas the _blur variation_ is negligible for far-field scenes regardless of the depth variation, non-uniform blur becomes significant for near-field macro scenes (the closest target \(\leq\) 0.1m from the camera) even with rather low depth variation (\(\geq\) 0.1m). Although the _blur variation_ increases as the depth difference gets higher, there is an upper bound determined by \(\Delta T<\frac{sF}{D(\mathbf{p}^{(1)})}\). In conclusion, spatially-variant blur is particularly affected by the proximity of the scene whenever there is any variation in depth. Consequently, depth plays a significant role for _Macro Photography_ scenes. In this setting, images also suffer from defocus blur due to the limited depth-of-field of optics, but defocus blur is a separate issue addressed in other works [2, 46, 1].

Trucking photography scenes. From another perspective, Fig.
3(b) shows the camera displacement \(s\) as a function of the depth difference \(\Delta D\), assuming a constant blur variation \(\Delta T=10\) pixels, for different depths of the closest point \(D(\mathbf{p}^{(1)})\). In other words, this plot tells us how much the camera should be moved to produce a _blur variation_ of 10 pixels. In this case, it is observed that a few millimeters are sufficient to produce such a _blur variation_ for near-field scenes, regardless of the depth difference. In contrast, for far-field scenes, such a level of blur variation can only be achieved through a camera displacement that ranges from tens of centimeters to a few meters, depending on the depth difference. Such intense movement is unlikely to happen in natural hand shake, but appears in cases where the camera is placed on a fast-moving object, for example, when capturing pictures from inside a moving car. We dub these _Trucking Photography_ scenes.

## 4 Image Compositing Blur (ICB) model

From Fig. 3(a), we see that there are depth ranges that yield nearly the same amount of blur. Hence, pixels in a particular depth range share a common 2D convolutional kernel that characterizes the blur. Inspired by the defocus blur formation models of Hassinoff _et al_. [9] and Ikoma _et al_. [12], we present a new parallax motion Image Compositing Blur (ICB) model that takes the depth into account: \[\mathbf{y}=\sum_{l=0}^{L-1}(\mathbf{x}*k_{l})\cdot\mathcal{A}_{l}+\eta\enspace, \tag{7}\] where \(\{\mathcal{A}_{l}\}_{l=0}^{L-1}\) and \(\{k_{l}\}_{l=0}^{L-1}\) are the sets of alpha-matting terms and blur kernels, respectively, and "\(\cdot\)" denotes pixel-wise multiplication. We define each alpha matte as: \[\mathcal{A}_{l}=\frac{\hat{\mathcal{R}}_{l}\cdot\mathcal{M}_{l}}{C}\enspace, \tag{8}\] where \(C\) is a normalization constant over the \(L\) depth layers (_i.e._\(C:=\sum_{l=0}^{L-1}\hat{\mathcal{R}}_{l}\cdot\mathcal{M}_{l}\)). \(\mathcal{M}_{l}\) are the z-buffers from far to near layers: \[\mathcal{M}_{l}=\prod_{l^{\prime}=l+1}^{L-1}(1-\hat{\mathcal{R}}_{l^{\prime}}) \enspace. \tag{9}\] \(\hat{\mathcal{R}}_{l}\) is the smooth, spatially-extended version of the depth region \(\mathcal{R}_{l}\), defined as \(\hat{\mathcal{R}}_{l}:=(\mathcal{R}_{l}\oplus\operatorname{supp}k_{l})*G_{ \sigma,\operatorname{supp}k_{l}}\), with \(\oplus\) denoting the dilation operator, and \(G_{\sigma,\operatorname{supp}k_{l}}\) a Gaussian smoothing window with standard deviation \(\sigma\) and a window size of \(\operatorname{supp}k_{l}\). \(\{\mathcal{R}_{l}\}_{l=0}^{L-1}\) comes from the discretization of the depth map; the dilation and smoothing of \(\hat{\mathcal{R}}_{l}\) are used to approximate the mixed blur around the depth discontinuities, and therefore allow us to omit explicit estimation of the occluded pixels. Specifically, \(\mathcal{R}_{l}\) is determined by the scene depth as \[\mathcal{R}_{l}=\begin{cases}\mathbf{p}\in\Omega|D(\mathbf{p})\geq D_{0}&,l= 0\\ \mathbf{p}\in\Omega|D_{l}<D(\mathbf{p})\leq D_{l-1}&,l=1,\dots,L-1\end{cases}\enspace, \tag{10}\] where \(\Omega\) refers to the pixel domain in the latent image \(\mathbf{x}\), and \(\{D_{l}\}_{l=0}^{L-1}\) is the sequence of depth values that define the regions with "uniform" blur. In particular, \(D_{0}\) represents the depth limit: for depth values from \(D_{0}\) to \(\infty\), pixels appear not to move at all.
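For concreteness, a minimal NumPy sketch of the compositing in Eqs. (7)-(9); the function names, the fixed smoothing \(\sigma\), and the small constant added for safe normalization are our own illustrative choices:

```python
# A minimal sketch of the ICB forward model of Eq. (7): per-layer blurred
# images are blended with alpha mattes built from dilated, smoothed depth
# regions (Eqs. (8)-(9)). Regions and kernels are assumed precomputed.
import numpy as np
from scipy.ndimage import convolve, grey_dilation, gaussian_filter

def icb_blur(x, regions, kernels, sigma=1.0):
    """x: (H, W) sharp image; regions: list of L binary masks R_l ordered
    far (l=0) to near (l=L-1); kernels: list of L 2D blur kernels k_l."""
    L = len(regions)
    # Smooth, spatially extended regions R_hat_l (dilation + Gaussian).
    R_hat = []
    for R_l, k_l in zip(regions, kernels):
        supp = k_l.shape[0]  # kernel support size
        R_ext = grey_dilation(R_l.astype(float), size=(supp, supp))
        R_hat.append(gaussian_filter(R_ext, sigma))
    # Z-buffers M_l = prod_{l' > l} (1 - R_hat_{l'}), Eq. (9).
    M = [np.prod([1.0 - R_hat[j] for j in range(l + 1, L)], axis=0)
         if l < L - 1 else np.ones_like(x, dtype=float) for l in range(L)]
    # Alpha mattes A_l of Eq. (8), normalized over layers.
    A = [R_hat[l] * M[l] for l in range(L)]
    C = np.sum(A, axis=0) + 1e-8  # epsilon for safe normalization
    # Composite the per-layer blurred images, Eq. (7).
    return sum(convolve(x.astype(float), kernels[l]) * (A[l] / C)
               for l in range(L))
```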
Next, we derive how to compute the depth sequence \(\{D_{l}\}_{l=0}^{L-1}\) and the respective kernels \(k_{l}\), for the known camera trajectory \(s\) and depth map \(D(\mathbf{p})\).

### Depth-dependent regions

The image regions for which the blur behaves in the same way are completely defined by the depth sequence \(\{D_{l}\}_{l=0}^{L-1}\). Without loss of generality, let us consider a one-dimensional camera movement whose maximum absolute displacement is denoted by \(s_{\max}\). As introduced above, we consider \(D_{0}\) the depth limit where pixels do not move, namely those pixels whose blur extent is less than one pixel (half a pixel for rounding issues, in practice). Such pixels must satisfy \[\frac{\delta}{2}=\frac{s_{\max}F}{D_{0}}\enspace, \tag{11}\] where \(\delta\) denotes the pixel size. This means that \(D_{0}=2\kappa\) with \(\kappa=\frac{s_{\max}F}{\delta}\). For the remaining elements of the sequence, we take into account our definition of _blur variation_ presented in Sec. 3.2. The next element in the sequence is characterized as _the depth that produces a blur variation of \(n\) pixels_1. In other words, the blur extent varies by \(n\) pixels from \(\mathcal{R}_{l-1}\) to \(\mathcal{R}_{l}\). This is expressed as: \[\Delta T=\frac{s_{\max}F}{D_{l}}-\frac{s_{\max}F}{D_{l-1}}=n\delta\enspace. \tag{12}\] Footnote 1: \(\sigma\) and \(n\) correspond to hyper-parameters in our blur formation model. Ablation studies on those can be found in the supplementary material. By reorganizing the terms, we find an equation for the \(l\)-th element of the sequence. It can be proven by induction that \[D_{l}=\frac{2\kappa}{2ln+1}\enspace,\mbox{ where }\kappa=\frac{s_{\max}F}{ \delta}\enspace. \tag{13}\] To extend this methodology to 2D motion, we simply compute the component-wise sequences \(D_{l(x)}\) and \(D_{l(y)}\), obtained by replacing the corresponding values of \(s_{\max}\) and \(\delta\) for the \(x\) and \(y\) components of the movement. Then, the complete sequence \(D_{l}\) is the sorted vector of the set union \(\{D_{l(x)}\cup D_{l(y)}\}\).

Figure 4: Spatially-varying blur from depth: (a) full depth map of the latent image \(\mathbf{x}\), (b) in-plane camera trajectory, (c) depth sequences in the \(x\) and \(y\) axis that delimit the regions with the same amount of blur (see Eq. 13) and (d) the region indicators from \(0\) to \(32\) that denote the amount of blur from less than \(1\) to \(16\) pixels in both dimensions.

Fig. 4 exemplifies the discrete regions automatically obtained using the above procedure for a synthetically generated image, with \(n=1\). Fig. 4(a) and (b) show the full depth and the 2D camera trajectory, respectively. Fig. 4(c) illustrates the depth sequences \(D_{l(x)}\) and \(D_{l(y)}\) computed by the aforementioned procedure. Lastly, the set of regions \(\{\mathcal{R}_{l}\}_{l=0}^{L-1}\), whose blur behaves similarly within each layer, is shown in Fig. 4(d). It is worth mentioning that the total number \(L\) of regions is completely adaptive to the scene configuration. One only needs to compute the sequences until \(D_{L-1(x)}\) and \(D_{L-1(y)}\) cover the minimum depth.
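A minimal sketch of this computation for one motion axis (the function name and stopping rule are ours; units only need to be consistent):

```python
# A minimal sketch of the depth-sequence computation of Eq. (13) for a
# single motion axis, stopping once the sequence covers the minimum depth.
def depth_sequence(s_max, F, delta, d_min, n=1):
    """s_max: max camera displacement; F: focal length; delta: pixel size
    (all in the same length unit); d_min: smallest scene depth; n: blur
    variation per layer in pixels. Returns [D_0, D_1, ...] per Eq. (13)."""
    kappa = s_max * F / delta
    D, l = [], 0
    while True:
        D_l = 2.0 * kappa / (2.0 * l * n + 1.0)  # Eq. (13)
        D.append(D_l)
        if D_l <= d_min:  # the sequence now covers the minimum depth
            return D
        l += 1

# Example with the paper's settings: F = 2.8 mm, pixel size 4 um,
# s_max = 3 mm, closest scene point at 0.05 m.
layers = depth_sequence(s_max=3e-3, F=2.8e-3, delta=4e-6, d_min=0.05)
```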
### Blur kernels synthesis

Having the time-dependent in-plane camera trajectory \(s(t)\) and the depth map \(D(\mathbf{p})\), the pixel-wise motion blur kernel is given by \[\mathbf{k}(\mathbf{p})=\hat{f}\Big{(}\Big{\lfloor}\frac{-s(t)F}{\delta D( \mathbf{p})}\Big{\rfloor}\Big{)}\enspace, \tag{14}\] where \(\lfloor\cdot\rfloor\) denotes the rounding operation and \(\hat{f}\) is the operator that computes the EPDF for the discretized values in the argument. Instead of computing kernels \(\mathbf{k}(\mathbf{p})\) at the pixel level, we compute a smaller set \(\{k_{l}\}_{l=0}^{L-1}\) where every kernel is paired with a region \(\hat{\mathcal{R}}_{l}\). The pixels in the region \(\hat{\mathcal{R}}_{l}\) share the same motion blur kernel \(k_{l}\): \[k_{l}=\hat{f}\Big{(}\Big{\lfloor}\frac{-s(t)F}{\delta D_{l}^{*}}\Big{\rfloor} \Big{)}\enspace, \tag{15}\] where \(D_{l}^{*}\) is the optimal depth value in the range \([D_{l},D_{l-1}]\) that minimizes the mean-square error in \(D(\mathbf{p})\) for \(\mathbf{p}\in\mathcal{R}_{l}\). The depths \(D(\mathbf{p})\) in \([D_{l},D_{l-1}]\) follow a random variable \(\zeta\) with PDF \(f(\zeta)\), and the mean-square error is determined by \[\int_{D_{l}}^{D_{l-1}}(\zeta-D_{l}^{*})^{2}f(\zeta)d\zeta\enspace. \tag{16}\] It can be proven that the mean depth \(\bar{D}(\mathbf{p})\) in the range minimizes (16).

## 5 Neural representations from blur

Advances in implicit neural representations demonstrate that MLPs can learn the high-frequency details in 2D images [31, 28]. In those works, a coordinate-based MLP \(\Phi_{\theta}\) optimizes its parameters \(\theta\) to fit a sharp image, _i.e._\(\Phi_{\theta}:\mathbf{p}\mapsto\mathbf{x}\). We propose a different approach where \(\Phi_{\theta}\) fits the sharp image \(\mathbf{x}\) from its corresponding blurred one \(\mathbf{y}\), by embedding a blur function \(b:\mathbf{x}\mapsto\mathbf{y}\) defined by either the PWB model (3) or our ICB model (7). This provides an alternative solution for deblurring from a single blurred image. Since \(b\) is differentiable, we can use gradient-descent methods to optimize \(\theta\) with the following loss: \[\mathcal{L}=\sum_{\mathbf{p}}\left\|b(\Phi_{\theta}(\mathbf{p}))-\mathbf{y}( \mathbf{p})\right\|_{2}^{2}+\lambda\left\|\nabla_{\mathbf{p}}\Phi_{\theta}( \mathbf{p})\right\|_{1}^{1}\enspace, \tag{17}\] where \(\lambda\) is a hyper-parameter that controls the smoothness of the gradients. This method is similar to the approach presented by Ulyanov _et al._ [36], with the exception that we utilize a coordinate-based MLP rather than a CNN for fitting \(\mathbf{x}\). In practice, we use the SIREN architecture [28] for its ability to fit derivatives robustly.

## 6 Experiments

### Evaluation Datasets

_Synthetic dataset._ We constructed the Virtual Camera Motion Blur (VirtualCMB) dataset, where the ground-truth latent images and depth maps are rendered from 3D scene models. We utilized the Unity engine [33] for rendering 3D scenes in HD resolution. The dataset was built using five high-quality scenes available in the Unity Asset Store. The viewpoints were manually selected to represent a virtual snapshot camera and motion blur for the three studied cases: **1)** _Macro Photography_, **2)** _Trucking Photography_ and **3)** _Standard Photography_. Table 1 summarizes the number of images captured for each case. Macro and Trucking represent practical settings where depth contributes to blurring (see Sec. 3.2).
Standard Photography is the typical setting where all scene objects are far from the camera and thus depth-agnostic models work well. In all cases, the camera was moved through pre-defined trajectories. For the Macro and Standard cases, we randomly selected six trajectories from the Kohler dataset [13]. For the Trucking Photography cases, six linear trajectories with a constant speed in the \(xy\) plane were generated. The purpose is to mimic photography from a moving object (e.g., inside a car). To test our method beyond motion parallax, camera motions with _pan-tilt_ rotations and full 6-DoF camera motion were also recorded. Overall, 983 blurred images with corresponding latent sharp images and depth maps were rendered.

Real dataset. For evaluation with real images, we used the iOS app introduced by Chugunov _et al._ [6] to capture synchronized RGB, LiDAR depth maps, and camera poses. These videos correspond to the _Macro photography_ case, where a static object is recorded by a hand-held smartphone camera. As preprocessing, RGB frames are down-scaled to the depth map resolution (256x192), blurry frames are obtained by temporal averaging, while the sharp and depth correspondences are taken from the middle point of the camera trajectory. Accordingly, we built the Real Camera Motion Blur (RealCMB) dataset, comprised of 58 pairs of blurry and sharp images, as well as depth and camera motion, of which 48 come from our own recordings and 10 are available in [6].

### Model validation

Parallax motion blur. Our ICB model in Sec. 4 was particularly designed for the parallax motion of a camera. Thus, we first evaluate the model under in-plane camera motion in the VirtualCMB dataset. For comparison, we considered the PWB model (3). The standard image quality metrics PSNR and SSIM, and the perceptual quality metric LPIPS [45], are used for performance evaluation. Parallax results are in the "Parallax" column of Table 2. In terms of PSNR, SSIM, and LPIPS, the proposed ICB model outperforms the baseline PWB model in both of the main cases, Macro and Trucking, except for LPIPS in the Trucking case, although the differences in SSIM and LPIPS are marginal in practice. By nature, PWB cannot properly trace the generated blur over the depth discontinuities where occluded areas become visible during the motion. Conversely, our ICB model merges blur from different depth layers more effectively, resulting in a more realistic blur. Fig. 5(a) and (b) illustrate this finding for the two types of blur, Macro and Trucking. The error images reveal that the proposed model is more precise at the object edges.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Scene} & \multicolumn{3}{c}{Macro} & \multicolumn{3}{c}{Trucking} & \multicolumn{3}{c}{Standard} \\ \cline{2-10} & i) & ii) & iii) & i) & ii) & iii) & i) & ii) & iii) \\ \hline VikingVillage & 26 & 28 & 26 & 23 & 22 & 23 & - & - & 30 \\ IndustrialSet & - & - & - & 60 & 60 & 60 & - & - & 30 \\ ModularCity & - & - & - & 60 & 60 & 60 & - & - & 30 \\ ModernStudio & 58 & 55 & 55 & - & - & - & - & - & 30 \\ LoftOffice & 56 & 50 & 51 & - & - & - & - & - & 30 \\ \hline **Total: 983** & **140** & **133** & **132** & **143** & **142** & **143** & **-** & **-** & **150** \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of captured images: i) parallax motion, ii) with _pan-tilt_ rotations (w/ \(xy\) rotations), and iii) 6-DoF.

Out-of-plane rotations and 6-DoF. Non-uniform blur comes not only from motion parallax but also from rotations.
Assuming a large focal length as in [38], _pan-tilt_ rotations can be approximated by \(xy\) translations that are not depth-dependent. Thus, we can compute a global uniform kernel which is added on top of the kernels in Sec. 4.2. In particular, this approximation works well for narrow-lens devices. This approach was adopted to handle camera motion blur beyond motion parallax, neglecting the effect of \(z\) translation and _roll_ rotation. The results beyond parallax motion (xy-rotation and 6-DoF) in Table 2 are similar to the parallax motion experiment. Consequently, the used approximation works well for the captured images in the VirtualCMB dataset.

Real images. Surprisingly, the proposed ICB model performs clearly better on real 6-DoF motion in the RealCMB dataset (Table 3) than in the previous experiment with synthetic data. These results indicate that 1) parallax motion can be more dominant in real data than in our simulated cases, and 2) our model is robust to the depth and trajectory noise that appears in real data. Moreover, as the depth measurements are non-linearly quantized in ICB (see Sec. 4.1), there is no need for high-resolution depth maps. Fig. 5(d) shows a visual example of the blur generation in the RealCMB dataset.

Computational resources. Table 4 reports the averaged run time and memory size of our PyTorch implementations of PWB and ICB. It turns out that the proposed ICB model is slightly slower on the RealCMB dataset but significantly faster on VirtualCMB. Most importantly, ICB demonstrates considerably greater efficiency in terms of memory consumption, with reductions of \(\times 32\) and \(\times 50\) in the RealCMB and VirtualCMB datasets, respectively.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{RealCMB} & \multicolumn{2}{c}{VirtualCMB} \\ & Memory [MB] & Run time [s] & Memory [MB] & Run time [s] \\ \hline PWB & 57.39 & **1.95** & 2799 & 13.41 \\ Ours & **1.74** & 2.78 & **48.97** & **7.27** \\ \hline \hline \end{tabular} \end{table} Table 4: Results in terms of computational resources.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{4}{c|}{Parallax} & \multicolumn{4}{c|}{w/ \(xy\) rotation} & \multicolumn{6}{c}{6 DoF} \\ & \multicolumn{2}{c|}{Macro} & \multicolumn{2}{c|}{Trucking} & \multicolumn{2}{c|}{Macro} & \multicolumn{2}{c|}{Trucking} & \multicolumn{2}{c|}{Macro} & \multicolumn{2}{c|}{Trucking} & \multicolumn{2}{c}{Standard} \\ & Ours & PWB & Ours & PWB & Ours & PWB & Ours & PWB & Ours & PWB & Ours & PWB & Ours & PWB \\ \hline \(\uparrow\)PSNR & **42.48** & 41.54 & **37.42** & 36.59 & **42.16** & 41.11 & **36.99** & 36.21 & **38.84** & 38.37 & **37.08** & 36.29 & **37.38** & 36.97 \\ \(\uparrow\)SSIM & **0.993** & 0.992 & **0.985** & 0.984 & **0.990** & 0.989 & **0.984** & 0.983 & **0.984** & 0.982 & **0.984** & 0.983 & **0.978** & 0.978 \\ \(\downarrow\)LPIPS (\(\times 10^{-5}\)) & **4.639** & 5.049 & 5.870 & **5.238** & **5.399** & 6.316 & 6.158 & **5.676** & **8.883** & 9.744 & 6.088 & **5.613** & **11.17** & 13.01 \\ \hline \hline \end{tabular} \end{table} Table 2: Blur formation results in VirtualCMB.
\begin{table} \begin{tabular}{l c c c} \hline \hline & \(\uparrow\)PSNR & \(\uparrow\)SSIM & \(\downarrow\)LPIPS (\(\times 10^{-5}\)) \\ \hline PWB & 36.31 & 0.984 & 6.826 \\ Ours & **38.21** & **0.990** & **4.484** \\ \hline \hline \end{tabular} \end{table} Table 3: Blur formation results for RealCMB (avg over 58 test images).

### Neural representations from blur

Table 5 summarizes the results of the sharp implicit representations with the ICB and PWB models. In addition, we evaluated SOTA deep-deblurring methods.

Figure 5: Examples of the blur formation results. (a) Macro, (b) Trucking, (c) Macro with 6-DoF motion, and (d) Real images.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{VirtualCMB} & \multicolumn{3}{c}{RealCMB} \\ & \(\uparrow\)PSNR & \(\uparrow\)SSIM & \(\downarrow\)LPIPS (\(\times 10^{-4}\)) & \(\uparrow\)PSNR & \(\uparrow\)SSIM & \(\downarrow\)LPIPS (\(\times 10^{-4}\)) \\ \hline SRN [32] & 29.92 & 0.9135 & 7.423 & 28.26 & 0.9146 & 8.492 \\ SIUN [42] & 29.75 & 0.9114 & **7.235** & 28.33 & 0.9139 & 7.231 \\ HINet [4] & 29.86 & 0.9133 & 8.651 & 28.32 & 0.9133 & 7.627 \\ BANet [34] & 29.77 & 0.9099 & 9.007 & 28.34 & 0.9140 & 8.241 \\ MIMO-UNet++ [5] & 28.79 & 0.8964 & 13.12 & 27.92 & 0.9106 & 10.33 \\ MPRNet [44] & 30.01 & 0.9146 & 8.062 & 28.79 & 0.9178 & 7.769 \\ MAXIM [35] & 30.34 & **0.9186** & 7.418 & 28.89 & 0.9207 & 5.695 \\ Restormer [43] & **30.41** & 0.9174 & 7.318 & 29.56 & 0.9243 & 6.887 \\ \hline PWB + SIREN & 27.08 & 0.7975 & 37.25 & 30.61 & 0.9429 & 5.398 \\ Ours + SIREN & 27.14 & 0.8001 & 36.56 & **31.92** & **0.9546** & **4.032** \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of sharp restoration results.

For the task of learning implicit representations from a single blurry image, our ICB model produces superior reconstruction results compared to the PWB model. The sharp representation learned using ICB does not match the performance of state-of-the-art deep deblurring methods on VirtualCMB, but it performs significantly better than the others on RealCMB. The variation in performance between the two datasets can be attributed to the difference in image resolution: the SIREN architecture utilized in the experiments may be better suited to handling low-resolution images, such as those in RealCMB. Visual restoration examples are in Fig. 6. It is observed that the learned representation roughly restores the edges, but global noise remains in the VirtualCMB example. On the contrary, an accurate sharp representation is obtained in the RealCMB case.

## 7 Conclusion

This work provides analytical and experimental results about the scene configurations in which the scene depth affects the camera motion blur. In particular, we identified two types of scenes that appear in consumer photography: "Macro" and "Trucking". Primarily, we presented an Image Compositing Blur (ICB) model that efficiently and accurately describes the induced blur in those cases. Experimental validation was performed on our newly introduced synthetic and real datasets. Interestingly, we demonstrated the effectiveness of the ICB model in learning sharp neural representations from a single blurry image. Our findings and the new datasets help to develop better deblurring approaches.

Limitations. Although our ICB model is derived for parallax motion, the model was found accurate enough under certain scene configurations, _e.g._, Macro and Trucking photography.
Besides, the model is computationally efficient and robust against occlusions due to abrupt depth changes. Regarding the deblurring task, our results are still far from being practical. In real scenarios, the depth maps and camera trajectories need to be estimated, which would require a careful study of the suitability of IMU-based odometry and depth sensors in current hand-held devices.

Figure 6: Examples of the deblurring results on the (a) VirtualCMB and (b) RealCMB datasets.

## Acknowledgements

This project was supported by a Huawei Technologies Oy (Finland) project. We also thank Jussi Kalliola for building the iOS app [6] for data collection.
2302.01441
Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation
Improving the emotional awareness of pre-trained language models is an emerging important problem for dialogue generation tasks. Although prior studies have introduced methods to improve empathetic dialogue generation, few have discussed how to incorporate commonsense knowledge into pre-trained language models for controllable dialogue generation. In this study, we propose a novel framework that improves empathetic dialogue generation using pre-trained language models by 1) incorporating commonsense knowledge through prompt verbalization, and 2) controlling dialogue generation using a strategy-driven future discriminator. We conducted experiments to reveal that both the incorporation of social commonsense knowledge and enforcement of control over generation help to improve generation performance. Finally, we discuss the implications of our study for future research.
Yiren Liu, Halil Kilicoglu
2023-02-02T22:04:07Z
http://arxiv.org/abs/2302.01441v1
# Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation

###### Abstract

Improving the emotional awareness of pre-trained language models is an emerging important problem for dialogue generation tasks. Although prior studies have introduced methods to improve empathetic dialogue generation, few have discussed how to incorporate commonsense knowledge into pre-trained language models for controllable dialogue generation. In this study, we propose a novel framework that improves empathetic dialogue generation using pre-trained language models by 1) incorporating commonsense knowledge through prompt verbalization, and 2) controlling dialogue generation using a strategy-driven future discriminator. We conducted experiments to reveal that both the incorporation of social commonsense knowledge and enforcement of control over generation help to improve generation performance. Finally, we discuss the implications of our study for future research.

1 Informatics, University of Illinois Urbana-Champaign 2 School of Information Sciences, University of Illinois Urbana-Champaign [email protected], [email protected]

## Introduction

Empathetic dialogue generation has been an important task found to be beneficial for the applications of conversational agents in many domains, such as healthcare [12, 13, 14] and mental health consulting [11]. Recent research on pre-trained language models (PLMs, e.g., T5 and GPT-2) has advanced task performance related to dialogue generation. However, such language models still struggle to comprehend commonsense knowledge and emotions within dialogue contexts, as shown in Table 1. Research [10] has pointed out that the ability to understand users' emotions is crucial for dialogue systems to perform empathetic conversations. For humans, the perception of emotions is based on understanding and inference of commonsense knowledge that is often implicit during conversational interactions [1]. However, more studies are needed to understand how commonsense knowledge can be incorporated into pre-trained language models during dialogue generation tasks. A promising avenue in this respect is prompt-based methods, which enable direct injection of external knowledge by augmenting model input. In this work, we conduct experiments to provide further insight into whether and how commonsense knowledge injection using prompt-based methods can benefit the language model's emotional awareness. Additionally, Plug-and-Play methods [1] have also recently emerged as a promising direction for text generation. Yang and Klein introduced the method of using future discriminators (FUDGE), which enables conditioning of text generation on a given attribute based on pre-trained language models. The method has been evaluated over several different text generation tasks, including poetry generation, topic-based generation and formality transfer [23]. However, few have discussed the possibility of using plug-and-play methods to control generation in a dialogue context. Two main contributions of this study include:

* introducing a prompt-based method to incorporate social commonsense knowledge into pre-trained language models for dialogue generation;
* proposing a strategy-controlled dialogue generation method that can be used on language models with or without finetuning.
\begin{table} \begin{tabular}{l l} \hline \hline **Speaker:** & _It was 100\% their fault_ \\ & _but they hit the water barrels and survived._ \\ & _They had no injuries_ \\ & _but they almost ran me off the road._ \\ **Listener:** & Did you suffer any injuries? \\ **Speaker:** & _No I was not hit._ \\ & _It turned out they were drunk._ \\ & _I felt guilty_ \\ & _but realized it was his fault._ \\ **Listener** & \\ **(GPT-J 6B):** & **Why was it your fault?** \\ \hline **Prompt:** & \\ \multicolumn{2}{l}{The speaker was almost run over by a group with a vehicle.} \\ \multicolumn{2}{l}{The speaker reacts with guilt.} \\ \multicolumn{2}{l}{......} \\ **Listener** & \\ **(GPT-J 6B):** & **Are you still angry at them?** \\ \hline \hline \end{tabular} \end{table} Table 1: Example of how prompting commonsense knowledge can improve the empathy of dialogue generation.

## Related Work

### Using Commonsense Knowledge for Empathetic Dialogue Generation

Recent works have explored the ability of pre-trained language models in dialogue generation [14]. Prior works have shown that these models tend to have difficulty in comprehending emotional expressions from users [10]. Sabour, Zheng, and Huang [14] considered both commonsense knowledge (cognitive) and emotional factors (affective) when encoding knowledge for empathetic dialogue generation using a seq2seq model. This work is built upon the cognitive and affective aspects of the theory of empathy from [11]. Later work [10] also explored a graph-based approach for encoding commonsense knowledge in combination with emotional intensity based on VAD [22] vectors. However, these works did not use PLMs but traditional encoder-decoder models that need to be trained from scratch, thus limiting their comprehension ability to the training dataset used.

### Controllable Dialogue Generation

Prior studies have also discussed using external signals to condition dialogue generation. Xu, Wu, and Wu [10] proposed the idea of controlling dialogue generation using dialogue acts, which was found to significantly improve the quality of generated dialogues. Wang et al. [14] have also explored the possibility of using topics to guide dialogue generation. Further work by Wu et al. [10] focused on using external knowledge from Wikipedia to avoid fact hallucination in dialogue generation. Yang and Klein [10] proposed the method of FUDGE, an attribute future discriminator, that enables strong plug-and-play control over text generation. In this work, we propose to use plug-and-play methods in order to control empathetic dialogue generation, which enables controllable generation with or without finetuning the PLM.

## Preliminaries

### Problem Formulation

#### Empathetic Response Generation

We formulate the task of empathetic dialogue response generation as follows: Given the dialogue history \(C=[X_{1},X_{2},...,X_{T-1}]\), where \(T\) is the total number of dialogue utterances and \(X_{i}\) is the \(i\)-th utterance text sequence, generate the next dialogue response \(Y\). Each utterance \(X_{i}=[x_{1},x_{2},...,x_{m}]\) and the target utterance \(Y=X_{T}=[y_{1},y_{2},...,y_{n}]\), where \(m\) and \(n\) denote the total number of word tokens of each utterance.

#### Dialogue Strategy Prediction

The generation of empathetic responses can benefit from the guidance of proper dialogue strategies. In this work, we refer to the conversational skills introduced in Hill [12] needed in delivering emotional support as dialogue strategies [22], e.g., _providing suggestions_ and _self-disclosure_. We denote the total dialogue strategy set as \(S=\{s_{i}|1\leq i\leq N_{s}\}\), where \(N_{s}\) is the total number of strategies available. With the dialogue history \(C\), we need to predict the next correct dialogue strategy \(s_{Y}\in S\) as a conditional signal for generating the correct response \(Y\).

### COMET Commonsense

We utilize the COMET-ATOMIC\({}^{20}_{20}\)[12] commonsense reasoning language model to augment the dialogue history input with social commonsense knowledge. The commonsense knowledge is represented as relation-entailment tuples. For example, \(([xReact],depressed)\) indicates that the subject in the dialogue (PersonX) has the reaction of feeling depressed. The COMET-ATOMIC\({}^{20}_{20}\) dataset contains 23 relation types that are categorized into three categorical types including social commonsense relations, physical relations, and event-centric relations. In this study, we use the dialogue history as the context, and use the COMET-BART model pre-trained on the COMET-ATOMIC\({}^{20}_{20}\) dataset to generate the commonsense entailment.

## Approach

As shown in Figure 1, our proposed method consists of three major components: 1) commonsense prompting, 2) dialogue strategy predictor and 3) FUDGE-controlled response generation.

### Commonsense Prompt Verbalizer

We first use the BART model pre-trained on the COMET-ATOMIC\({}^{20}_{20}\) dataset to obtain implicit commonsense knowledge that can be deduced from the dialogue context. We decode the commonsense entailment using 10 social relations from COMET-ATOMIC\({}^{20}_{20}\), including \([oEffect]\), \([oReact]\), \([oWant]\), \([xAttr]\), \([xEffect]\), \([xIntent]\), \([xNeed]\), \([xReact]\), \([xReason]\) and \([xWant]\). Relations starting with \(x\) refer to the main subject of the conversation, and those starting with \(o\) refer to others besides the subject. The commonsense knowledge is represented in the form of relation-entailment tuples \((r_{i,j},e_{i,j})\) for each dialogue history \(X_{i}\in C\), where \(1\leq j\leq 10\) refers to the \(j\)-th social relation. We generate entailment with all 10 relations for each dialogue history. To help the language model better comprehend the commonsense relations obtained from COMET-BART, we verbalize the commonsense tuples with the natural language templates proposed by Hosseini, Broniatowski, and Diab [14]. For example, we convert the tuple \(([xReact],depressed)\) into the sentence _"As a result, PersonX feels depressed."_. We verbalize each tuple obtained from entailment generation and append it to the dialogue history as the input text, as sketched below.
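For illustration, a minimal sketch of this verbalization step (the template wordings other than the \([xReact]\) example are illustrative stand-ins for the templates of Hosseini, Broniatowski, and Diab [14]):

```python
# A minimal sketch of the prompt verbalizer: COMET relation-entailment
# tuples are mapped to natural-language sentences via templates and
# appended to the dialogue history as the model input.
TEMPLATES = {
    "xReact":  "As a result, PersonX feels {}.",
    "xIntent": "PersonX did this because they wanted {}.",   # illustrative
    "xWant":   "After this, PersonX wants {}.",              # illustrative
    "oReact":  "As a result, others feel {}.",               # illustrative
    # ... one template per social relation used
}

def verbalize(dialogue_history, commonsense_tuples):
    """dialogue_history: list of utterance strings; commonsense_tuples:
    list of (relation, entailment) pairs decoded by COMET-BART."""
    sentences = [TEMPLATES[r].format(e) for r, e in commonsense_tuples
                 if r in TEMPLATES]
    return " ".join(dialogue_history + sentences)

# e.g., ("xReact", "depressed") -> "As a result, PersonX feels depressed."
```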
### Dialogue Strategy Predictor

To provide stronger control over the response generation, we first need to identify the correct strategy to be deployed in the next utterance given an existing dialogue history. We approach the problem as a text classification task. In addition to using the LM for generating the strategy with constrained decoding, we propose two methods to predict the next utterance strategies. First, we use the complete dialogue histories as the input text sequences and the ground truth strategies as the labels to train an end-to-end text classification model. This method is agnostic to the generation LM and can ideally be swapped with any off-the-shelf text classification model available, as sketched below.
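A minimal sketch of this first approach, assuming an off-the-shelf encoder classifier (the checkpoint, label count, and input formatting are illustrative choices, not the exact setup used here):

```python
# A minimal sketch of strategy prediction as text classification: the
# dialogue history is flattened to a single string and classified into
# one of the dialogue strategies (e.g., the 8 ESConv strategies).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_STRATEGIES = 8  # illustrative: the ESConv strategy set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_STRATEGIES)

def predict_strategy(dialogue_history):
    """dialogue_history: list of utterance strings; returns a strategy id."""
    text = " [SEP] ".join(dialogue_history)  # simple formatting choice
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))
```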
The other approach is to adopt the encoder of the generation LM and use the encoder representation of the dialogue history text to train a classification module jointly with the generation objective. In this case, the training loss of the generation model for each utterance can be written as a combination of classification and generation loss as follows: \[\ell_{strategy}=-log\ p(s|Encoder([x_{1},x_{2},...,x_{t-1}])) \tag{1}\] \[\ell_{LM}=-\sum_{t=1}^{n}log\ p(x_{t}|x_{1},x_{2},...,x_{t-1},s)\] (2) \[\ell=\ell_{LM}+\alpha\ell_{strategy} \tag{3}\] where \(x_{i}\) is the \(i\)-th token within an utterance and \(\alpha\) is the weight used to control the emphasis of strategy prediction over text generation. Note that the strategy \(s\) remains the same for each target utterance. In this study, we experimented with a simple dense layer module as the strategy predictor, in which case we extract the encoder hidden representation at the [CLS] position as the context vector.

### Future Discriminator

We then apply additional mechanisms to control the response generation with the predicted strategy. We adopt the method of FUDGE [20] to enforce the strategy control. We introduce a future discriminator specifically for identifying dialogue strategies. The general goal of the future discriminator is to classify the potential strategy category of a given dialogue utterance at each step of the token sequence. More specifically, for an utterance text sequence \(X_{i}=[x_{1},x_{2},...,x_{n}]\), we train a text classifier to predict the ground truth strategy type of \(\{[x_{1},x_{2},...,x_{t}]|1\leq t\leq n\}\) at every token step. Namely, we optimize the sum of the log-likelihoods of the sequences from the first word to each word in each utterance with the loss: \[\ell_{F}=-\sum_{t=1}^{n}log\ p(s|x_{1},x_{2},...,x_{t}) \tag{4}\] where \(n\) is the length of the target utterance sequence. While the objective is fixed, the model used for classification can be any model separately trainable for text classification tasks. In our experiments, we follow [20]'s approach and discuss the performance of controlled generation using a light-weight LSTM text classifier, which requires a comparatively low computational cost to train and has been found by prior work [17] to have strong performance on dialogue intent classification tasks.
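At decoding time, the trained future discriminator reweights the LM's next-token distribution following the FUDGE recipe. A minimal sketch (the top-\(k\) shortlist and the interfaces are illustrative, not the exact implementation):

```python
# A minimal sketch of FUDGE-style decoding: each candidate next token is
# scored by log P(x_t | prefix) + log P(s | prefix + x_t), where the
# second term comes from the prefix classifier trained with Eq. (4).
import torch
import torch.nn.functional as F

def fudge_step(lm_logits, prefix_ids, discriminator, strategy_id, k=50):
    """lm_logits: (vocab_size,) next-token logits from the generation LM;
    prefix_ids: list of token ids decoded so far; discriminator maps a
    (1, len) id tensor to (1, n_strategies) logits; returns next token id."""
    log_p_lm = F.log_softmax(lm_logits, dim=-1)
    candidates = torch.topk(log_p_lm, k).indices  # rescore only top-k tokens
    best_tok, best_score = None, float("-inf")
    for tok in candidates.tolist():
        extended = torch.tensor([prefix_ids + [tok]])
        log_p_s = F.log_softmax(discriminator(extended), dim=-1)[0, strategy_id]
        score = log_p_lm[tok].item() + log_p_s.item()
        if score > best_score:
            best_tok, best_score = tok, score
    return best_tok
```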
## Experimental Settings

We experiment with our proposed method using the empathetic dialogue dataset ESConv [14]. The ESConv dataset contains a total of 1,053 multi-turn dialogues with 31,410 utterances. The dataset consists of dialogue histories between pairs of help-seekers and emotional support providers, including the cause and annotated dialogue strategies. We compare the results with the BART [10] text generation model without any control. We use the BART-large model pre-trained with the objective of text denoising using corrupted input. In addition to BART, we also experimented with the COMET-BART model introduced by [12]. Since COMET-BART is a text generation model pre-trained for commonsense reasoning, by further fine-tuning the model for dialogue generation we want to examine whether COMET-BART has a stronger capability in understanding the commonsense knowledge input we provide. We use automatic evaluation metrics including BLEU [13], ROUGE-L [14], and BERTScore [15] to measure the dialogue response generation performance.

## Results and Discussion

### Main Results

We investigate the extent to which the introduction of commonsense knowledge input and controlled generation can improve the generation of dialogue responses. Our results are shown in Table 2. Detailed comparisons of generated examples are also provided in the Technical Appendix.

Figure 1: The Overall Framework of the Proposed Model.

_Incorporating commonsense knowledge helps the generation._ We inform the language models with commonsense knowledge by concatenating verbalized COMET-generated reasoning knowledge with the dialogue history. By prompting the BART models with verbalized COMET commonsense knowledge, the results of the models **with COMET input** in Table 2 show consistently improved results for both the BART and COMET-BART models. COMET-BART is observed to perform better than vanilla BART regardless of whether the commonsense knowledge is explicitly prompted. This might imply that 1) commonsense knowledge learned by COMET-BART during pretraining already benefits dialogue response generation; and 2) additionally prompting commonsense knowledge further improves generation performance, but COMET-BART comprehends the knowledge better than the vanilla BART model since it has already been pre-trained on a commonsense reasoning task.

_FUDGE strategy control improves the generation._ We compare the generation results from baseline models with models controlled with FUDGE. We first provide ground truth oracle strategies to condition the generation for comparison. As shown in the results of **BART/COMET-BART + oracle + FUDGE** in Table 4, using FUDGE as a control for dialogue generation is able to improve the performance compared to simply using the language models. This implies that FUDGE enforces stronger control over generation given the correct strategy, which can also be observed from the examples shown in Table 3. However, this means generation would largely rely on the strategy predictor. This corresponds to the results listed in Table 2: using a separate strategy classifier does not fulfill the goal of correctly predicting the strategy, thus leading to worse generation results. Future work needs to be done to improve the performance of dialogue strategy prediction.

_Jointly training with generation and strategy prediction loss improves generation performance on vanilla BART._ Another interesting observation is that when finetuning the language model with a joint loss of generation and strategy classification, the performance of vanilla BART improved when not using FUDGE control (as shown in the **BART with strategy classifier** results in Table 2). The strategy prediction accuracy dropped while generation performance improved. This might be because separating the training objectives allows models to have stronger adaptability to generate strategy-specific responses. However, this improvement was not observed in the COMET-BART model.

## Conclusion

In this work, we introduced a new language model-based framework for controllable empathetic dialogue response generation. We use experimental results on a public dataset to show that: 1) our method of incorporating commonsense knowledge through verbalized prompts can improve the dialogue generation quality, and 2) by separating the dialogue strategy prediction from the generation, we can use a future discriminator to enforce stronger control over generated dialogue responses. Our method can be used on off-the-shelf generative language models with or without fine-tuning.
Future work can further explore how dialogue strategy prediction can be improved to benefit dialogue generation, and introduce better prompting methods in addition to using templates to verbalize commonsense knowledge.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **B-2** & **B-4** & **R-L** & **BERTScore** \\ \hline **BART + oracle** & 5.02 & 1.32 & 18.34 & 90.59 \\ **COMET-BART + oracle** & 3.83 & 0.75 & 12.04 & 90.34 \\ **BART + oracle + FUDGE** & 7.46 & 2.01 & 21.18 & 91.11 \\ **COMET-BART + oracle + FUDGE** & 5.76 & 1.78 & 20.82 & 90.66 \\ \hline \hline \end{tabular} \end{table} Table 4: Generation results using oracle strategies.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Model** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **ROUGE-L** & **BERTScore** & **Strategy Accuracy** \\ \hline **BART** & 17.79 & 5.74 & 2.43 & 1.18 & 14.15 & 90.77 & 30.97\% \\ **COMET-BART** & 16.39 & 6.73 & 3.47 & 2.11 & 16.23 & 90.97 & 31.94\% \\ \hline \hline \multicolumn{8}{c}{**With COMET input (verbalized with templates)**} \\ \hline **BART** & 16.31 & 5.90 & 2.71 & 1.45 & 15.31 & 90.86 & 30.46\% \\ **COMET-BART** & **18.53** & **7.80** & **4.12** & **2.54** & **17.79** & **91.21** & 30.40\% \\ \hline \hline \multicolumn{8}{c}{**With strategy classifier (linear classifier with classification + LM loss)**} \\ \hline **BART** & **20.12** & **8.19** & **4.16** & **2.47** & **15.67** & 90.97 & 16.29\% \\ **COMET-BART** & 9.28 & 3.90 & 2.13 & 1.35 & 13.56 & 90.36 & 16.29\% \\ **BART (+ FUDGE)** & 17.53 & 5.83 & 2.51 & 1.24 & 14.29 & **91.00** & 16.29\% \\ **COMET-BART (+ FUDGE)** & 6.60 & 2.11 & 1.05 & 0.59 & 11.82 & 90.72 & 16.29\% \\ \hline \hline \multicolumn{8}{c}{**With strategy classifier (BERT classifier, trained separately from the LM)**} \\ \hline **BART (+ FUDGE)** & 14.00 & 5.53 & 2.96 & 1.86 & 14.86 & 90.09 & 18.46\% \\ **COMET-BART (+ FUDGE)** & 4.74 & 1.71 & 0.86 & 0.52 & 11.36 & 90.08 & 18.46\% \\ \hline \hline \end{tabular} \end{table} Table 2: The experimental results. The **Strategy Accuracy** column refers to the accuracy of predicted dialogue strategies. For the linear classifier, we train the classifier using a joint loss with the LM, while the BERT strategy classifier is trained separately.

Table 3: Examples of generated responses w/o FUDGE control.
2303.16214
Tetra-AML: Automatic Machine Learning via Tensor Networks
Neural networks have revolutionized many aspects of society but in the era of huge models with billions of parameters, optimizing and deploying them for commercial applications can require significant computational and financial resources. To address these challenges, we introduce the Tetra-AML toolbox, which automates neural architecture search and hyperparameter optimization via a custom-developed black-box Tensor train Optimization algorithm, TetraOpt. The toolbox also provides model compression through quantization and pruning, augmented by compression using tensor networks. Here, we analyze a unified benchmark for optimizing neural networks in computer vision tasks and show the superior performance of our approach compared to Bayesian optimization on the CIFAR-10 dataset. We also demonstrate the compression of ResNet-18 neural networks, where we use 14.5 times less memory while losing just 3.2% of accuracy. The presented framework is generic, not limited by computer vision problems, supports hardware acceleration (such as with GPUs and TPUs) and can be further extended to quantum hardware and to hybrid quantum machine learning models.
A. Naumov, Ar. Melnikov, V. Abronin, F. Oxanichenko, K. Izmailov, M. Pflitsch, A. Melnikov, M. Perelshtein
2023-03-28T12:56:54Z
http://arxiv.org/abs/2303.16214v1
# Tetra-AML: Automatic Machine Learning via Tensor Networks

###### Abstract

Neural networks have revolutionized many aspects of society but in the era of huge models with billions of parameters, optimizing and deploying them for commercial applications can require significant computational and financial resources. To address these challenges, we introduce the Tetra-AML toolbox, which automates neural architecture search and hyperparameter optimization via a custom-developed black-box Tensor train Optimization algorithm, TetraOpt. The toolbox also provides model compression through quantization and pruning, augmented by compression using tensor networks. Here, we analyze a unified benchmark for optimizing neural networks in computer vision tasks and show the superior performance of our approach compared to Bayesian optimization on the CIFAR-10 dataset. We also demonstrate the compression of ResNet-18 neural networks, where we use 14.5 times less memory while losing just 3.2% of accuracy. The presented framework is generic, not limited by computer vision problems, supports hardware acceleration (such as with GPUs and TPUs) and can be further extended to quantum hardware and to hybrid quantum machine learning models.

## Introduction

Over the past decade, neural networks have influenced practically every aspect of human society [1]. The startling journey began when the cutting-edge neural network AlexNet triumphed in the prestigious ImageNet challenge in the 2010s [2]. Today, AI systems like DALLE-2, which can produce realistic visuals and art from a description in natural language, are still working toward the same overall goal [3]. While neural networks have grown in value for both people and enterprises, putting them into practice in the context of commercial operations is getting more and more difficult every year. The process of deploying a model involves several steps, such as model training, architecture and hyperparameter optimization, and testing, which can be computationally intensive and require significant resources. Furthermore, deploying a large-size model can require significant storage and computation resources, which can increase costs [4]. Additionally, if the model needs to be implemented on a small device with limited memory [5], the model may need to be optimized for size, which can add additional complexity and time to the deployment process. At the same time, the model's high accuracy must be maintained. In addition, the practice of creating models with greater complexity in order to increase accuracy continues to grow. There are about 60 million trainable parameters in AlexNet (2012); GPT-2 contains 1.5 billion parameters (2019); DALLE-2 and ChatGPT contain roughly 3.5 billion and 175 billion parameters, respectively (2022), with the promise to increase the size by at least two orders of magnitude in the coming years. It is clear that powerful and complicated models are getting more and more expensive in terms of optimization and deployment [6; 7]. To address these challenges, we develop an Automatic Machine Learning toolbox based on tensor networks, Tetra-AML. A tensor network is a powerful numerical tool that can advance the solution of high-dimensional problems [8].
Here we develop a framework that allows us to apply tensor networks to automatic machine learning [9], including an automatic search for the best model architectures - neural architecture search - with optimal parameters - hyperparameter optimization - with the help of our black-box optimization approach, TetraOpt [10]. Besides, it allows compression of the models via a combination of common quantization techniques and pruning approaches [11; 12], augmented by compression using tensor networks [13; 14]. The general scheme of Tetra-AML is shown in Fig. 1. Tetra-AML offers the flexibility of bringing your own model or defining a use case to receive a suitable model. A user provides a dataset and specifies the search space for optimization. After that, the tool initiates parallel training of the models and applies post-training tensor network compression, pruning and quantization to create an optimal, compressed and accelerated model. Once the model is ready, users can download it for deployment. In this work, we mainly focus on computer vision problems, as some of the most challenging ones, but the framework is generic and can be applied to any machine learning problem. We consider the well-known CIFAR dataset and the NATS benchmark for neural architecture search [15]. Besides, Tetra-AML allows for hardware acceleration, such as with GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units), and, moreover, can leverage the power of _quantum computers_ for better optimization of classical networks and boosting the performance of quantum and hybrid quantum/classical machine learning models, which we discuss in this work.

Figure 1: General scheme of Tetra-AML. Both Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO) are performed via TetraOpt (Terra Quantum's black-box optimizer based on Tensor Trains). Then the model is compressed via tensor network methods, quantization and pruning.

## Neural Architecture Search and Hyperparameters Optimization

The first step in building a model is choosing the best model architecture for the task at hand. Neural Architecture Search (NAS) is an algorithm, or set of algorithms, that automates the process of finding the best architecture for a particular ML problem and dataset [16]. Such an approach explores the space of potential architectures and assesses their performance during optimization. Finding an architecture with the best performance while being computationally efficient is the ultimate aim of NAS. When compared to manually designing and fine-tuning architectures, this can **significantly save development time and costs** and produce new, better-performing architectures. In addition to NAS, it is crucial to conduct hyperparameter optimization (HPO) for the model to maximize its accuracy within the specific problem and dataset. Hyperparameters can be considered values that are set for the model and do not change during the training regime, and may include variables such as the learning rate, decay rates, the choice of optimizer for the model, etc. This tuning can also be done in an automatic manner, e.g., using black-box optimization methods [17]. Unlike other approaches, such as Random Search, Bayesian optimization methods, Reinforcement Learning or Genetic algorithms, our method utilizes a black-box optimization approach based on tensor networks, TetraOpt. TetraOpt is a versatile global optimization algorithm that can handle various types of variables, including integers and categorical data, and is well-suited for parallel computing and hardware acceleration via, e.g., GPUs or TPUs.
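To make the black-box setting concrete, the sketch below shows how a NAS/HPO problem is typically exposed to an optimizer such as TetraOpt; the search space, scoring stub, and sampling loop are illustrative placeholders rather than Tetra-AML's actual API:

```python
# A minimal sketch of a NAS/HPO black-box objective: the optimizer only
# sees (configuration -> score) pairs, so mixed integer/categorical
# search spaces pose no difficulty, and evaluations can run in parallel.
import itertools
import random

SEARCH_SPACE = {
    "n_layers":      [2, 4, 8],
    "hidden_width":  [64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "optimizer":     ["sgd", "adam"],
}

def objective(config):
    """In practice: build and train the candidate model, then return its
    validation accuracy. A deterministic stub stands in for that call."""
    random.seed(str(sorted(config.items())))
    return random.random()  # placeholder for validation accuracy

# Stand-in for an optimizer such as TetraOpt: evaluate a budget of
# candidate configurations and keep the best-scoring one.
candidates = [dict(zip(SEARCH_SPACE, vals))
              for vals in itertools.product(*SEARCH_SPACE.values())]
best_config = max(random.sample(candidates, 20), key=objective)
```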
Such an optimization method also extends to quantum processing units and can advance classical and hybrid quantum-classical machine learning models; see the discussion below. A deep understanding of tensor networks and the ability to develop a custom optimizer give our solution a unique advantage in the automatic machine learning market and enable us to deliver faster and more accurate results for innovative businesses that actively utilize machine learning models.

In Fig. 2, we present the results of a comparison between TetraOpt, Bayesian optimization and Random Search on a standard NAS benchmark, NATS [15].

Figure 2: Validation accuracy dependence on the number of neural network model (architecture) runs for the TetraOpt, Bayesian, and Random Search algorithms. TetraOpt achieves 93.7% accuracy, while Bayesian optimization and Random Search find architectures with only 93.5% and 92.8% accuracy, respectively, for the same number of model runs. The experiments were carried out on a standard NAS benchmark [15], where the CIFAR-10 dataset is used.

NATS is a well-known benchmark used to evaluate neural architecture search algorithms; it uses the CIFAR-10 dataset. The search space of the NATS benchmark contains a wide variety of neural network architectures, including different numbers of layers, types of layers, and connections between layers. It includes a search space of 15,625 neural architectures, which makes NATS a comprehensive and universal benchmark that enables fair comparisons between different NAS algorithms. We observe that TetraOpt achieves the best validation accuracy for the same number of model runs.

## Compression of Neural Networks Using Tensor Networks Besides complicated tuning, a significant issue with large models lies in their sheer size, which creates additional computational and financial costs that limit efficient deployment and support. This is even more relevant for devices with limited memory, such as local machines, mobile devices, and Internet of Things devices in general [5]. In this case, model compression is a key technology that allows keeping the accuracy at a decent level while substantially reducing the costs. While many existing approaches mostly use pruning and quantization [18], we focus on the development of new compression algorithms based on tensor networks [13; 19]. Tensor network compression is a cutting-edge technology that potentially offers significant advantages for businesses by compressing deep neural networks while maintaining their accuracy. This approach reduces the number of model parameters and, consequently, the size of the model. The main idea of this method is to decompose a huge matrix (tensor) used in a neural network layer into smaller tensors, with an exponential reduction in memory and a substantial reduction in runtime [8]. Crucially, this technique can be applied to compress any machine learning model, including feed-forward layers [13], recurrent neural networks [20], convolutional neural networks [19], and, potentially, state-of-the-art transformers [21].
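As a back-of-the-envelope illustration of the savings quoted in the Figure 3 caption below, the following sketch compares the parameter count of a dense convolution kernel, \(C_{in}\times C_{out}\times D^{2}\), with the canonical-decomposition count \(C_{in}R+D^{2}R^{2}+RC_{out}\); the layer sizes are assumed, ResNet-like values chosen for the example.

```python
def conv_params(c_in, c_out, d):
    # Dense 2D convolution kernel: C_in x C_out x D x D weights.
    return c_in * c_out * d * d

def cp_conv_params(c_in, c_out, d, r):
    # Parameter count after the canonical (CP) factorization quoted in
    # Figure 3: C_in*R + D^2*R^2 + R*C_out.
    return c_in * r + d * d * r * r + r * c_out

# Example: a 3x3 convolution with 256 input/output channels.
dense = conv_params(256, 256, 3)
for r in (8, 16, 32):
    ratio = dense / cp_conv_params(256, 256, 3, r)
    print(f"rank {r}: compression x{ratio:.1f}")
```

The rank \(R\) controls the accuracy/compression trade-off discussed next: smaller ranks compress more but approximate the original kernel less faithfully.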
Figure 3: (A) General scheme of neural network compression. (top) General neural network scheme for image recognition. (bottom) Compressed convolution layer - the 4D convolution kernel is represented as a sum of tensor products of small tensors (Canonical Decomposition). The initial layer has \(C_{in}\times C_{out}\times D^{2}\) parameters, while after compression only \(C_{in}\times R+D^{2}\times R^{2}+R\times C_{out}\) remain. For small \(R\), this provides significant compression of the occupied memory. (B) Compression of the state-of-the-art ResNet-18 on the CIFAR-10 dataset via tensor networks. The diagram shows the achieved accuracy depending on the compression of the model. Bottom bar: Uncompressed Base Model. Middle bar: TN Compressed Model (Compression coefficient: 4.5). Top bar: TN Compressed Model (Compression coefficient: 14.5).

Moreover, by combining the powerful techniques of pruning, quantization and tensor network compression, we gain even more efficiency in resource savings [14]. Here, to illustrate the capabilities of our framework, we focus on the image classification problem and ResNet neural networks, which we previously used in Ref. [22]. We show the results of the compression of ResNet-18 on the CIFAR-10 dataset in Fig. 3. During the experiment, we compress only the CNN layers, since they occupy an overwhelming amount of memory in the ResNet-18 architecture. To measure the occupied memory, we estimated the total number of parameters in the compressed and uncompressed models, taking into account fully-connected layers. As one can see, with a higher compression ratio the accuracy slightly worsens, so the desired accuracy and compression might be defined by a user according to a specific problem.

## Pathway to quantum computing We develop the Tetra-AML framework keeping in mind the quantum performance enhancement that we can obtain from actively developing quantum hardware. For instance, TetraOpt extends well to a quantum version, which is theoretically capable of providing more optimal points for an optimized objective - a new optimal set of hyperparameters and neural architectures. On the other hand, when building hybrid quantum-classical models that combine parametrized quantum circuits with classical neural networks, the process of searching for hyperparameter configurations that result in improved model accuracy and training presents a significant challenge. Hyperparameter Optimization and Neural Architecture Search techniques can be applied to searching for the optimal parameters in Quantum Machine Learning [22] and the optimal balance between classical and quantum contributions in a hybrid model. Such methods can also be used for pure quantum models and help to find the best quantum ansatz for a given dataset or problem. Overall, NAS provides a promising avenue for developing optimal model architectures for quantum and hybrid quantum computing [23], which could lead to significant advancements in the quantum machine learning field [24; 25; 26; 27].

## Conclusion Deep Neural Networks are becoming increasingly prevalent in business as they are able to handle large and complex data sets, leading to improved performance in various tasks, such as image recognition, natural language processing and prediction. However, as networks become larger and more complex, the costs of development and deployment also increase. Here, we propose an off-the-shelf solution that addresses these challenges by using tensor network techniques to find the optimal neural network architecture and hyperparameters for a particular task and dataset, and then to compress the model to reduce its size while maintaining the desired accuracy.
We have developed a tool for optimizing neural networks that will significantly enhance computational efficiency, reduce the number of parameters, and ultimately improve the overall performance of deep learning models, including hybrid quantum neural networks.
2310.01650
CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems
Continuous dynamical systems, characterized by differential equations, are ubiquitously used to model several important problems: plasma dynamics, flow through porous media, weather forecasting, and epidemic dynamics. Recently, a wide range of data-driven models has been used successfully to model these systems. However, in contrast to established fields like computer vision, limited studies are available analyzing the strengths and potential applications of different classes of these models that could steer decision-making in scientific machine learning. Here, we introduce CoDBench, an exhaustive benchmarking suite comprising 11 state-of-the-art data-driven models for solving differential equations. Specifically, we comprehensively evaluate 4 distinct categories of models, viz., feed-forward neural networks, deep operator regression models, frequency-based neural operators, and transformer architectures against 8 widely applicable benchmark datasets encompassing challenges from fluid and solid mechanics. We conduct extensive experiments, assessing the operators' capabilities in learning, zero-shot super-resolution, data efficiency, robustness to noise, and computational efficiency. Interestingly, our findings highlight that current operators struggle with the newer mechanics datasets, motivating the need for more robust neural operators. All the datasets and codes will be shared in an easy-to-use fashion for the scientific community. We hope this resource will be an impetus for accelerated progress and exploration in modeling dynamical systems.
Priyanshu Burark, Karn Tiwari, Meer Mehran Rashid, Prathosh A P, N M Anoop Krishnan
2023-10-02T21:27:54Z
http://arxiv.org/abs/2310.01650v1
# CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems ###### Abstract Continuous dynamical systems, characterized by differential equations, are ubiquitously used to model several important problems: plasma dynamics, flow through porous media, weather forecasting, and epidemic dynamics. Recently, a wide range of data-driven models has been used successfully to model these systems. However, in contrast to established fields like computer vision, limited studies are available analyzing the strengths and potential applications of different classes of these models that could steer decision-making in scientific machine learning. Here, we introduce CoDBench, an exhaustive benchmarking suite comprising 11 state-of-the-art data-driven models for solving differential equations. Specifically, we comprehensively evaluate 4 distinct categories of models, _viz._, feed-forward neural networks, deep operator regression models, frequency-based neural operators, and transformer architectures against 8 widely applicable benchmark datasets encompassing challenges from fluid and solid mechanics. We conduct extensive experiments, assessing the operators' capabilities in learning, zero-shot super-resolution, data efficiency, robustness to noise, and computational efficiency. Interestingly, our findings highlight that current operators struggle with the newer mechanics datasets, motivating the need for more robust neural operators. All the datasets and codes\({}^{3}\) will be shared in an easy-to-use fashion for the scientific community. We hope this resource will be an impetus for accelerated progress and exploration in modeling dynamical systems.

Footnote 3: Upon acceptance, all codes and datasets utilized in this study will be made publicly accessible through GitHub and available for future exploration.

## 1 Introduction Nature is in a continuous state of evolution. "Rules" governing the time evolution of systems in nature, also known as dynamics, can be captured mathematically through partial differential equations (PDEs). In the realm of science and engineering, PDEs are widely used to model and study several challenging real-world systems, such as fluid flow, deformation of solids, plasma dynamics, robotics, mechanics, and weather forecasting, to name a few [6; 22; 25]. Due to their highly non-linear and coupled nature, these PDEs can be solved analytically only for trivial or model systems. Thus, accurate numerical solutions for the PDEs are the cornerstone in advancing scientific discovery. Traditionally, the PDEs are solved using classical numerical methods such as finite difference, finite volume, or finite element methods [27]. However, these numerical methods exhibit major challenges in realistic systems in terms of system size, timescales, and numerical instabilities. Specifically, simulating such systems over longer timescales or larger domains is so computationally intensive that performing the simulations in real time for decision-making is a major challenge. Further, in the case of large or highly non-linear fields, these simulations often exhibit numerical instabilities, rendering them ineffective [29]. The recent surge in artificial intelligence-based approaches suggests that neural models can efficiently capture continuous dynamical systems in a data-driven fashion [2].
These models are extremely time-efficient in comparison to traditional solvers and can capture highly non-linear input-output relationships. Earlier approaches in this direction relied directly on learning the input-output map through multilayer perceptrons (MLPs), convolutional neural networks, or graph neural networks. However, these approaches faced challenges in terms of generalizing to unseen initial or boundary conditions, geometries, or resolutions. This could be attributed to the fact that the neural models essentially learn the input-output relationship in a finite-dimensional approximation. To address this challenge, a seminal theory extending the universal approximation theorem of neural networks [5] to neural operators was proposed, namely, the universal operator approximation theory [4]. This theory unveiled the neural networks' prowess in handling infinite-dimensional inputs and outputs. Theoretically, directly learning the solution operator through specialized neural network architectures offers several key advantages. (i) They can directly learn input-output function mappings from data, thereby obviating the necessity for prior knowledge of the underlying PDE. (ii) They offer significantly improved time efficiency compared to traditional numerical solvers. (iii) They exhibit zero-shot generalizability, extending their applicability to systems of larger scale and complexity than those encompassed within the training dataset. (iv) They provide superior approximations of the solution operator compared to existing neural architectures, spanning from feed-forward networks to specialized models like convolutional networks and conditional generative adversarial networks (GANs). Thus, the neural operators attempt [13] to combine the best of both data-driven and physics-based numerical models. This motivated the exploration of neural operator architectures [1; 23] capable of directly learning the solution operator. For instance, consider DeepONet [18], which leverages the universal approximation theorem introduced by Chen and Chen to directly address PDEs. On a different front, FNO [16], one of the most widely used neural operators, focuses on parameterizing the integral kernel within Fourier space. Moreover, a noteworthy study [3] highlights the notion that all transformers are essentially operators. This insight has sparked endeavors to create operator transformers. Given their proven effectiveness in sequence-to-sequence learning tasks, these transformer-based designs open avenues for enhancing the approximation of spatiotemporal PDEs. Prior studies, such as those by [8], have delved into the realm of PINNs [24] and some neural operator architectures, like DeepONet, FNO, and their variants. However, unlike fields like computer vision, comprehensive comparative evaluations of these neural operators are absent. Moreover, due to the variations and incompatibilities in architectures, a direct comparison of all these architectures is extremely cumbersome. Such evaluations are pivotal to discerning the distinctive advantages of diverse architectural paradigms, especially when addressing equations from a wide spectrum of scientific domains.

Figure 1: A glimpse of the data diversity under consideration. From left to right, we present visual representations of three distinct datasets - Biaxial, Darcy Flow, and Navier-Stokes.
This study aims to bridge this gap by rigorously evaluating data-driven models that encompass a wide range of classes and methods, including the foundational deep operator regression model, frequency domain parameterization models, and transformer-based architectures, to achieve state-of-the-art performance comparison on selected PDE datasets. Moreover, we integrate conventional neural architectures to underscore the merits of PDE-specialized structures. Our dataset selection is methodical, designed to challenge each model with equations from various scientific disciplines. We incorporate four prevalent equations from fluid dynamics and four standard differential equations from solid mechanics into the neural operator domain, ensuring a holistic comparison within the realm of neural operators. **Our Contribution:** In this work, we critically analyze 11 data-driven models, including operators and transformers, on 8 PDE datasets. The major contributions of our research are as follows: 1. **CoDBench:** We present a package that allows seamless analysis of several data-driven approaches on PDEs. We thoroughly assess state-of-the-art data-driven neural models for solving PDE datasets across diverse scientific realms, such as fluid and solid mechanics, shedding light on their precision and efficacy. 2. **Super-resolution:** We analyze the ability of neural operators' to generalize to systems of different resolutions than that of their training sets' discretizations. 3. **Data efficiency and robustness to noise:** We critically assess the efficiency of these models to learn from small amounts of data or noisy data. This is an important aspect since the data available can be scarce and noisy in practical applications. 4. **Out-of-distribution task:** A novel task to gain insights into what these models are truly learning to determine whether the underlying operator is genuinely being learned or if the training dataset is simply being fitted. Two closely related Stress and Strain datasets are interchanged during training and testing to dig deeper into whether the solvers are actually operators. ## 2 Preliminaries This section provides a concise mathematical framework to illustrate how traditional PDE solving can be transitioned and addressed using data-driven methodologies via neural networks. 1. **Function Domains**: Consider a bounded open set, represented as \(\mathcal{D}\subset\mathbb{R}^{d}\). Within this domain, we define \(\mathcal{F}=\mathcal{F}(\mathcal{D};\mathbb{R}^{d_{f}})\) and \(\mathcal{G}=\mathcal{G}(\mathcal{D};\mathbb{R}^{d_{g}})\) as separable Banach spaces. These spaces correspond to input and output functions, which represent elements in \(\mathbb{R}^{d_{f}}\) and \(\mathbb{R}^{d_{g}}\), respectively. 2. **The Solution Operator**: In our exploration, we introduce \(T^{\dagger}:\mathcal{F}\rightarrow\mathcal{G}\), a mapping that is typically nonlinear. This mapping emerges as the solution operator for PDEs, playing a pivotal role in scientific computations. 3. **Data Generation**: For training purposes, models utilize PDE datasets constructed as \(\mathcal{D}=\{(\mathcal{F}_{k},\mathcal{G}_{k})\}_{1\leq k\leq D}\), where \(\mathcal{G}_{k}=T^{\dagger}(\mathcal{F}_{k})\). Given the inherent challenges in directly representing functions as inputs to neural networks, the functions are discretized using mesh generation algorithms [33] over domain \(\mathcal{D}\). We sample both input and output functions on a uniform grid, as it ensures compatibility with all selected solvers. 
For the input function \(\mathcal{F}_{k}\), we discretize it on the mesh \(\{x_{i}\in\Omega\}_{1\leq i\leq R}\), and the discretized \(\mathcal{F}_{k}\) is \(\{(x_{i},f_{ik})\}_{1\leq i\leq R}\), where \(f_{ik}=\mathcal{F}_{k}(x_{i})\). Similarly, for the solution function \(\mathcal{G}_{k}\), we discretize it on the mesh \(\{y_{i}\in\Omega\}_{1\leq i\leq R}\), and the discretized \(\mathcal{G}_{k}\) is \(\{(y_{i},g_{ik})\}_{1\leq i\leq R}\), where \(g_{ik}=\mathcal{G}_{k}(y_{i})\). It is worth noting that models such as POD-DeepONet and SNO utilize only the function values for representation, excluding grid locations from the model input. 4. **Objective**: The overarching goal for each model is to craft an approximation of \(T^{\dagger}\). This is achieved by developing a parametric mapping, denoted as \(T:\mathcal{F}\times\Theta\rightarrow\mathcal{G}\) or, in an equivalent form, \(T_{\theta}:\mathcal{F}\rightarrow\mathcal{G}\), where \(\theta\in\Theta\). This mapping operates within a bounded parameter space, \(\Theta\). 5. **Metric**: Evaluating the efficacy of the parametric mapping involves comparing its outputs, \(T_{\theta}(\mathcal{F}_{k})=\{\widetilde{g}_{ik}\}_{1\leq i\leq R}\), with the actual data, aiming to minimize the relative L2 loss, given by: \[\min_{\theta\in\Theta}\frac{1}{D}\sum_{k=1}^{D}\frac{\|\{\widetilde{g}_{ik}\}_{1\leq i\leq R}-\{g_{ik}\}_{1\leq i\leq R}\|_{2}^{2}}{\|\{g_{ik}\}_{1\leq i\leq R}\|_{2}^{2}},\] (1) Here, \(R\) denotes the function discretization parameter.
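As a concrete reference for Eq. (1), the following is a minimal sketch of the per-sample relative L2 error used throughout the benchmark; the grid size and the toy fields are assumptions for illustration only.

```python
import numpy as np

def relative_l2(pred, true):
    """Relative L2 error between a predicted and a ground-truth field,
    each given as an array of function values on the evaluation mesh."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

# Toy check on a 47x47 grid (the Darcy resolution used later in the paper).
rng = np.random.default_rng(0)
true = rng.standard_normal((47, 47))
pred = true + 0.05 * rng.standard_normal((47, 47))
print(f"relative L2 error: {relative_l2(pred, true):.4f}")  # roughly 0.05
```

Averaging this quantity over the \(D\) test samples gives the scores reported in the tables below (scaled by \(10^{2}\)).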
## 3 Model Architectures **Standard Neural Network Architectures:** This study encompasses a broad spectrum of architectures, as illustrated in Figure 2. The Feed-Forward Neural Network (FNN) serves as the foundational component, distinguished by its pointwise configuration. Prevalent CNN-based architectures like UNet, ResNet, and cGAN are also incorporated. UNet, delineated in [26], employs a U-shaped encoder-decoder design augmented by skip connections, facilitating the capture of both granular and abstract features. ResNet, described in [11], consists of a series of residual blocks and is commonly used in computer vision tasks [31]. Conditional Generative Adversarial Networks (cGAN), introduced in [21], are an evolution of the GAN framework, facilitating conditional generation via the incorporation of label information in both the generator and the discriminator.

Figure 2: An overview of the various models being benchmarked and the relationship between them. The term 'pod-basis' denotes the basis of the output function, derived directly from proper orthogonal decomposition, as opposed to being learned through a neural network.

**Deep Operator-Based Regression Models:** Neural operators represent a novel ML paradigm, predominantly employed in scientific machine learning to decipher PDEs. These operators discern mappings between infinite-dimensional Euclidean spaces, relying solely on data and remaining agnostic to the underlying PDE. Within this study, we delve into deep operator regression models. DeepONet bifurcates into two sub-networks: the branch net, which encodes the input function at fixed sensor locations, and the trunk net, encoding solution function locations [18]. The solution emerges from the inner product of the outputs from these nets. In POD-DeepONet, the bases are determined by executing proper orthogonal decomposition (POD) on training data, replacing the self-learned basis of output functions [19]. This POD basis forms the trunk net, leaving only the branch net as the trainable component, which discerns the coefficients of the POD basis.

**Frequency-Based Operators:** Frequency-based solvers like FNO employ a finite-dimensional parameterization using truncated Fourier modes [16]. By integrating this with an integral operator restricted to convolution, instantiated via a linear transformation in the Fourier domain, the FNO operator is conceived. WNO, or Wavelet Neural Operator, amalgamates the prowess of wavelets in time-frequency localization with an integral kernel. By learning the kernel in the wavelet domain, convolution operates on wavelet decomposition coefficients rather than direct physical-space convolution [32]. SNO, the Spectral Neural Operator, addresses the often-overlooked aliasing error in the Fourier Neural Operator. By representing both input and output functions using coefficients in truncated Fourier or Chebyshev series, SNO offers an aliasing-free approach [7]. Any transformation between these coefficients can be executed using neural networks, and methods employing these series are termed spectral neural operators. In their approach, a straightforward feed-forward neural network architecture in the complex domain is utilized.

**Transformer Operators:** GNOT introduces the Heterogeneous Normalized (linear) Attention (HNA) block and a geometric gating mechanism, specifically tailored for enhanced performance on PDE datasets [9]. This architecture effectively performs a soft domain decomposition [10], treating each decomposed domain independently and subsequently integrating them using a mixture-of-experts approach to predict the underlying truth. This design allows GNOT to serve as a versatile operator adept at handling a variety of PDE types. In contrast, the OFormer model builds upon the seminal work presented in [34]. It incorporates random Fourier projection to counteract spectral bias, thereby enhancing its efficacy on PDEs [15].

## 4 Datasets Here, we briefly describe the 8 datasets used in the present work. While previous approaches have mostly focused on fluid datasets, here we present 4 datasets on fluid flow and 4 on the deformation of solids; for complete dataset details, refer to A.1 and A.2. 1. **Burgers**: This dataset models the one-dimensional flow of a viscous fluid. The input is the fluid's initial velocity distribution at time \(t=0\), and the output is the fluid's velocity at a time \(t>0\) [30]. 2. **Darcy**: The Darcy Flow dataset describes the steady-state flow of a fluid through a porous medium in two dimensions. The input is the spatial distribution of the medium's resistance to flow (viscosity), and the output is the fluid's velocity distribution across the domain at steady state [30]. 3. **Navier Stokes**: This dataset models the time evolution of a 2D viscous, incompressible fluid. The input includes the fluid's initial swirling motion (vorticity) and external forces acting on the fluid. The output is the fluid's velocity distribution over a specified time period [30]. 4. **Shallow Water**: The shallow-water equations simulate the behavior of water that flows over a shallow surface in 2D. The input consists of the initial water depth and velocity distribution, and the output predicts the water flow dynamics in response to gravitational forces and varying underwater terrain (bathymetry) [30]. 5. **Stress**: This dataset models the stress distribution in a 2D binary composite material subjected to mode-I tensile loading.
The input is the material microstructure (distribution of two materials), and the output is the stress field (Stress) distribution of the digital composite [20]. 6. **Strain**: The strain dataset describes the deformation of a 2D binary composite material subjected to mode-I tensile loading. The input is the material microstructure and the output is the resulting strain fields (Strain) [20]. 7. **Shear**: Part of the mechanical MNIST collection, this dataset simulates the deformation of a heterogeneous material block when forces are applied parallel to its surface (Shear). The input is the material microstructure, and the output captures element-wise displacements subjected to shear loading [14]. 8. **Biaxial**: Another subset of the mechanical MNIST experiments, this dataset models the material's response when stretched equally in two perpendicular directions (equibiaxial loading). The input is the material microstructure, and the output records the full field displacement under Biaxial stretching [14]. ## 5 Benchmarking Results We present the results of rigorous experimentation on PDE solvers across six tasks, each designed to showcase the unique capabilities and strengths of the models. The diversity of the selected PDEs, sourced from [30], [14], and [20], encompasses both time-dependent and time-independent challenges, capturing the intrinsic computational complexity inherent to these tasks. Specifically, the experiments conducted on novel mechanical datasets not previously encountered by the solvers, offer invaluable insights for the broader scientific community. In alignment with established experimental protocols, the dataset was split as follows: \(\sim 80\%\) for training, \(\sim 10\%\) for validation, and \(\sim 10\%\) for testing. Our ensemble training methodology ensured a level playing field for each operator by defining a hyperparameter range and selecting the best subset for experimentation. Model optimization was achieved using the Adam [12] and AdamW [17] optimizers. Depending on the specific task, we employed either step-wise or cycle learning rate scheduling [28], with the optimal learning rate chosen from the set \(\sim\{10^{-1},10^{-2},10^{-3},10^{-4},10^{-5}\}\). The training was conducted under optimal hyperparameter configurations, introducing variability through distinct random seeds and data splits. All experiments adhered to a fixed batch size of 20 and were executed on 1 \(\sim\) 8 NVIDIA A6000 GPUs, with memory capacities of 48 GBs. To ensure fairness and accuracy in results, each experiment was replicated thrice with different seeds. We report the mean and deviation in Relative L2 Error. ### Accuracy Table 1 shows the performance of the models on the 8 datasets. FNO architecture stands out on all datasets, consistently delivering results among the best three. Its strength lies in its transformation in the frequency space. By capturing and transforming the lower frequencies present in the data, the FNO can approximate the solution operators of scientific PDEs. This approach, which uses the integral kernel in the Fourier space, facilitates a robust mapping between input and output function spaces, making it particularly adept at handling the complexities of the datasets in this study. Following FNO, GNOT, employing a mixture-of-experts approach, showcases exemplary performance on most (6/8) datasets. Its unique soft domain decomposition technique divides the problem into multiple scales, allowing it to capture diverse features of the underlying PDE. 
Each expert or head in the model focuses on a different aspect of the PDE, and their combined insights lead to a comprehensive understanding, especially for challenging datasets like Shear and Biaxial. The OFormer architecture, which employs an innovative approach to solving spatio-temporal PDEs, exhibits the best results on the Navier-Stokes dataset. By unrolling in the time dimension and initiating with a reduced rollout ratio, it efficiently forwards the time-step dynamics in the latent space. This method conserves significant space during training on time-dependent datasets. It also significantly enhances the approximation when tested on one of the most complex datasets in the study, the time-dependent Navier-Stokes PDE.

\begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline Models & **Burgers** & **Darcy** & **Navier Stokes** & **Shallow Water** & **Stress** & **Strain** & **Shear** & **Biaxial** \\ \hline FNN & 5.583\(\pm\)1.46 & 3.47\(\pm\)0.14 & 34.77\(\pm\)0.19 & 2.241\(\pm\)0.06 & 25.69\(\pm\)0.39 & 23.09\(\pm\)0.76 & 11.11\(\pm\)0.00 & 36.00\(\pm\)0.01 \\ ResNet & 11.37\(\pm\)1.20 & 5.14\(\pm\)0.23 & 29.52\(\pm\)0.14 & 0.257\(\pm\)0.00 & 20.05\(\pm\)0.19 & 14.64\(\pm\)0.31 & 3.02\(\pm\)0.00 & 13.58\(\pm\)0.07 \\ UNet & 30.87\(\pm\)0.200 & 2.10\(\pm\)0.08 & 24.02\(\pm\)0.05 & 0.295\(\pm\)0.007 & 10.57\(\pm\)0.19 & 9.05\(\pm\)0.35 & 7.09\(\pm\)0.04 & 16.63\(\pm\)2.30 \\ cGAN & 34.906\(\pm\)0.006 & 8.15\(\pm\)0.14 & 24.09\(\pm\)0.08 & 0.291\(\pm\)0.02 & 6.66\(\pm\)0.94 & 6.21\(\pm\)0.30 & 5.63\(\pm\)0.00 & 15.74\(\pm\)1.00 \\ FNO & 0.160\(\pm\)0.004 & 1.08\(\pm\)0.06 & 14.13\(\pm\)0.34 & 0.128\(\pm\)0.18 & 6.08\(\pm\)0.18 & 5.61\(\pm\)0.28 & 2.25\(\pm\)1.14 & 7.40\(\pm\)1.51 \\ WNO & 7.320\(\pm\)0.37 & 2.23\(\pm\)0.14 & 37.08\(\pm\)1.23 & 0.572\(\pm\)0.00 & 17.24\(\pm\)0.06 & 12.05\(\pm\)0.26 & 4.37\(\pm\)0.00 & 21.22\(\pm\)2.26 \\ SNO & 0.4623\(\pm\)0.47 & 8.55\(\pm\)0.13 & 98.64\(\pm\)0.25 & 94.89\(\pm\)0.00 & 51.31\(\pm\)0.01 & 62.34\(\pm\)1.17 & 4.37\(\pm\)0.00 & 21.93\(\pm\)0.07 \\ DeepONet & 10.561\(\pm\)1.192 & 4.27\(\pm\)0.24 & 55.48\(\pm\)0.16 & 8.602\(\pm\)0.43 & 24.59\(\pm\)0.09 & 23.75\(\pm\)0.09 & 28.55\(\pm\)0.18 & 8.28\(\pm\)0.37 \\ POD-DeepONet & 3.990\(\pm\)0.04 & 3.45\(\pm\)0.04 & 33.37\(\pm\)1.30 & 1.50\(\pm\)0.16 & 29.63\(\pm\)0.52 & 18.51\(\pm\)1.17 & 4.14\(\pm\)0.04 & 36.00\(\pm\)0.00 \\ OFormer & 0.165\(\pm\)0.15 & 3.21\(\pm\)0.06 & 10.97\(\pm\)0.38 & 6.597\(\pm\)0.39 & 27.33\(\pm\)0.28 & 25.08\(\pm\)1.36 & 41.75\(\pm\)0.19 & 61.66\(\pm\)0.40 \\ GNOT & 0.677\(\pm\)0.01 & 2.04\(\pm\)0.05 & 23.73\(\pm\)0.97 & 0.102\(\pm\)0.007 & 13.02\(\pm\)0.01 & 9.99\(\pm\)0.02 & 0.43\(\pm\)0.02 & 0.71\(\pm\)0.04 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of different models across diverse datasets from distinct domains. The Relative L2 Error, expressed as (\(\times 10^{-2}\)), is presented as the evaluation metric. Lower scores denote better performance. The optimal outcomes are highlighted in bold and dark blue, followed by the second-best in dark blue, and the third-best in light blue.

Interestingly, most models, with the notable exception of GNOT, struggle to accurately learn the underlying PDE for the Biaxial and Shear datasets. Here, the simpler FNN architecture demonstrates significant proficiency in learning these datasets. Notably, architectures like cGAN, originally designed for 2D image data analysis with its U-Net encoder, demonstrate impressive performance across tasks. This underscores the versatility of such architectures, even when they aren't explicitly designed as operators.
### Robustness to Noise In practical applications, it is common to encounter noise in measurements. To understand how various neural operators handle such real-world challenges, we simulated conditions with noisy data. During our testing phase, we intentionally introduced corrupted input function data to each model. The goal was to see how well these models could predict the ground truth amidst this noise. Figure 3 shows the performance of the models on noisy data.

Figure 3: Robustness Analysis Against Noise: Performance metrics, in terms of Relative L2 Error, are presented for models subjected to random Gaussian noise. The evaluation encompasses the Darcy dataset (left) and the Stress dataset (right). The right diagram provides a detailed comparison of the most noise-resilient models on the Stress dataset, specifically SNO, OFormer, and ResNet.

Transformer-based architectures have shown commendable performance on the Darcy dataset. Even when noise is introduced, these models continue to perform well. However, their resilience is tested when faced with the Stress dataset. In scenarios where they already find it challenging to learn the underlying PDEs, the addition of noise exacerbates their performance issues, causing a noticeable decline in accuracy. On the other hand, the spectral neural operator SNO shows superior robustness to noisy data. While its performance in a noise-free environment is far from the best, it performs remarkably well when exposed to noisy data, especially on the Stress dataset. This resilience can be attributed to its unique approach: unlike other frequency-based methods that transition between the time and frequency domains, SNO exclusively processes data in the frequency domain. This design choice allows it to effectively filter out noise, identifying it as a high-frequency disturbance, before it even begins its prediction process.

### Data Efficiency For the data-efficiency experiments, we utilized the Darcy dataset with 1700 samples of \(47\times 47\) dimensions. To assess the data efficiency of the models, we trained all models on reduced subsets: 25% (425 samples) and 50% (850 samples) of the original dataset, while maintaining the same testing and validation datasets. The exceptional performance of frequency-based methods, notably FNO and WNO, even with limited data, is rooted in their operation within the frequency domain (see Table 2). The notable capability of these methods to capture the essential dynamics of the underlying PDEs through the lower frequencies present in the data enables data-efficient learning, a crucial feature for realistic data where the number of observations may be limited. Transformer-based neural operator architectures have demonstrated potential in approximating operators. However, their efficacy diminishes when data is sparse. GNOT, which typically excels with a rich dataset, struggles to outperform even basic neural network architectures in a data-limited scenario. This trend underscores the inherent data dependency of transformer architectures, highlighting the challenges faced by many models, except frequency-based operators, when trained on limited data.
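A minimal sketch of the two evaluation protocols just described - input corruption with Gaussian noise and training-set subsampling - assuming a generic `model` callable and batched 2D array fields; the noise scale and seeding are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def evaluate_under_noise(model, inputs, targets, noise_std):
    """Mean relative L2 error when Gaussian noise corrupts the inputs.
    `model` maps a batch of input fields to a batch of predicted fields."""
    rng = np.random.default_rng(0)
    noisy = inputs + noise_std * rng.standard_normal(inputs.shape)
    preds = model(noisy)
    num = np.linalg.norm(preds - targets, axis=(1, 2))
    den = np.linalg.norm(targets, axis=(1, 2))
    return float(np.mean(num / den))

def subsample_training_set(inputs, targets, fraction, seed=0):
    """Draw a reduced training subset (e.g. 25% or 50%), as in the
    data-efficiency experiments; validation and test sets stay fixed."""
    rng = np.random.default_rng(seed)
    n = int(fraction * len(inputs))
    idx = rng.permutation(len(inputs))[:n]
    return inputs[idx], targets[idx]
```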
### Zero-shot Super-resolution Directly approximating the solution operator offers a theoretical advantage: the potential for a mesh-invariant continuous dynamical system. Once trained, such a system can ideally maintain accuracy even when applied to much larger systems than those it was trained on. This capability is termed "zero-shot super-resolution." Note that FNO and GNOT enable zero-shot super-resolution without any modifications. Other models, such as SNO and DeepONet, upon closer examination, cannot be applied to zero-shot super-resolution in a straightforward manner. Instead, they lean on certain adjustments and workarounds to achieve the desired results. While these modifications might enable super-resolution in practice, they diverge from the concept of zero-shot super-resolution from an architectural perspective. Accordingly, for our evaluation, we consider only FNO and GNOT. We trained both FNO and GNOT on the Darcy dataset at a resolution of \(47\times 47\). We then tested their performance on higher resolutions: \(64\times 64\) and \(128\times 128\). As seen in Table 4, both models exhibited a rapid decline in performance as the resolution increased. While GNOT fared slightly better, its results were still not up to the mark.

\begin{table} \begin{tabular}{l|c c} \hline \hline Darcy & \multicolumn{2}{c}{Models} \\ \hline Resolution & FNO & GNOT \\ \hline \(47\times 47\) & \(\mathbf{1.08\pm 0.06}\) & \(\mathbf{2.04\pm 0.05}\) \\ \(64\times 64\) & \(60.50\pm 5.49\) & \(55.32\pm 5.65\) \\ \(128\times 128\) & \(59.99\pm 5.48\) & \(55.42\pm 5.68\) \\ \hline \hline \end{tabular} \end{table} Table 4: Zero-shot super-resolution. Comparing various resolutions (left column) with the corresponding model performance (right columns). The original training resolution and its associated performance are highlighted in bold.
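A sketch of how such a zero-shot evaluation might be wired up, assuming the inputs must first be re-discretized onto the finer grid; note that FNO-style models can consume finer-grid inputs directly, so the bilinear resampling below is only a generic stand-in, not the paper's protocol.

```python
import numpy as np

def upsample_bilinear(field, new_size):
    """Bilinearly resample a square 2D field to new_size x new_size; used
    here only to produce higher-resolution inputs for a trained operator."""
    old = field.shape[0]
    xs = np.linspace(0, old - 1, new_size)
    x0 = np.clip(xs.astype(int), 0, old - 2)
    w = xs - x0
    rows = field[x0] * (1 - w)[:, None] + field[x0 + 1] * w[:, None]
    return rows[:, x0] * (1 - w)[None, :] + rows[:, x0 + 1] * w[None, :]

def zero_shot_eval(model, coarse_inputs, fine_targets, new_size):
    """Score a model trained at 47x47 against fine-grid ground truth
    using the same relative L2 error as elsewhere in the benchmark."""
    errs = []
    for a, u in zip(coarse_inputs, fine_targets):
        pred = model(upsample_bilinear(a, new_size))
        errs.append(np.linalg.norm(pred - u) / np.linalg.norm(u))
    return float(np.mean(errs))
```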
\begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline \hline Darcy & FNN & ResNet & UNet & cGAN & FNO & WNO & SNO & DeepONet & POD-DeepONet & OFormer & GNOT \\ \hline 50\% & 3.95\(\pm\)0.04 & 5.20\(\pm\)0.02 & 2.10\(\pm\)0.01 & 2.54\(\pm\)0.01 & 1.32\(\pm\)0.04 & 2.37\(\pm\)0.05 & 24.70\(\pm\)0.12 & 6.15\(\pm\)0.04 & 4.17\(\pm\)0.02 & 3.22\(\pm\)0.00 & 2.70\(\pm\)0.13 \\ 100\% & 3.47\(\pm\)0.14 & 5.14\(\pm\)0.03 & 2.20\(\pm\)0.00 & 1.85\(\pm\)0.01 & 1.08\(\pm\)0.06 & 2.23\(\pm\)0.01 & 8.55\(\pm\)0.03 & 4.72\(\pm\)0.01 & 3.43\(\pm\)0.01 & 3.21\(\pm\)0.00 & 2.04\(\pm\)0.05 \\ \hline \hline \end{tabular} \end{table} Table 2: Data-Efficiency Analysis: The Relative L2 Error (\(\times 10^{-2}\)) is reported when trained with reduced subsets of the training dataset (left column). The testing and validation datasets remain consistent across all experiments.

### Out-of-distribution Generalization The equations for Stress and Strain are intrinsically linked, differing primarily by the coefficient of elasticity, commonly known as Young's modulus. Given that our training and testing processes utilize normalized data, it is reasonable to anticipate that models trained on the Stress dataset should be adept at predicting strain in the material microstructures, and vice versa. This expectation is particularly true for neural operators, which are designed to grasp the underlying partial differential equations (PDEs) governing such relationships.

Table 3: Out-of-distribution evaluation of all the models: the Relative L2 Error (\(\times 10^{-2}\)) obtained when the Stress and Strain datasets are interchanged between training and testing.

Table 3 shows the OOD evaluation of all the models. Interestingly, for SNO, the error on the strain test dataset remains consistent, whether it was trained on the strain dataset or the stress dataset. The same holds true when tested on the stress dataset. This consistency underscores SNO's ability to learn the underlying PDE.
In stark contrast, other models do not exhibit this adaptability. Their accuracy levels decline when the testing set is swapped, indicating a potential limitation in their ability to generalize across closely related tasks.

### Time Efficiency Neural operators are gaining traction over traditional numerical solvers due to their promise of rapid inference once trained. For this assessment, we benchmarked the various models on two criteria: the duration required to train on the Darcy dataset and the time needed to predict output function values for 200 test samples, each mapped on a uniform \(47\times 47\) grid. As anticipated, the FNN, with its straightforward architecture, stands out by requiring the least amount of time for both training and inference. However, when we delve into the other models, those based on deep operator regression methods show training durations on par with some of the complex but standard neural network architectures. For better visualization, see Figure 4.

Figure 4: Time Efficiency: We report the time taken by each model during training (left) and the inference time on the test set (right). Results were collected during training on the Darcy dataset.

The narrative shifts when we consider inference time, a pivotal metric in practical applications. While FNO is relatively efficient during training, it, along with the transformer-based models, takes a longer inference stride. Though all these models show promising performance on different metrics, inference-time efficiency remains a challenge for them. In stark contrast, most other models edge closer to offering real-time inference, highlighting the inherent time-complexity trade-offs one must consider when opting for a particular neural operator.

## 6 Concluding Insights The key insights drawn from this work are as follows. 1. **Operator and transformer**: Both FNO and GNOT emerge as the superior models across various metrics, suggesting that the architectural novelty in these models can indeed capture the maps between infinite-dimensional functional spaces. 2. **Spectral resilience to noise and OOD**: Despite its underwhelming performance, SNO exhibits extreme resilience against noise. Similarly, SNO demonstrates impressive results on out-of-distribution datasets as well. The merit lies in its singular Fourier and inverse Fourier transformations, mapping input to output entirely in the frequency domain. 3. **Attention alone is not enough**: OFormer, employing the attention-based transformer architecture, showcases notable advantages on the Navier-Stokes dataset. It also demonstrates commendable results on specific PDEs like Burgers and Darcy. However, a glaring limitation surfaces when these architectures are applied to other PDEs, whether of comparable complexity or even simpler ones: they fail to generalize. This shortcoming starkly contrasts with one of the primary advantages anticipated from data-driven PDE solvers: the capacity to discern the solution operator solely from data, independent of prior knowledge of the underlying PDE. 4. **Data-driven models work**: Surprisingly, the cGAN, a standard architecture for image tasks, excels in performance, even though it isn't inherently an operator. This prowess, however, wanes during cross-dataset evaluations, underscoring the importance of truly learning the underlying PDE rather than merely excelling on a given dataset. 5.
**Challenges with Shear and Biaxial Datasets**: The collective struggle of most operators with the Shear and Biaxial datasets underscores the importance of studying complex deformation patterns. Specifically, it points to clear and well-defined failure modes in operators, on which future work can focus. 6. **Time efficiency should be improved**: While the models give reasonable performance, they grapple with time efficiency. In particular, the best-performing models, such as the transformer-based architectures, are time-intensive during both training and inference; FNO is relatively swift in training but still intensive in inference. **Limitations and future work:** Although FNO and GNOT exhibit superior results, their inconsistent results in cross-dataset evaluations and zero-shot super-resolution raise the question of whether they are truly learning an approximate solution to the underlying PDE. Similarly, although resilient to noise and OOD, the internal neural network architecture of SNO remains largely unexplored and often yields subpar outcomes. Future endeavors leveraging SNO might pave the way to operators with improved robustness. The failure modes of operators on these datasets require further investigation to build more robust operators that can capture complex shear deformations. Finally, the inference time of the models requires improvement so that they can be applied to large-scale real-world problems.
2310.02359
Descriptive Discriminant Analysis of Multivariate Repeated Measures Data: A Use Case
Psychological research often focuses on examining group differences in a set of numeric variables for which normality is doubtful. Longitudinal studies enable the investigation of developmental trends. For instance, a recent study (Voormolen et al (2020), https://doi.org/10.3390/jcm9051525) examined the relation of complicated and uncomplicated mild traumatic brain injury (mTBI) with multidimensional outcomes measured at three- and six-months after mTBI. The data were analyzed using robust repeated measures multivariate analysis of variance (MANOVA), resulting in significant differences between groups and across time points, then followed up by univariate ANOVAs per variable as is typically done. However, this approach ignores the multivariate aspect of the original analyses. We propose descriptive discriminant analysis (DDA) as an alternative, which is a robust multivariate technique recommended for examining significant MANOVA results and has not yet been applied to multivariate repeated measures data. We provide a tutorial with annotated R code demonstrating its application to these empirical data.
Ricarda Graf, Marina Zeldovich, Sarah Friedrich
2023-10-03T18:35:08Z
http://arxiv.org/abs/2310.02359v1
# Descriptive Discriminant Analysis of Multivariate Repeated Measures Data: A Use Case ###### Abstract Psychological research often focuses on examining group differences in a set of numeric variables for which normality is doubtful. Longitudinal studies enable the investigation of developmental trends. For instance, a recent study (Voormolen et al, 2020, [https://doi.org/10.3390/jcm9051525](https://doi.org/10.3390/jcm9051525)) examined the relation of complicated and uncomplicated mild traumatic brain injury (mTBI) with multidimensional outcomes measured at three- and six-months after mTBI. The data were analyzed using robust repeated measures multivariate analysis of variance (MANOVA), resulting in significant differences between groups and across time points, then followed up by univariate ANOVAs per variable as is typically done. However, this approach ignores the multivariate aspect of the original analyses. We propose descriptive discriminant analysis (DDA) as an alternative, which is a robust multivariate technique recommended for examining significant MANOVA results and has not yet been applied to multivariate repeated measures data. We provide a tutorial with annotated R code demonstrating its application to these empirical data. **Keywords: descriptive discriminant analysis, MANOVA, multivariate repeated measures data, nonnormality, robustness, traumatic brain injury, variable ordering** In their study, Voormolen et al. (2020) examined the association between two patient categories formed by patients having experienced either complicated or uncomplicated mild traumatic brain injury (mTBI), respectively, and a multidimensional outcome based on data at three and six months after TBI. The multidimensional outcome comprised various clinical and psychological test scores. Data were obtained from the Collaborative European NeuroTrauma Effectiveness Research (CENTER-TBI) project (Maas et al, 2015, 2017). Voormolen et al. (2020) emphasized the need for research in the field due to high annual numbers of hospitalisations resulting from mild TBI. The longitudinal functional as well as cognitive differences in outcomes after uncomplicated and complicated mild TBI, respectively, were of particular interest since researchers have come to contradictory conclusions in that regard. Computed tomography is used as a standard examination tool for diagnosing the complication of mTBI (Williams et al, 1990), which nowadays allows precise diagnoses. The outcome comprised seven clinical and psychological scores: The physical component summary (PCS) and the mental component summary (MCS) of the 36-item Short Form (SF-36v2) Health Survey (Ware and Sherbourne, 1992) and the 37-item Quality of Life after Brain Injury (QOLIBRI) instrument (von Steinbuchel et al, 2010), which are both self-report questionnaires assessing generic and disease-specific health-related quality of life, respectively. 
Furthermore, the Glasgow Outcome Scale-Extended (GOSE) for measuring functional recovery after TBI (Jennett et al, 1981), the Post-traumatic Stress Disorder Checklist-5 (PCL-5), a 20-item self-report questionnaire measuring symptoms of Post-Traumatic Stress Disorder (PTSD) (Blevins et al, 2015), the Patient Health Questionnaire (PHQ-9), a nine-item self-report questionnaire evaluating depression symptoms experienced over the past two weeks (Kroenke and Spitzer, 2002), and the Generalized Anxiety Disorder questionnaire (GAD-7), a seven-item self-report questionnaire assessing anxiety symptoms experienced over the past two weeks (Spitzer et al, 2006), were included as outcome measures. The sample included patients who completed all outcomes at both time points (three and six months after TBI), i.e. 569 patients with uncomplicated and 535 patients with complicated mild TBI, respectively. For further information on the study design and study sample, the interested reader is referred to Voormolen et al. (2020). Summary statistics of the dataset are visualized in Figure 1.

Figure 1: Boxplots showing the summary statistics of each variable per group (uncomplicated and complicated mild traumatic brain injury) and time point (3 and 6 months after TBI) in the CENTER-TBI data. Abbreviations: mTBI = mild traumatic brain injury; SF-PCS = Short Form (36) Health Survey (physical component score); SF-MCS = Short Form (36) Health Survey (mental component score); PCL-5 = Posttraumatic Stress Disorder Checklist; PHQ-9 = Patient Health Questionnaire; GAD-7 = Generalized Anxiety Disorder questionnaire; QOLIBRI = Quality of Life after Brain Injury; GOSE = Glasgow Outcome Scale-Extended; 3mo = 3 months; 6mo = 6 months.

For statistical analysis, Voormolen et al. (2020) chose a multivariate repeated-measures approach (MANOVA-RM) to screen variables for differences between patient groups, time points, and possible interactions between these two effects (Friedrich et al, 2018, 2022), which was then followed up by multiple univariate analyses (ANOVA-RM), thus abandoning the analysis of the combined influence of multiple correlated variables and overlooking the advantage of a multivariate follow-up analysis. Follow-up questions resulting from significant MANOVA findings are: How can the multivariate results be interpreted? Which variables contribute to the group differences, and which time points matter the most? The usually recommended post-hoc technique to answer these questions is descriptive DA (e.g. Huberty and Morris, 1989; Maxwell, 1992; Huberty and Petoskey, 2000; Bird and Hadzi-Pavlovic, 2014), a robust multivariate method for assessing the relative contribution of each variable to group separation. We propose using descriptive DA also in the case of repeated measures and demonstrate its application to the empirical data set by Voormolen et al. (2020), providing the R code. ## 1 Multivariate Analysis of Variance (MANOVA) ### 1.1 MANOVA in Psychological Research Multivariate methods are of particular importance in educational and psychological research, where measurements of multiple (continuous) correlated outcome variables are typically compared among groups (categorical variables). There is usually no rationale for prioritizing single variables. Furthermore, variables are often measured by using scores corresponding to sums or averages of multiple measurements taken on a Likert scale. These scores are non-normally
distributed due to the confined number of possible values (Warner, 2013), requiring methods suitable for non-normally distributed data. MANOVA became available to applied researchers in the 1960s (Cooley and Lohnes, 1962; Cramer and Bock, 1966) and it is still one of the most commonly used statistical methods applied in the social sciences as well as other fields (Warne et al, 2012). On the other hand, appropriate multivariate follow-up methods seem to be relatively unknown and are rarely applied. Despite extensive methodological discussions, univariate follow-up techniques for examining significant MANOVA effects prevail. Table 1 gives an overview about some reviews of the social science literature that examined the frequency of uni- and multivariate post-hoc techniques for MANOVA. Separate univariate F tests were among the first methods recommended for the analysis of group differences (Cramer and Bock, 1966; Leary and Altmaier, 1980) and also described in textbooks (Stevens, 1996; Tabachnick and Fidell, 1996), but the methodological literature clearly began opposing the MANOVA-ANOVAs approach (Enders, 2003). Keselman et al. (1998) pointed out that in 84% of the cases where MANOVA was applied, conclusions were based on results obtained from separate univariate ANOVAs after Bonferroni correction for the type 1 error, an observation also made by Huang (2020). The only reason to conduct the MANOVA was in fact a perceived additional control for type 1 error as promoted in the aforementioned sources. Since MANOVA tests the null hypothesis of no difference among group means regarding the combined effect of several correlated variables, the result cannot be compared to multiple independent results obtained from univariate tests which ignore any dependencies between the variables. Due to the inherent characteristics of behavioral science variables, multiple ANOVAs will likely lead to redundant results (Huberty and Morris, 1989). Multivariate analyses techniques will increase the sensitivity of detecting a particular variables' impact on group separation that may not be detected in univariate analyses (Gnanadesikan and Kettenring, 1984) if the variables are correlated. Also, the idea of an additional type 1 error protection is a misconception since the type 1 error rates are only maintained under circumstances that are not given in these applications (Maxwell, 1992). In particular, the type 1 error rates will only be maintained in case the MANOVA null hypothesis either holds for all the variables (type 1 error will not exceed 5%), is completely false (type 1 error cannot occur) or is false for all but one variable. Due to the different nature of multivariate and univariate analysis methods, a significant MANOVA effect does not necessarily imply any significant ANOVA effect (Huberty and Morris, 1989). Since there is no theoretical support for the MANOVA-ANOVAs approach, other authors have also warned against it (Thompson, 1999; Huberty and Petoskey, 2000; Enders, 2003; Bird and Hadzi-Pavlovic, 2014). Their advice is that researchers should rather decide at the beginning of their study whether uni- or multivariate effects are of interest and follow through with this initial plan. Huberty and Smith (1982) first supported the idea of multivariate follow-up analyses of significant MANOVA effects using descriptive discriminant analysis (DDA). 
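The paper's tutorial is in R; for consistency with the other sketches in this collection, the following Python equivalent outlines the descriptive DA follow-up - fitting a linear discriminant function and ranking variables by structure coefficients. `X` and `y` are hypothetical placeholders for the stacked outcome scores and the mTBI group labels, not the CENTER-TBI data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_subjects, p*t) matrix stacking p outcome scores at t time points;
# y: group labels (e.g., uncomplicated vs. complicated mTBI).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 14))   # placeholder: 7 scores x 2 time points
y = rng.integers(0, 2, size=200)

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = X @ lda.scalings_           # discriminant function scores

# Structure coefficients: correlations of each variable with the
# discriminant scores, used in descriptive DA to rank contributions.
structure = np.array([np.corrcoef(X[:, j], scores[:, 0])[0, 1]
                      for j in range(X.shape[1])])
print("variables ordered by |structure coefficient|:",
      np.argsort(-np.abs(structure)))
```

With two groups there is a single discriminant function, so the ranking of absolute structure coefficients directly answers which variable-time-point combinations drive the group separation.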
Since then, further sources besides the literature reviews listed in Table 1 have encouraged researchers in the social sciences to use descriptive DA as a more appropriate approach to examine group differences when considering a combination of correlated variables (Sherry, 2006; Warne, 2014; Barton et al, 2016; Pituch and Stevens, 2016). Here, we would like to demonstrate the application of descriptive DA to non-normally distributed multivariate repeated measures data using a real data example.

### 1.2 Repeated-Measures MANOVA

In general, we would like to analyze measurements of \(p\) variables taken at \(t\) time points for each individual \(j=1,\ldots,n_{i}\) (where \(\sum_{i=1}^{g}n_{i}=N\)) in each group \(i=1,\ldots,g.\) The vector \(\mathbf{X}_{ij}=(\mathbf{X}_{ij1}^{T},\ldots,\mathbf{X}_{ijt}^{T})^{T}\in \mathds{R}^{pt\times 1}\) contains the observations of the \(j\)th individual in the \(i\)th group, where \(\mathbf{X}_{ijk}\in\mathds{R}^{p\times 1}\) is the vector of observations at time \(k=1,\ldots,t.\) The within-group covariance matrices are denoted as \(\boldsymbol{\Sigma}_{i}\in\mathds{R}^{pt\times pt}\) and the group means as \(\boldsymbol{\mu}_{i}=(\boldsymbol{\mu}_{i1}^{T},\ldots,\boldsymbol{\mu}_{it}^ {T})^{T}\in\mathds{R}^{pt}.\) The group variable \(i=1,\ldots,g\) represents the between-subject factor and the time variable \(k=1,\ldots,t\) the within-subject factor. Friedrich and Pauly (2018) propose a method for potentially nonnormally distributed data with unequal group covariance matrices for the general MANOVA model

\[\mathbf{X}_{ij}=\boldsymbol{\mu}_{i}+\boldsymbol{\varepsilon}_{ij}, \tag{1}\]

which is suitable for a multivariate outcome at a single time point. A factorial structure for multivariate repeated measures data can be incorporated in this model and is described in the appendix of Voormolen et al. (2020). It is implemented in the R package MANOVA.RM (Friedrich et al, 2022). This MANOVA model extended to multivariate repeated measures data is also suitable for nonnormally distributed data and assumes that group means \(\boldsymbol{\mu}_{i}\) and within-group covariance matrices \(\boldsymbol{\Sigma}_{i}\) exist. The within-group covariance matrices may have any structure, and they may also be dissimilar (heteroscedastic). The null hypotheses are formulated with respect to the group means \(\boldsymbol{\mu}=(\boldsymbol{\mu}_{1}^{T},\ldots,\boldsymbol{\mu}_{g}^{T})^ {T}\in\mathds{R}^{gpt}\):

\[H_{0}:\mathbf{T}\boldsymbol{\mu}=0 \tag{2}\]

where \(\mathbf{T}\) is a suitable contrast matrix. In particular, the main and interaction effects are tested with the contrast matrices:

\[\mathbf{T}_{G}=\mathbf{P}_{g}\otimes\tfrac{1}{t}\mathbf{J}_{t}\otimes\mathbf{I}_{p} \quad\text{(no group effect)}\]
\[\mathbf{T}_{T}=\tfrac{1}{g}\mathbf{J}_{g}\otimes\mathbf{P}_{t}\otimes\mathbf{I}_{p} \quad\text{(no time effect)}\]
\[\mathbf{T}_{GT}=\mathbf{P}_{g}\otimes\mathbf{P}_{t}\otimes\mathbf{I}_{p} \quad\text{(no interaction between group and time)}\]

where \(\mathbf{I}_{p}\) is the \(p\times p\) identity matrix, \(\mathbf{J}_{t}=\mathbf{1}\mathbf{1}^{T}\) is the \(t\times t\) matrix of ones, \(\mathbf{P}_{t}=\mathbf{I}_{t}-\frac{1}{t}\mathbf{J}_{t}\) is the \(t\times t\) centering matrix (\(\mathbf{J}_{g}\) and \(\mathbf{P}_{g}\) are defined analogously), and \(\otimes\) denotes the Kronecker product. The modified ANOVA-type statistic (MATS) proposed by Friedrich and Pauly (2018) can be applied even in the case of a singular covariance matrix \(\widehat{\boldsymbol{\Sigma}}_{N}=\operatorname{diag}(N\widehat{\boldsymbol{\Sigma}}_{i}/n_{i})\), and it is scale-invariant.
It is given by:

\[Q_{N}=N\overline{\mathbf{X}}_{\boldsymbol{\cdot}}^{T}\mathbf{T}(\mathbf{T} \widehat{\mathbf{D}}_{N}\mathbf{T})^{+}\mathbf{T}\overline{\mathbf{X}}_{ \boldsymbol{\cdot}} \tag{3}\]

where \(\overline{\mathbf{X}}_{\boldsymbol{\cdot}}\) denotes the vector of group mean vectors, \((\cdot)^{+}\) the Moore-Penrose inverse, and \(\widehat{\mathbf{D}}_{N}=\operatorname{diag}(N/n_{i}\cdot\hat{\sigma}_{i,ss})\) the diagonal matrix containing the diagonal elements \(\hat{\sigma}_{i,ss}\), \(s=1,\ldots,pt\), of the estimated within-group covariance matrices \(\widehat{\boldsymbol{\Sigma}}_{i}\), scaled by \(N/n_{i}\). Since the limiting distribution of \(Q_{N}\) depends on unknown quantities, \(p\)-values are obtained by resampling, e.g. the parametric bootstrap used throughout this paper.

## 2 Descriptive Discriminant Analysis

Descriptive DA assesses the relative contribution of each of a set of correlated variables to the separation of two or more groups. The within-group covariance matrices \(\boldsymbol{\Sigma}_{i}\in\mathds{R}^{pt\times pt}\) are assumed to be equal, i.e. \(\boldsymbol{\Sigma}_{i}=\boldsymbol{\Sigma}_{W}\) for all \(i\in\{1,\dots,g\}\) (homoscedasticity), and non-singular (absence of multicollinearity). The between-group covariance matrix is denoted by \(\boldsymbol{\Sigma}_{B}\). In its original version, descriptive DA considers measurements taken at a single time point (\(t=1\)) and determines a linear function of the \(p\) measurements (\(X_{ij11}:=X_{1}\in\mathds{R},\dots,X_{ij1p}:=X_{p}\in\mathds{R}\) for an arbitrary but fixed combination of \(i\in\{1,\dots,g\}\) and \(j\in\{1,\dots,n_{i}\}\), i.e. \(N\) vectors \((X_{1},\dots,X_{p})\) exist), also called the Fisher discriminant function, which maximizes the ratio of between- to within-group variation (Fisher, 1936; Venables and Ripley, 2002):

\[d=\lambda_{1}X_{1}+\dots+\lambda_{p}X_{p},\quad s.t.\quad\max_{\boldsymbol{ \lambda}\in\mathds{R}^{p}}\frac{\boldsymbol{\lambda}^{T}\boldsymbol{\Sigma}_{B }\boldsymbol{\lambda}}{\boldsymbol{\lambda}^{T}\boldsymbol{\Sigma}_{W} \boldsymbol{\lambda}} \tag{5}\]

where \(\boldsymbol{\lambda}\in\mathds{R}^{p}\) represents the nonstandardized (or raw) DFCs and \(d\) the Fisher discriminant scores, i.e. projections of the original measurements onto the linear discriminant function. The vector of DFCs best separating two groups (\(i_{1}\) and \(i_{2}\)) can be estimated by

\[\widehat{\boldsymbol{\lambda}}=\widehat{\boldsymbol{\Sigma}}_{P}^{-1}( \widehat{\boldsymbol{\mu}}_{i_{1}}-\widehat{\boldsymbol{\mu}}_{i_{2}}),\quad \text{where}\quad\widehat{\boldsymbol{\Sigma}}_{P}=\frac{(n_{i_{1}}-1) \widehat{\boldsymbol{\Sigma}}_{i_{1}}+(n_{i_{2}}-1)\widehat{\boldsymbol{ \Sigma}}_{i_{2}}}{n_{i_{1}}+n_{i_{2}}-2} \tag{6}\]

Standardized DFCs are products of each variable's nonstandardized (or raw) DFCs with the respective standard deviation, and represent the variable's association with the discriminant function holding the effects of all other variables constant. Variables associated with the largest absolute coefficient contribute most to the group difference. Interpretability of the DFCs requires a sufficiently large ratio of sample size to number of variables (Huberty, 1975; Barcikowski and Stevens, 1975) and the absence of multicollinearity (Wilkinson, 1975; Borgen and Seling, 1978; Finn, 1978). Since confidence intervals for the coefficients are not available, only the obtained point estimates can be interpreted (Pituch and Stevens, 2016). The use of standardized DFCs has been recommended (Pituch and Stevens, 2016; Rencher, 1992; Finch, 2009; Finch and Laking, 2008) in contrast to structure coefficients, which are not recommended by these authors since they reflect only univariate information and frequently result in incorrect decisions.
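To make the estimation in Eq. (6) concrete, the following minimal R sketch computes nonstandardized and standardized DFCs for two groups of simulated data. It assumes only base R plus MASS for simulating correlated observations; the group sizes, means, and covariance are illustrative, not part of the CENTER-TBI analysis.

```
set.seed(1)
library(MASS)  # for mvrnorm

p  <- 3; n1 <- 60; n2 <- 50
Sg <- matrix(0.4, p, p); diag(Sg) <- 1            # common within-group covariance
X1 <- mvrnorm(n1, mu = c(0, 0, 0),   Sigma = Sg)
X2 <- mvrnorm(n2, mu = c(1, 0.5, 0), Sigma = Sg)

# pooled within-group covariance matrix (Eq. 6)
Sp <- ((n1 - 1) * cov(X1) + (n2 - 1) * cov(X2)) / (n1 + n2 - 2)

# nonstandardized (raw) discriminant function coefficients (Eq. 6)
lambda <- solve(Sp) %*% (colMeans(X1) - colMeans(X2))

# standardized DFCs: raw coefficients times pooled within-group SDs
lambda_std <- lambda * sqrt(diag(Sp))

# Fisher discriminant scores (Eq. 5): projections of the observations
d <- rbind(X1, X2) %*% lambda
```

Up to an arbitrary scaling factor, the raw coefficients computed this way agree with the `scaling` component returned by `MASS::lda` for two groups.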
In Figure 2 we have visualized a simple 2D example in order to explain the basic terms and the concept of descriptive discriminant analysis. We used a small subset of observations from "The Kentucky Inventory of Mindfulness Skills" dataset (Baer et al, 2004, 2012). We extracted scores on the two scales "describing" and "acting" (\(p=2\)) of six male (purple) and ten female (yellow) participants for whom observations are made at a single time point (\(t=1\)). The original observations are shown as points in bold. By determining the optimal discriminant function (equation 5 or 6, respectively), we obtain the nonstandardized vector of DFCs, \(\boldsymbol{\lambda}\), and the projections \(d\), which are shown as circles. The Fisher discriminant function is the line passing through these projections. In this case, the group covariance matrices, \(\widehat{\boldsymbol{\Sigma}}_{\text{Male}}\) and \(\widehat{\boldsymbol{\Sigma}}_{\text{Female}}\) (shown as ellipses with solid lines), are not equal; rather, the correlation between the two variables has different signs. The pooled covariance matrix, \(\widehat{\boldsymbol{\Sigma}}_{P}\) (shown as an ellipse with dashed lines), which is the weighted average of the within-group covariances, and the group means (indicated by crosses) are also shown.

Figure 2: Illustrative 2D example of Fisher (descriptive) discriminant analysis: The actual data samples are shown as points in bold, the projected samples (Fisher discriminant function scores) are shown as circles, and the actual (unequal) group covariances, \(\widehat{\boldsymbol{\Sigma}}_{\mathrm{Male}}\) and \(\widehat{\boldsymbol{\Sigma}}_{\mathrm{Female}}\), are shown as solid ellipses. From the pooled covariance matrix \(\widehat{\boldsymbol{\Sigma}}_{P}\), indicated as a dashed ellipse, and the actual group means, indicated as crosses, the Fisher discriminant function is computed (solid line).

Fisher discriminant analysis was developed for multivariate data measured at a single time point (Fisher, 1936), has also been applied to univariate repeated measures data (Sajobi et al, 2012), and can be applied to factorial data, where each observation is assigned to exactly one of the categories of each factor (Warner, 2013). To the best of our knowledge, descriptive discriminant analysis has not yet been applied to multivariate repeated measures data, where measurements of the same variable at different time points are regarded as different but correlated variables.

## 3 Results and Discussion: Application of MANOVA-RM and DDA to the CENTER-TBI Data

In this section, we replicate the repeated measures (M)ANOVA results for the CENTER-TBI data as presented in Voormolen et al. (2020) and discuss discriminant analysis as an alternative to multiple independent repeated measures ANOVAs.

### 3.1 (M)ANOVA-RM

Significant differences in outcomes between uncomplicated and complicated mTBI and between time points were found in MANOVA-RM at a significance level of \(\alpha=.05\). The interaction between the main effects was not significant. Replicated values of the test statistics and the respective \(p\)-values are shown in Table 2. In Voormolen et al. (2020), multiple ANOVA-RM were chosen as the post-hoc technique to examine the significant MANOVA-RM results. Significant test results (\(\alpha_{\mathrm{adj}}=.007\)) are shown in bold in Table 2. \(P\)-values may differ from the results in Voormolen et al. (2020) because the computation depends on the specific random seed chosen, but the test decisions are identical. Table 3 gives an overview of the significance of the ANOVA-RM test results shown in Table 2. Additionally, mean values and standard deviations per group and time point are shown in Table 4. Values corresponding to significant differences found in ANOVA-RM are shown in bold. For more details, please see the original analysis (Voormolen et al., 2020).
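The \(p\)-values above are obtained by parametric bootstrapping, which is why they depend on the random seed. The following minimal R sketch illustrates the principle for a generic Wald-type statistic on simulated data; it is a conceptual illustration under simplifying assumptions, not the exact resampling scheme implemented in MANOVA.RM.

```
set.seed(123)

# toy data: N observations of a d-dimensional outcome
N <- 100; d <- 4
X <- matrix(rnorm(N * d), N, d) %*% chol(diag(d) + 0.3)

stat <- function(X) {                 # generic MATS-like statistic
  m <- colMeans(X)
  nrow(X) * sum(m^2 / diag(cov(X)))   # uses only the variances
}
Q_obs <- stat(X)

# parametric bootstrap: resample from a normal model with the
# estimated covariance and mean zero (i.e. under the null hypothesis)
B <- 2000
Sig_hat <- cov(X)
Q_boot <- replicate(B, {
  Xb <- MASS::mvrnorm(N, mu = rep(0, d), Sigma = Sig_hat)
  stat(Xb)
})

p_value <- mean(Q_boot >= Q_obs)      # bootstrap p-value
```

With a fixed seed the analysis is reproducible; different seeds lead to slightly different \(p\)-values, as noted above.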
### 3.2 DDA as Alternative Post-Hoc Analysis

For descriptive DA, we consider the seven outcome variables (\(p=7\)), each measured at two time points (\(t=2\)), as 14 distinct correlated variables measured in two groups (\(g=2\)). Thus, the concept of descriptive DA can readily be applied to repeated measures data as well. In contrast to univariate follow-up strategies, it takes the correlation between variables and time points into account. With descriptive DA, an understanding of the relative importance of each of the correlated variables, i.e. its relative contribution to the overall significant multivariate effect found in MANOVA-RM, can be obtained. We have \(n_{1}=569\) and \(n_{2}=535\) patients with uncomplicated and complicated mTBI, respectively, and consider (within-group and pooled) covariance matrices \(\boldsymbol{\Sigma}\in\mathds{R}^{14\times 14}\) and group means \(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}\in\mathds{R}^{14}\).

#### 3.2.1 Assessment of Descriptive DA Assumptions

First, we assess whether the assumption of homogeneity of (within-group) covariance matrices is fulfilled (i.e. whether \(\boldsymbol{\Sigma}_{1}=\boldsymbol{\Sigma}_{2}\)). Several strategies have been suggested. The Box \(M\) test (Box, 1949) can be used to test the equality of \(g\) covariance matrices, but it assumes multivariate normality, and even a few outliers may strongly affect the test result. Another drawback is that the test is extremely powerful, so that even negligible differences between the covariance matrices may lead to rejection, even when a conservative cutoff value of \(p<.001\) is used (Huberty and Petoskey, 2000; Enders, 2003; Barton et al, 2016). Since we assume that the data deviate from multivariate normality, we directly use alternative approaches. Friendly and Sigal (2018) suggest comparing scree plots showing the log-eigenvalues of the within-group covariance matrices and of the pooled covariance matrix as an alternative to the Box \(M\) test. The Box \(M\) test considers the sum of differences between the log-determinants of each within-group covariance matrix and the pooled covariance matrix and evaluates their similarity; the log-determinant equals the sum of the log-eigenvalues. Huberty and Lowman (2000) proposed indices of generalized variance, i.e. comparing the traces or the log-determinants, respectively, of the within-group covariances. If the values are similar, the assumption of homogeneity can be taken to be met. Visual inspection using pairwise (2D) scatterplots of the data has also been suggested (Barton et al, 2016), as well as pairwise comparison of the shape and direction of the within-group covariance matrices (Friendly and Sigal, 2018). Using these visual tools, it may be difficult to come to a conclusion due to the potentially high number of pairwise comparisons. Figure S 1 indicates similarity of the scatter plots for the two groups in the CENTER-TBI data set.
The scree plot (Figure 3) shows approximate equality of the log-eigenvalues of each within-group covariance matrix compared to the log-eigenvalues of the pooled covariance matrix.

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
**Analysis** & **Dependent variable** & **Independent variable** & **MATS** & **df1** & **df2** & \(p\)**-value** \\
\hline
MANOVA RM & All seven outcomes RM & mTBI & 197.538 & — & — & \(<\)**.001** \\
 & & Time points & 34.708 & — & — & \(<\)**.001** \\
 & & mTBI:Time points & 2.932 & — & — & .152 \\
\hline
ANOVA RM & SF-36 PCS & mTBI & 5.897 & 1 & 1365.422 & .015 \\
 & & Time points & 61.133 & 1 & — & \(<\)**.001** \\
 & & mTBI:Time points & 4.361 & 1 & — & .037 \\
\cline{2-7}
 & SF-36 MCS & mTBI & 7.879 & 1 & 1399.985 & **.005** \\
 & & Time points & 10.502 & 1 & — & **.001** \\
 & & mTBI:Time points & 3.058 & 1 & — & .08 \\
\cline{2-7}
 & PCL-5 & mTBI & 5.481 & 1 & 1388.071 & .019 \\
 & & Time points & 16.902 & 1 & — & \(<\)**.001** \\
 & & mTBI:Time points & 0.653 & 1 & — & .419 \\
\cline{2-7}
 & PHQ-9 & mTBI & 2.632 & 1 & 1386.136 & .105 \\
 & & Time points & 9.075 & 1 & — & **.003** \\
 & & mTBI:Time points & 0.032 & 1 & — & .858 \\
\cline{2-7}
 & GAD-7 & mTBI & 3.216 & 1 & 1425.187 & .073 \\
 & & Time points & 3.137 & 1 & — & .077 \\
 & & mTBI:Time points & 0.026 & 1 & — & .872 \\
\cline{2-7}
 & QOLIBRI & mTBI & 12.25 & 1 & 1337.174 & **.001** \\
 & & Time points & 8.588 & 1 & — & **.003** \\
 & & mTBI:Time points & 2.980 & 1 & — & .084 \\
\cline{2-7}
 & GOSE & mTBI & 80.944 & 1 & 1444.067 & \(<\)**.001** \\
 & & Time points & 26.150 & 1 & — & \(<\)**.001** \\
 & & mTBI:Time points & 1.057 & 1 & — & .304 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Replicated repeated measures (M)ANOVA results for the CENTER-TBI data analyzed in Voormolen et al. (2020). MATS = modified ANOVA-type statistic; between-subject factor = TBI severity (uncomplicated and complicated mTBI); within-subject factor = time (time points 3 and 6 months after TBI); \(p\) = \(p\)-value based on parametric bootstrapping; bold \(p\)-values are significant at \(\alpha=.05\) for MANOVA-RM and \(\alpha_{\mathrm{adj}}=.007\) for ANOVA-RM, respectively. Abbreviations: mTBI = mild traumatic brain injury; SF-PCS = Short Form (36) Health Survey (physical component score); SF-MCS = Short Form (36) Health Survey (mental component score); PCL-5 = Posttraumatic Stress Disorder Checklist; PHQ-9 = Patient Health Questionnaire; GAD-7 = Generalized Anxiety Disorder questionnaire; QOLIBRI = Quality of Life after Brain Injury; GOSE = Glasgow Outcome Scale-Extended.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
**Dependent variable** & **mTBI severity** & **Time points** & **Interaction** \\
\hline
SF-36 PCS & — & ++ & — \\
SF-36 MCS & ++ & ++ & — \\
PCL-5 & — & ++ & — \\
PHQ-9 & — & ++ & — \\
GAD-7 & — & — & — \\
QOLIBRI & ++ & ++ & — \\
GOSE & ++ & ++ & — \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Significance of the ANOVA-RM group effect (uncomplicated and complicated mTBI), time effect (3 and 6 months after mTBI) and the interaction between both effects. ++ = significant at \(\alpha_{\mathrm{adj}}=.007\); — = not significant. Abbreviations: mTBI = mild traumatic brain injury; SF-PCS = Short Form (36) Health Survey (physical component score); SF-MCS = Short Form (36) Health Survey (mental component score); PCL-5 = Posttraumatic Stress Disorder Checklist; PHQ-9 = Patient Health Questionnaire; GAD-7 = Generalized Anxiety Disorder questionnaire; QOLIBRI = Quality of Life after Brain Injury; GOSE = Glasgow Outcome Scale-Extended.
Traces (sums of diagonal elements) of \(\boldsymbol{\Sigma}_{1}\) and \(\boldsymbol{\Sigma}_{2}\) are 1434.8 and 1570.1, respectively. Log-determinants for both matrices are 39.4 and 42.1, respectively. Equality of traces only evaluates the equality of variances among the groups, while log-determinants incorporate the covariances as well. Both measures have similar values for the CENTER-TBI dataset. Figure 4 shows the pairwise comparison of within-group covariances, where the direction and shape of the ellipses appear to be overall comparable. Comparison of pairwise scatter plots of the data provides the same evidence (supplementary Figure S 1). In total, we conclude that the assumption of homogeneous (within-group) covariance matrices is fulfilled. We will also examine whether sample sizes are large enough (Huberty, 1975; Barcikowski and Stevens, 1975) and whether multicollinearity may complicate interpretation of the results (Wilkinson, 1975; Borgen and Seling, 1978; Finn, 1978), two factors that may influence the interpretability of the DFCs as mentioned above (Section 2). The ratio of total sample size to number of variables is \(1104/14=78.9>20\), and thus parameter estimates are assumed to be stable. There are several diagnostic tools for measuring multicollinearity, one of which is to inspect the scaled condition indices and their corresponding scaled variance decomposition proportions (Belsley et al, 1980; Belsley, 1991), which can be arrangeded in a table for easy assessment. The standard approach, as described in Liao and Valliant (2012), is to check whether two or more large (scaled) variance decomposition proportions are associated with a (scaled) condition index that is also large. Scaled condition indices greater than 30 and scaled variance decomposition proportions greater than .3 are usually considered large; in combination, they may indicate the presence of multicollinearity between the respective variables (a small numerical illustration of this decision rule is given below). The results in Table 5 indicate that there may be multicollinear variables in the CENTER-TBI dataset, since there are two (scaled) condition indices higher than 30 associated with at least two (scaled) variance decomposition proportions higher than .3. Having determined the collinearity pattern, correlated variables are discarded one at a time and the collinearity measures are computed again. Unfortunately, in this case, variables can only be dropped simultaneously at both time points since complete data are required. Different sets of variables can be dropped in order to eliminate multicollinearity. We did not consider excluding the GOSE score because it seems to be the most important variable, i.e. it has the highest association with group and time differences in the repeated measures ANOVA (Table 2) and the largest weights in descriptive DA both before and after removal of multicollinear variables (Tables 8, 9, S 2).
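The following R sketch implements the Belsley decision rule from scratch on simulated data, so that the quantities behind Table 5 are transparent; the design matrix and the near-collinear variable are purely illustrative.

```
set.seed(42)

# toy design with one near-collinear variable (illustrative only)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- x1 + x2 + rnorm(n, sd = 0.05)
X  <- cbind(intercept = 1, x1 = x1, x2 = x2, x3 = x3)

# scale columns to unit length (Belsley et al, 1980)
Xs <- sweep(X, 2, sqrt(colSums(X^2)), "/")

sv <- svd(Xs)
ci <- max(sv$d) / sv$d                 # scaled condition indices

# scaled variance decomposition proportions:
# phi[j, k] = v[j, k]^2 / d[k]^2, pi[k, j] = phi[j, k] / sum_k phi[j, k]
phi <- sweep(sv$v^2, 2, sv$d^2, "/")
pi_ <- t(phi / rowSums(phi))

# decision rule from the text: a condition index > 30 combined with
# at least two variance proportions > .3 signals multicollinearity
flag <- ci > 30 & rowSums(pi_ > 0.3) >= 2
round(cbind(ci = ci, pi_), 2)
flag
```

In practice the same quantities can be obtained with `mctest::eigprop`, as in the R code section below.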
Only after removal of at least two of the three quality-of-life indices (QOLIBRI, SF-36 (MCS), SF-36 (PCS)) did the collinearity measures no longer indicate multicollinearity (Tables 6, 7, S 1). We will compare the descriptive DA results before and after removal of these potentially multicollinear variables. For comparison, we will repeat the same analyses for each time point (three and six months after mTBI) separately.

#### 3.2.2 Application of Descriptive DA to Repeated Measures Data

Table 8, Table 9, and Table S 2 show the standardized discriminant function coefficients (DFCs) before and after removal of potentially multicollinear variables from the CENTER-TBI dataset analyzed by Voormolen et al. (2020). The DFCs for the entire set of variables (Table 8) may be misleading because higher weights can be assigned randomly to one out of several highly correlated variables. Table 9 shows the DFCs after exclusion of redundant variables. Here, the high absolute values of the standardized DFCs coincide with the significant group and time effects of GOSE in ANOVA-RM (Table 2). QOLIBRI and SF-36 (MCS), the only other variables for which both the time and group effects of the RM-ANOVA are significant, each have one DFC ranking directly after the highest DFCs, which correspond to GOSE (Table 9 (a) and (b)), indicating an influence of around 2/3 of that of GOSE. The other variables with either a significant time or group effect (SF-36 PCS, PCL-5, PHQ-9) still have higher DFCs compared to GAD-7, which does not have any significant RM-ANOVA effect. The quality-of-life measures, SF-36 (PCS), SF-36 (MCS), and QOLIBRI, may reflect relations between the variables and severity of mTBI already included in other scores, and in particular may contain redundant information among each other, since multicollinearity could only be removed after exclusion of at least two of these variables (according to the scaled condition indices and their respective scaled variance decomposition proportions). Multicollinear variables should be removed before computing standardized DFCs because, in case of multicollinearity, relative weights can be arbitrarily distributed among them. The exclusion of multicollinear variables does not affect the RM-MANOVA test results (Table S 3): time and group effects remain significant.

### R Code

Assuming the data are arranged in long format (columns: group variable "Complicated", "timepoints", "id", and the seven measurements of psychological and clinical scores) in a data frame "data_long", and in wide format (columns: group variable "Complicated" and the 2 \(\times\) 7 measurements of psychological and clinical scores) in a data frame "data_wide", the following R code shows how the analyses from the previous sections can be performed in R.
```
library(MANOVA.RM)
library(MASS)
library(heplots)
library(ggplot2)
library(Rfast)
library(candisc)
library(mctest)

# data_long and data_wide as described above

## Repeated measures MANOVA (Table 2)
manova_tbi <- multRM(cbind(SF36_PCS, SF36_MCS, PCL5, PHQ9, GAD7,
                           QOLIBRI, GOSE) ~ Complicated * timepoints,
                     data = data_long, subject = "id", within = "timepoints",
                     iter = 10000, resampling = "paramBS", seed = 123)

## Repeated measures ANOVAs (Table 2), one call per outcome
anova_tbi_SF36_PCS <- RM(SF36_PCS ~ Complicated * timepoints,
                         data = data_long, subject = "id",
                         within = "timepoints", iter = 1000,
                         resampling = "paramBS", seed = 123)
anova_tbi_SF36_MCS <- RM(SF36_MCS ~ Complicated * timepoints,
                         data = data_long, subject = "id",
                         within = "timepoints", iter = 1000,
                         resampling = "paramBS", seed = 123)
anova_tbi_PCL5 <- RM(PCL5 ~ Complicated * timepoints,
                     data = data_long, subject = "id",
                     within = "timepoints", iter = 1000,
                     resampling = "paramBS", seed = 123)
anova_tbi_PHQ9 <- RM(PHQ9 ~ Complicated * timepoints,
                     data = data_long, subject = "id",
                     within = "timepoints", iter = 1000,
                     resampling = "paramBS", seed = 123)
anova_tbi_GAD7 <- RM(GAD7 ~ Complicated * timepoints,
                     data = data_long, subject = "id",
                     within = "timepoints", iter = 1000,
                     resampling = "paramBS", seed = 123)
anova_tbi_QOLIBRI <- RM(QOLIBRI ~ Complicated * timepoints,
                        data = data_long, subject = "id",
                        within = "timepoints", iter = 1000,
                        resampling = "paramBS", seed = 123)
anova_tbi_GOSE <- RM(GOSE ~ Complicated * timepoints,
                     data = data_long, subject = "id",
                     within = "timepoints", iter = 1000,
                     resampling = "paramBS", seed = 123)

## Pairwise comparison of within-group covariances (Figure 4)
vars <- c("SF-36 PCS (1)", "SF-36 MCS (1)", "PCL-5 (1)", "PHQ-9 (1)",
          "GAD-7 (1)", "QOLIBRI (1)", "GOSE (1)",
          "SF-36 PCS (2)", "SF-36 MCS (2)", "PCL-5 (2)", "PHQ-9 (2)",
          "GAD-7 (2)", "QOLIBRI (2)", "GOSE (2)")
heplots::covEllipses(x = data_wide[, vars],
                     group = as.factor(data_wide$Complicated),
                     fill = c(rep(FALSE, 2), TRUE), variables = 1:14,
                     fill.alpha = .1, center = TRUE,
                     label.pos = c("top", "bottom"), pooled = FALSE,
                     col = c("#ce8a14", "#542785"))

## Indices of generalized variance (Section 3.2.1)
data0 <- data_wide[data_wide$Complicated == 0, vars]  # uncomplicated mTBI
data1 <- data_wide[data_wide$Complicated == 1, vars]  # complicated mTBI
cov0  <- cov(data0)
cov1  <- cov(data1)
sum(diag(cov0)); sum(diag(cov1))  # traces of within-group covariance matrices
log(det(cov0)); log(det(cov1))    # log-determinants

## Scree plot of the log-eigenvalues (Figure 3)
covp <- pooled.cov(as.matrix(rbind(data0, data1)),
                   rep(1:2, c(nrow(data0), nrow(data1))))
log_eig <- data.frame(
  n = rep(1:14, 3),
  g = factor(rep(c("p", "u", "c"), each = 14), levels = c("p", "u", "c")),
  e = c(log(eigen(covp)$values), log(eigen(cov0)$values),
        log(eigen(cov1)$values)))
sigma_labels <- c(expression(Sigma[Pooled]), expression(Sigma[Uncompl]),
                  expression(Sigma[Compl]))
ggplot(log_eig, aes(x = n, y = e, group = g, color = g)) +
  geom_line(aes(linetype = g)) +
  geom_point(aes(shape = g), size = 2) +
  xlab("i") + ylab(expression(log(lambda[i]))) +
  scale_color_manual(labels = sigma_labels, name = "",
                     values = c("black", "#542785", "#ce8a14")) +
  scale_shape_manual(labels = sigma_labels, name = "", values = c(16, 3, 17)) +
  scale_linetype_manual(labels = sigma_labels, name = "",
                        values = c("solid", "twodash", "twodash")) +
  scale_x_continuous(breaks = 1:14) +
  theme_bw()

## Multicollinearity assessment (Table 5)
# scaled condition indices (cond.index() is provided by the klaR package)
round(klaR::cond.index(Complicated ~ .,
                       data = data_wide[, c("Complicated", vars)]), 2)
# scaled variance decomposition proportions
eigprop(mod = lm(Complicated ~ ., data = data_wide[, c("Complicated", vars)]),
        na.rm = FALSE, Inter = TRUE, prop = 0.5)$pi

## Discriminant function coefficients (DFCs)
# nonstandardized (raw) DFCs
lda(x = data_wide[, vars],
    grouping = as.factor(data_wide$Complicated))$scaling

# standardized DFCs (Table 9)
x <- lm(as.matrix(data_wide[, vars]) ~ Complicated, data = data_wide)
y <- candisc(x, term = "Complicated")
y$coeffs.std
```

## 4 Discussion

Longitudinal study designs are common in the social sciences. MANOVA-RM (Friedrich and Pauly, 2018; Voormolen et al, 2020), a robust extension of MANOVA to repeated measures data, can be used even for non-normal data with unequal group covariance matrices. However, meaningful post-hoc analyses following significant multivariate results are seldom used in the literature. In this tutorial, we demonstrated the application of descriptive DA to multivariate repeated measures data. This method has been suggested in methodological literature reviews as a more suitable post-hoc analysis to study significant MANOVA effects. Another possibility for judging the (clinical) relevance of significant test results is the use of effect sizes. For potentially non-normal multivariate repeated measures data, such effect measures are not yet available. A nonparametric effect size \(A_{w}\) has been proposed for multivariate data measured at a single time point (Li et al, 2021), but robust effect size measures for repeated measures data are part of future research. Although descriptive DA only assumes equality of group covariance matrices, estimates of the pooled covariance matrix required for computing the relative variable weights may be unstable in case of certain deviations from multivariate normality: Sajobi et al. (2012) examined this effect in a simulation study for smaller sample sizes, i.e. some of their settings do not comply with the recommended ratio of total sample size to number of variables for descriptive DA (Huberty, 1975; Barcikowski and Stevens, 1975). Sajobi et al. (2012) apply descriptive DA to repeated measures of a single variable and suggest more stable estimators of the covariance matrix.
Possibly, this approach can be extended to estimate DFCs in case of deviations from multivariate normality in multivariate repeated measures data. More extensive simulation studies for special situations specific to multivariate repeated measures data might help in developing guidelines for the use of descriptive DA methods. Predictive discriminant analysis is often used in psychology to assess the discriminative ability of a set of variables, in order to determine the usefulness of the particular set or to compare the usefulness of different sets of variable combinations with respect to group separation (Akaboha and Kwofie, 2016; Bhutta et al, 2015; Kleinberg et al, 2019; Shinba et al, 2021; Nan et al, 2018). In this study, we have only examined the relative importance of variables to group separation through computing the descriptive discriminant coefficients. Predictive discriminant analysis may provide information on how useful these variables are in distinguishing the two groups. Since predictive discriminant analysis in its initial version assumes multivariate normality of the data, several robust extensions have been developed for multivariate repeated measures data (Brobbey, 2021; Brobbey et al, 2022). With respect to the data and research question, quality-of-life measures (SF-36 PCS, SF-36 MCS, QOLIBRI) seem to include partially redundant information for distinguishing patients with uncomplicated and complicated mild TBI when the functional outcome (GOSE) and post-traumatic stress (PCL-5), depression (PHQ-9), and anxiety (GAD-7) are measured repeatedly. This outcome set has already been examined for cross-sectional sensitivity in preselected patient groups at 3, 6, and 12 months after TBI (von Steinbuchel et al, 2023), suggesting that a reduced set of outcome measures would sufficiently discriminate between them, given the intercorrelation between the outcome measures from a clinical content perspective; only then can research findings make a valuable contribution to the development of clinical implications and tailored therapies.

## Supplementary information

**Fig. S 1**: Scatterplots of each pair of variables present in the CENTER-TBI dataset analyzed by Voormolen et al. (2020) in order to examine equality of the within-group covariances. Observations belonging to patients with uncomplicated mTBI are indicated as purple crosses, while observations of patients with complicated mTBI are shown as yellow triangles.
\begin{table}
\begin{tabular}{l l}
\hline \hline
Variable & Stand. DFC \\
\hline
GOSE (1) & -0.66 \\
GOSE (2) & -0.62 \\
SF-36 PCS (2) & -0.4 \\
PCL-5 (2) & -0.34 \\
PHQ-9 (2) & -0.25 \\
PCL-5 (1) & -0.17 \\
PHQ-9 (1) & -0.12 \\
SF-36 PCS (1) & -0.07 \\
\hline \hline
\end{tabular}
\end{table}
Table S 2: Standardized discriminant function coefficients (DFC) ordered by highest absolute value after removal of the multicollinear variables SF-36 MCS, GAD-7, QOLIBRI.

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
**Analysis** & **Excluded multicollinear dependent variables** & **Independent variable** & **MATS** & **df1** & **df2** & \(p\)**-value** \\
\hline
MANOVA RM & (a) SF-36 PCS, QOLIBRI & mTBI & 165.021 & — & — & \(<\)**.001** \\
 & & Time points & 19.68 & — & — & \(<\)**.001** \\
 & & mTBI:Time points & 1.377 & — & — & .398 \\
\cline{2-7}
 & (b) SF-36 PCS, SF-36 MCS & mTBI & 173.359 & — & — & \(<\)**.001** \\
 & & Time points & 18.64 & — & — & \(<\)**.001** \\
 & & mTBI:Time points & 1.172 & — & — & .46 \\
\cline{2-7}
 & (c) SF-36 MCS, GAD-7, QOLIBRI & mTBI & 156.369 & — & — & \(<\)**.001** \\
 & & Time points & 29.261 & — & — & \(<\)**.001** \\
 & & mTBI:Time points & 1.505 & — & — & .227 \\
\hline \hline
\end{tabular}
\end{table}
Table S 3: Repeated measures (M)ANOVA results for the CENTER-TBI data after exclusion of different sets of multicollinear variables. MATS = modified ANOVA-type statistic; between-subject factor = TBI severity (uncomplicated and complicated mTBI); within-subject factor = time (time points 3 and 6 months after TBI); \(p\) = \(p\)-value based on parametric bootstrapping; bold \(p\)-values are significant at \(\alpha=.05\) for MANOVA-RM.
Abbreviations: mTBI = mild traumatic brain injury; SF-PCS = Short Form (36) Health Survey (physical component score); SF-MCS = Short Form (36) Health Survey (mental component score); GAD-7 = Generalized Anxiety Disorder questionnaire; QOLIBRI = Quality of Life after Brain Injury.
2310.15625
The First Light Curve Analysis of Twin Binary System V1175 Cas using Ground-based and TESS data
Eclipsing binary systems hold a central position within astrophysics in that the fundamental parameters of stars can be determined by direct observations. The simultaneous analyses of high-quality space observations, combined with ground-based photometric data, have allowed more sensitive detection of fundamental stellar parameters by multicolor photometry. In the paper, the fundamental parameters of the component stars for the V1175 Cas binary system were sensitively obtained by a simultaneous analysis of the Transiting Exoplanet Survey Satellite (TESS) light curve, and new CCD observations in {\it BVRI} filters obtained with a 60 cm robotic telescope at the TUBITAK National Observatory. Following the analysis, the masses and radii of the primary and secondary binary components were determined as $M_{1}= 1.64\pm 0.04\,M_\odot$, $M_{2}= 1.63\pm0.07\,M_\odot$, and $R_{1}=1.77\pm 0.05\,R_\odot$, $R_{2}= 1.77\pm 0.25\,R_\odot$, respectively. Moreover, the distance of V1175 Cas was computed as $280\pm32$ pc. The photometric analysis reveals that the components of the system are in a similar evolutionary state. The primary and secondary components exhibit nearly the same masses, while their radii are perfectly matched. Additionally, the ages of the components are also consistent within the statistical uncertainties. Consequently, the system's overall age is assessed to be approximately $750\pm70$ Myr.
Neslihan Alan
2023-10-24T08:51:03Z
http://arxiv.org/abs/2310.15625v1
# The First Light Curve Analysis of Twin Binary System V1175 Cas using Ground-based and TESS data

###### Abstract

Eclipsing binary systems hold a central position within astrophysics in that the fundamental parameters of stars can be determined by direct observations. The simultaneous analyses of high-quality space observations, combined with ground-based photometric data, have allowed more sensitive detection of fundamental stellar parameters by multicolor photometry. In the paper, the fundamental parameters of the component stars for the V1175 Cas binary system were sensitively obtained by a simultaneous analysis of the Transiting Exoplanet Survey Satellite (_TESS_) light curve, and new CCD observations in _BVRI_ filters obtained with a 60 cm robotic telescope at the TUBITAK National Observatory. Following the analysis, the masses and radii of the primary and secondary binary components were determined as \(M_{1}=1.64\pm 0.04\,M_{\odot}\), \(M_{2}=1.63\pm 0.07\,M_{\odot}\), and \(R_{1}=1.77\pm 0.05\,R_{\odot}\), \(R_{2}=1.77\pm 0.25\,R_{\odot}\), respectively. Moreover, the distance of V1175 Cas was computed as \(280\pm 32\) pc. The photometric analysis reveals that the components of the system are in a similar evolutionary state. The primary and secondary components exhibit nearly the same masses, while their radii are perfectly matched. Additionally, the ages of the components are also consistent within the statistical uncertainties. Consequently, the system's overall age is assessed to be approximately \(750\pm 70\) Myr.

keywords: techniques: photometric -- stars: binaries: eclipsing -- stars: fundamental parameters -- stars: evolution -- stars: individual: V1175 Cas

+ Footnote †: journal: New Astronomy

## 1 Introduction

Eclipsing binary stars are considered invaluable laboratories for astronomical research, playing a vital role in understanding stellar structure and evolution, as well as galaxy dynamics. The fundamental stellar parameters, such as mass (\(M\)), radius (\(R\)), and luminosity (\(L\)), can be obtained directly from their observations, and these systems offer a unique chance to determine these essential parameters accurately. The fundamental stellar parameters can be obtained more precisely, especially by using the high-quality photometric data of space telescopes such as _TESS_ (Ricker et al., 2015). Enhanced sensitivity in the fundamental parameters of stars enables theoretical models to be checked and evolutionary models to be compared with observational findings to create more realistic models. Detached eclipsing binary systems are particularly suitable for this goal because of the relatively weak interactions between the components. The evolutionary models of single stars can be investigated with fundamental stellar parameters calculated from observations of detached eclipsing binaries, and the consistency of theoretical evolutionary models with observations can be meticulously tested. Furthermore, an equally important aspect is the introduction of fundamental stellar parameters into the literature through detailed light curve analyses of detached eclipsing binaries that have never before been investigated in detail. To address this, we turn our attention to the case of V1175 Cas, a detached binary system for which only minima times have been reported and no light curve analysis has been performed until now. V1175 Cas is classified as an Algol-type eclipsing binary system according to Kazarovets et al. (2011).
Despite this classification, an exhaustive investigation of the system has remained absent thus far, and the fundamental stellar parameters of V1175 Cas remain unknown. To unravel these parameters, an essential approach involves a thorough analysis of the system's light curve. With this objective in mind, this study undertakes the first light curve analysis of V1175 Cas, employing data from _TESS_ and T60 observations. Through this analysis, the fundamental parameters of the system were derived. Table 1 lists general catalogue information about V1175 Cas.

\begin{table}
\begin{tabular}{l c}
\hline
RA\({}^{1}\) & 03\({}^{\rm h}\)21\({}^{\rm m}\)26\({}^{\rm s}\).53 \\
DEC\({}^{1}\) & +73\({}^{\circ}\)26\({}^{\prime}\)07\({}^{\prime\prime}\).93 \\
Type\({}^{2}\) & EA \\
Parallax (mas)\({}^{1}\) & 3.60 \\
\(V\) magnitude (mag)\({}^{1}\) & 9.52 \\
\(P\) (day)\({}^{2}\) & 3.46 \\
\hline
\end{tabular}
\end{table}
Table 1: Catalogue information of V1175 Cas

The structure of the paper that presents the first light curve analysis findings for V1175 Cas is briefly described in the following: In Section 2, details about the observational data and an outline of the methodology for calculating the new light elements are given. Moving on to Section 3, the process of conducting a simultaneous analysis of the _TESS_ data alongside ground-based photometric data is elucidated. The results of this analysis, along with the derivation of the fundamental parameters of V1175 Cas, are elaborated upon in Section 4. Finally, Section 5 presents a comprehensive discussion of the results and the state of evolution of the system.

## 2 Observational data

The new multicolor CCD observations of V1175 Cas were performed over 117 nights between September 2018 and October 2019 with the 60 cm robotic telescope (T60) at the TUBITAK National Observatory (TUG). Control of the T60 is achieved through the OCAAS open-source software, formally named TALON (see Parmaksizoglu et al., 2014). The telescope was equipped with the FLI ProLine 3041-UV CCD until 23 July 2019; after that date, operations continued with the Andor iKon-L 936 BEX2-DD camera. The FLI ProLine 3041-UV CCD offers an image scale of 0.51 arcsec per pixel, providing a field of view (FOV) of 17.4 arcmin. The Andor iKon-L 936 BEX2-DD camera, on the other hand, provides a finer image scale of 0.456 arcsec per pixel, giving a FOV of 15.6 arcmin. In the observations of V1175 Cas, Bessell _BVRI_ filters were used (see Bessell (1990)). The exposure time was set to 5 s, the same for all _BVRI_ filters. Calibration images, including sky flats and bias frames, were taken at intervals throughout the observations to correct for pixel-to-pixel variations on the CCD chip. For comparison and confirmation purposes, TYC 4338-973-1 and TYC 4338-1025-1 were used as comparison and check stars, respectively. In addition to the ground-based observations, the light curve analysis used data from _TESS_. _TESS_ scans most of the sky in sectors, with a dedicated observing time of 27.4 days per sector. Operating within the 600-1000 nm wavelength range, _TESS_ provides broadband photometric data (Ricker et al., 2015). The V1175 Cas _TESS_ data from sector 19 were acquired between November 28th and December 23rd, 2019, utilizing a 120-second exposure time. _TESS_ data of the system were sourced from the Mikulski Archive for Space Telescopes (MAST)3 database.
For the analysis, Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) light curves (see Ricker et al., 2015) were employed. The photometric data exhibited an average error of approximately 0.1%.

Footnote 3: [https://archive.stsci.edu/](https://archive.stsci.edu/)

The data reduction of the T60 observations consisted of several steps. Initially, bias and dark frames were subtracted from the science frames, followed by a flat-fielding correction. Subsequently, the resulting reduced CCD images were utilized to compute the differential magnitudes of the target stars. This procedure was accomplished using the MYRaf software, the IRAF aperture photometry GUI tool developed by Kilic et al. (2016). Notably, no significant light variations were detected in the comparison and check stars throughout the observations. The external uncertainties for comparison minus check magnitudes were quantified to be about 29 mmag in \(B\), 24 mmag in \(V\), 20 mmag in \(R\), and 20 mmag in \(I\) filters. These values were derived from the standard deviation of the differential magnitude variation between the comparison and check stars during the same night. Observational data were not converted to the standard Bessell _BVRI_ system, and differential magnitudes were used in the light curve analyses. The minima times of the system were determined using _TESS_ data. The primary minima times were procured from the first, middle, and last parts of the _TESS_ sector 19 observations, while the secondary minimum time was derived from the middle part of the _TESS_ dataset. In total, four _TESS_ minima times were obtained. The literature minima times of V1175 Cas were also collected from the O-C Gateway, and 17 minima times were used to investigate the period changes of the system. When the first All Sky Automated Survey (ASAS, Paczynski et al., 2006) minimum in the O-C Gateway was considered together with all the other minima times, a parabolic fit for the period change could be obtained. However, given the small number of minima times, this result was not reliable, and more minima times are needed for a robust conclusion. Therefore, in this study the first ASAS minimum time was ignored, and new light elements of V1175 Cas were determined by a linear fit to the remaining 16 minima times. The new light elements are given in the following ephemeris:

\[\mathrm{HJD(MinI)}=2458817.7506(9)+3^{\mathrm{d}}.457179(2)\times E \tag{1}\]

The values in parentheses refer to the uncertainties in the last digits of the light elements.

## 3 Light Curve Modelling

A comprehensive analysis of the light curve data for V1175 Cas, comprising normalized _BVRI_ and _TESS_ data, was performed in this work. The Wilson-Devinney (W-D) code (Wilson & Devinney, 1971), integrated with a Monte Carlo simulation (Zola et al., 2004, 2010), was employed for this analysis to precisely estimate uncertainties in the parameters being adjusted. The approach employed for the analysis with the W-D code involved a combination of fixed parameters, based on theoretical models and prior research, and parameters that are adjusted through successive iterations. The fixed and adopted parameters are defined in the following, after a brief illustration of how the ephemeris in Eq. (1) is used in practice.
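The light elements in Eq. (1) are all that is needed to phase-fold observations and to compute (O-C) residuals for newly observed minima. The following minimal R sketch illustrates both steps; the observed minimum time used here is a hypothetical value for illustration, not a measured one.

```
T0 <- 2458817.7506   # reference epoch of primary minimum, HJD (Eq. 1)
P  <- 3.457179       # orbital period in days (Eq. 1)

# phase-fold arbitrary observation times t (in HJD)
phase_fold <- function(t, T0, P) ((t - T0) / P) %% 1

# (O-C) residual for one observed minimum time (hypothetical value)
t_min <- 2458900.7249                 # illustrative, not a measured minimum
E     <- round((t_min - T0) / P)      # integer epoch (cycle count)
OC    <- t_min - (T0 + E * P)         # observed minus computed, in days
```

Fitting a straight line of observed minima times against cycle number \(E\), as done above for the 16 literature minima, is then a simple `lm(t_min ~ E)` regression whose coefficients are the new light elements.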
For the V1175 Cas system, an unreddened color index of \((B-V)_{0}=0.296\pm 0.02\) mag was derived using \(E_{\mathrm{d}}(B-V)=0^{\mathrm{m}}.174\), calculated according to the Schlafly & Finkbeiner (2011) calibration, and \(B-V=0^{\mathrm{m}}.399\pm 0.02\) from the Tycho-2 catalogue (Hog et al., 2000). The initial temperature of the primary component was fixed at \(T_{\mathrm{1,eff}}=7150\) K, corresponding to this color index, using the astrophysical parameters of main-sequence stars presented by Eker et al. (2020). Simultaneous analysis of the ground-based \(BVRI\) light curves and _TESS_ data was performed, with mode 2 of the W-D code, which is used for detached binary systems, adopted due to the nature of the observed light variations. The square-root limb darkening law was adopted, and the limb darkening coefficients were taken from the van Hamme (1993) tables, based on the filter wavelengths and the temperatures of the components of V1175 Cas. The _TESS_ passband is not directly available in the W-D code; therefore, the Cousins \(I\)-band, which overlaps the _TESS_ passband, was used for the binary modeling, and the fixed coefficients were selected accordingly. With the convective atmosphere (\(T_{\rm eff}<7200\) K) approximation, the bolometric gravity-darkening exponents of the components were taken as 0.32 from Lucy (1967), and the bolometric albedos were fixed at 0.5 following Rucinski (1969). Both components were assumed to be in synchronous rotation (\(F_{1}=F_{2}=1\)). The secondary minimum of V1175 Cas occurs at phase 0.5, and there is no noticeable asymmetry in the observed light curve. Moreover, the durations of ingress and egress are the same for the primary and secondary minima, which led to the assumption of a circular orbit (\(e=0\)). The remaining parameters, including the orbital inclination (\(i\)), the surface temperature of the secondary (\(T_{2,\rm eff}\)), the dimensionless surface potentials of the primary and secondary components (\(\Omega_{1,2}\)), the phase shift, the mass ratio (\(q\)), and the fractional luminosity of the primary component (\(L_{1}\)), were considered as adjustable parameters. The presence of a third body contributing to the total light was assessed by considering a free parameter \(l_{3}\) in the solution. However, V1175 Cas was found to have negligible third light effects, and thus \(l_{3}\) was not included in the final solution. The final model parameters are detailed in Table 2, and the comparison between observed and computed light curves is illustrated in Fig. 1. Additionally, the Roche geometry of the system is depicted in Fig. 2.

## 4 Estimated Fundamental Parameters and Evolutionary Status

As no spectroscopic observations of the V1175 Cas detached binary system have been performed, radial velocity curves for its components are not available. Nevertheless, it is possible to roughly estimate the fundamental parameters for these components based on information obtained from the light curve analysis. The fundamental parameters are listed in detail in Table 3. By assuming the primary component to be a main-sequence star characterized by an effective temperature \(T_{\rm 1,eff}=7150\) K, the mass of the primary component of V1175 Cas was estimated as \(M_{1}=1.64\,M_{\odot}\) from the correlation between mass and \(T_{\rm eff}\) for main-sequence stars given by Eker et al. (2020). The mass of the secondary component was computed from the mass ratio obtained from the photometric light curve analysis. A short numerical illustration of this chain of estimates is given below.
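As a rough back-of-the-envelope check of the parameter chain described in this section, the following R sketch reproduces the published values from the quantities quoted in the text (\(M_{1}\), \(q\), \(P\), \(T_{\rm eff}\), \(R\), \(m_{V}\), \(A_{V}\), \(BC\)); it is a minimal sanity check under simplifying assumptions (two nearly identical components), not the actual analysis pipeline.

```
# quantities quoted in the text
M1   <- 1.64        # primary mass, M_sun (Eker et al. 2020 calibration)
q    <- 0.995       # photometric mass ratio M2/M1
P    <- 3.457179    # orbital period, days (Eq. 1)
Teff <- 7150        # effective temperature of the primary, K
R    <- 1.77        # component radius, R_sun
mV   <- 9.52        # apparent V magnitude of the system (Table 1)
AV   <- 0.539       # interstellar visual extinction, mag
BC   <- 0.077       # bolometric correction, mag (Eker et al. 2020)

M2 <- q * M1                                   # secondary mass, ~1.63 M_sun

# Kepler's third law in solar units: a^3 [AU] = (M1 + M2) [M_sun] * P^2 [yr]
a_AU   <- ((M1 + M2) * (P / 365.25)^2)^(1/3)
a_Rsun <- a_AU * 215.032                       # 1 AU = 215.032 R_sun

# luminosity from the Stefan-Boltzmann law, relative to the Sun (5777 K)
logL <- 2 * log10(R) + 4 * log10(Teff / 5777)  # ~0.87 dex

# distance from the distance modulus, two nearly identical components
Mbol  <- 4.74 - 2.5 * logL                     # absolute bolometric magnitude
MV    <- Mbol - BC                             # absolute V magnitude, one star
MVsys <- MV - 2.5 * log10(2)                   # both components combined
d_pc  <- 10^((mV - AV - MVsys + 5) / 5)        # ~280 pc
```

Running this reproduces \(M_{2}\approx 1.63\,M_{\odot}\) and a distance of about 280 pc, consistent with the values reported in this paper.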
Using the fractional radii from Table 2 and the semi-major axis calculated via Kepler's third law, the radii of the components were ascertained. By adopting standard solar values (\(T_{\rm eff,0}=5777\) K, \(M_{\rm bol,0}=4^{\rm m}.74\)) and employing bolometric corrections from Eker et al. (2020), the luminosities and bolometric magnitudes of the components were estimated. Surface gravity values were determined for the primary and secondary components as \(\log g_{1}=4.16\pm 0.03\) and \(\log g_{2}=4.15\pm 0.12\) (cgs), respectively. This finding points to a nearly identical evolutionary status for the two components. Utilizing reddening maps from Schlafly & Finkbeiner (2011) and the NASA Extragalactic Database4, a value of \(E_{\rm d}(B-V)\) of about \(0^{\rm m}.174\) was calculated for the Galactic coordinates (\(l=133^{\circ}.22\), \(b=+13^{\circ}.64\)) of the system. Correspondingly, the interstellar visual extinction (\(A_{\rm V}\)) is found as \(0.539\pm 0.029\) mag for V1175 Cas, using the common formula \(A_{\rm V}=3.1\times E_{\rm d}(B-V)\) (for details of the method, see Bilir et al., 2008; Eker et al., 2009). Based on the interstellar extinction, the apparent magnitude of the system, the component light ratios listed in Table 2, and the \(BC_{1}=0^{\rm m}.077\) and \(BC_{2}=0^{\rm m}.078\) values calculated according to Eker et al. (2020), the distance of V1175 Cas was derived as \(280\pm 32\) pc. The accurate fundamental stellar parameters obtained provide a comprehensive insight into the evolutionary status and age of the binary components. The examination of these parameters led us to utilize the MESA Isochrones & Stellar Tracks (MIST) framework (Choi et al., 2016; Dotter, 2016; Paxton et al., 2011, 2013, 2015, 2018) to investigate the evolutionary status of the components. Notably, the best theoretical fit to the calculated fundamental parameters in the Hertzsprung-Russell diagram was found to be the evolutionary track characterized by \(Z=0.020\pm 0.002\) (depicted in Fig. 3). Further insights were gained through analysis of the \(\log R\)-age diagrams, allowing us to deduce that the age of the binary system is approximately \(750\pm 70\) Myr (illustrated in Fig. 4).

Footnote 4: [https://ned.ipac.caltech.edu](https://ned.ipac.caltech.edu)

## 5 Discussion and Conclusions

In this study, the first light curve analysis of V1175 Cas was performed, and the masses and radii of the components were determined as \(M_{1}=1.64\pm 0.04\,M_{\odot}\), \(M_{2}=1.63\pm 0.07\,M_{\odot}\), \(R_{1}=1.77\pm 0.05\,R_{\odot}\), and \(R_{2}=1.77\pm 0.25\,R_{\odot}\), respectively. In addition, the light curve analysis indicates that the distance of V1175 Cas is approximately 280 \(\pm\) 32 pc, a value clearly in accordance with the _Gaia_-DR3 distance of 278 \(\pm\) 1 pc (Gaia Collaboration et al., 2023). The temperatures of the primary and secondary components of V1175 Cas were obtained by the photometric analysis of the system as \(T_{1,\rm eff}=7150\) K and \(T_{2,\rm eff}=7090\) K, respectively. By comparing these temperatures with those reported by Eker et al. (2020) for main-sequence stars, the spectral types of both components were found to be F0. No detailed photometric or spectroscopic analysis of the system had been performed before, and therefore there was no information on the spectral types of the component stars in the literature; this information is provided for the first time by this work. Lucy & Ricco (1979) examined twin binaries according to their mass ratios and found a peak at \(q\approx 0.97\). Griffin (1985) also reported that double-lined spectroscopic binaries on the main sequence show a clear peak around \(q\approx 1\).
The mass ratio of V1175 Cas derived in this work is \(q=0.995\pm 0.032\), which is in strong agreement with the values given by Lucy & Ricco (1979) and Griffin (1985). Additionally, the calculated \(q\) value is consistent, within statistical uncertainties, with the median mass ratio \(q=0.931\) given by Eker et al. (2014) for detached binary systems with a primary component of spectral type F. The mass values calculated for each component were substituted into the mass-luminosity correlation in the corresponding mass range given by Eker et al. (2018) for main sequence stars. Based on this correlation, the mean luminosity values for the primary and secondary components were calculated as \(\log L_{1}=0.94\,L_{\odot}\) and \(\log L_{2}=0.93\,L_{\odot}\), respectively. The mean luminosity values calculated from the Eker et al. (2018) equations were found to be slightly larger than the observational luminosity values (\(\Delta L=0.07\,L_{\odot}\)). This shows that the components of V1175 Cas have lower luminosities than the stars of Eker et al. (2018) in the \(1.05<M/M_{\odot}<2.40\) mass range, and it also indicates that the system is younger than the average main-sequence age. Indeed, the average age of the Eker et al. (2018) sample in the \(1.05<M/M_{\odot}<2.40\) mass range is 1.92 Gyr, considerably older than the 0.75 Gyr age of V1175 Cas. The components of the system are on the main sequence and, as anticipated, have a similar evolutionary status. The masses of the primary and secondary components are very close and the radii are exactly the same. The ages of the components are also consistent within the statistical uncertainties. The age of the system is estimated at 750 \(\pm\) 70 Myr. Since V1175 Cas is composed of components of nearly equal mass, spectroscopic observations of the system are necessary to understand the variations arising from small mass differences during their evolution and to enable more detailed discussions of their evolutionary processes (e.g. Alicavus, 2022). Therefore, this system is significant for future studies involving evolutionary modeling. In this research, the components of the V1175 Cas system were investigated using highly sensitive photometric data, which allowed us to determine the fundamental stellar parameters of the components. Detached binary systems such as V1175 Cas provide a unique opportunity to directly measure stellar masses, radii, and luminosities, which are difficult to quantify accurately for single stars. Collectively, the V1175 Cas binary system provides a valuable laboratory for extending our understanding of stellar evolution, binary star interactions, and related astrophysical phenomena. To improve the accuracy of the component mass ratio and the fundamental stellar parameters, acquiring the radial velocity curves of the components is crucial, which necessitates spectroscopic observations of V1175 Cas. The simultaneous analysis of spectroscopic and photometric data in the light curve solution has the potential to yield significantly refined results. ## Acknowledgements Many heartfelt thanks to the referee for insightful and constructive suggestions that significantly improved the paper. This study was funded by the Scientific Research Projects Coordination Unit of Istanbul University. Project number: 37903.
The author would like to thank TUBITAK National Observatory (TUG) for partial support towards using the T60 telescope via project 18BT60-1324. The author also thanks the on-duty observers and the technical staff at the TUG for their support before and during the observations. A special thanks to Fahri Alicavus and Mehmet Alpsoy for their valuable suggestions and contributions, and Selcuk Bilir for the inspiration and helpful discussions. This research has made use of NASA's (National Aeronautics and Space Administration) Astrophysics Data System and the SIMBAD Astronomical Database, operated at CDS, Strasbourg, France, and the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has also made use of data from the European Space Agency (ESA) mission _Gaia5_, processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC)6. Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. The _TESS_ data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). Funding for the _TESS_ mission is provided by the NASA Explorer Program.

Figure 4: In the plane of \(\log R\)-age, positions of the primary (blue dot) and secondary (red dot) components of V1175 Cas. Evolutionary tracks for a metallicity of \(Z=0.020\) are represented by blue and red lines for the primary and secondary component stars, respectively.

Footnote 5: [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia) Footnote 6: [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)
2301.03706
First double-differential measurement of kinematic imbalance in neutrino interactions with the MicroBooNE detector
We report the first measurement of flux-integrated double-differential quasielastic-like neutrino-argon cross sections, which have been made using the Booster Neutrino Beam and the MicroBooNE detector at Fermi National Accelerator Laboratory. The data are presented as a function of kinematic imbalance variables which are sensitive to nuclear ground state distributions and hadronic reinteraction processes. We find that the measured cross sections in different phase-space regions are sensitive to different nuclear effects. Therefore, they enable the impact of specific nuclear effects on the neutrino-nucleus interaction to be isolated more completely than was possible using previous single-differential cross section measurements. Our results provide precision data to help test and improve neutrino-nucleus interaction models. They further support ongoing neutrino-oscillation studies by establishing phase-space regions where precise reaction modeling has already been achieved.
MicroBooNE Collaboration
2023-01-09T22:40:14Z
http://arxiv.org/abs/2301.03706v2
# First double-differential measurement of kinematic imbalance in neutrino interactions with the MicroBooNE detector The MicroBooNE Collaboration\({}^{*}\) \({}^{1}\)Argonne National Laboratory (ANL), Lemont, IL, 60439, USA \({}^{2}\)Universität Bern, Bern CH-3012, Switzerland \({}^{3}\)Brookhaven National Laboratory (BNL), Upton, NY, 11973, USA \({}^{4}\)University of California, Santa Barbara, CA, 93106, USA \({}^{5}\)University of Cambridge, Cambridge CB3 0HE, United Kingdom \({}^{6}\)Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid E-28040, Spain \({}^{7}\)University of Chicago, Chicago, IL, 60637, USA \({}^{8}\)University of Cincinnati, Cincinnati, OH, 45221, USA \({}^{9}\)Colorado State University, Fort Collins, CO, 80523, USA \({}^{10}\)Columbia University, New York, NY, 10027, USA \({}^{11}\)University of Edinburgh, Edinburgh EH9 3FD, United Kingdom \({}^{12}\)Fermi National Accelerator Laboratory (FNAL), Batavia, IL 60510, USA \({}^{13}\)Universidad de Granada, Granada E-18071, Spain \({}^{14}\)Harvard University, Cambridge, MA 02138, USA \({}^{15}\)Illinois Institute of Technology (IIT), Chicago, IL 60616, USA \({}^{16}\)Kansas State University (KSU), Manhattan, KS, 66506, USA \({}^{17}\)Lancaster University, Lancaster LA1 4YW, United Kingdom \({}^{18}\)Los Alamos National Laboratory (LANL), Los Alamos, NM, 87545, USA \({}^{19}\)Louisiana State University, Baton Rouge, LA, 70803, USA \({}^{20}\)The University of Manchester, Manchester M13 9PL, United Kingdom \({}^{21}\)Massachusetts Institute of Technology (MIT), Cambridge, MA, 02139, USA \({}^{22}\)University of Michigan, Ann Arbor, MI, 48109, USA \({}^{23}\)University of Minnesota, Minneapolis, MN, 55455, USA \({}^{24}\)New Mexico State University (NMSU), Las Cruces, NM, 88003, USA \({}^{25}\)University of Oxford, Oxford OX1 3RH, United Kingdom \({}^{26}\)University of Pittsburgh, Pittsburgh, PA, 15260, USA \({}^{27}\)Rutgers University, Piscataway, NJ, 08854, USA \({}^{28}\)SLAC National Accelerator Laboratory, Menlo Park, CA, 94025, USA \({}^{29}\)South Dakota School of Mines and Technology (SDSMT), Rapid City, SD, 57701, USA \({}^{30}\)University of Southern Maine, Portland, ME, 04104, USA \({}^{31}\)Syracuse University, Syracuse, NY, 13244, USA \({}^{32}\)Tel Aviv University, Tel Aviv, Israel, 69978 \({}^{33}\)University of Tennessee, Knoxville, TN, 37996, USA \({}^{34}\)University of Texas, Arlington, TX, 76019, USA \({}^{35}\)Tufts University,
Medford, MA, 02155, USA \({}^{36}\)University College London, London WC1E 6BT, United Kingdom \({}^{37}\)Center for Neutrino Physics, Virginia Tech, Blacksburg, VA, 24061, USA \({}^{38}\)University of Warwick, Coventry CV4 7AL, United Kingdom \({}^{39}\)Wright Laboratory, Department of Physics, Yale University, New Haven, CT, 06520, USA November 3, 2021 ###### Abstract We report the first measurement of flux-integrated double-differential quasielastic-like neutrino-argon cross sections, which have been made using the Booster Neutrino Beam and the MicroBooNE detector at Fermi National Accelerator Laboratory. The data are presented as a function of kinematic imbalance variables which are sensitive to nuclear ground state distributions and hadronic reinteraction processes. We find that the measured cross sections in different phase-space regions are sensitive to different nuclear effects. Therefore, they enable the impact of specific nuclear effects on the neutrino-nucleus interaction to be isolated more completely than was possible using previous single-differential cross section measurements. Our results provide precision data to help test and improve neutrino-nucleus interaction models. They further support ongoing neutrino-oscillation studies by establishing phase-space regions where precise reaction modeling has already been achieved. Neutrino oscillation measurements aim to extract neutrino mixing angles, mass differences, and the charge-parity violating phase, and to search for new physics beyond the Standard Model [1; 2; 3]. The analysis of such measurements traditionally relies on detailed comparisons of measured and theoretically-expected neutrino interaction rates in the corresponding detectors. Therefore, a precise understanding of neutrino-nucleus interactions is required to fully exploit the discovery potential of current and next-generation experiments. With a growing number of neutrino-oscillation experiments employing liquid argon time projection chamber (LArTPC) neutrino detectors [4; 5; 6; 7; 8; 9], high-accuracy modeling of neutrino-argon interactions is becoming of paramount importance [10; 11; 12]. The overarching goal of these efforts is both to achieve few-percent-level modeling of neutrino-argon interaction rates and to provide a detailed understanding of the final-state kinematics of emitted particles that are used to reconstruct the energies of the interacting neutrinos [13; 14]. This Letter reports the first measurement of flux-integrated double-differential cross sections for muon-neutrino-argon (\(\nu_{\mu}\)-Ar) charged-current (CC) quasielastic (QE)-like scattering reactions as a function of transverse kinematic imbalance variables. Building upon a previous analysis of neutrino-argon cross sections with a similar signal event topology [15], we focus on reactions where the neutrino removes a single intact proton from the nucleus without producing any additional detected particles. The results reported here are obtained using the Booster Neutrino Beam (BNB) and the MicroBooNE detector at Fermi National Accelerator Laboratory with an exposure of \(6.79\times 10^{20}\) protons on target. Transverse kinematic imbalance variables were previously shown to be sensitive to the modeling of the nuclear ground-state distribution and to nuclear medium effects, such as hadronic final-state interactions (FSI) [16; 17; 18; 19; 20; 21]. 
By measuring the components of the muon and proton momenta perpendicular to the neutrino direction, \(\vec{p}_{T}\,^{\mu}\) and \(\vec{p}_{T}\,^{p}\) respectively, we construct the transverse missing momentum, \(\delta\vec{p}_{T}=\vec{p}_{T}\,^{\mu}+\vec{p}_{T}\,^{p}\), and its angular orientation with respect to \(\vec{p}_{T}\,^{\mu}\), \(\delta\alpha_{T}=\arccos{\left(\frac{-\vec{p}_{T}\,^{\mu}\cdot\delta\vec{p}_{T}}{p_{T}\,^{\mu}\,\delta p_{T}}\right)}\). Due to the isotropic nature of Fermi motion, \(\delta\alpha_{T}\) is expected to be uniformly distributed in the absence of any FSI. In the presence of FSI, the proton momentum is generally reduced and the \(\delta\alpha_{T}\) distribution becomes enhanced towards \(180^{\circ}\). Similarly, the shape of the \(\delta p_{T}\) distribution encapsulates information related to Fermi motion and is further smeared due to FSI and multi-nucleon effects. Given the sensitivity of \(\delta\alpha_{T}\) to FSI and of \(\delta p_{T}\) to both FSI and Fermi motion, a simultaneous measurement of these two observables can help to disentangle the individual impact of each nuclear effect on the neutrino-nucleus interaction. Similarly, the muon-proton momentum imbalance components transverse and parallel to the transverse lepton momentum, \(\delta p_{T,x}=\delta p_{T}\cdot\sin\delta\alpha_{T}\) and \(\delta p_{T,y}=\delta p_{T}\cdot\cos\delta\alpha_{T}\), provide further handles on Fermi motion and FSI processes, respectively. The active volume of the MicroBooNE LArTPC contains 85 tonnes of argon [22]. It is exposed to the BNB neutrino energy spectrum that peaks around \(0.8\,\mathrm{GeV}\) and extends to about \(2\,\mathrm{GeV}\). Neutrinos are detected by measuring the charged particles produced following their interactions with argon nuclei in the LArTPC active volume. These charged particles travel through the liquid argon, producing both scintillation light and trails of ionization electrons. In the presence of a uniform 273 V/cm electric field, the ionization electrons drift through the argon and are detected by a system of three anode wire planes that are perpendicular to the field. The scintillation light is measured by photomultiplier tubes (PMTs). Events are recorded if the PMT signals are in time coincidence with the beam arrival time. Trigger hardware and software selection cuts reject background events, mostly from cosmic muons, providing enriched data samples in which a neutrino interaction occurs in \(\approx\) 15% of selected beam spills [23]. The Pandora reconstruction package [24] is used to form individual tracks from the measured ionization signals in the enriched data samples. Particle identification and momentum determination are performed using the measured track energy-deposition profile and track length [25, 26]. Candidate muon-proton pairs are identified by requiring exactly two track-like objects and no shower-like objects based on a track-score variable from Pandora[27, 28]. The discriminant described in [29] is used to distinguish muon and proton candidates. We further apply quality cuts to avoid mis-reconstructed tracks. Details are given in [30]. To reduce contributions from cosmic tracks and to minimize bin-migration effects, the event selection considers only muon and proton track pairs that are fully contained within a fiducial volume of 10 cm from the edge of the detector active volume.
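As an illustration of the kinematic-imbalance definitions given above, a minimal Python sketch (not part of the analysis code) that computes \(\delta p_{T}\), \(\delta\alpha_{T}\), \(\delta p_{T,x}\), and \(\delta p_{T,y}\) from reconstructed muon and proton three-momenta might look as follows; the beam direction is taken along \(z\), and the example momenta are illustrative, not measured values.

```python
import numpy as np

def kinematic_imbalance(p_mu, p_p, nu_dir=(0.0, 0.0, 1.0)):
    """Return (delta_pT, delta_alphaT [deg], delta_pTx, delta_pTy) from the
    muon and proton 3-momenta in GeV/c, following the definitions above."""
    nu = np.asarray(nu_dir, float)
    nu /= np.linalg.norm(nu)
    p_mu, p_p = np.asarray(p_mu, float), np.asarray(p_p, float)
    # Momentum components transverse to the neutrino direction.
    pt_mu = p_mu - p_mu.dot(nu) * nu
    pt_p = p_p - p_p.dot(nu) * nu
    dpt = pt_mu + pt_p
    dpt_mag = np.linalg.norm(dpt)
    # delta_alphaT is the angle between -pT(mu) and delta_pT.
    cos_a = -pt_mu.dot(dpt) / (np.linalg.norm(pt_mu) * dpt_mag)
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return dpt_mag, np.degrees(alpha), dpt_mag * np.sin(alpha), dpt_mag * np.cos(alpha)

# Illustrative momenta in GeV/c:
print(kinematic_imbalance([0.3, 0.0, 0.6], [-0.25, 0.05, 0.4]))
```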
The signal definition used in this analysis includes all \(\nu_{\mu}\)-Ar scattering events with a final-state muon with momentum \(0.1<p_{\mu}<1.2\) GeV/\(c\) and exactly one final-state proton with \(0.3<p_{p}<1\) GeV/\(c\). Events with final-state neutral pions at any momentum are excluded. Signal events may contain additional protons with momentum less than 300 MeV/\(c\) or greater than 1 GeV/\(c\), neutrons at any momentum, and charged pions with momentum lower than 70 MeV/\(c\). We refer to the signal events as CC1p0\(\pi\). After the application of the event selection, we retain 9051 data events that satisfy all criteria. Event distributions for all the aforementioned variables of interest and details on the CC1p0\(\pi\) event selection can be found in [30]. The flux-averaged differential event rate as a function of a given variable \(x\) in bin \(i\) is obtained by \[\frac{dR}{dx_{i}}=\frac{N_{i}-B_{i}}{T\cdot\Phi_{\nu}\cdot\Delta_{i}} \tag{1}\] where \(N_{i}\) and \(B_{i}\) are the number of measured events and the expected background events, respectively. \(T\) is the number of target argon nuclei in the fiducial volume of interest. \(\Phi_{\nu}\) corresponds to the total BNB flux and, finally, \(\Delta_{i}\) corresponds to the \(i\)-th bin width or area for the single- and double-differential results, respectively. We report the extracted cross sections for the measured interaction using the Wiener singular value decomposition (Wiener-SVD) unfolding technique as a function of unfolded kinematic variables [31]. More details on the unfolding procedure can be found in [30]. The unfolding machinery returns the unfolded differential cross section and the corresponding uncertainties. Apart from the unfolded result, an additional smearing matrix \(A_{C}\) is obtained, which accounts for the regularization and bias of the measurement. When a comparison to the unfolded data is performed, the corresponding \(A_{C}\) matrices must be applied to the true cross section predictions. See Supplemental Material for the data release, the unfolded covariance matrices, and the additional matrices \(A_{C}\). As in previous MicroBooNE measurements [32, 33, 15, 34], the full Monte Carlo (MC) simulation used in the unfolding procedure consists of a combination of simulated neutrino interactions overlaid on beam-off background events. This provides an accurate description of the dominant cosmic backgrounds pertinent to surface detectors using real data.

Figure 1: The flux-integrated (a) single- and (b-c) double- (in \(\delta\alpha_{T}\) bins) differential CC1p0\(\pi\) cross sections as a function of the transverse missing momentum \(\delta p_{T}\). Inner and outer error bars show the statistical and total (statistical and shape systematic) uncertainty at the 1\(\sigma\), or 68%, confidence level. The gray band shows the separate normalization systematic uncertainty. Colored lines show the results of theoretical cross section calculations with (solid line) and without (dashed line) FSI based on the GENIE (blue) and GiBUU (orange) event generators.

Neutrino interactions are simulated using the GENIE v3.0.6 event generator [35; 36], where the CC QE and CC meson exchange current (MEC) neutrino interaction models have been tuned to T2K \(\nu_{\mu}\)-\({}^{12}\)C CC0\(\pi\) data [37; 38]. We refer to the corresponding prediction as G18. GENIE generates all final-state particles associated with the primary neutrino interaction and propagates them through the nucleus, accounting for FSI.
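A minimal sketch of the rate extraction of Eq. (1) above is given below; the arrays carry illustrative numbers only, not the measured yields or exposure.

```python
import numpy as np

def differential_event_rate(n_obs, n_bkg, n_targets, flux, bin_sizes):
    """Flux-averaged differential event rate, Eq. (1):
    dR/dx_i = (N_i - B_i) / (T * Phi_nu * Delta_i).
    bin_sizes holds widths (1D) or areas (2D results) per bin."""
    n_obs, n_bkg = np.asarray(n_obs, float), np.asarray(n_bkg, float)
    return (n_obs - n_bkg) / (n_targets * flux * np.asarray(bin_sizes, float))

# Illustrative (placeholder) inputs:
rate = differential_event_rate(
    n_obs=[120, 340, 210], n_bkg=[20, 45, 30],
    n_targets=1.0e30, flux=1.0e11, bin_sizes=[0.1, 0.1, 0.2])
print(rate)
```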
The particle propagation outside the nucleus is simulated using GEANT4[39], with the MicroBooNE detector response modeled using the LArSoft framework [40; 41]. Based on this simulation, we estimate that our efficiency for selecting CC1p0\(\pi\) events is \(\approx\) 10%, with a purity of \(\approx\) 70%. The total covariance matrix \(E=E^{\rm stat}\) + \(E^{\rm syst}\) used in the Wiener-SVD filter includes the statistical and systematic uncertainties associated with our measurement. \(E^{\rm stat}\) is a diagonal covariance matrix including the statistical uncertainties and \(E^{\rm syst}\) is a covariance matrix incorporating the total systematic uncertainties. More details on the construction of these matrices can be found in [30]. These matrices include uncertainties on the integrated cross section due to the neutrino flux prediction (7.3%), neutrino interaction cross section modeling (5.3%), detector response modeling (4.9%), beam exposure (2.3%), statistics (1.5%), number-of-scattering-targets (1.15%), reinteractions (1%), and out-of-cryostat interaction modeling (0.2%). The full fractional uncertainty on the integrated total cross section sums to 11%. Across the results reported in this Letter, statistical uncertainties are shown by the inner error bars on the final results. The systematic uncertainties were decomposed into shape- and normalization-related sources following the procedure outlined in [42]. The cross-term uncertainties were incorporated in the normalization part. The outer error bars on the reported cross sections correspond to statistical and shape uncertainties added in quadrature. The normalization uncertainties are presented with the gray band at the bottom of our results. The single- and double-differential results as a function of \(\delta p_{T}\) are presented in Fig. 1. They are compared with G18 and the theory-driven GiBUU 2021 (GiB) event generator. Additional comparisons to the corresponding event generators when FSI are turned off are also included (G18 no-FSI and GiBUU no-FSI). G18 uses the local Fermi gas (LFG) model of the nuclear ground state [43] and the Nieves CCQE scattering prescription [44] with Coulomb corrections for the outgoing muon [45] and random phase approximation (RPA) corrections [46]. It also uses the Nieves MEC model [47], the KLN-BS resonance (RES) [48; 49; 50; 51] and Berger-Sehgal coherent (COH) [52] scattering models. Furthermore, the hA2018 FSI model [53] and the MicroBooNE-specific tuning of model parameters [38] are utilized. GiBUU uses somewhat similar models, but, unlike GENIE, they are implemented in a coherent way by solving the Boltzmann-Uehling-Uhlenbeck transport equation [54]. The simulation includes the LFG model [43], a standard CCQE expression [55], an empirical MEC model and a dedicated spin-dependent resonances amplitude calculation following the MAID analysis [54]. The deep inelastic (DIS) model is from PYTHIA[56]. The FSI treatment is different as the hadrons propagate through the residual nucleus in a nuclear potential which is consistent with the initial state. The single-differential results as a function of \(\delta p_{T}\) using all the events that satisfy our selection are shown in Fig. 1a. The \(\chi^{2}\)/bins data comparison for each generator shown on all the results takes into account the total covariance matrix, including the off-diagonal elements. Theoretical uncertainties on the models themselves are not included. 
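The \(\chi^{2}\)/bins comparisons quoted on the figures use the total covariance, including off-diagonal terms, with the regularization matrix \(A_{C}\) applied to the true prediction, as described above. A minimal sketch of such a computation might look as follows; the two-bin arrays are placeholders, not the released data products.

```python
import numpy as np

def chi2_per_bin(data, prediction_true, e_stat, e_syst, a_c):
    """Chi-square per bin using E = E_stat + E_syst (off-diagonals included).
    The smearing matrix A_C is applied to the true prediction first."""
    pred = np.asarray(a_c, float) @ np.asarray(prediction_true, float)
    diff = np.asarray(data, float) - pred
    e_tot = np.asarray(e_stat, float) + np.asarray(e_syst, float)
    return diff @ np.linalg.solve(e_tot, diff) / diff.size

# Placeholder two-bin example:
print(chi2_per_bin([1.0, 2.0], [1.1, 1.8],
                   e_stat=np.diag([0.01, 0.02]),
                   e_syst=[[0.02, 0.005], [0.005, 0.03]],
                   a_c=np.eye(2)))
```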
The peak height of both generator predictions is \(\approx\) 30% higher when FSI effects are turned off. Yet, all distributions illustrate a transverse missing momentum tail that extends beyond the Fermi momentum (\(\approx\) 250 MeV/\(c\)) whether FSI effects are incorporated or not. The double-differential result using events with \(\delta\alpha_{T}<45^{\circ}\) shown in Fig. 1b is dominated by events that primarily occupy the region up to the Fermi momentum and do not exhibit a high-momentum tail. The double-differential results using events with \(135^{\circ}<\delta\alpha_{T}<180^{\circ}\) are shown in Fig. 1c and illustrate high transverse missing momentum up to \(1\,\mathrm{GeV}/c\). The prediction without FSI effects is strongly disfavored. Therefore, the high \(\delta p_{T}\) region is an appealing candidate for neutrino experiments to benchmark and tune the FSI modeling in event generators.

Figure 2: The flux-integrated (a) single- and (b-c) double- (in \(\delta p_{T}\) bins) differential CC1p0\(\pi\) cross sections as a function of the angle \(\delta\alpha_{T}\). Inner and outer error bars show the statistical and total (statistical and shape systematic) uncertainty at the 1\(\sigma\), or 68%, confidence level. The gray band shows the separate normalization systematic uncertainty. Colored lines show the results of theoretical cross section calculations with a number of FSI-modeling choices based on the GENIE event generator.

Extracted cross sections as a function of \(\delta\alpha_{T}\) are shown in Fig. 2. Here we perform comparisons to the recently added theory-driven GENIE v3.0.6 G21_11b_00_000 configuration (G21 hN) [57]. This configuration uses the SuSAv2 model for CCQE and CCMEC interactions [58], and the hN2018 FSI model [59]. The modeling choices for RES, DIS, and COH interactions are the same as for G18. We investigated the effect of the FSI-modeling choice by comparing the G21 hN results to the ones obtained with G21 hA, where the hA2018 FSI model was used instead, and to G21 G4 with the recently coupled GEANT4 FSI framework [60]. The prediction where the FSI effects have been turned off (G21 no-FSI) is also included for comparison. The single-differential results as a function of \(\delta\alpha_{T}\) using all the events that satisfy our selection are shown in Fig. 2a. The prediction without FSI shows a uniform behavior as a function of \(\delta\alpha_{T}\) and is disfavored by the data. The addition of FSI effects leads to a \(\approx 30\%\) asymmetry around \(\delta\alpha_{T}=90^{\circ}\). The three FSI models used here for comparison yield a consistent behavior. The double-differential result shown in Fig. 2b using events with \(\delta p_{T}<0.2\,\mathrm{GeV}/c\) illustrates a uniform distribution indicative of the suppressed FSI impact in that part of the phase-space. The G21 no-FSI prediction is higher than the other FSI predictions. The difference comes from the generation of multiple particles above detection threshold due to reinteraction effects in the FSI-rich samples. Such events do not satisfy the signal definition and therefore introduce the difference in the absolute scale. The double-differential results using events with \(\delta p_{T}>0.4\,\mathrm{GeV}/c\) are shown in Fig. 2c and illustrate the presence of strong FSI effects with a significantly enhanced asymmetry around \(90^{\circ}\). Thus, the high \(\delta\alpha_{T}\) region is highly informative for the FSI-modeling performance in event generators.
See Supplemental Material for details on the interaction breakdown of the aforementioned results and [30] for further double-differential results. Finally, Fig. 3 shows the single- and double-differential results as a function of \(\delta p_{T,x}\). The result shows the comparison between the nominal G18 model using the local Fermi gas (LFG) and predictions using the same G18 interaction processes but different nuclear ground-state model options available in the GENIE event generator, namely the Bodek-Ritchie Fermi Gas (RFG) [61] and an effective spectral function (EffSF) [62]. Furthermore, the prediction without RPA effects is shown for comparison (no-RPA) [46]. The single-differential result (Fig. 3a) illustrates a fairly broad symmetric distribution centered around \(0\,\mathrm{GeV}/c\). The double-differential result for events where \(\delta p_{T,y}<\) -0.15 \(\mathrm{GeV}/c\) (Fig. 3b) illustrates an even broader distribution, as can be seen in the widths (\(\sigma_{\mathrm{Data}}\)) of Gaussian fits on the data distributions. Conversely, the double-differential result for events with \(|\delta p_{T,y}|<0.15\,\mathrm{GeV}/c\) (Fig. 3c) shows a much narrower peak which strongly depends on the choice of the underlying model and the inclusion or absence of nuclear effects such as RPA. The LFG and no-RPA predictions are favored in both parts of the phase-space.

Figure 3: The flux-integrated (a) single- and (b-c) double- (in \(\delta p_{T,y}\) bins) differential CC1p0\(\pi\) cross sections as a function of the transverse three-momentum transfer component, \(\delta p_{T,x}\). Inner and outer error bars show the statistical and total (statistical and shape systematic) uncertainty at the \(1\sigma\), or \(68\%\), confidence level. The gray band shows the separate normalization systematic uncertainty. Colored lines show the results of theoretical cross section calculations with a number of event generators. The standard deviation (\(\sigma_{\mathrm{Data}}\)) of a Gaussian fit to the data is shown on each panel.

The Supplemental Material contains details on the interaction breakdown of various generator predictions for the results reported here, and further single- and double-differential results can be found in [30]. In summary, we report the first measurement of muon neutrino double-differential cross sections on argon as a function of kinematic imbalance variables for event topologies with a single muon and a single proton detected in the final state. We identify parts of the phase space where the Fermi motion can be largely disentangled from FSI and multi-nucleon effects. This disentanglement provides leverage to improve separate parts of the complicated neutrino interaction models that affect single-differential distributions in similar ways. Therefore, the reported results pave the path to substantially reducing cross section systematic uncertainties which will enable precision measurements of fundamental neutrino properties. This document was prepared by the MicroBooNE collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. MicroBooNE is supported by the following: the U.S. Department of Energy, Office of Science, Offices of High Energy Physics and Nuclear Physics; the U.S.
National Science Foundation; the Swiss National Science Foundation; the Science and Technology Facilities Council (STFC), part of the United Kingdom Research and Innovation; the Royal Society (United Kingdom); the UK Research and Innovation (UKRI) Future Leaders Fellowship; and The European Union's Horizon 2020 Marie Sklodowska-Curie Actions. Additional support for the laser calibration system and cosmic ray tagger was provided by the Albert Einstein Center for Fundamental Physics, Bern, Switzerland. We also acknowledge the contributions of technical and scientific staff to the design, construction, and operation of the MicroBooNE detector as well as the contributions of past collaborators to the development of MicroBooNE analyses, without whom this work would not have been possible. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
2302.02148
Entangling gates for trapped-ion quantum computation and quantum simulation
The trapped-ion system has been a leading platform for practical quantum computation and quantum simulation since the first scheme of a quantum gate was proposed by Cirac and Zoller in 1995. Quantum gates with trapped ions have shown the highest fidelity among all physical platforms. Recently, sophisticated schemes of quantum gates such as amplitude, phase, frequency modulation, or multi-frequency application, have been developed to make the gates fast, robust to many types of imperfections, and applicable to multiple qubits. Here, we review the basic principle and recent development of quantum gates with trapped ions.
Zhengyang Cai, Chunyang Luan, Lingfeng Ou, Hengchao Tu, Zihan Yin, Jing-Ning Zhang, Kihwan Kim
2023-02-04T11:30:39Z
http://arxiv.org/abs/2302.02148v1
# Entangling gates for trapped-ion quantum computation and quantum simulation ###### Abstract The trapped-ion system has been a leading platform for practical quantum computation and quantum simulation since the first scheme of a quantum gate was proposed by Cirac and Zoller in 1995. Quantum gates with trapped ions have shown the highest fidelity among all physical platforms. Recently, sophisticated schemes of quantum gates such as amplitude, phase, frequency modulation, or multi-frequency application, have been developed to make the gates fast, robust to many types of imperfections, and applicable to multiple qubits. Here, we review the basic principle and recent development of quantum gates with trapped ions. ## I I. Introduction In 1995, the first quantum gate for quantum computation was proposed and realized with trapped ions [1; 2; 3]. Since the proposal by Cirac and Zoller [1], the trapped-ion system has led the field of quantum computation [4; 5; 6; 7; 8] as well as quantum simulation [9]. The Cirac-Zoller (CZ) gate specifically uses the center-of-mass vibrational mode. The CZ gate requires perfect ground-state cooling and individual addressing of the target ions, which makes high fidelity difficult to achieve. In 1999, Molmer and Sorensen proposed a different type of entangling gate, which relaxes the requirements of perfect ground-state cooling and individual addressing, and these gates were realized with up to four qubits by the NIST group [10; 11; 12; 13]. Almost at the same time, a gate using the geometric phase acquired in the vibrational mode space through conditional displacement operations was proposed and realized with good fidelity [14; 15; 16]; it was named the light-shift (LS) gate. It has a similar level of experimental requirements to the Molmer-Sorensen (MS) gate. Later, the MS gate was understood in terms of the geometric phase in the phase space of position and momentum, which unifies the understanding of the MS and LS gates [12; 17]. Both the LS [16] and MS gates can be understood as specific uses of a qubit-state-dependent force. For the LS gate [16], the force depends on the ion-qubit state in the \(\sigma_{\rm z}\) basis, and for the MS gate, it depends on the ion-qubit state in the \(\sigma_{\phi}\) basis, where \(\sigma_{\phi}=\cos\phi\ \sigma_{\rm x}+\sin\phi\ \sigma_{\rm y}\)[12; 17]. Therefore, the LS gate and the MS gate are called the \(\sigma_{\rm z}\)- and \(\sigma_{\phi}\)-gate, respectively. The LS and MS gates have been improved to fidelities of over 99.9 %, which have been the highest fidelities so far among all physical platforms for quantum computation [18; 19; 20]. One of the promising schemes to scale up the size of the ion-trap system is to entangle a small number of ion-qubits by using the LS or the MS gate in a single zone and to connect different trapping zones by individually shuttling ions [21; 22], which is called the quantum charge-coupled device (QCCD) architecture. These entangling gates are typically realized with single laser beams or Raman beam pairs with non-vanishing wave-vector \({\bf k}\). Laser implementations of trapped-ion quantum gates have been successful with small numbers of ion-qubits at high fidelities [18; 19; 20]. However, it could be challenging to apply numerous laser beams to different trapping zones for large-scale quantum computation based on the QCCD approach.
The microwave or RF implementation could provide an alternative solution for implementing quantum gates in multiple zones, since microwave or RF circuits can be integrated with the trap [23; 24]. These laser-less gates require a significant field gradient to couple qubits and vibrational modes. Two main methods have been explored in this direction: one uses a static magnetic-field gradient combined with microwave fields [25; 26; 27; 28], and the other uses an oscillating magnetic-field gradient near the motional frequency [23; 24; 29]. Recently, a third scheme has been implemented, which combines a field gradient oscillating near the motional frequency with a field near the qubit frequency [30; 31; 32]. Alternatively, it has been proposed and realized to perform entangling gates on any two qubits among a large number of ions in a single trap zone, where the total number of vibrational modes increases with the number of ions in the zone [33; 34; 35; 36; 37]. In this case, it is necessary to take into account the effect of many collective vibrational modes, that is, to disentangle the target qubits from all relevant vibrational modes at the end of the gate. Originally, these requirements were proposed to be fulfilled by using amplitude modulation [33; 34; 35; 38]. Later, other methods, such as phase [39; 40; 41] or frequency modulation [42; 43; 44] and multi-tone modulation [45; 46; 47; 48], have been introduced for this purpose, as well as for others, such as the robustness or noise-resilience of the pairwise gates [45]. Moreover, these sophisticated methods can also be applied to realize two-qubit gates that are not bound by the conventional speed limit [38]. Furthermore, these schemes were extended to entangle more qubits simultaneously or in parallel [48; 49; 50; 51]. In this article, we provide the basic principle of quantum entangling gates with trapped ions and review the recent development of gates with sophisticated pulse modulations for various purposes. We point out that, among the many different types of modulation, the multi-frequency method provides the most general form of control, covering all other modulation methods, and offers systematic ways of optimizing the gates for different purposes. Finally, we summarize some applications of the multi-frequency method. This review article consists of the following sections. In section II, we introduce the basics of the trapped-ion system, such as different types of qubits, cooling, qubit initialization, detection, and single-qubit operations. In section III, we discuss the interaction of a two-level system coupled to a single vibrational mode and entangling gate operations, with and without laser beams, based on the qubit-state-dependent force. In section IV, we summarize the modulation methods, such as amplitude, phase, frequency, and multi-frequency modulation, and show that the multi-frequency method is the most general form of control. In section V, we provide a general theoretical framework and examples of the multi-frequency method, with or without individual controls, to speed up the gate, make the gate robust against various noise sources, and apply the gate to more than two ions simultaneously. Finally, we conclude the review article with an outlook. ## II II. Basics of the trapped-ion system ### Types of ion qubits To serve as a qubit, two internal levels of a trapped atomic ion are typically encoded as \(|0\rangle\) and \(|1\rangle\). Different from other artificial qubits, ion qubits are fundamentally identical, which is an advantageous feature for large-scale system development.
Ion qubits have further unique advantages over qubits in other physical platforms, such as ultra-long coherence times up to the order of hours [52; 53; 54; 55] and near-perfect qubit-state initialization and detection [56; 53; 57]. There are typically three types of qubits, whose energy splittings cover frequency ranges of a few to tens of megahertz, gigahertz, and hundreds of terahertz, corresponding to the Zeeman qubit (Fig. 1(a)) [58], the hyperfine qubit (Fig. 1(b)) [16; 19; 59; 60; 61], and the optical qubit (Fig. 1(c)), respectively. The Zeeman qubit is composed of a pair of Zeeman energy levels, as shown in Fig. 1(a), which can be manipulated by applying radio-frequency (RF) fields or Raman lasers with a frequency difference on the order of MHz [58]. For the Zeeman qubit, Doppler cooling, sideband cooling, and state initialization can be simply implemented without advanced experimental requirements such as high-frequency sidebands, narrow-line lasers, etc. However, high-efficiency state discrimination of the Zeeman qubit requires another metastable energy level to shelve one of the qubit states. Moreover, the Zeeman qubit is sensitive to magnetic-field fluctuations, which results in a relatively short coherence time on the order of milliseconds. With magnetic-field shielding and permanent magnets, the coherence time of the Zeeman qubit has been increased to \(\sim 2.1\) s with dynamical decoupling pulses [58]. For the optical qubit, one level in a ground-state manifold and another level in a metastable electronic state are used, typically connected by an electric quadrupole or octupole transition, as shown in Fig. 1(c). The transition can be driven with a single-frequency laser in the visible to near-IR spectral region. Due to the long lifetime of the metastable levels, a narrow-line laser (\(\sim 1\) Hz) stabilized by a high-finesse cavity is required to drive the optical transition. The coherence time of the optical qubit is similar to that of the Zeeman qubit. Technical efforts have been made to suppress phase fluctuations of the laser and push the coherence time to the ultimate limit set by the upper-level decay [62]. Consisting of a pair of ground-state hyperfine levels, the hyperfine qubit can be driven by a microwave field [29; 63] or Raman lasers with a frequency difference on the order of GHz, as shown in Fig. 1(b). The hyperfine qubit can be implemented on "clock" transitions that are first-order insensitive to the magnetic field and have shown coherence times up to the order of hours [52; 53; 54; 55]. Moreover, dressed states composed of ground-state hyperfine levels coupled with a resonant RF field [26; 64] provide another available qubit for realizing laser-less gates with a magnetic-field gradient. Figure 1: The level structure of three typical types of ion qubits (energy splittings not to scale). (a) The level splitting of the Zeeman qubit is on the order of a few to tens of megahertz (as in \({}^{138}\)Ba\({}^{+}\), etc.). (b) The hyperfine qubit, composed of a pair of ground-state hyperfine levels, has a level splitting on the order of a few to tens of gigahertz (as in \({}^{171}\)Yb\({}^{+}\), etc.). (c) Consisting of one ground state and another metastable level, the optical qubit is driven by a light field at a frequency on the order of a few hundred terahertz (as in \({}^{88}\)Sr\({}^{+}\), etc.).
### Cooling, state initialization, and detection To obtain high-fidelity quantum entangling gates with trapped ions, motional ground-state cooling is necessary, which is realized by laser cooling methods. First, based on the velocity-dependent radiation force [65, 66], Doppler cooling can typically cool ions down to the order of millikelvin, limited by the natural linewidth of the cooling transition. Near ground-state cooling can be achieved by resolved-sideband cooling [67, 68]. Moreover, Sisyphus cooling [69, 70] or electromagnetically-induced-transparency (EIT) cooling provides an alternative for cooling a large number of modes of multiple ions simultaneously [71, 72, 73, 74, 75, 76, 77, 78]. Before applying sequences of quantum gate operations, the ion qubits should be initialized to one of the qubit states. The state initialization is typically performed by the optical pumping technique within less than a few microseconds [79]. At the end of the quantum operations, the states of the ion qubits can be measured via state-dependent fluorescence detection [80, 81]. The ion qubit is projected to a bright state \(\left|1\right\rangle\), which scatters many photons under illumination by a detection laser, or a dark state \(\left|0\right\rangle\) that scatters almost no photons. The scattered fluorescence photons are collected by an imaging system and typically detected by a photomultiplier tube (PMT) or a charge-coupled device (CCD). Recently, over 99.99 % detection fidelities have been demonstrated with optical qubits of \({}^{40}\)Ca\({}^{+}\) and \({}^{138}\)Ba\({}^{+}\)[56, 82]. For the hyperfine qubit of a single \({}^{133}\)Ba\({}^{+}\) ion, around 99.97 % detection fidelity has been achieved [83]. For other hyperfine qubits with \({}^{43}\)Ca\({}^{+}\) and \({}^{171}\)Yb\({}^{+}\), over 99 % detection fidelities have been observed [56, 84]. The state detection typically takes around a few hundred \(\mu\)s to milliseconds. The shortest detection time reported is \(\sim\) 11 \(\mu\)s with 99.93 % average fidelity, achieved with superconducting nanowire single-photon detectors (SNSPDs) [84]. ### Single-qubit operations Single-qubit operations can be driven by a field resonant with the qubit frequency, which can be realized by an RF field [53], Raman lasers [19, 85], or a single laser [62]. The single-qubit state is described as \(\left|\psi\right\rangle=\cos\left(\theta/2\right)\left|0\right\rangle+e^{i\phi}\sin\left(\theta/2\right)\left|1\right\rangle\), where \(\theta=\Omega t\), with the Rabi frequency \(\Omega\) representing the coupling strength of the field-matter interaction [4]. In experiments, the highest single-qubit gate fidelity of 99.9999 % has been achieved for the hyperfine qubit of \({}^{43}\)Ca\({}^{+}\) using near-field microwaves [53]. With Raman laser beams, fidelities of over 99.99 % have been demonstrated for the hyperfine qubits of \({}^{43}\)Ca\({}^{+}\) and \({}^{9}\)Be\({}^{+}\)[18, 19]. For optical qubits, single-qubit gate fidelities of over 99.99 % have been achieved with a single narrow-line laser [62]. The efforts to speed up single-qubit gates have been mainly technical. For the Raman method, the gate duration has been pushed to less than 50 \(ps\)[85, 86]. Using the microwave method, gates with timescales of 20 \(ns\) have been performed [24].
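As a small illustration of the resonant carrier operation described above, the following sketch builds a single-qubit rotation with \(\theta=\Omega t\) in one common phase convention (conventions differ between references, so the relative phase here is an assumption) and verifies the populations after a \(\pi/2\) pulse.

```python
import numpy as np

def rotation(theta, phi):
    """Resonant single-qubit rotation R(theta, phi), theta = Omega * t,
    in one common convention for the carrier interaction."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

psi = rotation(np.pi / 2, 0.0) @ np.array([1.0, 0.0])  # pi/2 pulse on |0>
print(np.abs(psi) ** 2)  # -> [0.5, 0.5], an equal superposition
```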
In addition, the fidelity of ultra-fast single-qubit gates can be improved by applying modulated pulse sequences and suppressing unwanted excited transitions [87, 88]. For single-qubit operations with ion qubits, in particular hyperfine qubits, the main error sources are technical imperfections, such as fluctuations in the amplitude of the microwave fields. For optical operations, photon scattering from the relevant dipole transitions is an additional limiting factor [89], which results in lower fidelities than those of microwave gates. The coupling to vibrational modes in single-qubit operations can be significantly suppressed by using fields with large wavelengths or copropagating-Raman schemes that nullify the coupling. The coherence time of clock-state qubits has already reached over hours [52, 53, 54, 55] and is no longer a limiting factor for the fidelity of single-qubit operations. ## III III. A brief explanation of laser and laserless gates ### Two-level system coupled to a single vibrational mode For multi-qubit gates, quantum information stored in the internal states of trapped ions is transferred via coupling to a collective vibrational mode, the so-called "quantum bus", which plays a core role in quantum entangling gates [4, 5, 17]. With a near-resonant field, we take into account only a two-level system and one vibrational mode, and the general interaction Hamiltonian can be expressed as, \[\hat{H}_{\rm int}=\frac{\hbar}{2}\Omega[e^{i(kx-\omega t+\phi)}+e^{-i(kx-\omega t+\phi)}](\hat{\sigma}^{+}+\hat{\sigma}^{-}), \tag{1}\] where \(\mathbf{k}\equiv k\hat{\mathbf{e}}_{x}\), \(\omega\) and \(\phi\) are the wave vector, frequency, and phase of the field, respectively, \(\Omega=\frac{\left|\mathbf{d}\cdot\mathbf{E}\right|}{\hbar}\) is the Rabi frequency, and \(\hat{\sigma}^{+}=\left|1\right\rangle\left\langle 0\right|\), \(\hat{\sigma}^{-}=\left|0\right\rangle\left\langle 1\right|\). We assume the field is coupled to the vibrational mode in the \(x\)-radial direction. Transformed to the interaction picture with respect to \(\hat{H}_{0}=\frac{\hbar}{2}\omega_{0}\hat{\sigma}^{z}+\hbar\omega_{m}\hat{a}^{\dagger}\hat{a}\), the above interaction Hamiltonian (1) can be simplified after the rotating wave approximation (RWA), \[\hat{H}^{{}^{\prime}}_{\rm int}=\frac{\hbar}{2}\Omega\left\{\hat{\sigma}^{+}e^{i[\eta(\hat{a}^{\dagger}e^{i\omega_{m}t}+\hat{a}e^{-i\omega_{m}t})-\mu t+\phi]}\right\}+\text{h.c.}, \tag{2}\] where \(\mu=\omega-\omega_{0}\) is the detuning between the field frequency and the qubit frequency \(\omega_{0}\), \(\omega_{m}\) is the motional mode frequency, and \(\eta=k\sqrt{\frac{\hbar}{2m\omega_{\rm m}}}\) is the Lamb-Dicke parameter. Generally, the motion of trapped ions is cooled down to the Lamb-Dicke regime [17, 4, 5], where \(\eta\sqrt{n+1}\ll 1\) for phonon number \(n\). In this regime, the interaction Hamiltonian (2) can be further simplified to, \[\tilde{H}_{\rm LD}=\frac{\hbar}{2}\Omega\hat{\sigma}^{+}\left[1+i\eta\left(\hat{a}^{\dagger}e^{i\omega_{\rm m}t}+\hat{a}e^{-i\omega_{\rm m}t}\right)\right]e^{i\left(\phi-\mu t\right)}+\text{h.c.}.
\tag{3}\] In particular, we consider the three types of interaction, the carrier transition (\(\mu=0\)), detuned blue (\(\mu=\omega_{\rm m}+\delta\)), and red sideband transitions (\(\mu=-\omega_{\rm m}-\delta\)), \[\hat{H}_{\rm car} =\frac{\hbar}{2}\Omega(\hat{\sigma}^{+}e^{i\phi_{\rm c}}+\hat{\sigma}^{-}e^{-i\phi_{\rm c}}), \tag{4}\] \[\hat{H}_{\rm bsb} =i\eta\frac{\hbar}{2}\Omega(\hat{\sigma}^{+}\hat{a}^{\dagger}e^{i\phi_{\rm b}}e^{-i\delta t}-\hat{\sigma}^{-}\hat{a}e^{-i\phi_{\rm b}}e^{i\delta t}),\] \[\hat{H}_{\rm rsb} =i\eta\frac{\hbar}{2}\Omega(\hat{\sigma}^{+}\hat{a}e^{i\phi_{\rm r}}e^{i\delta t}-\hat{\sigma}^{-}\hat{a}^{\dagger}e^{-i\phi_{\rm r}}e^{-i\delta t}),\] where \(\delta\ll\omega_{\rm m}\). As shown in Fig. 2, the carrier transition \(|0,n\rangle\leftrightarrow|1,n\rangle\) is driven with Rabi frequency \(\Omega\), while the blue (\(|0,n\rangle\leftrightarrow|1,n+1\rangle\)) and red sideband transitions (\(|0,n\rangle\leftrightarrow|1,n-1\rangle\)) are driven with the Rabi frequencies of \(\eta\Omega\sqrt{n+1}\) and \(\eta\Omega\sqrt{n}\), respectively. For an ion-qubit coupled with a single vibrational mode by a near-resonant field, the qubit-state-dependent force can be achieved by combining the blue and red sideband transitions, \[\hat{H}_{\rm SDF} =\hat{H}_{\rm rsb}+\hat{H}_{\rm bsb}\] \[=\eta\frac{\hbar}{2}\Omega(\hat{a}^{\dagger}e^{i\phi_{\rm m}}e^{-i\delta t}+\hat{a}e^{-i\phi_{\rm m}}e^{i\delta t})\hat{\sigma}^{\phi_{\rm S}}, \tag{5}\] in which we have \(\phi_{\rm S}=(\phi_{\rm b}+\phi_{\rm r}-\pi)/2\), \(\phi_{\rm m}=(\phi_{\rm b}-\phi_{\rm r})/2\), and \(\hat{\sigma}^{\phi_{\rm S}}=\hat{\sigma}^{x}\cos\phi_{\rm S}+\hat{\sigma}^{y}\sin\phi_{\rm S}\). It is clear that \(\phi_{\rm S}\) and \(\phi_{\rm m}\) correspond to the phases of the qubit state and the vibrational mode, respectively [17, 90]. For the sake of generality, we can write the above Hamiltonian in the following way [17, 90], \[\hat{H}_{\rm SDF}=i\hbar(\gamma(t)\hat{a}^{\dagger}-\gamma^{*}(t)\hat{a})\hat{\sigma}^{\phi_{\rm S}}, \tag{6}\] and the unitary evolution can be written as \[\hat{U}(t)=\hat{D}(\alpha(t)\hat{\sigma}^{\phi_{\rm S}})\exp(i\Phi(t)), \tag{7}\] where \(\hat{D}\) is the displacement operator, and \(\Phi(t)\) is the geometric phase [17]. The expressions for \(\gamma(t)\), \(\alpha(t)\), and \(\Phi(t)\) are \[\gamma(t) =-i\frac{\Omega}{2}\eta e^{-i\delta t}e^{i\phi_{\rm m}},\] \[\alpha(t) =\int_{0}^{t}\left(-i\frac{\Omega}{2}\eta e^{-i\delta t}e^{i\phi_{\rm m}}\right)dt=\frac{\Omega\eta e^{i\phi_{\rm m}}}{2\delta}\left(e^{-i\delta t}-1\right), \tag{8}\] \[\Phi(t) =\text{Im}\left(\int_{0}^{t}\,\alpha(t^{\prime})^{*}\text{d}\alpha(t^{\prime})\right).\] When \(\delta=0\), the spin-dependent force can be simplified to the following expression, \[\hat{H}_{\rm SDF}^{\delta=0}=\eta\frac{\hbar}{2}\Omega\left(\hat{a}^{\dagger}e^{i\phi_{\rm m}}+\hat{a}e^{-i\phi_{\rm m}}\right)\hat{\sigma}^{\phi_{\rm S}}, \tag{9}\] where the corresponding terms are \(\gamma(t)=-i\Omega\eta e^{i\phi_{\rm m}}/2\), \(\alpha(t)=-i\Omega\eta e^{i\phi_{\rm m}}t/2\), which results in no geometric phase, \(\Phi(t)=0\). However, when the detuning \(\delta\neq 0\), the evolution of \(\alpha\) follows a circular trajectory, as drawn in Fig. 3.
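A numerical sketch of Eqs. (7)-(8) is given below for assumed, illustrative parameters (\(\eta=0.1\), \(\Omega=2\pi\times 50\) kHz, \(\delta=2\pi\times 10\) kHz): it checks that the trajectory \(\alpha(t)\) closes after one period \(t=2\pi/\delta\) and that the numerically integrated geometric phase matches the closed-form area \(-\pi\eta^{2}\Omega^{2}/(2\delta^{2})\) of the circular trajectory implied by Eq. (8).

```python
import numpy as np

# Illustrative parameters (assumed, not from a specific experiment).
eta, Omega, delta = 0.1, 2 * np.pi * 50e3, 2 * np.pi * 10e3
t = np.linspace(0.0, 2 * np.pi / delta, 20001)          # one loop in phase space

# alpha(t) from Eq. (8), taking phi_m = 0.
alpha = (Omega * eta / (2 * delta)) * (np.exp(-1j * delta * t) - 1)

# Phi(t) = Im( int alpha* d(alpha) ), integrated numerically.
phi_geo = np.trapz(np.imag(np.conj(alpha) * np.gradient(alpha, t)), t)

print(abs(alpha[-1]))                                    # ~0: trajectory closes
print(phi_geo, -np.pi * (eta * Omega / delta) ** 2 / 2)  # numeric vs closed form
```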
The left and right branches of the trajectory in Fig. 3, which circle in different directions corresponding to the positive and negative eigenstates of \(\hat{\sigma}^{\phi_{\rm S}}\), generate the same geometric phase, equal to the size of the encapsulated area. When the trajectories loop back to the origin, the motional states are disentangled from the spin states. So far, we have illustrated a means of generating a \(\sigma^{\phi_{\rm S}}\)-spin-dependent force, which directly leads to the MS gates. We note that the spin-dependent force can also be realized in the \(\sigma^{\rm z}\)-basis as discussed in [15, 16, 17], which leads to the LS gates. Figure 2: The illustration for carrier and sideband transitions, where \(\omega_{0}\) and \(\omega_{\rm m}\) are the qubit frequency and motional mode frequency. The abbreviations car (black line), rsb (red line) and bsb (blue line) represent carrier, red- and blue-sideband transitions, respectively. Figure 3: The trajectory of the motion driven by the spin-dependent force in phase space. The states \(|+\rangle\) and \(|-\rangle\) represent the two eigenstates of \(\hat{\sigma}^{\phi_{\rm S}}\) with eigenvalues \(+1\) and \(-1\), respectively. ### Laser gate In ion trap systems, two-qubit entangling gates are implemented via the coupling between the internal states and the collective vibrational modes. In this section, we mainly focus on using a single collective mode; later we will discuss the general situation with more than one collective vibrational mode. The trapped-ion entangling gates can be realized in various ways, with or without laser beams. Throughout the years, employing combinations of tunable lasers to implement multi-qubit gates has been the most prevalent approach, for both hyperfine and optical qubits. Hyperfine qubits are typically driven via Raman transitions, and optical qubits are operated by direct transitions with narrow-linewidth lasers. The original entangling gate between two ion-qubits was proposed by Cirac and Zoller [1] and realized with a single qubit and two qubits [2; 3]. However, the CZ gate requires cooling to the ground state of the collective vibrational mode and laser addressing of individual ions, which made it difficult to realize high-fidelity gates. Different from the CZ gate scheme, the Molmer-Sorensen (MS) gate [10; 11; 12] is insensitive to the phonon number \(\ket{n}\) of the vibrational mode of interest, which is more experimentally favorable than the CZ gate scheme. In the MS gate scheme, a pair of trapped ions is illuminated with bichromatic laser fields simultaneously, and the ions are driven by the state-dependent force to follow different motional trajectories, depending on the qubit states, in the position-momentum phase space [12; 17]. In order to be an entangling operation, the internal states should be disentangled at the end of the gate with an accumulated geometric phase of \(\pi/4\). The MS gate scheme was first demonstrated with \({}^{9}\mathrm{Be}^{+}\) hyperfine qubits [13] in 2000. MS gates have since been demonstrated by many ion-trap groups, with fidelities of up to 99.9 % [18; 91]. The effective gate-error rates of two-qubit gates were also suppressed to \(\left(0.96\pm 0.10\right)\times 10^{-3}\) from the \(10^{-2}\) level of physical infidelity by using an error mitigation technique [92].
For the MS gate, the spin-flip process is realized using a bichromatic laser whose central frequency corresponds to the single-flip resonance, with the two sideband frequencies slightly detuned from the vibrational mode, as Fig. 4 shows [93]. To simplify our discussion, only one axial mode is included below. In this way, the Hamiltonian may be derived by directly extending Eq. (5) to the two-ion scenario [17],

\[\hat{H}_{\mathrm{SDF}}=\sum_{j=1}^{2}\eta_{j}\frac{\hbar\Omega}{2}(\hat{a}^{\dagger}\mathrm{e}^{-i\delta t}\mathrm{e}^{i\phi_{m,j}}+\hat{a}\mathrm{e}^{i\delta t}\mathrm{e}^{-i\phi_{m,j}})\hat{\sigma}_{j}^{\phi_{\mathrm{S},j}}, \tag{10}\]

where \(\eta_{j}\) and \(\phi_{m,j}\) are the Lamb-Dicke parameter and the phase associated with the vibrational mode for each ion, respectively. Supposing \(\phi_{\mathrm{S},1}=\phi_{\mathrm{S},2}=\phi_{\mathrm{S}}\), the above Hamiltonian leads to the following evolution operator,

\[\hat{U}(t)=\exp\left[\sum_{j=1}^{2}\left(\alpha_{j}(t)\hat{a}^{\dagger}-\alpha_{j}^{*}(t)\hat{a}\right)\hat{\sigma}_{j}^{\phi_{\mathrm{S}}}-i\frac{\theta(t)}{2}\hat{\sigma}_{1}^{\phi_{\mathrm{S}}}\hat{\sigma}_{2}^{\phi_{\mathrm{S}}}\right], \tag{11}\]

where

\[\alpha_{j}(t)=-i\frac{\eta_{j}\Omega}{2}\int_{0}^{t}\mathrm{e}^{-i\delta t^{\prime}}\mathrm{e}^{i\phi_{m,j}}\mathrm{d}t^{\prime}=\frac{\Omega\eta_{j}\mathrm{e}^{i\phi_{m,j}}}{2\delta}\left(\mathrm{e}^{-i\delta t}-1\right), \tag{12}\]

\[\theta(t)=\eta_{1}\eta_{2}\Omega^{2}\left(\frac{t}{\delta}-\frac{\sin(\delta t)}{\delta^{2}}\right). \tag{13}\]

Viewed in phase space, the internal ion states are fully disentangled from the vibrational mode once the phase-space trajectories \(\alpha_{j}(t)\) of Eq. (12) close, and the first term of Eq. (11) vanishes. At this point the geometric phase \(\theta(t)\) of the two-ion gate, given in Eq. (13), is obtained [12], and the evolution of the MS interaction simplifies to [94]

\[\hat{U}(\theta)=\exp\left(-i\frac{\theta}{2}\hat{\sigma}_{1}^{\phi_{\mathrm{S}}}\hat{\sigma}_{2}^{\phi_{\mathrm{S}}}\right). \tag{14}\]

Another typical two-qubit entangling gate scheme was proposed by Milburn in 2000 [15] and demonstrated with \({}^{9}\mathrm{Be}^{+}\) hyperfine qubits [16]; this is the so-called light-shift (LS) gate. Similar to the MS gate scheme, the LS gate is also driven by a state-dependent force and generates the entangling interaction between ion qubits through the collective vibrational mode. However, no \(\ket{0}\leftrightarrow\ket{1}\) transition is required during the LS gate operation. The LS gate is typically implemented by applying two beams whose frequency difference is close to a target vibrational mode, which couples along the direction of the wavevector difference.

Figure 4: Illustration of two typical paths of the MS gate, driven by the bichromatic laser fields irrespective of the phonon number \(n\); \(\omega_{0}\) and \(\delta\) are the qubit frequency and the laser detuning.

In the Lamb-Dicke regime and under the rotating-wave approximation, the interaction Hamiltonian can be written as:

\[\hat{H}_{\text{I}}=\sum_{j=1,2}\sum_{s=0,1}\frac{\hbar\eta_{j}}{2}\Omega_{s}\ket{s}\bra{s}_{j}\left[\hat{a}e^{i(\delta t-\phi_{\text{m},j})}+\hat{a}^{\dagger}e^{-i(\delta t-\phi_{\text{m},j})}\right].
\tag{15}\]

When \(\Omega_{0}=-\Omega_{1}=\Omega\), the Hamiltonian can be transformed into the following form:

\[\hat{H}_{\text{SDF}}=\sum_{j=1}^{2}\eta_{j}\frac{\hbar\Omega}{2}(\hat{a}^{\dagger}\text{e}^{-i\delta t}\text{e}^{i\phi_{\text{m},j}}+\hat{a}\text{e}^{i\delta t}\text{e}^{-i\phi_{\text{m},j}})\hat{\sigma}_{j}^{z}. \tag{16}\]

We immediately see that the Hamiltonian of Eq. (16) is almost the same as the MS-gate Hamiltonian of Eq. (10), except that \(\hat{\sigma}_{j}^{\phi_{\text{S}}}\) is replaced by \(\hat{\sigma}_{j}^{z}\), which indicates the similarity between the MS and LS gates. In this case, we obtain the evolution operator in a form similar to Eq. (11),

\[\hat{U}(t)=\exp\Biggl[\sum_{j=1}^{2}\left(\alpha_{j}(t)\hat{a}^{\dagger}-\alpha_{j}^{*}(t)\hat{a}\right)\hat{\sigma}_{j}^{z}-i\frac{\theta(t)}{2}\hat{\sigma}_{1}^{z}\hat{\sigma}_{2}^{z}\Biggr], \tag{17}\]

where the definitions of \(\alpha_{j}(t)\) and \(\theta(t)\) directly follow Eqs. (12) and (13). When the single-qubit interaction terms vanish, we have

\[\hat{U}(\theta)=\exp\left(-i\frac{\theta}{2}\hat{\sigma}_{1}^{z}\hat{\sigma}_{2}^{z}\right). \tag{18}\]

The imperfections of the LS gate come from multiple sources, including higher-order terms beyond the Lamb-Dicke regime, laser-control errors, motional decoherence, unwanted mode coupling, scattering errors, etc. This type of gate has been demonstrated on hyperfine qubits [16; 19] and optical qubits [20; 95]. A fast gate with a fidelity of 99.8% in 1.6 \(\mu\)s has been achieved on \({}^{43}\)Ca\({}^{+}\) hyperfine qubits [96]. A Bell state with an infidelity of 6(3)\(\times 10^{-4}\), without subtraction of experimental errors, has been directly measured with \({}^{40}\)Ca\({}^{+}\) optical qubits [97]. The LS gate was long considered impossible to implement with field-insensitive qubits [17]. However, it was proposed to realize LS gates with field-insensitive qubits by using laser detunings in the middle of the trap frequencies [98; 93] or narrow-line transitions [99]. Such LS gates with clock states have since been realized [100; 101; 102]. The MS and LS gates have also been applied to multi-species atomic ions [103; 104; 105; 106; 107; 108; 109; 110]. The Oxford group has achieved a gate fidelity of 99.8% between a \({}^{43}\)Ca\({}^{+}\) hyperfine qubit and a \({}^{88}\)Sr\({}^{+}\) Zeeman qubit [109].

### Laserless gate

Microwave or RF fields are used to implement high-fidelity single-qubit gates, with convenient control of amplitude and phase. In order to couple the qubit and the motion for multi-qubit gates, the field strength should vary significantly over the effective size of the ion, which is \(z_{0}\sim 10\) nm. However, the gradient of a free-space microwave or RF field is proportional to \(1/\lambda\), with \(\lambda\sim 100\) mm, which is too small to drive the spin-motion interaction. In other words, the Lamb-Dicke parameter of the microwave field is around \(\eta\sim 10^{-7}\), which produces negligible spin-motion coupling. A method to address this limitation is to add static magnetic fields with large gradients, or to use near fields of an oscillating magnetic field, whose gradients are not limited by the wavelength of the field. The use of a static magnetic-field gradient to realize spin-motion coupling was first proposed by Mintert and Wunderlich in 2001 [25].
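As a rough numerical illustration of the free-space limitation mentioned above, one can compare the Lamb-Dicke parameter \(\eta=2\pi z_{0}/\lambda\) for optical and microwave wavelengths; a minimal sketch (the wavelength values are illustrative assumptions):

```python
import math

# Compare eta = 2*pi*z0/lambda for an ion wave packet of size z0 ~ 10 nm.
# Wavelength values below are illustrative assumptions.
z0 = 10e-9  # effective ion (ground-state wave-packet) size [m]

for label, wavelength in [("optical Raman (355 nm)", 355e-9),
                          ("free-space microwave (~100 mm)", 100e-3)]:
    eta = 2 * math.pi * z0 / wavelength
    print(f"{label:32s} eta ~ {eta:.1e}")

# optical Raman (355 nm)           eta ~ 1.8e-01  (usable sideband coupling)
# free-space microwave (~100 mm)   eta ~ 6.3e-07  (negligible coupling)
```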
When an ion interacts with a static magnetic-field gradient \(\frac{\partial B_{z}}{\partial z}\) and a near-qubit-frequency microwave, the Hamiltonian can be expressed as

\[\hat{H}=\frac{1}{2}\hbar\omega_{0}\hat{\sigma}^{z}+\hbar\omega_{m}\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\mu_{z}\Big(B_{0}+\frac{\partial B_{z}}{\partial z}z\Big)\hat{\sigma}^{z}+\hbar\Omega_{\mu}\hat{\sigma}^{x}\cos(\omega_{\mu}t), \tag{19}\]

where \(\omega_{0}\) is the qubit frequency, \(\omega_{m}\) is the vibrational mode frequency, \(\mu_{z}\) is the magnetic dipole moment, \(B_{0}\) is the magnetic field at the equilibrium position of the ion, \(\frac{\partial B_{z}}{\partial z}\) is the gradient of the magnetic field, \(z\) is the displacement around the equilibrium position, \(\Omega_{\mu}\) is the Rabi frequency quantifying the strength of the dipole interaction, and \(\omega_{\mu}\) is the near-qubit microwave frequency. The displacement \(z\) can be written as \(z=z_{0}(\hat{a}+\hat{a}^{\dagger})\), where \(z_{0}=\sqrt{\frac{\hbar}{2m\omega_{m}}}\) is the size of the motional ground state. Applying the transformation \(\hat{H}^{\prime}=e^{\hat{S}}\hat{H}e^{-\hat{S}}\), with \(\hat{S}=\eta^{\prime}(\hat{a}^{\dagger}-\hat{a})\hat{\sigma}_{z}\) and \(\eta^{\prime}=\frac{\mu_{z}z_{0}}{2\hbar\omega_{m}}\frac{\partial B_{z}}{\partial z}\), the interaction part of Eq. (19) can be expressed as [111; 25]:

\[\hat{H}_{I}^{{}^{\prime}}=\frac{1}{2}\hbar\Omega_{\mu}(\hat{\sigma}^{+}e^{\eta^{\prime}(\hat{a}^{\dagger}-\hat{a})}+\hat{\sigma}^{-}e^{-\eta^{\prime}(\hat{a}^{\dagger}-\hat{a})})(e^{i\omega_{\mu}t}+e^{-i\omega_{\mu}t}). \tag{20}\]

As a consequence, the ion feels an effective Lamb-Dicke parameter \(\eta^{\prime}\), which means the qubit-motion coupling can be amplified by enlarging the gradient [112; 113]. From another perspective, if the gradient term is removed, the interaction term in Eq. (19) can only drive spin flips with \(\delta n=0\). When the gradient is introduced, however, the overlap between the motional states of \(\ket{0}\) and \(\ket{1}\) changes, which means spin flips with \(\delta n\neq 0\) can be achieved, as shown in Fig. 5(a).

The static magnetic-field gradient in experiments can be produced by permanent magnets [27; 28; 113] or by wires carrying dc currents [114; 115; 116]. Because of the spatially varying magnetic field, the resonance frequencies of the ions are position dependent, which provides a means to realize individual addressing by applying different microwave frequencies [112; 117]. The cross-talk of individual addressing by frequency selection was measured to be as low as \(10^{-5}\) [118], where the cross-talk was quantified by the number of excitations in all non-addressed qubits. The individual-addressing methods were extended to entangling gate operations, and nearest- and non-nearest-neighbor interactions of ions were demonstrated [27]. A shortcoming of the static magnetic-field-gradient scheme is the necessity of using a magnetic-field-sensitive qubit, whose gate fidelity and coherence time can be seriously degraded by magnetic-field fluctuations. A microwave-dressed-state qubit, an effective clock qubit, was proposed and realized, showing a coherence time increased by two orders of magnitude compared with that of the bare magnetically sensitive qubit [26]. The dressed qubits have been further developed, and a two-qubit gate with a fidelity of 98.5% and a duration of 2.7 ms has been demonstrated [119; 120; 121; 28; 64].
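To get a feel for the scale of the effective Lamb-Dicke parameter \(\eta^{\prime}\) introduced above, here is a back-of-the-envelope evaluation of the formula; the gradient, ion mass, and trap frequency are illustrative assumptions, with \(\mu_{z}\) taken as the Bohr magneton.

```python
import math

hbar = 1.054571817e-34            # [J s]
mu_B = 9.2740100783e-24           # Bohr magneton [J/T]; assume mu_z ~ mu_B

# Illustrative (assumed) experimental parameters:
dBdz    = 20.0                    # static field gradient [T/m]
omega_m = 2 * math.pi * 100e3     # axial mode frequency [rad/s]
m_ion   = 171 * 1.66053906660e-27 # 171Yb+ mass [kg]

z0  = math.sqrt(hbar / (2 * m_ion * omega_m))   # ground-state size
eta = mu_B * z0 * dBdz / (2 * hbar * omega_m)   # effective Lamb-Dicke parameter

print(f"z0 ~ {z0 * 1e9:.1f} nm, eta' ~ {eta:.1e}")
# -> z0 ~ 17.2 nm, eta' ~ 2.4e-02: several orders of magnitude larger than
#    the ~1e-7 free-space microwave value quoted in the text.
```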
An oscillating near-qubit-frequency magnetic-field gradient was proposed as an alternative method to couple the qubit and the motion [23]. Taking a gradient along the x-axis as an example, the interaction Hamiltonian can be written as:

\[\hat{H}_{I}=\mu_{x}\frac{\partial B}{\partial x}x_{0}(\hat{a}+\hat{a}^{\dagger})\hat{\sigma}_{x}\cos(\omega_{g}t), \tag{21}\]

where \(\omega_{g}\) is a frequency near the qubit frequency \(\omega_{0}\). In the interaction picture with respect to \(\hat{H}_{0}=\frac{1}{2}\hbar\omega_{0}\hat{\sigma}^{z}+\hbar\omega_{m}\hat{a}^{\dagger}\hat{a}\), and after the rotating-wave approximation, \(\hat{H}_{I}\) can be rewritten as:

\[\hat{H}_{I}^{{}^{\prime}}=\hbar\Omega_{g}\hat{\sigma}^{+}\hat{a}e^{-i(\delta+\omega_{m})t}+h.c., \tag{22}\]

where \(\delta=\omega_{g}-\omega_{0}\) and \(\Omega_{g}=\frac{\mu_{x}x_{0}}{2\hbar}\frac{\partial B}{\partial x}\). Therefore, qubit-motion coupling can be achieved. In experiments, a microwave-based two-qubit \(\hat{\sigma}_{\phi}\hat{\sigma}_{\phi}\) entangling gate was performed with a fidelity of 76(3)% in 2011 [24], where the near-qubit-frequency microwave gradient was created by currents in electrodes integrated into a microfabricated trap. Individual addressing was realized with spin-flip cross-talk errors on the order of \(10^{-3}\) [122]. However, the fidelity of the two-qubit gate is strongly affected by the residual microwave field and by magnetic-field fluctuations [123]. To alleviate these two limitations, the dynamical-decoupling method was applied, improving the fidelity to 99.7% with a gate time of 3.25 ms [29]. Another method, which reduces the error from motional-mode frequency fluctuations by modulating the amplitude of the microwave, was proposed and realized, reaching a mean fidelity of 99.7% with a gate time of 2.938 ms [124]. The oscillating-magnetic-field-gradient method was also integrated into a surface trap, achieving a two-qubit fidelity of 98.2% [125]. However, it is difficult to generate a gigahertz-frequency magnetic-field gradient for a hyperfine qubit.

Recently, inspired by the use of a running optical lattice to control the ion motion [30], a new method to couple the spin and the motion, combining a near-motion-frequency oscillating magnetic-field gradient with a near-qubit-frequency microwave, was proposed and applied to perform sideband cooling to the motional ground state [31]. In this scheme, the interaction Hamiltonian is:

\[\hat{H}_{I}=\hbar\Omega_{g}(\hat{a}+\hat{a}^{\dagger})\hat{\sigma}_{z}\cos(\omega_{g}t)+\hbar\Omega_{\mu}\hat{\sigma}^{x}\cos(\omega_{\mu}t), \tag{23}\]

where \(\Omega_{g}=\frac{\mu_{x}x_{0}}{2\hbar}\frac{\partial B_{z}}{\partial x}\), \(\omega_{g}\) is a frequency near the motional frequency \(\omega_{m}\), \(\omega_{\mu}\) is a frequency near the qubit frequency \(\omega_{0}\), and \(\Omega_{\mu}\) is the Rabi frequency of the magnetic-dipole interaction. Transforming to the interaction picture with respect to \(\hat{H}_{0}=\hbar\omega_{g}\hat{a}^{\dagger}\hat{a}\) and dropping the fast-rotating terms, the total Hamiltonian becomes:

\[\hat{H}=\frac{1}{2}\hbar\omega_{0}\hat{\sigma}^{z}+\hbar(\omega_{m}-\omega_{g})\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\hbar\Omega_{g}\hat{\sigma}^{z}(\hat{a}+\hat{a}^{\dagger})+\hbar\Omega_{\mu}\hat{\sigma}^{x}\cos(\omega_{\mu}t), \tag{24}\]

which is similar to the Hamiltonian of the static magnetic-field-gradient method, except that \(\omega_{m}\) in Eq. (19) is replaced by \(\omega_{m}-\omega_{g}\), as shown in Fig. 5(b).
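To spell out the step from Eq. (23) to Eq. (24) (a derivation we add for completeness): in the frame rotating at \(\omega_{g}\), \(\hat{a}\to\hat{a}e^{-i\omega_{g}t}\), so the gradient term becomes

\[\hbar\Omega_{g}\hat{\sigma}_{z}\left(\hat{a}e^{-i\omega_{g}t}+\hat{a}^{\dagger}e^{i\omega_{g}t}\right)\cos(\omega_{g}t)=\frac{\hbar\Omega_{g}}{2}\hat{\sigma}_{z}\left(\hat{a}+\hat{a}^{\dagger}\right)+\frac{\hbar\Omega_{g}}{2}\hat{\sigma}_{z}\left(\hat{a}e^{-2i\omega_{g}t}+\hat{a}^{\dagger}e^{2i\omega_{g}t}\right),\]

and dropping the terms oscillating at \(2\omega_{g}\), while the free motional term becomes \(\hbar(\omega_{m}-\omega_{g})\hat{a}^{\dagger}\hat{a}\), reproduces Eq. (24).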
In experiments, a gate robust to qubit-frequency fluctuations and motional decoherence was proposed using intrinsic dynamical decoupling [126]. Its experimental realization produced a symmetric entangled state with near-perfect fidelity and an antisymmetric entangled state with a fidelity of 99.77% [32].

## IV. Discussion about the control methods

Recently, many modulation schemes have been developed for entangling-gate operations with trapped ions. The main purposes of including modulation in the gate fall into the following three categories: (1) to make the gate robust against experimental imperfections, (2) to speed up the gate, and (3) to implement the gates in a single trap with multiple ions.

As discussed in the previous section, a perfect two-qubit entangling gate should satisfy two constraints at the end of the gate: (a) the trajectory in the phase space of the vibrational mode of the ion chain is closed; (b) the area enclosed by the phase-space trajectory in the mode gives \(\theta=\pi/4\).

Figure 5: Schematic description of a qubit coupled to a harmonic oscillator with a spin-dependent displacement from (a) a static magnetic-field gradient or (b) an oscillating near-motion-frequency magnetic-field gradient. Adapted from Ref. [31].

However, due to experimental imperfections such as frequency fluctuations of the driving field, timing errors of the gate, and amplitude fluctuations, the trajectory of the vibrational mode may not close as designed, and the accumulated geometric phase may not be exactly \(\pi/4\), which decreases the fidelity of the gates. The robustness of gates against imperfections can be improved by including modulations of the driving fields.

The duration of two-qubit gates can be shortened by increasing the amplitude of the Rabi frequency \(\Omega\) in Eq. (10). If the strength of the sideband transitions \(\eta\Omega\) is comparable to the frequency difference of the two vibrational modes, namely the center-of-mass (COM) and stretch modes, we cannot use the single-mode description discussed in the previous section. It is then necessary to include the effects of both vibrational modes in the two-qubit gate operations. That is, the phase-space trajectories of both modes should be closed, and the sum of the areas enclosed by the phase-space trajectories in both modes should give \(\pi/4\) [127, 128, 129, 38, 130, 51]. These requirements can be met by using modulation methods and have been experimentally realized [96, 88].

One promising scheme to scale up the number of ions for quantum computation is to perform quantum gates on ion qubits confined in a long ion chain [82, 34, 73]. In the single-trap scheme, the transverse modes are used more often than the axial modes, mainly because better laser cooling can be achieved for the transverse modes, whose frequencies are larger than those of the axial modes in a linear chain with a large number of ions [131, 35]. However, the frequency spacings between the modes become smaller as the number of ions in a single linear trap increases, which makes it challenging to address only a single vibrational mode for the entangling gate. Similar to the fast gate, it is necessary to close all the mode trajectories and to make the sum of the phases \(\pi/4\) for the target ions, as shown in Fig. 6.
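For orientation, in the simplest single-mode, constant-amplitude case these two constraints already fix the gate parameters; a short worked example we add, following Eqs. (12) and (13): closure requires \(\delta\tau_{g}=2\pi K\) for an integer \(K\), and imposing the maximally entangling phase \(\theta(\tau_{g})/2=\pi/4\) (cf. Eq. (14)) with \(\eta_{1}=\eta_{2}=\eta\) gives

\[\frac{\theta(\tau_{g})}{2}=\frac{\pi K\eta^{2}\Omega^{2}}{\delta^{2}}=\frac{\pi}{4}\quad\Longrightarrow\quad\delta=2\sqrt{K}\,\eta\Omega,\qquad\tau_{g}=\frac{\pi\sqrt{K}}{\eta\Omega},\]

so for a fixed sideband strength \(\eta\Omega\) the detuning and gate time are no longer free, and additional robustness or speed requirements must be met by modulation.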
In order to achieve these goals, many different types of modulation have been developed, such as amplitude [33, 34, 35, 36, 38, 60, 93, 124, 132], phase [39, 32, 40, 133, 41, 134], frequency [42, 43, 44, 135, 136], and multi-frequency modulation [45, 46, 47, 48]. Here we briefly overview the development of these modulations and discuss how multi-frequency modulation provides a general method covering all the other modulation methods.

### Amplitude

The amplitude modulation (AM) method changes the amplitude of the driving field, \(\Omega\) in Eq. (10), from a constant to a time-dependent waveform during the gate operation. A continuous-amplitude-modulation scheme for two-qubit gates was proposed for robustness to the optical-phase fluctuations of the driving beams [93]. This scheme was applied to MS gates on optical qubits of \({}^{40}\)Ca\({}^{+}\) ions with a duration of 50 \(\mu s\) and a fidelity of 99.3% [60]. Amplitude-shaped pulses were also used to realize MS gates on ions in a thermal state with average phonon number \(\bar{n}=20\), showing a fidelity of 97.4% in a 25 \(\mu s\) duration [132]. Continuous amplitude modulation has also been applied to MS gates using a near-field microwave [124]. It was confirmed that this scheme is robust to frequency fluctuations of the vibrational modes. The gate was implemented with hyperfine qubits in \({}^{9}\)Be\({}^{+}\) ions with 99.7% fidelity [124]. The pulse timing and the consequent phase-space trajectories are shown in Fig. 7.

Figure 6: Scheme for an entangling operation on the two ion-qubits in the middle of a four-ion chain. (a) The modulation scheme of the pulse sequence in the motional phase \(\phi_{j}\) and in the amplitude (the Rabi frequency \(\Omega_{j}\)), corresponding to the colored segments and the black solid curve, respectively. (b) Accumulation of the geometric phase \(\theta\) on the middle two ion-qubits. (c) Phase-space trajectories of the four vibrational modes. Modified from [49].

Discrete amplitude modulation was proposed to realize two-qubit gates with multiple vibrational modes [33; 34; 35]. In this scheme, the laser field is divided into isochronous segments with different intensities to fulfill the constraints discussed above for ideal gates. This approach was implemented using hyperfine \({}^{171}\)Yb\({}^{+}\)-ion qubits in a five-ion chain, with a gate time and fidelity of 190 \(\mu s\) and 95%, respectively [36]. Another discrete amplitude-modulation scheme was proposed that treats the duration and amplitude of each optical pulse as variables [38]. The optimal pulse sequence is considered to be insensitive to the optical phase of the driving laser [38]. This scenario was implemented with the hyperfine qubits of \({}^{43}\)Ca\({}^{+}\) ions; the implemented two-qubit gate reached 99.8% fidelity in 1.6 \(\mu s\) [96]. Additionally, there are applications of amplitude-modulated gates, such as a programmable quantum computer [37] and parallel entangling gates [137; 50].

### Phase

The phase modulation (PM) method modulates the vibrational phases \(\phi_{j}\) of the spin-dependent force in Eq. (10) during the gate operation. PM has an advantage in experimental realization, since phase can be controlled more precisely in experiments than amplitude.
The first PM scheme alternated the vibrational phases between 0 and \(\pi\) in the form of Walsh functions to suppress the effects of certain frequency and timing errors, and was realized with \({}^{171}\)Yb\({}^{+}\) [133]. This scheme does not require optimizing the pulse sequence, but the duration of the gate increases as higher orders of the Walsh functions are used for enhanced robustness. The Walsh-function method was also utilized in mixed-species [109] and \({}^{40}\)Ca\({}^{+}\) [20] ion systems. Similar to the Walsh-function method, a phase-modulation scheme based on the dynamical-decoupling pulse technique has been proposed and implemented, which improved the robustness against dephasing noise of the qubit without much time overhead from the dynamical-decoupling pulses [134]. A more flexible piecewise-constant PM scheme with continuous phase values was proposed to implement two-qubit gates with both robustness and high fidelity in the presence of multiple vibrational modes [39]. This scheme was implemented with the hyperfine qubits of two \({}^{171}\)Yb\({}^{+}\) ions, demonstrating its robustness to static and time-varying errors of the laser amplitude and detuning. The two-qubit gate achieved an average fidelity of 99.4% in about 310 \(\mu s\) [40]. We note that, compared with amplitude modulation, PM can change the orientation of the trajectory in phase space promptly, which is better suited to preventing the vibrational state of the ions from being excited beyond the Lamb-Dicke regime and to implementing a fast gate [51].

### Frequency

The frequency modulation (FM) method modulates the laser frequency \(\omega\), which corresponds to \(\mu\) in Eq. (10). FM was introduced for the robustness of two-qubit gates with multiple vibrational modes [42]. FM can be conceptually equivalent to PM, but in experimental realizations wider ranges of frequencies are modulated, which cannot simply be converted to PM. FM is implemented by directly adjusting the frequency of the driving field through an acousto-optic modulator (AOM), but this also changes the deflection angle of the beam, causing errors in the offset of the beam focal point. A continuous frequency-modulated laser was developed to minimize the residual spin-motion entanglement at the end of the two-qubit gate, making it robust to frequency errors and achieving 98.3% fidelity in experiments with 5 ions [42], as shown in Fig. 8. With two individually addressed beams focused on the ion chain, discrete frequency modulation was applied to realize a two-qubit gate fidelity of 99.49% in a 2-ion chain and of 99.30% in a 4-ion chain in 200 \(\mu s\) [135]. Inspired by machine learning, the frequency-modulation gate was optimized by using a large sample set and mini-batches; the batch-optimized frequency-modulation gate reached a gate fidelity of 99.08% in 120 \(\mu s\) [136].

Figure 7: The phase-space trajectories for three amplitude-modulation schemes: a square pulse (blue), and the first (orange) and second (green) \(\sin^{2}\)-type sequences shown in the inset [124].

Apart from work with a single modulation, there is also progress with two combined modulations, such as amplitude plus frequency modulation [43, 44] and amplitude plus phase modulation [41, 32, 49]. Table 1 summarizes some representative recent experimental realizations of two-qubit gates.
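To illustrate how piecewise-constant phase modulation steers the phase-space trajectory while still closing it, here is a minimal numerical sketch of \(\alpha(t)\) from Eq. (12) under a segmented phase \(\phi(t)\) (our own toy example with illustrative parameters, not the optimized sequences of the cited works):

```python
import numpy as np

# alpha(t) = -i*(eta*Omega/2) * Int_0^t exp(-i*delta*t') * exp(i*phi(t')) dt'
# with a piecewise-constant phase phi(t). Parameter values are illustrative.
eta, Omega = 0.1, 2 * np.pi * 50e3                 # Lamb-Dicke param, Rabi freq [rad/s]
delta = 2 * np.pi * 10e3                           # detuning from the sideband [rad/s]
loop = 2 * np.pi / delta                           # duration of one closed loop
phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]    # one constant phase per segment

t = np.linspace(0.0, len(phases) * loop, 4000)
dt = t[1] - t[0]
phi = np.repeat(phases, len(t) // len(phases) + 1)[: len(t)]
integrand = -1j * (eta * Omega / 2) * np.exp(-1j * delta * t) * np.exp(1j * phi)
alpha = np.cumsum(integrand) * dt                  # crude Riemann integration

# Each segment of length 2*pi/delta closes a circle whose orientation is set by
# its phase, so the trajectory returns to the origin up to discretization error.
print(f"final |alpha| = {abs(alpha[-1]):.3e}")
```

Changing the segment phases redirects the loops in phase space without reopening them, which is the freedom the PM schemes above exploit to cancel errors.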
### Multi-frequency Methods

The multi-frequency modulation method realizes the gate by simultaneously driving multiple motional modes, using a laser field with multiple frequency components of different amplitudes. It provides a general and systematic control method that includes all the effects of amplitude, phase, and frequency modulation. The field for MS gates on the \(j\)-th ion can be written in the simplified form \(\gamma_{j}(t)=\Omega_{j}e^{-i\mu t}e^{i\phi_{m,j}}\), as in Eq. (10). The amplitude, phase, and frequency of the field are \(\Omega_{j}\), \(\phi_{m,j}\), and \(\mu\), respectively. We can generate an arbitrary waveform of the field, with amplitude, phase, and frequency modulation, by letting these be functions of time; the waveform can then be decomposed using a Fourier expansion as

\[\gamma_{j}(t)=\sum_{n=-\infty}^{\infty}\Omega_{n,j}\exp(-in\omega t), \tag{26}\]

where \(\Omega_{n,j}\) is the complex amplitude of the \(n^{\rm th}\) component at frequency \(n\omega\), and \(\omega=\frac{2\pi}{\tau_{g}}\), with \(\tau_{g}\) the duration of the gate. This Fourier expansion can be seen as applying multiple frequency components. From the completeness of the Fourier series, we see the generality of the multi-frequency method as an arbitrary modulation of amplitude, phase, and frequency.

The multi-frequency method was proposed to improve the gate performance and to implement a global entangling gate [45]. It was pointed out that the phase trajectories of a multi-tone gate are closer to closure under non-ideal conditions than those of a single-tone gate [45]. These multi-frequency MS gates were realized with \({}^{171}\)Yb\({}^{+}\) and \({}^{88}\)Sr\({}^{+}\) ions in experiments, and the robustness of the multi-frequency gates against gate-time errors and frequency-detuning drifts was experimentally verified [46, 47]. The multi-frequency method was further developed to include general constraints for robustness conditions, mitigating the effects of frequency drift, gate-time offset, and carrier coupling, so as to achieve a robust global entangling gate [48, 94]. In the next section, we discuss the details of the multi-frequency methods for robustness, speed-up, and multi-qubit operations.

## V. Multi-frequency method

In this section, we first review in detail the trapped-ion gates based on various state-dependent forces, mainly for hyperfine qubits, and discuss the general framework of the multi-frequency methods. Finally, we consider the application of multi-frequency methods to fast and global entangling gates, with and without individual-addressing capability.

### Various types of spin-dependent force

We first briefly recall the theory of the entangling gates based on spin-dependent forces, covering both the \(\hat{\sigma}_{\phi}\) gate and the \(\hat{\sigma}_{z}\) gate [17]. As mentioned above, transitions between qubit states can be driven by non-copropagating Raman lasers. Here we consider Raman couplings mediated by an excited electronic state \(\ket{e}\), with laser configurations shown in Figs. 9(a, b), and assume the net wave vector of the Raman process is parallel to the \(x\)-axis of the trap. To mitigate spontaneous emission, the detuning \(\Delta\) from the excited state is large, so the condition \(\Delta\gg\omega_{\rm hf}\) holds in both cases.
The interaction Hamiltonian between the ions and the multiple lasers is written as follows (setting the Planck constant \(\hbar=1\)),

\[\hat{H}(t)=\sum_{i}\sum_{\alpha}\sum_{s\in\{0,1\}}g_{\alpha,i,s}\cos\left(\mathbf{k}_{\alpha}\cdot\mathbf{r}_{i}-\omega_{\alpha}t-\phi_{\alpha,i}\right)\left(\ket{e}\bra{s}_{i}+\ket{s}\bra{e}_{i}\right), \tag{27}\]

with \(\mathbf{k}_{\alpha}\) and \(\omega_{\alpha}\) being the wave vector and the frequency of the corresponding Raman laser. With the capability of single-ion addressing, it is possible to independently tune the single-photon Rabi frequencies \(g_{\alpha,i,s}\) and the phases \(\phi_{\alpha,i}\), which provides enough freedom to construct multi-qubit entangling operations, including the global gate [140; 141].

Figure 8: Frequency-modulation pulse for a 2-qubit gate in a 5-ion chain; the green lines are the mode frequencies [42].

For the case of the \(\sigma_{\phi}\)-gate, the qubit states \(\left|0\right\rangle\) and \(\left|1\right\rangle\) can be encoded in hyperfine levels insensitive to the magnetic field, i.e. the clock states, which have the same single-photon Rabi frequencies, \(g_{\alpha,i,0}=g_{\alpha,i,1}\equiv g_{\alpha,i}\). Transitions between the qubit levels can be induced by a monochromatic (subscripted by "A") and a biharmonic (subscripted by "Br" and "Bb") Raman laser, as shown in Fig. 9(a), with the frequency difference of the two Raman beams being around the hyperfine splitting \(\omega_{\mathrm{hf}}\), and with the detuning of the stimulated Raman process from the carrier transition denoted by \(\mu\). If the frequency of Raman laser A is larger or smaller than both of the two frequencies in Raman laser B, the spin part of the effective laser-ion interaction after eliminating the excited state will depend on the optical phase difference between the Raman lasers. As a result, this type of entangling gate is called the phase-sensitive gate (for example, see Fig. 9(c), where the frequencies of the Raman lasers can be set as \(\omega_{Br}=\omega_{A}-\omega_{\mathrm{hf}}+\mu\) and \(\omega_{Bb}=\omega_{A}-\omega_{\mathrm{hf}}-\mu\)). After eliminating the excited state and introducing the Lamb-Dicke approximation, the effective laser-ion interaction Hamiltonian in the rotating frame is

\[\hat{H}_{I}(t)=\sum_{m}\omega_{m}\hat{a}_{m}^{\dagger}\hat{a}_{m}+\cos\mu t\sum_{i}\Omega_{i}\Bigg[\hat{\sigma}_{i}^{\phi_{i}}+\sum_{m}\eta_{i,m}\left(\hat{a}_{m}^{\dagger}+\hat{a}_{m}\right)\hat{\sigma}_{i}^{(\phi_{i}-\frac{\pi}{2})}\Bigg], \tag{28}\]

where the effective Rabi frequency and the spin phase are \(\Omega_{i}=\frac{g_{A,i}g_{Br,i}}{2\Delta}\) and \(\phi_{i}=\phi_{A,i}-\phi_{Br,i}-\delta k\bar{x}_{i}\), respectively, where we assume \(g_{Br,i}=g_{Bb,i}\) and \(\phi_{Br,i}=\phi_{Bb,i}\). Here the net transferred wave vector \(\delta k\hat{\mathbf{e}}_{x}=\mathbf{k}_{A}-\mathbf{k}_{Br}=\mathbf{k}_{A}-\mathbf{k}_{Bb}\) is along the \(x\)-direction with unit vector \(\hat{\mathbf{e}}_{x}\), and \(\bar{x}_{i}\) is the equilibrium position of the \(i\)-th ion. The site- and mode-resolved Lamb-Dicke parameters are \(\eta_{i,m}=\eta_{m}b_{i,m}\), with \(\eta_{m}=\delta k\sqrt{\frac{\hbar}{2M\omega_{m}}}\) and \(b_{i,m}\) being the elements of the matrix that diagonalizes the collective motion of the ion crystal. On the other hand, if the frequency of Raman laser \(A\) lies in between those of Raman laser B, as shown in Fig. 9(d),
the effective ion-laser interaction Hamiltonian becomes,

\[\hat{H}_{I}(t)=\sum_{m}\omega_{m}\hat{a}_{m}^{\dagger}\hat{a}_{m}+\sum_{i}\Omega_{i}\Big[\cos\left(\mu t-\phi_{i}\right)-\sin\left(\mu t-\phi_{i}\right)\sum_{m}\eta_{i,m}\left(\hat{a}_{m}^{\dagger}+\hat{a}_{m}\right)\Big]\hat{\sigma}_{i}^{x}. \tag{29}\]

In this case, the spin part is independent of the phase difference of the Raman laser beams, thus leading to the phase-insensitive gate.

Besides the \(\hat{\sigma}_{\phi}\)-dependent force, it is also possible to induce a \(\hat{\sigma}_{z}\)-dependent force using magnetically sensitive hyperfine states, with the laser configuration shown in Fig. 9(b), where there are monochromatic lasers along both the \(A\) and \(B\) optical paths. With the laser frequencies shown in Fig. 9(e), i.e. \(\omega_{B}=\omega_{A}-\mu\), the effective Hamiltonian becomes

\[\hat{H}_{I}(t)=\sum_{m}\omega_{m}\hat{a}_{m}^{\dagger}\hat{a}_{m}+\sum_{i}\Omega_{i}\Big[\cos\left(\mu t-\phi_{i}\right)-\sin\left(\mu t-\phi_{i}\right)\sum_{m}\eta_{i,m}\left(\hat{a}_{m}^{\dagger}+\hat{a}_{m}\right)\Big]\hat{\sigma}_{i}^{z}. \tag{30}\]

Note that in this case the effective Rabi frequency \(\Omega_{i}\) depends on the differential AC-Stark effect, and thus this scheme only works for magnetically sensitive ion qubits. There is an alternative proposal for the light-shift or \(\hat{\sigma}_{z}\) gate with clock-state qubits [93; 98; 99], which has been adopted by the Honeywell ion-trap group [101].

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline Modulation & Ions & Qubit & Control method & Ion number & Fidelity & Gate time & Year & Reference \\
\hline Amplitude & \({}^{40}\)Ca\({}^{+}\) & Optical & Laser & Two & 99.3\% & 50 \(\mu s\) & 2008 & [60] \\
Amplitude & \({}^{171}\)Yb\({}^{+}\) & Hyperfine & Laser & Five & 95.0\% & 190 \(\mu s\) & 2014 & [36] \\
Amplitude & \({}^{43}\)Ca\({}^{+}\) & Hyperfine & Laser & Two & 99.8\% & 1.6 \(\mu s\) & 2018 & [96] \\
Amplitude & \({}^{9}\)Be\({}^{+}\) & Hyperfine & Microwave & Two & 99.7\% & \(\sim 300\,\mu s\) & 2019 & [124] \\
Phase & \({}^{43}\)Ca\({}^{+}\), \({}^{88}\)Sr\({}^{+}\) & Hyperfine, Zeeman & Laser & Two & 99.8\% & 49.2 \(\mu s\) & 2020 & [109] \\
Phase & \({}^{171}\)Yb\({}^{+}\) & Hyperfine & Laser & Two & 99.4\% & \(\sim 310\,\mu s\) & 2020 & [40] \\
Phase & \({}^{25}\)Mg\({}^{+}\) & Hyperfine & Microwave & Two & \(1_{-0.0017}^{+0}\) & 740 \(\mu s\) & 2021 & [32] \\
Phase & \({}^{40}\)Ca\({}^{+}\) & Optical & Laser & Two & 99.94\% & 35 \(\mu s\) & 2021 & [97] \\
Frequency & \({}^{171}\)Yb\({}^{+}\) & Hyperfine & Laser & Five & 98.6\% & 90 \(\mu s\) & 2018 & [42] \\
Frequency & \({}^{171}\)Yb\({}^{+}\) & Hyperfine & Laser & Four & 99.30\% & 200 \(\mu s\) & 2020 & [135] \\
None & \({}^{171}\)Yb\({}^{+}\) & Dressed state & Microwave & Two & 98.5\% & 2.7 \(ms\) & 2016 & [28] \\
None & \({}^{43}\)Ca\({}^{+}\) & Hyperfine & Microwave & Two & 99.7\% & 3.25 \(ms\) & 2016 & [29] \\
None & \({}^{43}\)Ca\({}^{+}\) & Hyperfine & Laser & Two & 99.9\% & \(\sim 100\,\mu s\) & 2016 & [19] \\
None & \({}^{9}\)Be\({}^{+}\) & Hyperfine & Laser & Two & 99.92\% & \(\sim 30\,\mu s\) & 2016 & [18] \\
None & \({}^{9}\)Be\({}^{+}\) & Hyperfine & Microwave & Two & 97.4\% & \(\sim 105\,\mu s\) & 2018 & [138] \\
None & \({}^{9}\)Be\({}^{+}\) & Hyperfine & Microwave & Two & 98.2\% & 808 \(\mu s\) & 2019 & [125] \\
None & \({}^{171}\)Yb\({}^{+}\) & Hyperfine & Laser & Two & 99.8\% & 30 \(\mu s\) & 2020 & [139] \\
\hline
\end{tabular}
\end{table}

Table 1: Summary of experimental
realizations for two-qubit gates.

### General framework of multi-frequency method

For trapped-ion systems, the period of the trapping potential provides a natural time scale for quantum operations. The characteristic time scales of the aforementioned entangling gates involving a single motional mode are much longer than the trapping period. On the contrary, fast gates operate on a time scale comparable with the trapping period. Intuitively, the laser intensities, and hence the Rabi frequencies, must be large in order to exert enough influence on the trapped-ion system in such a short operation time, which results in non-negligible off-resonant carrier coupling and in the invalidation of the Lamb-Dicke approximation.

To derive fast gate schemes, we focus on the phase-insensitive \(\hat{\sigma}^{x}\) gate and the light-shift \(\hat{\sigma}^{z}\) gate. Despite the different laser configurations, the derivation can be unified by substituting \(\hat{\sigma}^{x}_{i}\) in Eq. (29) and \(\hat{\sigma}^{z}_{i}\) in Eq. (30) with \(\hat{\sigma}^{\alpha}_{i}\) (\(\alpha=x\) or \(z\)). The reason we do not consider the phase-sensitive gate is that the effect of the carrier coupling cannot be canceled, due to the \(\pi/2\) phase difference between the carrier term and the qubit part of the sideband terms.

It is obvious from the effective Hamiltonians in Eqs. (29) and (30) that the detuning \(\mu\) of the stimulated Raman process introduces an intensity-modulation function into the spin-dependent force. In the slow regime, where the gate length is much longer than the trapping period, the detuning \(\mu\) can be chosen close to a single motional mode, such that only this mode is excited and the others can be effectively eliminated from the dynamics. In this case, monochromatic modulation suffices to disentangle the qubit state and the motional modes. In the fast regime, however, the frequency differences between \(\mu\) and multiple motional modes are comparable with the Rabi frequency \(\Omega\), so multiple motional modes take part in the dynamics and need to be disentangled. To tackle this complicated situation, Refs. [45; 48; 51] proposed a multichromatic modulation scheme which, compared to the monochromatic modulation scheme above, introduces more control parameters to satisfy the multiple constraints imposed by the requirements both to disentangle the motional modes and to apply effective spin-spin interactions.

In the Lamb-Dicke regime, the effective Hamiltonian with multichromatic modulation in the rotating frame is written as follows,

\[\hat{H}_{I}(t)=\sum_{i=1}^{N}\Bigl[f_{\mathrm{car},i}(t)+f_{\mathrm{sdf},i}(t)\sum_{m=1}^{N}\eta_{i,m}\left(\hat{a}_{m}e^{-i\omega_{m}t}+\hat{a}_{m}^{\dagger}e^{i\omega_{m}t}\right)\Bigr]\hat{\sigma}^{\alpha}_{i}, \tag{31}\]

with \(\alpha\in\{x,z\}\), and with the multichromatic modulation functions for the carrier and the spin-dependent-force terms being

\[f_{\mathrm{car},i}(t)=\Omega_{i}\sum_{k=1}^{K}r_{i,k}\cos\left(\nu_{k}t-\phi_{i,k}\right),\qquad f_{\mathrm{sdf},i}(t)=\Omega_{i}\sum_{k=1}^{K}r_{i,k}\sin\left(\nu_{k}t-\phi_{i,k}\right), \tag{32}\]

where \(\Omega_{i}\) is the strength of the spin-dependent force and \(\vec{r}_{i}\) is a normalized dimensionless \(K\)-entry vector (\(\|\vec{r}_{i}\|_{2}=1\)) characterizing the distribution of laser power among the different frequency components.
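As a concrete illustration (our added example) of why this description contains the earlier modulation schemes: a smooth \(\sin^{2}\) amplitude envelope on a single tone \(\mu\) is itself just a three-frequency field, since, with \(\omega=2\pi/\tau_{g}\) as in Eq. (26),

\[\Omega\sin^{2}\left(\frac{\pi t}{\tau_{g}}\right)e^{-i\mu t}=\frac{\Omega}{2}e^{-i\mu t}-\frac{\Omega}{4}e^{-i(\mu-\omega)t}-\frac{\Omega}{4}e^{-i(\mu+\omega)t},\]

so amplitude (and, analogously, phase) modulation appears as a special choice of the multichromatic amplitudes.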
The frequency components are determined by the gate duration \(\tau\), such that \(\nu_{k}=2k\pi/\tau\) (\(k=1,\ldots,K\)), which guarantees that the contribution of the carrier coupling always vanishes at the end of the dynamics. To keep the derivation succinct and clear, we set \(\phi_{i,k}=0\) for all \(i\) and \(k\), irrespective of any position and frequency dependence that may exist in real systems. Later, for generality, we will consider gate schemes that are robust with respect to an overall fluctuation of the optical phases. Using the Magnus expansion, we obtain the evolution operator \(\hat{U}(t)\) as follows,

\[\hat{U}(t)=\exp\Biggl[-i\sum_{i,m}\hat{B}_{i,m}(t)\hat{\sigma}^{\alpha}_{i}+i\sum_{i<j}\Theta_{i,j}(t)\hat{\sigma}^{\alpha}_{i}\hat{\sigma}^{\alpha}_{j}\Biggr], \tag{33}\]

with \(\hat{B}_{i,m}(t)=\beta^{\ast}_{i,m}(t)\hat{a}_{m}+\beta_{i,m}(t)\hat{a}^{\dagger}_{m}\), where \(\beta_{i,m}(t)\) and \(\Theta_{i,j}(t)\) are obtained as

\[\beta_{i,m}(t)=\eta_{i,m}\int_{0}^{t}f_{\mathrm{sdf},i}(t^{\prime})e^{i\omega_{m}t^{\prime}}dt^{\prime}, \tag{34}\]

\[\Theta_{i,j}(t)=2\sum_{m}\eta_{i,m}\eta_{j,m}\int_{0}^{t}\int_{0}^{t^{\prime}}f_{\mathrm{sdf},i}(t^{\prime})f_{\mathrm{sdf},j}(t^{\prime\prime})\sin\omega_{m}\left(t^{\prime}-t^{\prime\prime}\right)dt^{\prime}dt^{\prime\prime}. \tag{35}\]

Figure 9: Laser geometries. (a) Two non-copropagating Raman lasers shining on a trapped-ion chain. One of the two Raman lasers is monochromatic while the other is bichromatic. The detuning is around the qubit-state transition frequency. The direction of the net transferred wave vector is assumed to be along the axial direction. (b) The same as (a) with two monochromatic Raman laser beams. The detuning is around the frequencies of the relevant motional modes. (c, d) Relevant energy levels and laser frequencies for the phase-sensitive gate (c) and the phase-insensitive gate (d). (e) The same as (c, d) for the \(\hat{\sigma}^{z}\) gate.
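Since the closure conditions \(\beta_{i,m}(\tau)=0\) (imposed in Eq. (36) below) are linear in the Fourier amplitudes \(r_{k}\), one can sketch the construction numerically: build the matrix of sideband integrals, pick an amplitude vector from its null space, and rescale the overall Rabi frequency to reach the target phase. The following toy sketch is our own simplification (two modes, equal Lamb-Dicke parameters, and a crude normalization that targets only the first mode's phase), not the optimized schemes of the cited references.

```python
import numpy as np

tau = 10e-6                                      # gate duration [s] (assumed)
omega = 2 * np.pi * np.array([2.0e6, 2.2e6])     # two mode frequencies (assumed)
eta = 0.08                                       # Lamb-Dicke parameter (assumed)
K = 30
nu = 2 * np.pi * np.arange(1, K + 1) / tau       # nu_k = 2*pi*k/tau

t = np.linspace(0.0, tau, 3000)
S = np.sin(np.outer(nu, t))                      # S[k] = sin(nu_k * t)

# Linear closure constraints: Int_0^tau sin(nu_k t) e^{i omega_m t} dt -> 0.
M = np.array([[np.trapz(S[k] * np.exp(1j * w * t), t) for k in range(K)]
              for w in omega])
A = np.vstack([M.real, M.imag])                  # 2 real constraints per mode
_, _, Vt = np.linalg.svd(A)
ns = Vt[A.shape[0]:]                             # null-space basis (rows)
seed = np.exp(-((nu - omega.mean()) / (2 * np.pi * 0.3e6)) ** 2)
r = ns.T @ (ns @ seed)                           # favor near-mode tones, then close
r /= np.linalg.norm(r)

# Accumulated phase of mode 1 for unit Rabi frequency; rescale so that
# Theta_1 = Omega^2 * I1 = pi/4 (Omega sits inside f_sdf, cf. Eq. (42)).
f = r @ S
inner = np.array([np.trapz(f[:i + 1] * np.sin(omega[0] * (t[i] - t[:i + 1])), t[:i + 1])
                  for i in range(len(t))])
I1 = eta ** 2 * np.trapz(f * inner, t)
Omega = np.sqrt(np.pi / 4 / abs(I1))
print(f"closure residual ~ {np.abs(M @ r).max():.2e}, Omega/2pi ~ {Omega/2/np.pi:.3g} Hz")
```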
Moreover, the residual electromagnetic field experienced by the neighboring ions also induces coherent crosstalk errors. Thus the entangling-gate schemes that ease the requirement for individual control always attract considerable attention. Without individual control, the control parameters are reduced as \(\Omega_{i}\equiv\Omega\), \(\hat{r}_{i}\equiv\vec{r}=(r_{1},\ldots,r_{K})\) and \(\tilde{\phi}_{i}\equiv\tilde{\phi}=(\phi_{1},\ldots,\phi_{K})\), with \(K\) being the number of frequency components. We further assume \(\phi_{k}=\phi\) for simplicity. In this symmetric case, it is natural to define the collective spin operators \(\hat{S}_{m}^{\alpha}=\sum_{i=1}^{K}b_{i,m}\hat{\sigma}_{i}^{\alpha}\) and the evolution operator in Eq. (33) is reduced to the following form, \[\hat{U}(t) = \exp\Big{[}-i\sum_{m=1}^{N}\left(\hat{a}_{m}A_{m}^{*}(t)+{\rm h. c.}\right)\hat{S}_{m}^{\alpha} \tag{37}\] \[+i\sum_{m=1}^{N}\Theta_{m}(t)\hat{S}_{m}^{\alpha 2}\Big{]},\] with \[A_{m}(t) = \eta_{m}\Omega\int_{0}^{t}f_{\rm sdf}(t^{\prime})e^{i\omega_{m}t ^{\prime}}dt^{\prime}, \tag{38}\] \[\Theta_{m}(t) = \eta_{m}^{2}\Omega^{\prime}\int_{0}^{t}\int_{0}^{t^{\prime}}f_{ \rm sdf}(t^{\prime})f_{\rm sdf}(t^{\prime\prime})\] (39) \[\times\sin\left[\omega_{m}\left(t^{\prime}-t^{\prime\prime} \right)\right]dt^{\prime}dt^{\prime\prime}.\] For the all-to-all entangling gate, we required that at the end of the dynamics \(t=\tau\) the following constraints are satisfied, \[A_{m}\left(\tau\right)=0,\quad\forall m, \tag{40}\] \[\Theta_{1}(\tau)=\pi/4,\quad\Theta_{m>1}=0,\] where the first part is the aforementioned motion-spin disentangling condition. For target unitary that represents a general spin-spin interaction, \(\hat{U}(\tau)=\exp\left(-\sum_{i<j}J_{i,j}\hat{\sigma}_{i}^{\alpha}\hat{\sigma }_{j}^{\alpha}\right)\), the effective interaction strength \(J_{i,j}\) can be obtained as \[J_{i,j}=2\sum_{m}\Theta_{m}(\tau)b_{i,m}b_{j,m}. \tag{41}\] In other words, for a given set of \(J_{i,j}\), gate schemes without individual control exist only when the linear-equation system in Eq. (41) has non-vanishing solutions. ### Considerations for fast and robust global-entangling gate When the gate length is comparable to the trapping period, the above gate scheme becomes sensitive to the fluctuation of the optical phases and the drift of the equilibrium ion positions. These effects result in an indefinite initial phase in the modulation function. Moreover, the required Rabi frequencies, which are proportional to the laser intensities, also increase as the gate length becomes shorter. Thus some of the motional modes are strongly driven during the evolution, and the Lamb-Dicke approximation may not always hold good. In other words, the higher-order terms of the Lamb-Dicke parameters cannot be neglected. In this case, a protocol obtained within the Lamb-Dicke approximation may suffer beyond-Lambda-Dicke error in experiment realization. Here we introduce two sets of constraints to guarantee the robustness of the resulting gate schemes with respect to the initial phase and higher-order terms. To obtain robust gate schemes with respect to the drift of the initial phase, we notice that the modulation function in Eq. (32) can be decomposed as \(f_{\rm sdf}(t)\equiv f_{s}(t)\cos\phi+f_{c}(t)\sin\phi\), with \[f_{s}(t)=\Omega\sum_{k=1}^{K}r_{k}\sin\nu_{k}t,\quad f_{c}(t)= \Omega\sum_{k=1}^{K}r_{k}\cos\nu_{k}t. 
\tag{42}\]

Note that above we obtained gate schemes for \(\phi=0\), where we considered linear constraints with \(f_{\rm sdf}(t)=f_{s}(t)\). Requiring the linear constraints \(A_{m}(\tau)=0\) to be satisfied for arbitrary \(\phi\) is equivalent to adding another set of linear constraints with \(f_{\rm sdf}(t)=f_{c}(t)\). Similarly, we decompose the quadratic constraints for \(\Theta_{m}(\tau)\) into four sets of constraints.

Figure 10 shows two-qubit gate schemes in a two-ion chain with the axial trapping frequency \(\omega_{z}=2\pi\times 1\) MHz. Without loss of generality, we assume the axial modes are driven, and the gate duration is set to 3.2 \(\mu\)s. As expected, the robust gate scheme obtained by the above procedure is insensitive to drifts of the initial phase, as the endpoints of the trajectories of both motional modes come back to the origin at the end of the gate operation, as shown in Figs. 10(e, f). Under the Lamb-Dicke approximation, the fidelity of the gate scheme without consideration of the initial-phase drift decreases to \(98.9\%\) for \(\phi_{0}=\pi/2\), while the fidelity of the robust gate remains perfect irrespective of the initial phase. The cost of the robustness is, however, that the dimensionless magnitude of the Rabi frequency \(|\Omega|/\omega_{z}\) increases from about \(1.81\) to \(1.93\). The increment becomes severe for gate schemes with shorter gate durations.

To suppress the beyond-Lamb-Dicke error, a natural approach is to first expand the effective Hamiltonian in Eq. (31) to second order in the Lamb-Dicke parameters \(\eta_{m}\). In this case, the evolution operator in Eq. (37) contains second-order terms, which depend linearly on the modulation function according to the Magnus expansion. As a result, another set of linear constraints has to be satisfied. Although the errors induced by the second-order terms are suppressed, the resulting gate schemes always have much higher Rabi frequencies than the original gate scheme. As a result, these schemes only work well in the ultrafast regime, where the beyond-Lamb-Dicke errors are significant.

### Comments on the individual control capability for multi-qubit gates

On the other hand, with the ability to individually address each qubit, multi-qubit entangling gate schemes can be more flexible. For example, the global entangling gate scheme via individual phase modulation in Ref. [49] can be reduced to a similar gate involving fewer ion qubits, by simply keeping the individual lasers shining on the selected ions and turning off the others. Intuitively, gate schemes with individual control are usually more efficient than those without individual control for entangling gates involving only a small subset of the ions in a multi-qubit system. However, for a concrete target gate, the compromise between individual and global control needs to be considered seriously.

## VI. Conclusion and outlook

In this article, we have reviewed the quantum gates used for trapped-ion quantum computation and quantum simulation. For single-qubit gates, gate durations approaching picoseconds have been shown, and fidelities much higher than typical error-correction requirements have been demonstrated with both microwaves and laser beams. For two-qubit entangling gates, the main speed limit comes from the frequencies of the vibrational modes, and gates close to this limit have been demonstrated.
The fidelities of the gates are limited by photon scattering, heating and dephasing of the vibrational modes, dephasing of the qubits, amplitude fluctuations of the laser fields, and so on. In principle, these errors can be further suppressed to below the \(10^{-4}\) level of infidelity. Currently, one big challenge is to realize gates with such high fidelity in a scalable way, using either ion shuttling or multiple ions in a single trap. One interesting quantum gate with trapped ions is the global gate, which contains the equivalent of \(\approx N^{2}/2\) two-qubit gates in a single operation [49; 140; 141]. If a global gate can be realized at a speed close to the trap frequency, then the effective speed per two-qubit gate can be considered improved by the same factor of \(\approx N^{2}/2\).

Figure 10: Two-qubit entangling gate robust with respect to the initial-phase drift. Here we assume the lasers are coupled to the axial-\(z\) modes, with the center-of-mass mode frequency \(\omega_{0}\) equal to the axial trapping frequency \(\omega_{z}=2\pi\times 1\) MHz. The gate duration is \(\tau=3.2\) \(\mu\)s, and \(K=30\) different driving harmonics with frequencies \(\nu_{k}=2k\pi/\tau\), \(k=1,2,\ldots,K\) are considered. (a) Driving amplitudes for the gate scheme without consideration of robustness against the initial-phase drift. The phase-space trajectories of the center-of-mass mode (c) and the stretch mode (d) are shown for different values of the initial phase \(\phi_{0}\). (b, e, f) The same as (a, c, d) for a robust gate scheme considering the initial-phase drift. In (c-f), the endpoints of the trajectories at the end of the gate operations are marked with stars. The gate schemes are obtained under the condition \(\phi_{0}=0\), and only for the robust gate do the endpoints return to the origin for a drifted initial phase \(\phi_{0}=\pi/2\).

Note: while preparing this manuscript, we learned that Ref. [142] also reviewed trapped-ion quantum gates, with contents similar to this paper. This work was supported by the Innovation Program for Quantum Science and Technology under Grant No. 2021ZD0301602, and by the National Natural Science Foundation of China under Grants No. 92065205 and No. 11974200.
2306.06801
CARNA: Characterizing Advanced heart failure Risk and hemodyNAmic phenotypes using learned multi-valued decision diagrams
Early identification of high risk heart failure (HF) patients is key to timely allocation of life-saving therapies. Hemodynamic assessments can facilitate risk stratification and enhance understanding of HF trajectories. However, risk assessment for HF is a complex, multi-faceted decision-making process that can be challenging. Previous risk models for HF do not integrate invasive hemodynamics or support missing data, and use statistical methods prone to bias or machine learning methods that are not interpretable. To address these limitations, this paper presents CARNA, a hemodynamic risk stratification and phenotyping framework for advanced HF that takes advantage of the explainability and expressivity of machine learned Multi-Valued Decision Diagrams (MVDDs). This interpretable framework learns risk scores that predict the probability of patient outcomes, and outputs descriptive patient phenotypes (sets of features and thresholds) that characterize each predicted risk score. CARNA incorporates invasive hemodynamics and can make predictions on missing data. The CARNA models were trained and validated using a total of five advanced HF patient cohorts collected from previous trials, and compared with six established HF risk scores and three traditional ML risk models. CARNA provides robust risk stratification, outperforming all previous benchmarks. Although focused on advanced HF, the CARNA framework is general purpose and can be used to learn risk stratifications for other diseases and medical applications.
Josephine Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Kenneth Bilchick, Lu Feng, Sula Mazimba
2023-06-11T22:56:59Z
http://arxiv.org/abs/2306.06801v1
CARNA: Characterizing Advanced heart failure Risk and hemodyNAmic phenotypes using learned multi-valued decision diagrams

###### Abstract

Early identification of high risk heart failure (HF) patients is key to timely allocation of life-saving therapies. Hemodynamic assessments can facilitate risk stratification and enhance understanding of HF trajectories. However, risk assessment for HF is a complex, multi-faceted decision-making process that can be challenging. Previous risk models for HF do not integrate invasive hemodynamics or support missing data, and use statistical methods prone to bias or machine learning methods that are not interpretable. To address these limitations, this paper presents CARNA (Characterizing Advanced heart failure Risk and hemodyNAmic phenotypes), a hemodynamic risk stratification and phenotyping framework for advanced HF that takes advantage of the explainability and expressivity of machine learned Multi-Valued Decision Diagrams (MVDDs). This interpretable framework learns risk scores that predict the probability of patient outcomes, and outputs descriptive patient phenotypes (sets of features and thresholds) that characterize each predicted risk score. Specifically, we develop an innovative methodology to first learn a risk stratification using hierarchical clustering, and develop a training regime to learn MVDDs that predict risk scores and output patient phenotypes. CARNA incorporates invasive hemodynamics and can make predictions on missing data. The CARNA models were trained and validated using a total of five advanced HF patient cohorts collected from previous trials, and compared with six established HF risk scores and three traditional ML risk models. CARNA provides robust risk stratification, outperforming all previous benchmarks. Although focused on advanced HF, the CARNA framework is general purpose and can be used to learn risk stratifications for other diseases and medical applications. Moreover, to facilitate practical use, we provide an extensible, open source tool implementation.

Explainable machine learning, heart failure, hemodynamics, multi-valued decision diagrams, phenotyping, risk stratification

## I Introduction

Heart failure (HF) is a complex disease condition with high morbidity and mortality [1]. On a fundamental level, HF is defined by the inability of the heart to deliver adequate blood flow to the body without an elevation in cardiac filling pressures [2]. Identifying high risk advanced HF patients early on in the care continuum is critical for timely allocation of advanced, life-saving therapies such as mechanical support, device implantation or transplant allocation. Due to high variability in patient conditions and the complexity of the disease, determining patient risk involves a challenging, multi-faceted decision-making process that places a high burden on clinicians [3]. Hemodynamic assessments can facilitate risk stratification and enhance understanding of HF trajectories [4]. Hemodynamics provide measures of cardiovascular function, and quantify distributions of pressures and flows within the heart and circulatory system [5]. However, obtaining a comprehensive picture of the patient state from these, particularly in the context of treatment-guiding outcomes, is difficult [6]. Many established HF risk scores such as the Seattle Heart Failure Risk model [7] use statistical or naive models which are difficult to optimize and may be prone to bias [8, 9, 10].
Machine learning (ML) models present a promising opportunity to outperform traditional risk assessment methods, especially when dealing with large, high-dimensional data [11]. However, despite the promise of machine learning for HF risk stratification, ML-based risk scores remain unpopular due to modest model performance and issues with model interpretability [12]. Moreover, no previous models (statistical or ML-based) incorporate invasive hemodynamics, or contain mechanisms to handle missing data. To address these limitations, this paper develops and validates an advanced HF hemodynamic risk stratification framework entitled CARNA (Characterizing Advanced heart failure Risk and hemodyNAmic phenotypes)1. We harness the explainability and expressivity of machine learned Multi-Valued Decision Diagrams (MVDDs) to learn a risk score that predicts the probability of patient outcomes, including mortality and rehospitalization, and provide descriptive patient phenotypes. MVDDs are discrete structures representing logical functions in directed, acyclic graphs where nodes represent features, edges represent logical operators ("and", "or") with parameter threshold values, and leaf nodes represent the final score classification [13]. An example MVDD is shown in Figure 1. Due to their use of logical operators, MVDDs can handle missing data, as multiple substitutable features may contribute to the same score prediction. Moreover, the "path" through the MVDD may be returned to provide a descriptive patient phenotype that characterizes the score. MVDDs have typically been applied in optimization and model checking contexts [14], and they do not inherently learn a risk stratification. Therefore, we develop an innovative methodology within our framework to first learn a risk stratification using a hierarchical clustering algorithm, and then develop a training regime to train the MVDDs on the learned risk scores and output explainable phenotypes. Although focused on advanced HF, CARNA is a general purpose risk stratification and phenotyping framework that can be used for other diseases and medical applications. In summary, we present the following contributions:

1. We develop CARNA, an interpretable ML framework using Multi-Valued Decision Diagrams that works with missing data and includes invasive hemodynamics for risk stratifying advanced heart failure patients. In addition to producing a risk score, CARNA provides detailed patient phenotypes, i.e., sets of features and their thresholds that characterize a risk score.

2. We provide robust validation of the CARNA models using four independent HF cohorts, and compare them with six established HF risk scores and three traditional ML models. The CARNA models achieve high performance and outperform all benchmarks across metrics including Accuracy, Sensitivity, Specificity and AUC.

3. In order to facilitate practical use and promote open science, we provide an extensible, open-source tool implementation such that others can quickly and easily explore, extend, or prototype on top of the tool. In addition, our tool includes a deployed web server, which provisions live risk score prediction for ease of clinical use. All code is publicly available: https://github.com/jozieLamp/CARNA.
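To make the missing-data behavior described above concrete, here is a minimal illustrative sketch in Python (our own toy evaluation, not the authors' implementation or the learned CARNA models); the feature names and thresholds mirror the example phenotype in Figure 1.

```python
def gt(x, thr):
    # "feature > threshold"; None propagates missingness
    return None if x is None else x > thr

def le(x, thr):
    return None if x is None else x <= thr

def or_(*vals):
    # OR that tolerates missing operands: any True wins; otherwise
    # unknown if any operand is missing.
    if any(v is True for v in vals):
        return True
    return None if any(v is None for v in vals) else False

def phenotype_score5(p):
    """Sketch of the Figure 1 phenotype:
    Sex = Male AND BPSYS > 103.5 AND CPI > 0.621 AND (PAS > 74.5 OR PCWP <= 33)."""
    conds = [
        (p.get("Sex") == "Male") if p.get("Sex") is not None else None,
        gt(p.get("BPSYS"), 103.5),
        gt(p.get("CPI"), 0.621),
        or_(gt(p.get("PAS"), 74.5), le(p.get("PCWP"), 33.0)),
    ]
    if any(c is False for c in conds):
        return False
    return None if any(c is None for c in conds) else True

# PCWP is missing, but PAS satisfies the OR branch, so the path still resolves.
patient = {"Sex": "Male", "BPSYS": 120.0, "CPI": 0.70, "PAS": 80.0, "PCWP": None}
print(phenotype_score5(patient))  # True -> this patient matches the Score-5 path
```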
## 2 Related Work

### Explainable AI in Healthcare

Explainable AI (XAI) encompasses a wide range of techniques to provide interpretable explanations about a model's choices, such as through the use of feature maps, textual annotations, local, feature or example explanations, and model simplification methods [15]. XAI methods have been used in a range of healthcare applications [16], including for predicting heart failure incidence [17]. Adding explainability to a model often comes with a trade-off: the XAI methods may be complicated to implement, inefficient (i.e., it takes much longer to train an explainable model), or have stability and reliability issues [18]. In addition, although these methods are good for understanding more about the model (e.g., visualizing at a high level how combinations of features contributed to a prediction), they are not always conducive to quick decision making in high stress environments. For example, interpreting feature maps or understanding textual explanations is nontrivial; one may need time to decipher the explanation and determine its use (e.g., figure out how a risk score was computed). On the other hand, MVDDs are efficient, reliable and stable [19]. Moreover, they provide simple phenotypes that quickly summarize the exact set of features and thresholds that were used to make a score prediction. As such, they are easy to understand and provide uncomplicated, quick decision support for clinicians during patient triage.

### HF Risk Scores

There are a variety of HF risk scores that provide risk stratifications in HF populations using statistical and machine learning models; a comparison is available in Table 1. The EFFECT [20], GWTG [21] and MAGGIC [22] risk scores predict risk of mortality in HF patients using various regression methods. The ESCAPE Risk Model and Discharge Score [23] and SHFM [7] stratify mortality risk using Cox Proportional Hazards models (CPH). The ESCAPE score was derived using the same dataset that we use for our training cohort. Finally, in the TOPCAT [26], ADHERE [24], and MARKER [25] risk models, machine learning algorithms, including decision trees, boosted decision trees, support vector machines and random forests, are used to predict risk of mortality. Some of these risk models use small, selective feature sets, or only stratify risk into a small number of groups (e.g., only two groups of high and low risk as in MARKER), and none of them incorporate invasive hemodynamics. Moreover, these methods suffer from limitations associated with statistical and naive machine learning models, such as being prone to bias, and lacking mechanisms to handle missing data [8, 9]. In fact, in external validation of these scores, a common issue cited is that some variables are not readily available in routine clinical practice or are missing from collected data cohorts so the score cannot be computed [10]. CARNA uses a larger, more diverse feature set than most scores, is able to provide more fine-grained risk stratification, i.e., can have more risk groups, and incorporates invasive hemodynamics. In addition, our model is explainable and can handle missing data. Ultimately, it is our intention that our risk score would be complementary to previous risk methodologies, in which our score is used to provision risk stratification for advanced HF patients requiring invasive hemodynamic monitoring, and others may be used to gain an understanding of risk for more general HF patients.

\begin{table}
\begin{tabular}{l l c c c}
\hline \hline
**Score** & **Method Used** & **\# Features** & **Hemo?** & **Allows Missing Data** \\
\hline
**CARNA Hemo** & MVDD & 28 & **Yes** & **Yes** \\
**CARNA All Fts** & MVDD & 66 & **Yes** & **Yes** \\
EFFECT [20] & Logistic Regression & 11 & No & No \\
GWTG [21] & Logistic Regression & 7 & No & No \\
MAGGIC [22] & Poisson Regression & 13 & No & No \\
ESCAPE [23] & CPH & 8 & No & No \\
SHFM [7] & CPH & 30 & No & No \\
ADHERE [24] & Decision Tree (CART) & 3 & No & No \\
MARKER [25] & Decision Tree (BDT) & 8 & No & No \\
TOPCAT [26] & Various ML & 86 & No & No \\
\hline \hline
\end{tabular}

* Hemo = Invasive Hemodynamics; All Fts = All Features; CPH = Cox Proportional Hazards; CART = Classification and Regression Tree; BDT = Boosted Decision Tree
\end{table}
Table 1: Comparison of HF Risk Score Approaches

Figure 1: Example MVDD for the Invasive Hemodynamic Feature Set and DeLvTx Outcome. Dotted lines represent "or" boolean operators, and solid lines represent "and" boolean operators. The leaf nodes highlighted in yellow indicate the risk score. The highlighted red path indicates the example phenotype of Sex = Male \(\wedge\) BPSYS \(>\) 103.5 \(\wedge\) CPI \(>\) 0.621 \(\wedge\) (PAS \(>\) 74.5 \(\vee\) PCWP \(\leq\) 33) = Score 5.

## 3 Preliminaries

### Multi-Valued Decision Diagrams

MVDDs are discrete structures representing logical functions in directed, acyclic graphs where nodes represent features, edges represent logical operators ("and", "or") with parameter threshold values, and leaf nodes represent the final score classification [13]. As such, the "path" through the graph may be returned to provide a descriptive patient phenotype. An example MVDD is shown in Figure 1: the highlighted red path characterizes the high-risk score of 5 by the following phenotype: _Sex = Male \(\wedge\) BPSYS \(>\) 103.5 \(\wedge\) CPI \(>\) 0.621 \(\wedge\) (PAS \(>\) 74.5 \(\vee\) PCWP \(\leq\) 33) = Score 5_. MVDDs are well suited to classification tasks and the representation of HF phenotypes over other black-box models because they allow increased flexibility in characterizing feature relationships and are highly interpretable [19]. This is advantageous over other models that do not provide any details about how a score was computed. Moreover, unlike other explainable models such as decision trees or random forests, MVDDs are resilient to missing data due to their use of logical operators; multiple substitutable features may contribute to the same prediction score. For example, in the above phenotype, PAS or PCWP may be used for calculation, and as such, when a feature is missing from the provided data, alternative features may be used to still allow for score prediction. This is advantageous in clinical scenarios where complete patient measurements may not be available and clinicians must make quick decisions on partial observations. Despite these advantages, MVDDs have typically been used for optimization and model checking contexts [14], with limited use in medical classification and no applications to risk stratification. As such, we develop a training regime for MVDDs within our framework to learn risk scores and output HF phenotypes that characterize the predicted risk scores.

## 4 Data

### Outcomes and Cohort Selection

#### 4.1.1 Outcomes

The primary outcome was a composite endpoint of death, left ventricular assist device (LVAD) implantation or heart transplantation (denoted as DeLvTx).
A secondary outcome of rehospitalization within 6 months of follow up was included, as rehospitalizations have been shown to be predictive of adverse outcomes [30, 31].

#### 4.1.2 Patient Cohorts

This study used 5 HF cohorts: three from randomized clinical trials and two from a real-world setting of a single quaternary healthcare system. We trained the model using the ESCAPE trial [433 patients, mean age 56.1, 25.9% female], a randomized control trial studying the use of pulmonary artery catheters in severe HF patients [27]. The ESCAPE dataset contains a rich feature set of clinical and hemodynamic variables. Invasive hemodynamics (e.g., right atrial pressure (RAP) and pulmonary capillary wedge pressure (PCWP)) were recorded for 209 patients at baseline and prior to the removal of a heart catheter. The other 4 cohorts were used for validation: BEST [28], GUIDE-IT [29] and two real-world cohorts from the University of Virginia (UVA): 1) a registry of cardiogenic shock patients (UVA Shock), and 2) a registry of HF patients with at least two serial right heart catheterizations for hemodynamic assessment during the same hospitalization (UVA Serial). More details are available in the cohort publications [27, 28, 29]. Only New York Heart Association (NYHA) functional class III-IV patients were included in the study to ensure comparability. Characteristics of each cohort are in Table 2. Only the ESCAPE, UVA Cardiogenic Shock and UVA Serial Cardiac cohorts have invasive hemodynamics. GUIDE-IT had the highest percentage of missing data (15.07%), and ESCAPE had the highest percentage of missing hemodynamic data (12.04%).

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & **ESCAPE [27]** & **BEST [28]** & **GUIDE-IT [29]** & **UVA Shock** & **UVA Serial** \\
\hline
\# Patients & 433 & 2707 & 388 & 364 & 183 \\
\# Patients with Invasive Hemo & 209 & 0 & 0 & 130 & 181 \\
Baseline Data & Yes & Yes & Yes & Yes & Yes \\
Discharge Data & Yes & No & Yes & Yes & Yes \\
Total Records & 866 & 2707 & 776 & 728 & 366 \\
Total Data Missing (\%) & 7.8 & 2.0 & 15.1 & 10.4 & 7.3 \\
Hemodynamics Missing (\%) & 12.0 & N/A & N/A & 5.9 & 9.2 \\
Age (years) & 56.1\(\pm\)13.9 & 60.2\(\pm\)12.3 & 62.2\(\pm\)13.9 & 59.4\(\pm\)18.5 & 60.6\(\pm\)15.1 \\
Sex (\% female) & 25.9 & 21.9 & 66.2 & 35.2 & 43.2 \\
Race (\% white) & 59.6 & 70.0 & 49.2 & N/A & N/A \\
BMI (kg/m\({}^{2}\)) & 28.4\(\pm\)6.7 & N/A & 31.2\(\pm\)8.6 & 29.8\(\pm\)8.8 & 30.5\(\pm\)8.0 \\
LVEF (\%) & 19.3\(\pm\)6.6 & 23.0\(\pm\)7.3 & 24.0\(\pm\)8.2 & 31.7\(\pm\)17.4 & 31.3\(\pm\)18.0 \\
SBP (mm Hg) & 103.7\(\pm\)15.8 & 118.5\(\pm\)19.4 & 115.4\(\pm\)20.0 & 111.1\(\pm\)21.9 & 109.1\(\pm\)21.4 \\
DBP (mm Hg) & 64.1\(\pm\)11.5 & 71.9\(\pm\)11.7 & 70.2\(\pm\)13.5 & 62.2\(\pm\)15.5 & 59.9\(\pm\)17.2 \\
Blood Urea Nitrogen (mg/dL) & 36.3\(\pm\)22.5 & 24.6\(\pm\)15.3 & 31.3\(\pm\)22.6 & 34.9\(\pm\)24.2 & 39.1\(\pm\)25.7 \\
Creatinine (mg/dL) & 1.5\(\pm\)0.6 & 1.2\(\pm\)0.4 & 1.6\(\pm\)0.7 & 1.7\(\pm\)1.3 & 1.7\(\pm\)1.0 \\
Potassium (mmol/L) & 4.3\(\pm\)0.6 & 4.3\(\pm\)0.5 & 4.4\(\pm\)0.6 & N/A & N/A \\
Sodium (mmol/L) & 136.0\(\pm\)4.4 & 138.9\(\pm\)3.4 & 138.3\(\pm\)3.8 & 136.9\(\pm\)5.1 & 135.7\(\pm\)5.2 \\
DeLvTx (\%) & 27.0 & 31.7 & 23.7 & 56.6 & 41.5 \\
Rehospitalization (\%) & 57.0 & 62.9 & 51.8 & 47.5 & 78.7 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Characteristics of HF Cohorts

### Data Preprocessing

For each dataset, we first preprocessed the data, including removing outliers (necessary to reduce bias in our ML training).
If the dataset had multiple temporal recordings, the values recorded at baseline and discharge were treated as two separate records. Baseline values were included (as opposed to only discharge) as they have been shown to inform a range of hemodynamic and contractile metrics and are also important in predicting outcomes as shown in previous studies, e.g., [4]. Moreover, since it was the authors' intention to provide a single point of care risk score that could make predictions even at initial hospital admission, we included baseline measurements in the models. This also increased the total number of training/validation records, which was especially helpful with the small training (ESCAPE) dataset. The cohorts with baseline and discharge data, as well as the total number of records used, are reported in Table 2. Since our models support missing features, we did not impute or remove missing values from the data records.

We also calculated noninvasive hemodynamics: additional metrics indicative of hemodynamic states, computed from features that were collected noninvasively. Examples include mean arterial pressure (MAP), cardiac power index (CPI), and pulse pressure (PP). These metrics were specifically selected a priori based on previous studies demonstrating incremental value in HF risk stratification [4, 32, 33].

Data was stratified into two subsets: one exploring phenotypes of invasive hemodynamics only, and the other for characterizing phenotypes between noninvasive hemodynamics and all available clinical variables including demographics, labs, and medications. Henceforth, we refer to these as the Invasive Hemodynamics and All Features feature sets, respectively.

## 5 Methods

A high-level overview of the CARNA methodology is shown in Figure 2.

Figure 2: Overview of CARNA Methodology. (A) The risk labels are generated using a clustering-based derivation scheme using the training and validation datasets; (B) the training data is used to train the CARNA MVDD models as well as (C) three traditional ML models for comparison. Finally, the validation data is used to evaluate the performance of the models and the resulting CARNA risk scores are compared with six previous HF risk scores (D).

First, the risk labels are generated (Section 5.1). Agglomerative clustering is used to stratify patients in all datasets into a specified number of cluster groups and risk categories are derived for each cluster (e.g., class 1-5 ordered numerically based on actual event rates). The output of this step is a set of risk labels that indicate the probability threshold of the outcome event happening (e.g., a patient record assigned to a class of 1 indicates an outcome probability of \(<\)10% for that patient). This clustering occurred twice (once for each feature set), and the probabilities from each cluster were derived for each outcome, resulting in a total of four risk label sets. Next, using the training data (ESCAPE cohort), Multi-Valued Decision Diagrams were trained to predict the risk labels (e.g., classes 1-5, Section 5.2). The trained MVDD models take in a set of features for a patient and output the predicted CARNA risk score. A total of four models were derived for each of the four risk label sets: one for each outcome (DeLvTx, Rehospitalization) and feature set (Invasive Hemodynamics, All Features) pair. Finally, the CARNA risk scores were evaluated using the four other validation cohorts
and compared with traditional ML models (Section 5.3) and established HF risk scores (Section 5.4) based on their predictions of the risk classes. A step-by-step walkthrough of the methodology is provided next.

### Risk Label Generation

Each patient cohort had binary outcomes for the two endpoints (DeLvTx, rehospitalization), indicating if the outcome occurred or not. As such, there were no explicit risk thresholds for each of the patient records. Additionally, our MVDDs do not implicitly assign risk scores as a function of their learning. Since the goal of our approach was to generate a risk stratification and phenotyping score, the next step was to generate the categorical risk score values (i.e., 1-5) corresponding to real-valued outcome risks (e.g., 1 indicates a \(<\)10% risk of DeLvTx) for each record in the training and validation datasets. To this end, we reduced the dimensions of the covariates in the datasets using Principal Component Analysis (PCA) and then used a clustering approach to group patients and determine risk categorizations.

#### 5.1.1 PCA

For each feature set and outcome, we performed PCA using two principal components to reduce the dimensions of the data. This was a necessary pre-step to reduce bias in the clustering, as clustering methods can be sensitive to outliers or slight changes in feature set distributions. Specifically, we use the LAPACK implementation of Singular Value Decomposition, following the PCA library available in the scikit-learn packages [34]. Since PCA cannot handle missing features, we imputed any missing values with the feature mean. We note, however, that the original (non-imputed) datasets were used in the MVDD training steps later to ensure the models learned from the datasets with missing data. Importantly, the risk scores generated from this step were the labels used to train the MVDD models.

#### 5.1.2 Hierarchical Clustering

Next, we clustered the patients into a specified number of groups using Agglomerative Clustering, a form of hierarchical clustering. Since the number of groups, \(k\), is a hyperparameter, the users can select how many groups they wish to stratify the patients into. We argue this is an advantage of our approach because, based on details of the patient cohort being trained on or other user criteria (e.g., a clinician's wish for only three risk groups), the number of risk groups can be adaptively selected. For our experimental purposes, we selected \(k\) as the optimal number of groups using the "Elbow" method, in which the sum of squares at each number of clusters is plotted on a graph [35]. The point on the graph where the slope changes from steep to shallow (the "elbow" of the graph) indicates the optimal number of clusters to use. The clustering was performed across all datasets (including the four validation cohorts). In order to discriminate how well separated the clusters were, we computed the Hubert & Levin C Index for each feature set [36]. The C Index provides a metric to compare the dispersion of clusters compared to the overall dataset dispersion [37]. The C Index should be minimized; a smaller index indicates more distinct (stable) clusters. Each cluster corresponds to one score value (e.g., five clusters for five score value assignments).

#### 5.1.3 Derive Outcome Probabilities

From there, the outcome probability ranges for each score cluster were derived by computing the ground truth probability of the denoted outcome from the patients in each cluster.
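As a compact sketch of this label-derivation pipeline (mean imputation, 2-component PCA, agglomerative clustering, and ranking clusters by observed event rate), the following Python fragment uses scikit-learn, the same library family cited above. The DataFrame `df`, its column names, and `k=5` are assumptions for illustration; this is not the released CARNA code.

```python
# Sketch: derive risk labels 1..k from clusters ranked by event rate.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def derive_risk_labels(df, feature_cols, outcome_col, k=5):
    # PCA cannot handle missing values, so impute with the feature mean;
    # the original (non-imputed) records are what the MVDDs later train on.
    X = SimpleImputer(strategy="mean").fit_transform(df[feature_cols])
    X2 = PCA(n_components=2).fit_transform(X)
    clusters = AgglomerativeClustering(n_clusters=k).fit_predict(X2)

    # Rank clusters by ground-truth event rate so that score 1 = lowest risk.
    rates = pd.Series(df[outcome_col].values).groupby(clusters).mean()
    cluster_to_score = {c: s + 1 for s, c in
                        enumerate(rates.sort_values().index)}
    scores = np.array([cluster_to_score[c] for c in clusters])
    return scores, rates.sort_values()
```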
For example, cluster 1, corresponding to a score value of 1, had a ground truth probability of 0.041 for the DeLvTx outcome and Invasive Hemodynamics feature set; an outcome probability of \(<\)10% was derived. As a sanity check to ensure the derived score categories corresponded to the ground truth outcome probabilities across all the datasets, we reported the actual probabilities for each dataset in the Results. Finally, the score labels were assigned to each data record based on the associated cluster (e.g., a record in cluster 1 is assigned a score of 1). Using this process, we generated the risk score labels separately for each outcome and feature set, resulting in a total of four risk score label sets.

#### 5.1.4 Label Method Reasoning

We decided to use this clustering approach because, in addition to risk stratifying patients, it uses an unsupervised method to holistically group patients, i.e., it autonomously groups patients based on similar characteristics. This is highly advantageous over manually stratifying patients; manually grouping patients into risk groups is nontrivial due to large (potentially conflicting) sets of features and high variability in the presentation of patient conditions. Moreover, manual grouping is labor intensive (e.g., it would require many clinician hours to characterize every patient's risk).

### Learning Multi-Valued Decision Diagrams

#### 5.2.1 Overall Training Details

As a reminder, the MVDDs were trained on the risk score labels (i.e., classes 1-5) generated during the previous step, and the risk score labels indicate probability categories of outcomes. The resulting trained models take in a set of features for a patient and output the predicted CARNA risk score. All MVDDs were learned using an independent training set (ESCAPE dataset). To maximize the training capabilities of the small dataset, we used 5-fold cross validation, in which 80% of the data in the split was used for training and the other 20% was held out for validation purposes. A total of four models were derived: one for each outcome (DeLvTx, Rehospitalization) and feature set (Invasive Hemodynamics and All Features) pair.

#### 5.2.2 MVDD Learning Process

Each MVDD was learned using a training process similar to the Iterative Dichotomiser 3 (ID3) multi-class decision tree algorithm [38]. Specifically, we learn a multi-class tree using the splitting criterion of gini index or entropy. Each time we add a node to the tree, we replace the boolean edge with logical operators ("and", "or") and select the operator that gives the best performance (e.g., lowest gini or entropy). The MVDDs were trained iteratively until model convergence. The implementation was developed de novo in Python 3 using publicly available packages [34].

#### 5.2.3 Validating the MVDDs

After model training, we independently validated the models using the four other cohorts, which had not been used in the training phase. To assess the performance of our MVDDs, five receiver operating characteristic (ROC) curves, one for each risk class, were plotted for each model based on the ground truth risk classes in the validation datasets. If the predicted risk class matched the ground truth risk class, this was considered a success for the ROC analysis. For example, in the case of class 1 patients, if the MVDD predicted class 1, it was considered a success, and if it predicted another class, it was considered a failure.
The ROC curves were then constructed based on predictions of the risk classes, which is different from the conventional ROC method of predicting an actual event. To measure the overall model performance (e.g., as a summary metric across all risk classes), we report a single averaged area under the curve (AUC) metric, calculated by taking the weighted average of the AUCs from each risk class, weighted by the number of individuals in each class. We also calculated accuracy, sensitivity and specificity in a similar manner. We note that ROC/AUC were used over a reclassification analysis due to limitations associated with reclassification such as systematic miscalibration on validation cohorts [39].

### Comparison to Traditional Machine Learning Models

We compared the performance of CARNA models with traditional ML models, including K-nearest neighbors (KNN), Decision Trees (DT) and Random Forests (RF). Median imputation was used for any missing values. We followed the same training procedure used for the MVDDs; each model was trained on the ESCAPE dataset using 5-fold cross validation with an 80-20% split for training/validation. Performance was computed using the same metrics on the four validation cohorts. Additionally, to assess the concordance between the predicted risk and the ground truth outcomes, calibration plots were computed, using a bin size of 10.

### Comparison to Other Heart Failure Risk Scores

For benchmark comparison, we compared our CARNA risk score models with six other established HF risk scores: ADHERE [24], EFFECT [20], ESCAPE [23], GWTG [21], MAGGIC [22], and SHFM [7]. We limited our comparison to the models predicting risk of mortality with similar feature sets and patient cohorts. In particular, we exclude scores that use biomarkers and pathology-based features (e.g., QRS measurements) since those were not available in our cohorts. Since the comparison scores cannot handle missing data, missing values were imputed with the median. For each validation dataset, the predicted probability of an event was obtained from each score for each patient, and then a predicted class was assigned based on that probability. In other words, if the predicted probability of the event from the SHFM was 5% for a patient, we would say the SHFM predicted class 1, which had a probability range of 0-10% for an event. The accuracy of these other models for predicting the risk class (not the actual event) was again used for the comparison ROC analysis. To compare the AUCs between the established HF risk scores and CARNA, we performed hypothesis testing using the DeLong approach [40]. We report the scores' AUCs, the change in AUCs (CARNA AUC - other score AUC) and the p-value.

### Open Source Tool Implementation

In order to promote open science, CARNA is an open source, extensible framework that others can easily use and build off of. Our implementation is developed in Python 3 using open source libraries. The tool package is clearly commented and includes a jupyter notebook runner file such that others can quickly and easily explore, extend, or prototype on top of the tool. In addition, our implementation includes a deployed web server which provides a live risk score prediction for ease of clinical use. An example web portal image is in Figure 3.
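To make the class-weighted averaging described above concrete, here is a short sketch (illustrative, not the repository's exact code) that computes a one-vs-rest AUC per risk class and weights it by class size; scikit-learn's built-in multi-class option yields the same summary.

```python
# Sketch: averaged AUC over risk classes, weighted by class counts.
import numpy as np
from sklearn.metrics import roc_auc_score

def weighted_ovr_auc(y_true, proba, classes=(1, 2, 3, 4, 5)):
    y_true = np.asarray(y_true)
    aucs, weights = [], []
    for i, c in enumerate(classes):
        y_bin = (y_true == c).astype(int)
        if 0 < y_bin.sum() < len(y_bin):   # skip classes with no data
            aucs.append(roc_auc_score(y_bin, proba[:, i]))
            weights.append(y_bin.sum())
    return np.average(aucs, weights=weights)

# Equivalent shortcut in scikit-learn:
# roc_auc_score(y_true, proba, multi_class="ovr", average="weighted")
```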
All code is publicly available from the Github repository: [https://github.com/jozieLamp/CARNA](https://github.com/jozieLamp/CARNA), and the live web server may be accessed here: [http://hemopheno.pythonanywhere.com/](http://hemopheno.pythonanywhere.com/).

Figure 3: Example CARNA Web Portal – interface for predicting the invasive hemodynamic risk score.

## 6 Results

### Risk Label Generation Results

From the elbow plots, 5 was chosen as the optimal number of cluster groups, corresponding to 5 risk categories. An example dendrogram displaying the cluster splits for the All Features feature set is shown in Figure 4. In hierarchical clustering methods, clusters are separated by horizontally dividing the top of the hierarchy based on the specified number of groups, illustrated by the horizontal dashed black line in the figures.

Figure 4: Agglomerative Clustering Dendrogram for the All Features feature set. Clusters are separated by horizontally dividing the top of the hierarchy based on the specified number of groups (5 in our case); this is illustrated by the horizontal dashed black line. Each leaf (end of the dendrogram) represents an individual data point.

**Our clusters are distinct with a high degree of separation, with low C Indexes of 0.063 for the Hemodynamics feature set and 0.051 for the All Features feature set.**

Table 3 reports the risk score meaning and corresponding real-valued average risk probabilities for each score category across all feature sets and outcomes. For example, for the Hemodynamics feature set and the DeLvTx outcome, a risk score of 3 indicates a 20-30% chance of the outcome, with a mean outcome probability of 0.245 computed from the patients in this cluster. As a sanity check, we also reported the average risk probabilities for each dataset individually. **These results provide evidence that the risk ranges correspond to the real observed risk in the patient cohorts.**

### Learned MVDDs

We generated a total of four MVDD models for each of the feature sets (Invasive Hemodynamics and All Features) and outcomes (DeLvTx and Rehospitalization). The Invasive Hemodynamic models use a combination of 28 features that include basic demographics, invasive and noninvasive hemodynamics; the All Features models use a combination of 66 features across demographics, labs, medications, exercise, quality metrics, other medical diagnostics and noninvasive hemodynamics.
We note that these are the _maximum_ number of features per model; actual prediction paths through the MVDDs use smaller subsets with interchangeable combinations of features (e.g., the features that may be "or-ed" together along a path provide choices for which feature is used for prediction in the phenotype).

\begin{table}
\begin{tabular}{c c c c c c c c c c c c c}
\hline \hline
\multirow{2}{*}{**Risk Score**} & \multirow{2}{*}{**Probability**} & \multirow{2}{*}{**Risk Category**} & \multicolumn{4}{c}{**Invasive Hemodynamics Cluster Means**} & \multicolumn{6}{c}{**All Features Cluster Means**} \\
 & & & **Overall** & **ESCAPE** & **UVA Shock** & **UVA Serial** & **Overall** & **ESCAPE** & **BEST** & **GUIDE-IT** & **UVA Shock** & **UVA Serial** \\
\hline
1 & \(<\)10\% & Low & 0.014 & 0.081 & N/A & 0.0 & 0.043 & 0.042 & 0.0 & 0.076 & 0.048 & 0.048 \\
2 & 10 - 20\% & Low - Intermediate & 0.176 & 0.185 & N/A & 0.167 & 0.145 & 0.129 & 0.159 & 0.143 & 0.167 & 0.125 \\
3 & 20 - 30\% & Intermediate & 0.245 & 0.25 & 0.227 & 0.259 & 0.255 & 0.265 & 0.275 & 0.255 & 0.201 & 0.299 \\
4 & 30 - 40\% & Intermediate - High & 0.364 & 0.39 & 0.31 & 0.392 & 0.343 & 0.333 & 0.31 & 0.253 & 0.315 & 0.485 \\
5 & \(>\)40\% & High & 0.535 & 0.429 & 0.651 & 0.525 & 0.688 & 0.769 & 0.333 & 0.338 & 1.0 & 1.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Risk Score Meaning and Ground Truth Risk Probabilities

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multicolumn{6}{c}{Invasive Hemodynamics Feature Set} \\
\hline
**Outcome** & **Dataset** & **Accuracy** & **Averaged AUC** & **Sensitivity** & **Specificity** \\
\hline
DeLvTx & UVA Shock & 0.947\(\pm\)0.107 & 0.938\(\pm\)0.106 & 0.915\(\pm\)0.103 & 0.961\(\pm\)0.108 \\
 & UVA Serial & 0.969\(\pm\)0.081 & 0.965\(\pm\)0.080 & 0.950\(\pm\)0.079 & 0.980\(\pm\)0.081 \\
Rehospitalization & UVA Shock & 0.907\(\pm\)0.102 & 0.861\(\pm\)0.096 & 0.791\(\pm\)0.086 & 0.935\(\pm\)0.105 \\
 & UVA Serial & 0.896\(\pm\)0.074 & 0.896\(\pm\)0.074 & 0.852\(\pm\)0.070 & 0.940\(\pm\)0.078 \\
\hline
\multicolumn{6}{c}{All Features Feature Set} \\
\hline
**Outcome** & **Dataset** & **Accuracy** & **Averaged AUC** & **Sensitivity** & **Specificity** \\
\hline
DeLvTx & BEST & 0.997\(\pm\)0.037 & 0.994\(\pm\)0.037 & 0.990\(\pm\)0.037 & 0.998\(\pm\)0.037 \\
 & GUIDE-IT & 0.997\(\pm\)0.070 & 0.996\(\pm\)0.070 & 0.995\(\pm\)0.069 & 0.998\(\pm\)0.070 \\
 & UVA Shock & 0.865\(\pm\)0.049 & 0.871\(\pm\)0.050 & 0.816\(\pm\)0.045 & 0.816\(\pm\)0.046 \\
 & UVA Serial & 0.858\(\pm\)0.067 & 0.871\(\pm\)0.068 & 0.815\(\pm\)0.063 & 0.942\(\pm\)0.075 \\
\hline \hline
\end{tabular}

Table displays value \(\pm\) confidence interval. DeLvTx = composite endpoint of death, LVAD implantation or transplantation.
\end{table}
Table 4: Validation Performance Summary

Figure 5: ROC Curves for Validation Datasets and All Features Feature Set.

Figure 6: Calibration Plots for All Features using a bin size of 10. True probability is the fraction of positives per bin.

Figure 7: ROC Curves for Validation Datasets and Invasive Hemodynamics Feature Set. nan = no data in that class.

Figure 8: Calibration Plots for Invasive Hemodynamics using a bin size of 10. True probability is the fraction of positives per bin.

#### 6.2.1 MVDD Performance

Table 4 presents the validation performance summary. The UVA Cardiogenic Shock and Serial Cardiac cohorts were used to validate the invasive hemodynamics models, since they were the only cohorts with invasive hemodynamics; all 4 validation cohorts were used to validate the All Features models. Figures 5 and 7 show the ROC curves and AUC values for each risk class for the All Features and Invasive Hemodynamics sets, respectively. For the Invasive Hemodynamics feature set across all outcomes, our validation models performed well with averaged AUCs of 0.861\(\pm\)0.096 to 0.965\(\pm\)0.080. For the All Features set across all datasets, our models performed well for the DeLvTx outcome with averaged AUCs of 0.871\(\pm\)0.068 to 0.996\(\pm\)0.070 and moderately for the rehospitalization outcome with averaged AUCs of 0.533\(\pm\)0.015 to 0.996\(\pm\)0.070. **These validation results provide evidence that the CARNA models yield robust risk stratification.**

## 7 Discussion

The CARNA models can make predictive decisions using incomplete data, which is particularly advantageous in clinical scenarios with missing data. In addition, we use multiple patient cohorts to train and validate our models, compare risk ascertainment to previous HF risk scores, and compare our models with traditional ML models, which support CARNA's application to diverse real-world patient cohorts. Finally, our ML method is explainable and interpretable; elucidation of the phenotypes used to make risk characterizations by our models allows clinicians to better understand how and why a risk score was given.
These phenotypes may identify possible HF subgroups that can be further investigated in clinical studies.

#### 7.1 CARNA Outperforms Benchmarks

As shown in Tables 4-8, the CARNA risk scores highly outperform previous risk scores. The CARNA Invasive Hemodynamics score was more predictive than other scores, including the ESCAPE risk score, which was derived on the same cohort as our training data using linear statistical methods. The CARNA All Features score also outperformed previous risk scores, with the exception of the Rehospitalization outcome for the two UVA cohorts, which performed similar to standard risk models. Moreover, as evidenced by Tables 5 and 6, the CARNA models outperform traditional ML models across all datasets, feature sets and outcomes. We speculate MVDDs may outperform traditional ML models due to their ability to handle missing data.

#### 7.2 Comment on Hemodynamics

The CARNA Invasive Hemodynamic models do better than the CARNA All Features models, which suggests that invasive hemodynamics (along with integrated metrics) improve outcome prediction for advanced HF patients. Integrated hemodynamic indices such as Cardiac Power Index, Mean Arterial Pressure, and Pulmonary Artery Pulsatility Index were highly predictive of patient outcomes. This aligns with findings from previous studies demonstrating the incremental utility of integrated metrics in risk assessment [4, 32, 33, 41].

#### 7.3 Model Design Choices and Limitations

Our models use single point-of-care measurements, and do not take advantage of multiple follow-up recordings. As a result, they may lose interrelations available from multiple temporal recordings (i.e., changes between measurements). However, using single measurements in our models allows for clinician ease-of-use. Furthermore, although only the "OR" nodes in the MVDD model explicitly handle missing data, we chose to use "AND/OR" MVDDs because the "OR"-only MVDDs become very large and overfit the data. We used single MVDD models for interpretability purposes throughout the project evaluation. However, ensemble approaches (e.g., ensembles of MVDDs) have been shown to outperform single model methods [42], and this will be investigated in future work.
Additionally, we note that an aspect of model interpretability may be lost due to the model predicting risk classes generated from an unsupervised clustering method as opposed to predicting the binary outcome(s) directly. Even so, we believe such a tradeoff may be acceptable due to the improved ability to risk-stratify HF patients. Despite fewer patients and a shorter follow-up time (6 months) compared to other datasets, the ESCAPE trial was selected for model training because, to the best of the authors' knowledge, it is the only cohort available with detailed invasive hemodynamics derived from a well-designed randomized HF clinical trial. There is potential for selection bias by choosing trial data and higher-risk patients in the two UVA cohorts. In addition, many of our validation datasets did not have invasive hemodynamics, so we were unable to validate the invasive hemodynamic models on all four of the patient cohorts. Further, there were heterogeneities in HF acuity status in the datasets used. Even so, validation of the CARNA models yielded robust risk stratification compared to other conventional HF risk scores and ML models.

\begin{table}
\begin{tabular}{c|c|c c c c c}
\hline \hline
\multirow{2}{*}{**Score**} & **Median** & \multicolumn{5}{c}{**Dataset**} \\
 & **Follow-Up** & ESCAPE & BEST & GUIDE-IT & UVA Shock & UVA Serial \\
\hline
CARNA - Hemo & 6 months & 0.952\(\pm\)0.091 & N/A & N/A & **0.938\(\pm\)0.106** & **0.965\(\pm\)0.080** \\
CARNA - All Fts & 6 months & **0.978\(\pm\)0.065** & **0.994\(\pm\)0.037** & **0.996\(\pm\)0.070** & 0.871\(\pm\)0.050 & 0.871\(\pm\)0.068 \\
ADHERE [24] & 5.85 days & 0.595\(\pm\)0.029 & 0.576\(\pm\)0.015 & 0.601\(\pm\)0.021 & 0.526\(\pm\)0.013 & 0.574\(\pm\)0.030 \\
EFFECT 30D [20] & 30 days & 0.550\(\pm\)0.021 & 0.610\(\pm\)0.018 & 0.635\(\pm\)0.024 & 0.584\(\pm\)0.024 & 0.610\(\pm\)0.037 \\
EFFECT Y1 [20] & 1 year & 0.548\(\pm\)0.021 & 0.638\(\pm\)0.020 & 0.632\(\pm\)0.024 & 0.612\(\pm\)0.027 & 0.644\(\pm\)0.043 \\
ESCAPE [23] & 6 months & 0.681\(\pm\)0.057 & 0.587\(\pm\)0.016 & 0.715\(\pm\)0.043 & 0.595\(\pm\)0.025 & 0.565\(\pm\)0.029 \\
GWTG [21] & 4 days & 0.601\(\pm\)0.030 & 0.538\(\pm\)0.010 & 0.537\(\pm\)0.013 & N/A & N/A \\
MAGGIC Y1 [22] & 2.5 years & 0.640\(\pm\)0.035 & N/A & 0.689\(\pm\)0.029 & 0.678\(\pm\)0.034 & N/A \\
SHFM Y1 [7] & 1 year & 0.623\(\pm\)0.033 & 0.613\(\pm\)0.018 & 0.623\(\pm\)0.023 & 0.587\(\pm\)0.024 & 0.588\(\pm\)0.033 \\
SHFM Y3 [7] & 3 years & 0.623\(\pm\)0.033 & 0.616\(\pm\)0.018 & 0.625\(\pm\)0.023 & 0.588\(\pm\)0.024 & 0.584\(\pm\)0.032 \\
SHFM Y5 [7] & 5 years & 0.622\(\pm\)0.033 & 0.615\(\pm\)0.018 & 0.619\(\pm\)0.023 & 0.573\(\pm\)0.022 & 0.579\(\pm\)0.032 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Comparison to Previous Scores - AUC for Outcome Mortality

\begin{table}
\begin{tabular}{c|c c c|c c c c c}
\hline \hline
\multirow{2}{*}{**Score**} & \multicolumn{3}{c|}{**Invasive Hemodynamic Feature Set**} & \multicolumn{5}{c}{**All Features Feature Set**} \\
 & ESCAPE & UVA Shock & UVA Serial & ESCAPE & BEST & GUIDE-IT & UVA Shock & UVA Serial \\
\hline
ADHERE [24] & -0.357, 0.262 & -0.412, \(<\)0.001 & -0.391, 0.413 & -0.383, 0.031 & -0.418, \(<\)0.001 & -0.395, 0.005 & -0.345, \(<\)0.001 & -0.297, 0.413 \\
EFFECT 30D [20] & -0.402, 0.881 & -0.354, \(<\)0.001 & -0.355, 0.315 & -0.163, 0.020 & -0.428, \(<\)0.001 & -0.384, 0.069 & -0.361, \(<\)0.001 & -0.287, 0.315 \\
EFFECT Y1 [20] & -0.404, 0.832 & -0.326, \(<\)0.001 & -0.321, 0.028 & -0.430, 0.026 & -0.356, \(<\)0.001 & -0.364, 0.096 & -0.259, \(<\)0.001 & -0.227, 0.028 \\
ESCAPE [23] & -0.271, 0.089 & -0.343, \(<\)0.001 & -0.400, 0.311 & -0.297, \(<\)0.001 & -0.407, \(<\)0.001 & -0.281, \(<\)0.001 & -0.276, \(<\)0.001 & -0.306, 0.311 \\
GWTG [21] & -0.351, 0.593 & N/A & N/A & -0.377, 0.002 & -0.456, 0.001 & -0.459, 0.001 & N/A & N/A \\
MAGGIC Y1 [22] & -0.312, 0.151 & -0.260, 0.018 & N/A & -0.338, 0.094 & N/A & -0.307, 0.048 & -0.193, 0.018 & N/A \\
MAGGIC Y3 [22] & -0.312, 0.151 & -0.260, 0.018 & N/A & -0.338, 0.094 & N/A & -0.307, 0.048 & -0.193, 0.018 & N/A \\
SHFM Y1 [7] & -0.329, 0.011 & -0.351, \(<\)0.001 & -0.377, 0.784 & -0.355, \(<\)0.001 & -0.381, \(<\)0.001 & -0.373, 0.201 & -0.284, \(<\)0.001 & -0.283, 0.784 \\
SHFM Y3 [7] & -0.329, 0.012 & -0.350, \(<\)0.001 & -0.381, 0.878 & -0.355, \(<\)0.001 & -0.378, \(<\)0.001 & -0.371, 0.173 & -0.283, \(<\)0.001 & -0.287, 0.878 \\
SHFM Y5 [7] & -0.330, 0.013 & -0.365, \(<\)0.001 & -0.386, 0.996 & -0.356, \(<\)0.001 & -0.379, \(<\)0.001 & -0.377, 0.268 & -0.298, \(<\)0.001 & -0.292, 0.996 \\
\hline \hline
\end{tabular}

Each cell reports the change in AUC (CARNA AUC - comparison score AUC) and the p-value from the DeLong test.
\end{table}
Table 8: Hypothesis Testing Between CARNA and Comparison HF Scores

## 8 Conclusion

This study developed a novel advanced HF risk stratification using an explainable ML methodology. The CARNA risk scores are more predictive of patient outcomes than previous approaches and provide detailed characterizations of clinical phenotypes. CARNA may facilitate clinical decision making and provide robust risk stratification.
2301.04069
The chaotic emergence of thermalization in highly excited string decays
We analyse the most general process of a generic highly excited string that decays into a less excited, yet generic, highly excited string emitting a tachyon. We provide a simple and compact analytic description of the decay process which discriminates between and within the structure of every single microstate of the initial and final highly excited string. Taking into account the random nature of the decay process we extract the energy spectrum of highly excited strings, microstate by microstate, finding a behavior which corresponds to the greybody emission spectrum. In addition, by exploiting the analytic control of the decay process, we identify the origin of thermal effects which are triggered by the chaotic nature of the highly excited string interactions modeled by the microstates structure.
Maurizio Firrotta
2023-01-10T16:45:00Z
http://arxiv.org/abs/2301.04069v3
# The chaotic emergence of thermalization in highly excited string decays

###### Abstract

We analyse the most general process of a generic highly excited string that decays into a less excited, yet generic, highly excited string emitting a tachyon. We provide a simple and compact analytic description of the decay process which discriminates between and within the structure of every single microstate of the initial and final highly excited string. Taking into account the random nature of the decay process we extract the energy spectrum of highly excited strings, microstate by microstate, finding a behavior which corresponds to the greybody emission spectrum. In addition, by exploiting the analytic control of the decay process, we identify the origin of thermal effects which are triggered by the chaotic nature of the highly excited string interactions modeled by the microstates structure.

Contents:

* 1 Chaos in highly excited string processes
  * 1.1 Classical string vs quantum string configurations
  * 1.2 Probing chaotic behavior of quantum strings through their interactions
* 2 Thermalization emergence in highly excited string decays
  * 2.1 Chaos driven thermalization
  * 2.2 Thermal spectrum: the greybody emission of highly excited strings
* 3 Results of random generated spectra
* 4 Conclusion and future directions
* A Highly excited string decay: \(H_{N}\Rightarrow H_{N^{\prime}}+T\)
  * A.1 Decay amplitude
  * A.2 Decay rate
  * A.3 Thermal nature of the decay amplitude

## Introduction

The present paper is focused on enlightening the connection between chaos and thermal effects within the physical systems provided by highly excited string (HES) interactions. Motivated by the intriguing interplay between chaos, thermal effects and quantum information [1]-[5], which are three milestones of black hole (BH) physics, we first used HES as promising candidates of BH states [6]-[8] and then we computed their energy spectra. The main goal was to detect a manifest connection between the chaotic behavior of HES interactions and the thermalization of their energy spectra, which emerges naturally. In line with past studies on string decays and the produced Hawking radiation [9]-[18], we used and improved the most general process of an HES that decays into an HES emitting a tachyon [19][20], providing an analytic description of the decay process which discriminates between and within the structure of every single microstate of the initial and final HES. Considering the random nature of the decay process we extracted the spectrum of the HES, microstate by microstate, finding a behavior which corresponds to the greybody emission spectrum. In addition, exploiting the analytic control of the decay process, we identified the origin of thermal effects, finding that they are triggered by the chaotic nature of HES interactions. The setup we adopted relies on the recent improvement of the Di Vecchia, Del Giudice and Fubini (DDF) formalism [21][22], where its spectrum generating algebra was recast in a manifestly covariant form [23][24], for both bosonic string and superstring theories.
The possibility of identifying each state of the string spectrum with the associated physical vertex operator gave rise to a wide range of applications: the realization of the scattering of string coherent vertex operators1 [19], the non-perturbative string footprint in the gravitational wave (GW) signal produced in the merging phase of BH collisions [25], the non-perturbative spinning corrections to the electromagnetic wave produced in the collision of heavy sized objects [26], such as BHs and neutron stars (NSs), the two body decay of HES [20] and finally the indications about the chaotic behavior of HES interactions [27][28].

Footnote 1: A very powerful application of the scattering amplitudes of coherent string states is their nature of generating amplitudes of any desired string states, obtained by a simple derivative projection over mass eigenstates.

Regarding the chaotic behavior of HES interactions, a novel measure of chaos for scattering amplitudes was recently proposed [29], where the behavior of HES amplitudes was compared with the chaotic distribution of the zeros of the Riemann zeta function and the quantum mechanical scattering on a leaky torus, finding a common pattern among them. The scope of the present paper is to continue the study of the physical applications connected to the possibility of exploring the interactions of the whole tower of string excitations, or string microstates, proceeding beyond the physics of light string states and the first Regge trajectory.

The paper is organized as follows. In section 1 we explain the connection between the shape of classical string configurations and the structure of quantum string configurations; in particular we study the classical string profiles as a function of the number of harmonics and the respective coefficients, and we compare their shape with the degenerate quantum string partitions of generic mass levels. Following the logic that the structure of a quantum string state can be probed through its interactions, we select the simplest one, i.e., the decay of an HES into two tachyons, in order to preserve the HES structure. After a brief review of the chaotic analysis developed in [29], we compare the shape of classical string configurations with the chaotic behavior of the scattering amplitudes relative to the quantum analog of classical string configurations, finding a common pattern. In section 2 we study the most general process of a generic HES that decays into a less excited, yet generic, HES emitting a tachyon in the thermalization regime. Exploiting the analytic control of the decay process, we identify the origin of thermal effects with the chaotic structure of the process. Finally we give a description of how to compute the emission spectrum of HES. In section 3 we present the results of HES spectra, microstate by microstate. In particular we find that a generic string excitation is characterized by a greybody emission, while for the extreme case where only excitations of the first Regge trajectory (FRtj) are considered, the thermal nature is so highly suppressed as to be negligible. In appendix A we review the computation developed in [20], and we describe the analytical implications of the thermalization regime, or more precisely the regime in which the ratio between the energy of the emitted state and the mass of the decaying state is small enough that the energy loss of the decaying state is smooth.
## 1 Chaos in highly excited string processes

The aim of this section is to review recent results about the chaotic behavior of HES interactions.

### Classical string vs quantum string configurations

Classical three dimensional2 bosonic open string profiles with Neumann boundary conditions, in the temporal gauge (\(X^{0}=t\)), are given by [30]

Footnote 2: In the specific case of the classical string, we chose \(j=1,2,3\) in order to plot 3D profiles, but in general \(j=1,...,D-2\).

\[X^{j}_{\{a_{n}\}}(\sigma,\tau)=\sum_{n=1}^{n^{*}}x^{j}_{\{n\},\{a_{n}\}}(\sigma,\tau) \tag{1}\]

\(\sigma\in[0,1]\) and \(\tau\in[0,\infty)\) are the worldsheet variables and \(X^{j}(\sigma,\tau)\) is the map between the worldsheet and the target space (\(\mathbb{R}_{3}\)), representing the 3-D string profile at any value of \(\tau\)3. Classical string configurations can be classified by the set of harmonics \(\{n\}\) and the respective set of coefficients \(\{a_{n}^{j}\}\), which are respectively the harmonic label and the relative weight coefficient of the classical mode \(x^{j}\):

Footnote 3: We did not include the center of mass position of the string \(x_{0}\), and also the center of mass momentum \(p_{0}\tau\), because they are not relevant in our investigation.

\[x^{j}_{\{n\},\{a_{n}\}}(\sigma,\tau)=\frac{a_{n}^{j}}{n}\cos\left(n\pi\sigma\right)\sin\left(n\pi\tau\right) \tag{2}\]

In figure 1 there are some 3-D string profiles for different choices of the set of harmonics and coefficients, from which one can observe how the string profile becomes more involved as the number of harmonics is increased.

Figure 1: Examples of 3D string profiles for different combinations of harmonics \(n^{*}=1,5,10,50\) with uniformly distributed random parameters \(a_{n}^{j}\in(0,1)\).

From figure 2 one can observe the dependence of string profiles on the set of coefficients \(\{a_{n}^{j}\}\). In the present case it was assumed that the coefficients are normalized to the identity, and that they are uniformly distributed in the interval \((0,1)\). One can observe that the behavior of string profiles is unchanged also for non-normalized integer coefficients: the features of string profiles are unaffected by different parametrizations of the coefficients \(a_{n}^{j}\). At this level a trivial observation is that, if one considers a classical string profile

\[X^{j}_{\{a_{n}\}}(\sigma,\tau)=x^{j}_{\{n\},\{a_{n}\}}(\sigma,\tau) \tag{3}\]

with the same single generic harmonic \(n\) in the three spatial directions \(j=1,2,3\), one obtains the smoothest profile, which is a straight line, such as the first profile of figure 1. This is the general picture of how the classical string profiles are modeled by the harmonic set \(\{n\}\) and the coefficient set \(\{a_{n}\}\). Now it is helpful to study the comparison between classical and quantum string configurations.
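Equations (1)-(2) are straightforward to reproduce numerically. The short NumPy sketch below samples uniformly distributed coefficients \(a_{n}^{j}\in(0,1)\) as in figures 1-2; the function name and the sampling seed are illustrative choices, not taken from the paper.

```python
# Sketch of eqs. (1)-(2): X^j = sum_n (a_n^j / n) cos(n pi sigma) sin(n pi tau)
import numpy as np

def string_profile(n_star, sigma, tau, seed=0):
    """Classical 3-D open string profile at fixed worldsheet time tau."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.0, 1.0, size=(n_star, 3))   # coefficients a_n^j
    X = np.zeros((3, sigma.size))
    for j in range(3):
        for n in range(1, n_star + 1):
            X[j] += (a[n - 1, j] / n) * np.cos(n * np.pi * sigma) \
                                      * np.sin(n * np.pi * tau)
    return X

sigma = np.linspace(0.0, 1.0, 400)
X_line = string_profile(n_star=1, sigma=sigma, tau=0.3)    # straight line
X_rough = string_profile(n_star=50, sigma=sigma, tau=0.3)  # involved shape
```

For a single harmonic all three coordinates are proportional to the same \(\cos(n\pi\sigma)\), so the profile is a straight segment in \(\mathbb{R}_{3}\), matching the first profile of figure 1.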
In particular, promoting the coefficients \(a^{j}_{n}\) to be creation operators \({\cal A}^{j}_{-n}\) one has the quantum analog of the string mode, and in addition to the set of harmonics \(\{n\}\) one has to include the number of excitations \(g_{n}\) of each harmonic, so much so that for a given quantized level \(N\) of the string spectrum one has a set of states spanned by the set of solutions \(\{g_{n}\}\), representing all the partitions of the integer \(N\) with occupation number \(J\):

\[N=\sum_{n=1}^{N}ng_{n}\,,\quad J=\sum_{n=1}^{N}g_{n} \tag{4}\]

The promotion from classical to quantum string is summarized as

\[x_{\{n\},\{a_{n}\}}(\sigma,\tau)\,\Rightarrow\,{\cal N}_{n,g_{n}}{\cal A}^{g_{n}}_{-n}|0\rangle\ ;\quad X_{\{a_{n}\}}(\sigma,\tau)\Rightarrow\Pi^{(N)}_{\{g_{n}\}}=\prod_{n=1}^{N}{\cal N}_{n,g_{n}}{\cal A}^{g_{n}}_{-n}|0\rangle \tag{5}\]

where \({\cal N}^{-1}_{n,g_{n}}=\sqrt{n^{g_{n}}\,g_{n}!}\) is the normalization constant of each mode4. In particular, due to its quantum nature, the quantum string configuration for a fixed level \(N\) has a large degeneracy5 which produces the microstate structure, so in order to identify the final quantum string state one can take the simplest linear combination of microstates, introducing the average over microstates in the second expression of (5):

Figure 2: Examples of 3D string profiles for different combinations of harmonics \(n^{*}=10,50,100\). For each value of \(n^{*}\) there are five choices of string profiles for different choices of uniformly distributed random parameters \(a^{j}_{n}\in(0,1)\).

\[X(\sigma,\tau)\Rightarrow\sum_{\{g_{n}\}}\Pi^{(N)}_{\{g_{n}\}}=\sum_{\{g_{n}\}}\prod_{n=1}^{N}\mathcal{N}_{n,g_{n}}\mathcal{A}_{-n}^{g_{n}}|0\rangle \tag{6}\]

Looking at the complicated classical string profiles, parametrized by \(\{n\}\) and \(\{a_{n}\}\), a natural question is how the quantum string configurations reflect their classical characteristic shape as a function of \(\{n\}\) and \(\{g_{n}\}\). A possible way of testing the features of quantum string profiles as a function of the microstate structure is to probe the implications of their shapes through their interactions. In particular one can choose the simplest string decay amplitude and study the microstate dependence. In the next subsection there will be a review of the analysis of string configurations leading to a chaotic behavior of their interactions [29].

### Probing chaotic behavior of quantum strings through their interactions

Inspired by the systematics of classical string profiles, the logic proposed for the analysis of quantum string configurations is the following: the main observable is the decay amplitude provided by a level \(N\) string microstate \(\Pi^{(N)}_{\{g_{n}\}}\) decaying into two tachyons. Along the line of indications and improvements introduced in [23] and developed in [19], the decay amplitude \(\mathcal{A}_{\Pi^{(N)}_{\{g_{n}\}}}\) for the most general HES was computed, with a remarkably simple procedure based on coherent state techniques. The information about the structure of \(\Pi^{(N)}_{\{g_{n}\}}\) is translated into the decay amplitude, and manifested through the profile of the decay amplitude. The choice of looking at tachyons is connected to their simple vertex operators, in such a way that all the information inside the decay amplitude is governed by the structure of \(\Pi^{(N)}_{\{g_{n}\}}\).
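The microstate counting of eq. (4) can be made explicit with a small recursive enumerator of the occupation numbers \(\{g_{n}\}\); the total count reproduces the integer-partition function \(p(N)\) that controls the level degeneracy. This is an illustrative sketch, not code from the paper.

```python
# Enumerate microstates {g_n} with sum_n n*g_n = N (eq. 4),
# returned as {harmonic n: occupation g_n}; J = sum of the values.
def partitions(N, max_part=None):
    max_part = max_part or N
    if N == 0:
        yield {}
        return
    for n in range(min(N, max_part), 0, -1):
        for rest in partitions(N - n, n):
            g = dict(rest)
            g[n] = g.get(n, 0) + 1
            yield g

states = list(partitions(4))
# [{4:1}, {3:1,1:1}, {2:2}, {2:1,1:2}, {1:4}]  ->  p(4) = 5 microstates
print(len(states))  # 5
```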
In figure 3 there is a representative picture of the decay amplitude, from which one can see that the decay amplitude profile is only a function of the angle \(\alpha\).

Figure 3: Picture of the decay amplitude where a representative microstate \(\Pi_{\{g_{n}\}}^{(N)}\) decays into two tachyons \(T\). The only kinematical freedom of the decay amplitude is the emission angle \(\alpha\), which starts from the reference dashed line.

Now comparing decay profiles of different microstates one can extract a general behavior associated to choices of \(\{n\}\) and \(\{g_{n}\}\), as was pointed out in [27]. Most of the information of the decay profile can be codified in terms of its extrema, so it is useful to introduce the logarithmic derivative \(F_{\{g_{n}\}}(\alpha)\) and study its distribution of zeros. A suitable parameterization of the distribution is intimately connected with the chaotic behavior of the decay [29]. Before describing the chaotic analysis in detail, a fast presentation of the resulting decay amplitude is given. The HES state of the level \(N\) with polarizations \(\{\zeta_{n}\}\) and momentum \(p\) is described by

\[\Pi^{(N)}_{\{g_{n}\}}(\{\zeta_{n}\},p)=\prod_{n=1}^{N}\frac{(\zeta_{n}{\cdot}\mathcal{A}_{-n})^{g_{n}}}{\sqrt{g_{n}!}\,n^{g_{n}}}|\widetilde{p}\rangle \tag{7}\]

where \(\widetilde{p}\) is the tachyonic DDF reference momentum, that combined with the action of the creation operators reproduces the momentum of the final state \(p=\widetilde{p}-q\sum_{n}ng_{n}\). Following the DDF formalism one can write the corresponding BRST vertex operator and compute the decay amplitude of figure 3 using circular polarizations6:

Footnote 6: The general amplitude is made of contributions both linear and bilinear in \(\zeta_{n}\) [19]. The bilinear contribution is proportional to the square of the linear contribution, so up to an irrelevant polynomial the functional dependence on the microstate structure is preserved by the choice \(\zeta_{n}^{(a)}\cdot\zeta_{m}^{(b)}=0\).

\[\mathcal{A}_{\Pi_{\{g_{n}\}}^{(N)}}(\alpha)\simeq\left(\zeta\cdot(p_{1}-p_{2})\right)^{J}\prod_{n=1}^{N}\left(\frac{(1-n\sin^{2}\alpha)_{n-1}}{\Gamma(n)}\right)^{g_{n}} \tag{8}\]

all the information about the microstate \(\Pi_{\{g_{n}\}}^{(N)}\) is encoded in the dressing factor of the coupling \(\zeta\cdot(p_{1}-p_{2})\):

\[\Pi_{\{g_{n}\}}^{(N)}\text{-structure}\Rightarrow\prod_{n=1}^{N}\left(\frac{(1-n\sin^{2}\alpha)_{n-1}}{\Gamma(n)}\right)^{g_{n}} \tag{9}\]

Using the properties of the Pochhammer factor and the explicit parametrization of the polarizations, the decay amplitude can be written as

\[\mathcal{A}_{\Pi_{\{g_{n}\}}^{(N)}}(\alpha)\simeq\prod_{n=1}^{N}\left(\frac{\sin\alpha}{\Gamma(n)}\,\sin\left(n\pi\cos^{2}\alpha/2\right)\Gamma\left(n\cos^{2}\alpha/2\right)\,\Gamma\left(n\sin^{2}\alpha/2\right)\right)^{g_{n}} \tag{10}\]

Finally the logarithmic derivative of the decay amplitude

\[F_{\{g_{n}\}}(\alpha)=\frac{d}{d\alpha}\log\mathcal{A}_{\Pi_{\{g_{n}\}}^{(N)}}(\alpha) \tag{11}\]

has the following form:

\[\begin{split} F_{\{g_{n}\}}(\alpha)=& J\cot\alpha-\pi\frac{\sin\alpha}{2}\sum_{n=1}^{N}ng_{n}\cot\left(n\pi\cos^{2}\frac{\alpha}{2}\right)\\ &-\frac{\sin\alpha}{2}\sum_{n=1}^{N}ng_{n}\Big{(}\psi\left(n\cos^{2}\frac{\alpha}{2}\right)-\psi\left(n\sin^{2}\frac{\alpha}{2}\right)\Big{)}.\end{split} \tag{12}\]

This is the final observable which will be subjected to the analysis described below.
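Equation (12) is easy to evaluate numerically with the digamma function, and the zeros whose spacings feed the analysis below can be located by a simple sign-change scan. The sketch below is an assumed implementation using SciPy, taking a microstate as a dictionary {harmonic n: occupation g_n}.

```python
# Sketch of eq. (12): logarithmic derivative F_{g_n}(alpha).
import numpy as np
from scipy.special import digamma

def F(alpha, g):
    J = sum(g.values())
    c2 = np.cos(alpha / 2.0) ** 2
    s2 = np.sin(alpha / 2.0) ** 2
    out = J / np.tan(alpha)                      # J cot(alpha)
    for n, gn in g.items():
        out -= (np.pi * np.sin(alpha) / 2) * n * gn / np.tan(n * np.pi * c2)
        out -= (np.sin(alpha) / 2) * n * gn * (digamma(n * c2) - digamma(n * s2))
    return out

# Zeros via sign changes (crossings at the poles of cot should be
# filtered out in practice before computing spacing statistics).
alphas = np.linspace(0.05, np.pi - 0.05, 20000)
vals = F(alphas, {100: 1})                       # single-harmonic N = 100
zeros = alphas[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
```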
#### Chaotic analysis: setup

Random Matrix Theory (RMT) provides a powerful tool for bridging quantum chaos and universal statistical properties [31][32]. In particular, a quantitative connection between chaos and probability distributions was conjectured in [33], and subsequently the link between chaos and statistical properties was widely studied in many contexts such as quantum chromodynamics (QCD) [34], nuclear physics [35], black holes [36] and condensed matter [37]. In what follows we lay out the strategy for identifying the target distribution used as a discriminant of chaotic behavior.

* Starting from the Hermite \(\beta\)-ensemble of \(N\times N\) random matrices, given the set \(\{\alpha\}=(\alpha_{1},\ldots,\alpha_{N})\) of matrix eigenvalues, the associated joint probability distribution is given by \[P_{\beta}(\{\alpha\})=\frac{e^{-\sum_{j=1}^{N}\frac{\alpha_{j}^{2}}{2}}}{Z_{N,\beta}}\prod_{\ell<v}|\alpha_{\ell}-\alpha_{v}|^{\beta}\] (13)
* A very useful approximation was given in [38], where the joint probability distribution of nearest-neighbor spacings \(\delta_{j}=\alpha_{j+1}-\alpha_{j}\) was considered: the Wigner surmise distribution \[P_{\beta}(\{\delta\})=C_{\beta}\,\delta^{\beta}\,e^{-d_{\beta}\delta^{2}}\] (14) with constants \[C_{\beta}=2\,\frac{\Gamma(\beta/2+1)^{\beta+1}}{\Gamma(\beta/2+1/2)^{\beta+2}}\,,\quad d_{\beta}=\frac{\Gamma(\beta/2+1)^{2}}{\Gamma(\beta/2+1/2)^{2}}\] (15) A very nice application of the Wigner surmise is the prediction of the distribution of zeros of the Riemann zeta function [39].
* When considering the joint probability distribution of consecutive spacings, there is a technical issue related to the unfolding procedure of the data, which in general can be difficult to implement, as explained in [40]. In order to avoid the unfolding procedure one can introduce a more robust index [40]: the ratio of consecutive level spacings \[r_{j}=\frac{\delta_{j+1}}{\delta_{j}}\] (16) whose joint probability distribution was studied intensively in [41][42], and for \(3\times 3\) block diagonal matrices takes the following form \[P_{\beta}(r)=\frac{3^{3(1+\beta)/2}\Gamma(1+\beta/2)^{2}}{2\pi\Gamma(1+\beta)}\,\frac{(r+r^{2})^{\beta}}{(1+r+r^{2})^{1+3\beta/2}}\] (17) which is valid for any value of \(\beta>0\) [43] and also for large asymptotic values of \(\beta\) [44]. For \(\beta=1,2,4\) one has the standard GOE, GUE and GSE respectively, the Gaussian orthogonal/unitary/symplectic ensembles.

#### Chaotic analysis: results

Following the discussion related to figure 1, one can expect that the microstate with the maximal number of harmonics will produce the least smooth decay amplitude. In particular, one can measure the chaotic behavior of the decay profile by computing the distribution of the index (16) for the zeros of (12), i.e. by studying how the unbiased \(r\)-index (16) for the extrema of the amplitude is distributed with respect to the target chaotic class of \(\beta\)-distributions (17). From the results in figure 4, relative to microstates of \(N=100\), one can observe the profile of the logarithmic derivative \(F_{\{g_{n}\}}(\alpha)\) and the corresponding joint probability distribution for the microstate with the maximal number of harmonics (the first two plots, respectively). The five small plots show how the measured joint probability distributions deviate from the target distribution (17) as the number of harmonics is varied.
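A minimal numerical sketch of this procedure (our illustration, reusing the function `F` from the sketch above, with an assumed example microstate of \(N=100\)): locate the zeros of \(F_{\{g_{n}\}}\) from sign changes on a fine grid, form the \(r\)-index (16), and compare its histogram with (17). Note that a naive sign-change search also picks up the poles of the cotangent terms in (12), which a careful treatment must filter out.

```python
import numpy as np
from scipy.special import gamma as Gamma

def P_ratio(r, beta):
    """Distribution of consecutive-spacing ratios, eq. (17)."""
    Z = 3 ** (3 * (1 + beta) / 2) * Gamma(1 + beta / 2) ** 2 \
        / (2 * np.pi * Gamma(1 + beta))
    return Z * (r + r ** 2) ** beta / (1 + r + r ** 2) ** (1 + 1.5 * beta)

def r_index(zeros):
    """Ratios of consecutive level spacings, eq. (16)."""
    d = np.diff(np.sort(zeros))
    return d[1:] / d[:-1]

# Example microstate of N = 100 with many distinct harmonics:
# 1 + 2 + ... + 12 + 22 = 100.
g = {n: 1 for n in range(1, 13)}
g[22] = 1

alpha = np.linspace(1e-3, np.pi - 1e-3, 400_000)
v = F(alpha, g)
flip = np.nonzero(np.sign(v[:-1]) != np.sign(v[1:]))[0]
zeros = 0.5 * (alpha[flip] + alpha[flip + 1])   # crude: includes cot poles
r = r_index(zeros)
# Histogram r and overlay P_ratio(r, beta=2), the jGUE curve of fig. 4.
```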
Quite similarly to the case of classical string profiles, where the number of harmonics controls the complexity of the shape of the profiles, the chaotic behavior of the decay profile is triggered by the number of harmonics. An additional check of this harmonic hierarchy is provided in figure 5, where we plot the logarithmic derivative of the decay amplitude and the joint probability distribution for the single-harmonic microstate of \(N=100\). In particular, one can observe that the distribution deviates completely from the chaotic jGUE, which hints at the connection between classical single-harmonic string profiles (straight lines) and the absence of chaos in the decay amplitude of single-harmonic microstates.

Figure 4: Microstates of \(N=100\). From left to right, the profile of the logarithmic derivative of the decay amplitude and the joint probability distribution, both relative to the microstate with the maximal number of harmonics. The dashed line is the joint probability distribution with \(\beta=2\), which represents the joint GUE distribution (jGUE). Below are five examples of joint probability distributions for microstates with fewer harmonics.

## 2 Emergence of thermalization in highly excited string decays

In the previous section we presented a systematic study of how the microstate structure emerges through the profile of its decay process. In particular, we described how the chaotic behavior of the decay process is triggered by the microstate structure. The aim of this section is to study how the chaotic behavior is related to the thermalization of the most general HES decay (fig. 7). Following [20] and appendix A, the decay rate of the present process, in a \(d\)-dimensional phase space, is given by the following expression \[\Gamma_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}=\frac{\Omega_{s}\,E_{k}^{d-3}}{16(N{-}1)(2\pi)^{d-2}}\Big{|}\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}\Big{|}^{2} \tag{1}\] This is the decay rate of a single microstate at level \(N\) decaying into a single microstate at level \(N^{\prime}\) through the emission of a tachyon of energy \(E_{k}\), where the solid angle \(\Omega_{s}=2\pi^{(d-1)/2}/\Gamma((d{-}1)/2)\) is introduced.

Figure 5: The logarithmic derivative of the decay amplitude and the joint probability distribution, both relative to the microstate of \(N=100\) with only one harmonic. The dashed line is the jGUE.

Figure 6: Comparison between classical string profiles and decay amplitudes.

The non-trivial dependence of the decay rate comes from the square of the absolute value of the amplitude, which is the main quantity that will be analyzed.
In particular, in the region where the ratio between the energy of the emitted state and the mass of the decaying state is small enough that the energy loss of the decaying state is smooth, the absolute value squared of the amplitude takes the following form \[\left|\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}\right|^{2}=\widetilde{\mathcal{N}}^{2}_{\{g_{n}\}}\widetilde{\mathcal{N}^{\prime}}^{2}_{\{g^{\prime}_{n^{\prime}}\}}\,e^{-\mathcal{C}_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\frac{E_{k}}{T_{H}}-2\mu_{N}\left(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\};E_{k}/T_{H}\right)} \tag{2}\] where the weight factor \(\mathcal{C}_{N}\) depends on the level \(N\) of the decaying state and on the microstate geometry through the relation \[\mathcal{C}_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})=\frac{2}{\sqrt{N}}\left(\sum_{n=1}^{N}g_{n}n\log n-\sum_{n^{\prime}=1}^{N^{\prime}}g^{\prime}_{n^{\prime}}n^{\prime}\log n^{\prime}\right) \tag{3}\] The other function in the exponent is given by \[\mu_{N}\left(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\};\frac{E_{k}}{T_{H}}\right)=\sum_{n=1}^{N}g_{n}\log\Gamma\left(1-\frac{n}{\sqrt{N}}\frac{E_{k}}{T_{H}}\right)+\sum_{n^{\prime}=1}^{N^{\prime}}g^{\prime}_{n^{\prime}}\log\Gamma\left(1+\frac{n^{\prime}}{\sqrt{N}}\frac{E_{k}}{T_{H}}\right). \tag{4}\] Finally, \(T_{H}=1/\ell_{s}\) is the Hagedorn temperature, the inverse of the string length \(\ell_{s}=\sqrt{\alpha^{\prime}}\). The thermal nature of the characteristic expression (2) is intimately related to the chaotic behavior of the decay: the non-trivial dependence on the microstates is directly related to chaos (section 1), and it will also play a crucial role in the thermalization of the decay, as will be explained in part 2.1 of the present section. When one considers the decay rate of a state of level \(N\) made of many microstates (see fig. 8), \[|BH\rangle_{N}=\sum_{\{g_{n}\}}\Pi^{(N)}_{\{g_{n}\}} \tag{5}\] which can be interpreted as a black hole state, the final decay rate turns out to be the sum of the decay rates over all possible microstate configurations \[\Gamma\Big{(}|BH\rangle_{N}\Rightarrow|BH\rangle_{N^{\prime}}+E_{k}\Big{)}=\frac{\Omega_{s}\,E_{k}^{d-3}}{16(N{-}1)(2\pi)^{d-2}}\sum_{\{g_{n}\},\{g^{\prime}_{n^{\prime}}\}}\Big{|}\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}\Big{|}^{2}. \tag{6}\] This is a very rich observable, characterized by a highly non-trivial functional dependence on the microstates. In the last part 2.2 of the present section, we will describe the behavior of this observable and the derivation of the non-trivial energy spectrum leading to the greybody radiation of highly excited string states.

### Chaos-driven thermalization

In section 1 we analyzed the chaotic behavior of a string microstate decaying into two tachyons, where we described the connection between the microstate structure and the chaotic behavior of the decay. The sensitivity of the decay to the microstate structure was essentially encoded in the dressing factor (9). Considering now the direct extension of the decay in (fig. 3), which is the one of (fig. 7), one has the generalization of (9) to the case of two microstate structures: one for the decaying microstate \(\Pi^{N}_{\{g_{n}\}}\) and the other for the final microstate \(\Pi^{N^{\prime}}_{\{g^{\prime}_{n^{\prime}}\}}\).
Using the general setup described in appendix A, one finds a remarkably compact expression for the decay rate7 Footnote 7: In the expression we used \(\alpha^{\prime}=1/2\), which means \(T_{H}=\sqrt{2}\). \[\left|\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g^{\prime}_{n^{\prime}}\}}}\right|^{2}\simeq \prod_{n=1}^{N}\left(\frac{\left(1\!-\!n(1\!-\!E_{k}/\sqrt{2N})\right)_{n-1}}{\Gamma(n)}\right)^{2g_{n}}\prod_{n^{\prime}=1}^{N^{\prime}}\left(\frac{\left(1\!-\!n^{\prime}(1\!+\!E_{k}/\sqrt{2N})\right)_{n^{\prime}-1}}{\Gamma(n^{\prime})}\right)^{2g^{\prime}_{n^{\prime}}} \tag{11}\]

Figure 8: Qualitative picture of the decay rate \(|BH\rangle_{N}\Rightarrow|BH\rangle_{N^{\prime}}+E_{k}\). The microstates of the decaying state are represented by colorful small strings; each color is associated with a possible microstate decay. The red region represents the random superposition of all the possible decays, which is the mechanism through which the thermalization process emerges.

Comparing this expression with (9), one can see the systematic generalization of the decay rate due to the presence of an additional microstate. Following the expansion (105) and the rest of the appendix, one recovers (2). This is the systematics of how the chaotic factors make the decay rate thermal, giving a precise identification between chaos and the emergence of thermalization. Following the same logic as in (fig. 6), where a comparison between classical strings and decay amplitudes was argued, one can complete the scenario by describing the mechanism that originates the thermalization of the process. In fact, one can analyze two extreme cases. The first is to consider microstates of the first Regge trajectory. Classically their profile is just a straight line (fig. 6), because they are single-harmonic states, so they do not produce chaos. The associated thermalization is given by the expression (11) with \(n=1\), \(g_{1}=N\) and \(n^{\prime}=1\), \(g_{1}^{\prime}=N^{\prime}\), which trivializes to unity, so the decay is not thermal. The second extreme case, which is less trivial, is to consider still single-harmonic states but with the maximal harmonic excited, \(n=N\), \(g_{N}=1\) and \(n^{\prime}=N^{\prime}\), \(g_{N^{\prime}}^{\prime}=1\). Classically these are straight lines and do not produce chaos (fig. 5). From (11) and (2) one finds \[\left|\mathcal{A}_{\Pi^{N}_{\{g_{N}=1\}},\Pi^{N^{\prime}}_{\{g_{N^{\prime}}=1\}}}\right|^{2}\simeq e^{-2\log\left(\frac{N}{N^{\prime}}\right)\sqrt{N}E_{k}/T_{H}}\left(\frac{\sin\left(\pi\sqrt{N}E_{k}/T_{H}\right)}{\pi\sqrt{N}E_{k}/T_{H}}\right)^{2} \tag{12}\] This behavior is strongly suppressed at large \(N\), even if \(N^{\prime}\sim N\), and it reflects how the thermal behavior of the decay is subdominant for states with only the maximal harmonic excited. Beyond the two extremal cases discussed, one can get a more qualitative picture of the microstate dependence of the decay by looking at the explicit distributions in (fig. 9) and (fig. 10). The fluctuations of the decay reflect the connection between the geometry of microstates, which is clear in the classical picture of string profiles, and the associated chaos, which is the trigger of the thermal behavior.
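The thermal exponent (2)-(4) and the suppressed single-maximal-harmonic case (12) are easy to evaluate numerically; the following is a hedged sketch (our addition), valid in the smooth energy-loss region where the arguments of the log-gamma functions stay positive.

```python
import numpy as np
from scipy.special import gammaln

TH = np.sqrt(2.0)  # Hagedorn temperature in alpha' = 1/2 units

def log_amp2(Ek, g, gp, N):
    """Exponent of eq. (2), normalisations dropped: -C_N*Ek/TH - 2*mu_N,
    with C_N from eq. (3) and mu_N from eq. (4).
    g, gp: occupation dicts {n: g_n} of the initial and final microstate."""
    C = (2 / np.sqrt(N)) * (sum(n * gn * np.log(n) for n, gn in g.items())
                            - sum(m * gm * np.log(m) for m, gm in gp.items()))
    mu = (sum(gn * gammaln(1 - n * Ek / (np.sqrt(N) * TH)) for n, gn in g.items())
          + sum(gm * gammaln(1 + m * Ek / (np.sqrt(N) * TH)) for m, gm in gp.items()))
    return -C * Ek / TH - 2 * mu

def amp2_max_harmonic(Ek, N, Np):
    """Closed form of eq. (12) for g_N = 1 decaying into g'_{N'} = 1."""
    x = np.sqrt(N) * Ek / TH
    # np.sinc(x) = sin(pi x)/(pi x), exactly the bracket in eq. (12)
    return np.exp(-2 * np.log(N / Np) * x) * np.sinc(x) ** 2
```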
### Thermal spectrum: the greybody emission of highly excited strings

In the previous part we described the connection between chaos and the thermal behavior of the decay rate, where the intrinsic microscopic structure of the microstates was fundamental to the origin of this connection. Here the goal will be the analysis of the energy spectrum radiated by HES states. The first thing to note is that the decay rate (2) is not a free function of the emitted energy \(E_{k}\): the kinematics of the two-body decay fixes \(E_{k}\) in terms of the masses of the states involved \[E_{k}=\frac{M_{N}^{2}-M_{N^{\prime}}^{2}+M_{T}^{2}}{2M_{N}}=\frac{N-N^{\prime}-1}{\sqrt{2N-2}} \tag{9}\] In order to extract the energy spectrum of an HES state, one has to explore the energy range of the decay rate point by point, producing a discrete energy trajectory of the decay process. Since the target observable is the radiation of the HES state at generic level \(N\), one can reproduce the energy trajectory by varying the mass \(M_{N^{\prime}}^{2}=2N^{\prime}-2\) of the final state. The energy region of interest is identified by the range in which the ratio between the energy of the emitted state \(E_{k}\) and the mass of the decaying state, \(M_{N}^{2}=2N-2\), is small enough that the energy loss of the decaying state is smooth, which is the situation where a thermal spectrum is expected.

Figure 9: Microstate population of the decay rate in logarithmic scale for 100 random partitions \(\{g_{n}\}\) of \(N=100\) and 100 random partitions \(\{g^{\prime}_{n^{\prime}}\}\) of \(N^{\prime}=99\). The red points are the values of the decay rate.

Figure 10: Microstate population of the decay rate in logarithmic scale for 500 random partitions \(\{g_{n}\}\) of \(N=100\) and 500 random partitions \(\{g^{\prime}_{n^{\prime}}\}\) of \(N^{\prime}=99\). The red points are the values of the decay rate.

Given \(N\) and \(N^{\prime}\), the decay rate of a state made of microstates is given in (6), and even though the energy \(E_{k}\) is fixed, there is a non-trivial microstate distribution that mediates the decay rate (fig. 10). Therefore, in order to reproduce the energy spectrum of a given HES state with microstates \(\{\Pi^{(N)}_{\{g_{n}\}}\}\), one can generate an energy trajectory of random decays for each microstate \(\Pi^{(N)}_{\{g_{n}\}}\) of the level \(N\), which can decay into a random microstate \(\Pi^{(N^{\prime})}_{\{g^{\prime}_{n^{\prime}}\}}\) of the whole set \(\{\Pi^{(N^{\prime})}_{\{g^{\prime}_{n^{\prime}}\}}\}\) of the level \(N^{\prime}\). Finally, the spectrum is obtained by averaging over the energy trajectories (fig. 11). In general, each microstate of the level \(N\) can decay into each microstate of the level \(N^{\prime}\), and this is the reason why one has to mimic the randomness of the decay process in order to describe the intrinsic nature of a degenerate, extended object such as the HES. The general behavior of the energy spectrum turns out to be described by the greybody radiation \[\Sigma^{\rm grey}_{N}(E_{k}/T_{eff})=\langle\Gamma_{N\Rightarrow N^{\prime}}\rangle_{{\cal P}(\{E_{k}\})}=\frac{\sigma^{\rm grey}_{N}(E_{k}/T_{eff})}{e^{\frac{E_{k}}{T_{eff}}}-1} \tag{10}\] with an effective temperature which depends on the decaying state through \[T_{eff}=T_{H}/\sqrt{N} \tag{11}\]

Figure 11: Picture of the random microstate energy trajectories \({\cal P}(\{E_{k}\})\).
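The averaging prescription can be mimicked with a simple Monte Carlo sketch (our illustration, reusing `log_amp2` from the sketch above). The sequential random-partition sampler below is not uniform over partitions; it is only a stand-in for a proper sampler.

```python
import random
import numpy as np

def random_partition(N):
    """Crude random partition of N (sequential sampler; not uniform over
    partitions, but sufficient to illustrate the averaging prescription)."""
    g, rest = {}, N
    while rest > 0:
        n = random.randint(1, rest)
        g[n] = g.get(n, 0) + 1
        rest -= n
    return g

def spectrum(g, N, Np_values, samples=200):
    """Energy trajectory: for each N', average exp(log_amp2) of eq. (2)
    over random final microstates of level N' (cf. fig. 11)."""
    pts = []
    for Np in Np_values:
        Ek = (N - Np - 1) / np.sqrt(2 * N - 2)          # eq. (9)
        avg = np.mean([np.exp(log_amp2(Ek, g, random_partition(Np), N))
                       for _ in range(samples)])
        pts.append((Ek, avg))
    return pts
```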
The greybody factor is sensitive to the nature of the microstates of \(N\) and \(N^{\prime}\), and it also depends on the randomness of the decay process: \[\sigma_{N}^{\rm grey}(E_{k}/T_{eff})=\frac{C_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\Omega_{s}E_{k}^{d-3}}{16(N{-}1)(2\pi)^{d-2}}\frac{\left(e^{\frac{E_{k}}{T_{eff}}}-1\right)\left(\frac{E_{k}}{T_{eff}}\right)^{r_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})}}{e^{\nu_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\frac{E_{k}}{T_{eff}}}-1} \tag{12}\] The randomness of the decay process is incorporated in the coefficients \[C_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\,,\quad r_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\,,\quad\nu_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\}) \tag{13}\] which also reflect the intrinsic dependence on the chosen microstates that determine the decaying state and the final state. In the next section we will perform the spectrum analysis of different states: a state with definite mass and occupation number \((N,J)\) decaying into a generic state of the level \(N^{\prime}\), a state \((N,J)\) decaying into a state \((N^{\prime},J^{\prime})\), and finally a generic state \(N\) decaying into a state \(N^{\prime}\). For simplicity we will consider the greybody spectrum (10) without the phase-space factor, which is the same in each case; therefore, without loss of generality, one can redefine the final observable as \[\gamma_{N}^{\rm grey}(E_{k}/T_{eff})=C_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\frac{\left(\frac{E_{k}}{T_{eff}}\right)^{r_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})}}{e^{\nu_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\frac{E_{k}}{T_{eff}}}-1} \tag{14}\]

## 3 Results for randomly generated spectra

Let us start by considering a simple example in which only a single microstate can decay; in particular, consider the specific microstate \(\Pi_{\pi_{1}(g_{n})}^{(100)}\) of the level \(N=100\) given by \[\pi_{1}(g_{n})=\{g_{1}=1,g_{2}=1,g_{5}=2,g_{7}=1,g_{8}=1,g_{9}=1,g_{10}=1,g_{25}=1,g_{28}=1\} \tag{15}\] The first energy point of the spectrum is given by the decay of \(\Pi_{\pi_{1}(g_{n})}^{(100)}\) into a generic microstate of the level \(N^{\prime}=N-1\) and corresponds to \(E_{k}=0\). The value \(E_{k}=0\) is fixed by the kinematics of the emitted tachyon, which makes the first point of the spectrum trivial. The first non-trivial energy point of the spectrum is given by the decay of \(\Pi_{\pi_{1}(g_{n})}^{(100)}\) into a generic microstate of the level \(N^{\prime}{=}N{-}2\), and the consecutive points are obtained from \(N^{\prime}{=}N{-}3\), \(N^{\prime}{=}N{-}4\) and so on. Since the HES of the level \(N^{\prime}\) is generically composed of many degenerate microstates, one can introduce the intrinsic random nature of the process by taking the average over different sets of random microstates for each energy point of the spectrum. In order to make clear the relation between chaos and thermalization, one can adopt the same logic as in (fig. 4); in particular, one can compare the spectrum of the microstate with the maximal number of harmonics with the extreme case of the first Regge trajectory (fig. 13). As expected, the spectrum of a state of the first Regge trajectory is not thermal. A complementary picture of the thermal spectrum of string microstates is given in fig. 14, where many spectra are presented, classified by different microstates and values of the occupation number \(J\). The specific parameters of (14) are reported in Tab. 1.
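The parameters \(C_{N}\), \(r_{N}\), \(\nu_{N}\) of (14), as listed in Tab. 1, can be extracted from such data by a standard least-squares fit; a sketch with hypothetical initial guesses:

```python
import numpy as np
from scipy.optimize import curve_fit

def greybody(x, C, r, nu):
    """Fitting form of eq. (14), with x = E_k / T_eff."""
    return C * x ** r / np.expm1(nu * x)

# Given points (Ek, rate) from the Monte Carlo sketch above,
# with T_eff = TH / sqrt(N):
# x = np.array([Ek for Ek, _ in pts]) * np.sqrt(N) / TH
# y = np.array([rate for _, rate in pts])
# (C_N, r_N, nu_N), _ = curve_fit(greybody, x[1:], y[1:], p0=(1e-9, 4.0, 1.5))
```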
## 4 Conclusion and future directions

In section 1 we have studied the interplay between classical string configurations and quantum string configurations, where the latter are essentially the microstates of the mass degeneracy of the HES. We have observed how the chaotic nature of HES interactions follows a common pattern with the shape of classical string profiles, governed by the number of harmonics that characterizes the microstate. We have presented explicit results for a representative HES at mass level \(N=100\), but the same systematics holds for different mass levels. From the point of view of the analysis of the chaotic information presented in the previous section, we have also studied the evolution of the \(J\)-dependent structure of HES interactions; one can quantitatively improve the measure of chaos with additional parametrizations of the information content, based on modern techniques of quantum information theory [45]-[50].

Figure 12: Thermal spectrum of the representative microstate \(\Pi^{(100)}_{\pi_{1}(g_{n})}\) computed for two different random samples of microstates of \(N^{\prime}\).

Figure 13: Spectra of microstates of \(N=100\). Comparison between the spectrum of the first Regge trajectory (blue points) and the spectrum of the state with the maximal number of harmonics (red points). The black dashed line represents the fitted behavior of the red spectrum.

In section 2 we have analyzed how the thermal nature of the decay amplitude is intimately related to its chaotic nature. In particular, we have observed that the chaotic analytical structure, which appears as a non-trivial dressing factor, gives rise to a Boltzmann factor encoding the thermal information of the decay process. Exploiting the exact analytical result for the decay of a generic HES into a less excited, yet generic, HES through the emission of a scalar particle such as the tachyon, we have computed the energy spectrum of the HES. Starting from a definite microstate of the level \(N\), we computed the average over random microstates of the level \(N^{\prime}\), for many different values of \(N^{\prime}\). In particular, the thermal spectra we found originate from the randomization over all the possible final microstates. This prescription turns out to be connected with the random-walk nature of the HES through the emergence of an effective temperature which scales as \(N^{-\frac{1}{2}}\), the inverse of the characteristic size of the HES. This temperature is the result of the numerical analysis performed in section 3. To be more precise, in section 3 we have presented numerical results for the extracted spectra of many different microstates of the representative level \(N=100\), but the same holds for higher levels. Lower levels, \(N<100\), have fewer accessible energy points of interest. We numerically confirmed that in the expression (14) the coefficient \(\nu_{N}\simeq O(N^{0})\), while \(C_{N}\) and \(r_{N}\) are sensitive to the microstate structure (as we have seen from table 1). As a result we observed how a string of the first Regge trajectory deviates from the thermal spectrum: it is not excited enough to produce a chaotic interaction leading to thermal behavior, while a generic HES produces such behavior (fig. 13).
We also observed a non-trivial dependence of the spectra on the occupation number \(J\), which we want to address quantitatively in future work, together with a fully quantitative computation of the spectrum of degenerate HES, meaning the computation of the linear combination of the same spectra we computed, but for all the possible microstates of the level \(N\).

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \(J\) & \multicolumn{2}{c||}{Microstate} & \(C_{N}\) & \(r_{N}\) & \(\nu_{N}\) \\ \hline \multirow{4}{*}{10} & \(\{g_{2}\)=3, \(g_{12}\)=2, \(g_{3}\)=\(g_{10}\)=\(g_{13}\)=\(g_{20}\)=\(g_{24}\)=1\} & \(1.49\times 10^{-9}\) & 4.38 & 1.64 \\ & \(\{g_{2}\)=\(g_{3}\)=\(g_{4}\)=\(g_{5}\)=\(g_{6}\)=\(g_{13}\)=\(g_{15}\)=\(g_{16}\)=\(g_{17}\)=\(g_{19}\)=1\} & \(4.90\times 10^{-9}\) & 4.70 & 1.60 \\ & \(\{g_{13}\)=2, \(g_{1}\)=\(g_{2}\)=\(g_{4}\)=\(g_{6}\)=\(g_{7}\)=\(g_{12}\)=\(g_{19}\)=\(g_{23}\)=1\} & \(6.23\times 10^{-9}\) & 3.53 & 1.33 \\ \hline \multirow{4}{*}{20} & \(\{g_{2}\)=6, \(g_{3}\)=3, \(g_{5}\)=\(g_{9}\)=2, \(g_{1}\)=\(g_{4}\)=\(g_{6}\)=\(g_{7}\)=\(g_{8}\)=\(g_{12}\)=\(g_{13}\)=1\} & \(1.27\times 10^{-14}\) & 9.84 & 1.74 \\ & \(\{g_{2}\)=8, \(g_{7}\)=2, \(g_{1}\)=4, \(g_{3}\)=\(g_{4}\)=\(g_{11}\)=\(g_{12}\)=\(g_{16}\)=\(g_{20}\)=1\} & \(2.08\times 10^{-14}\) & 11.30 & 2.64 \\ & \(\{g_{1}\)=\(g_{2}\)=3, \(g_{3}\)=4, \(g_{4}\)=3, \(g_{8}\)=2, \(g_{5}\)=\(g_{6}\)=\(g_{7}\)=\(g_{12}\)=\(g_{21}\)=1\} & \(28.9\times 10^{-14}\) & 9.52 & 2.01 \\ \hline \multirow{4}{*}{30} & \(\{g_{3}\)=\(g_{4}\)=\(g_{6}\)=3, \(g_{1}\)=8, \(g_{2}\)=6, \(g_{5}\)=\(g_{7}\)=\(g_{9}\)=1\} & \(1.00\times 10^{-23}\) & 17.02 & 1.97 \\ & \(\{g_{1}\)=8, \(g_{2}\)=11, \(g_{3}\)=\(g_{6}\)=3, \(g_{8}\)=2, \(g_{4}\)=\(g_{5}\)=\(g_{18}\)=1\} & \(7.51\times 10^{-24}\) & 17.18 & 2.56 \\ & \(\{g_{1}\)=11, \(g_{2}\)=7, \(g_{3}\)=3, \(g_{4}\)=\(g_{7}\)=2, \(g_{5}\)=\(g_{6}\)=\(g_{8}\)=\(g_{11}\)=\(g_{14}\)=1\} & \(9.74\times 10^{-22}\) & 15.63 & 2.18 \\ \hline \multirow{4}{*}{40} & \(\{g_{1}\)=15, \(g_{2}\)=9, \(g_{3}\)=7, \(g_{4}\)=4, \(g_{5}\)=3, \(g_{6}\)=\(g_{9}\)=1\} & \(7.44\times 10^{-33}\) & 18.35 & 1.74 \\ & \(\{g_{1}\)=7, \(g_{2}\)=10, \(g_{3}\)=\(g_{4}\)=4, \(g_{6}\)=3, \(g_{5}\)=\(g_{12}\)=1\} & \(6.22\times 10^{-32}\) & 16.75 & 1.78 \\ \cline{1-1} & \(\{g_{1}\)=17, \(g_{2}\)=9, \(g_{3}\)=7, \(g_{6}\)=3, \(g_{7}\)=2, \(g_{4}\)=\(g_{8}\)=1\} & \(6.27\times 10^{-32}\) & 16.75 & 1.78 \\ \hline \end{tabular} \end{table} Table 1: Table of specific microstates and parameters of (14) relative to the spectra of fig. 14.

The chaotic nature of the decay process, together with the non-trivial dependence on the final microstates, gave rise to a greybody emission spectrum with an effective temperature \(T_{eff}=T_{H}/\sqrt{N}\), which is different from the temperature of the string/BH transition [7]. In fact, we recovered the characteristic behavior of the Hawking temperature, which is expected to be proportional to the inverse of the mass of the decaying state. A possible interpretation of this result is an enhancement of the effective string Schwarzschild radius, due to the random-walk nature of HES interactions modeled by the explicit introduction of the microstate dependence. The chaotic nature of HES interactions [27]-[29], and also the associated thermal nature, suggest a non-trivial spatial distribution of the HES, which can be probed in a scattering experiment similar to the analysis in [52].
It is well known that an HES can be described as a random walk [53]-[58]; one can then measure the precise microstate structure of the spatial distribution of the HES by studying the effective horizon probed, for example, in HES Compton-like scattering processes. Applying random-surface techniques [59]-[63] to the HES form factors, one can obtain a complementary picture of the HES nature, in which chaotic and thermal effects can be matched with the effective HES horizon governed by the superposition of microstates. We leave this investigation for future work. Alternatively, a technique to resolve the spatial distribution of strings was proposed quite recently [64], together with a related chaotic analysis of HES Compton scattering [65], based on the principle of transient chaos. The understanding of the intrinsic structure of the HES, along with a complete picture of HES interactions, can be very useful in studying deep microscopic connections between thermalization and chaos. The non-trivial statistical nature of the HES provides a very rich physical system which still deserves to be explored further.

## Acknowledgements

I would like to thank M. Bianchi, G. Rossi, J. Sonnenschein, D. Weissman, V. Rosenhaus, D. Gross, B. Sundborg, A. Tseytlin, G. Di Russo, A. Guerrieri and V. Niarchos for valuable discussions and comments. I would also like to thank The Graduate Center, CUNY for the hospitality during the completion of the manuscript.

## Appendix A Highly excited string decay: \(H_{N}\Rightarrow H_{N^{\prime}}+T\)

This appendix contains a detailed review of, and new insights into, the computation of the decay process based on [20]. We present the analytical setup with the main steps of the computation, and we discuss the behavior of the decay rate in the thermalization region, which is reached when the ratio between the energy of the emitted state and the mass of the decaying state is small enough that the energy loss of the decaying state is smooth. Following the picture in (fig. 15), the kinematics of the process is parametrized as follows \[p=\sqrt{2N-2}(1,\vec{0})\,,\quad p^{\prime}=-(E^{\prime},\omega\sin\theta,\omega\cos\theta,\vec{0})\,,\quad k=(-E_{k},\omega\sin\theta,\omega\cos\theta,\vec{0}) \tag{10}\] \[q=-\frac{(1,0,1,\vec{0})}{\sqrt{2N-2}}\,,\quad\lambda=\frac{(0,1,0,\vec{\Lambda})}{\sqrt{1+|\vec{\Lambda}|^{2}}}\,;\quad q^{\prime}=-\frac{(1,0,1,\vec{0})}{\omega\cos\theta-E^{\prime}}\,,\quad\lambda^{\prime}=\frac{(0,1,0,\vec{\Lambda}^{\prime})}{\sqrt{1+|\vec{\Lambda}^{\prime}|^{2}}} \tag{11}\] where the momenta \(\tilde{p}\) and \(\tilde{p}^{\prime}\) of (fig. 15) are tachyonic DDF reference momenta, while \(p=\tilde{p}-\sum_{n}ng_{n}\,q\) and \(p^{\prime}=\tilde{p}^{\prime}-\sum_{n^{\prime}}n^{\prime}g^{\prime}_{n^{\prime}}\,q^{\prime}\) are the total momenta of the microstates \(\Pi^{(N)}_{\{g_{n}\}}\) and \(\Pi^{(N^{\prime})}_{\{g^{\prime}_{n^{\prime}}\}}\). The nice feature of the DDF formalism lies in the direct identification of the final BRST vertex operator corresponding to the microstate, whose structure is modeled by the number of DDF photon insertions with polarizations \(\lambda_{n}\), \(\lambda^{\prime}_{n}\) and momenta \(q\) and \(q^{\prime}\). The photon insertions are in exact correspondence with each harmonic and every excitation number, exactly reproducing the action of the creation operators.
### Decay amplitude

Let us start by introducing the generating amplitude of all the possible decay processes \(H_{N}\Rightarrow H_{N^{\prime}}+T\), distinguished by all the possible microstates \(\Pi^{(N)}_{\{g_{n}\}}\) and \(\Pi^{(N^{\prime})}_{\{g_{n^{\prime}}\}}\) originating from the partitions of the integers \(N=\sum_{n}ng_{n}\) and \(N^{\prime}=\sum_{n^{\prime}}n^{\prime}g_{n^{\prime}}\).

Figure 15: Picture of the decay amplitude in which a representative microstate \(\Pi^{(N)}_{\{g_{n}\}}\) decays into \(\Pi^{(N^{\prime})}_{\{g^{\prime}_{n^{\prime}}\}}\) emitting a tachyon \(T\). The DDF structure of the microstates is also depicted below the decay process.

The generating amplitude reads \[\begin{split}\mathcal{A}_{gen}=\exp\Bigg{(}\sum_{n;\,a_{n}=1}^{g_{n}}J_{n}^{(a_{n})}.V_{n}+J_{n}^{\prime(a_{n})}.V_{n}^{\prime}+\\ \sum_{n,m;\,a_{n},b_{m}}^{g_{n},g_{m}}J_{n}^{(a_{n})}.J_{m}^{(b_{m})}W_{m,n}+J_{n}^{\prime(a_{n})}.J_{m}^{\prime(b_{m})}W_{m,n}^{\prime}+J_{n}^{(a_{n})}.J_{m}^{\prime(b_{m})}M_{m,n}\Bigg{)}\end{split} \tag{11}\] where all the interaction terms are classified as follows \[\mathcal{V}_{n}^{\mu}={p^{\prime}}^{\mu}V_{n}={p^{\prime}}^{\mu}\frac{(-)^{n+1}}{\Gamma(n)}(1+nq\cdot p^{\prime})_{n-1}\,,\quad{\mathcal{V}^{\prime}_{n}}^{\mu}=p^{\mu}V_{n}^{\prime}=\frac{p^{\mu}}{\Gamma(n)}(1+nq^{\prime}\cdot p)_{n-1} \tag{12}\] \[W_{n,m}=\frac{n\,m}{n+m}(1+q\cdot p^{\prime})\,q\cdot p^{\prime}\,V_{n}\,V_{m}\,,\quad W_{n,m}^{\prime}=\frac{n\,m}{n+m}(1+q^{\prime}\cdot p)\,q^{\prime}\cdot p\,V_{n}^{\prime}\,V_{m}^{\prime} \tag{13}\] \[M_{n,m}=-\frac{nm(1+q\cdot p^{\prime})}{m+nq\cdot p^{\prime}}V_{n}V_{m}^{\prime} \tag{14}\] Any particular decay amplitude can be obtained by acting with derivative combinations that project onto the single amplitude with the desired states, identified by the partition sets \(\{g_{n}\}\) and \(\{g_{n^{\prime}}\}\), as \[\mathcal{A}_{\Pi_{\{g_{n}\}}^{N},\Pi_{\{g_{n^{\prime}}\}}^{N^{\prime}}}=\prod_{n}\prod_{a_{n}=1}^{g_{n}}\zeta_{n}^{(a_{n})}.\frac{d}{dJ_{n}^{(a_{n})}}\prod_{n^{\prime}}\prod_{a_{n^{\prime}}=1}^{g_{n^{\prime}}}\zeta^{\prime(a_{n^{\prime}})}.\frac{d}{dJ_{n^{\prime}}^{(a_{n^{\prime}})}}\,\mathcal{A}_{gen}\Bigg{|}_{J=J^{\prime}=0} \tag{15}\] where all the polarizations \(\zeta_{n\,,\mu}^{(a_{n})}=\lambda_{n\,,\mu}^{(a_{n})}-\lambda_{n}^{(a_{n})}.pq_{\mu}\) and \(\zeta^{\prime(a_{n^{\prime}})}_{n^{\prime}\,,\mu}=\lambda^{\prime(a_{n^{\prime}})}_{n^{\prime}\,,\mu}-\lambda^{\prime(a_{n^{\prime}})}_{n^{\prime}}\cdot p^{\prime}q_{\mu}^{\prime}\) are independent. By considering a kinematical setup where the decaying string is at rest, one can compute the relevant scalar product \(q\cdot p^{\prime}\) that encodes the partition dependence of the interacting states. In particular \[E^{\prime}=M_{N}-E_{k}\,,\quad\omega=\sqrt{E_{k}^{2}-M_{T}^{2}}\,,\quad E_{k}=\frac{N-N^{\prime}-1}{\sqrt{2N-2}} \tag{16}\] and using the kinematics one finds \[q\cdot p^{\prime}=\frac{1}{q^{\prime}\cdot p}=-\frac{E^{\prime}-\omega\cos\theta}{M_{N}}=-1+\frac{E_{k}}{M_{N}}+\frac{\sqrt{2}}{M_{N}}\sqrt{1+\frac{E_{k}^{2}}{2}}\cos\theta \tag{17}\] Without loss of generality one can choose \(\theta=\pi/2\): the final observable will be the decay rate, and once the modulus squared of the amplitude is considered the exact spherical symmetry is restored, so the value of \(\theta\) can be fixed freely.
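For orientation, the two-body kinematics (16)-(18) in \(\alpha^{\prime}=1/2\) units can be packaged in a few lines (our illustrative sketch):

```python
import numpy as np

def kinematics(N, Np):
    """Two-body kinematics of H_N -> H_N' + T at theta = pi/2,
    in alpha' = 1/2 units where M_N^2 = 2N - 2 and M_T^2 = -2."""
    MN = np.sqrt(2 * N - 2)
    Ek = (N - Np - 1) / MN              # energy of the emitted tachyon, eq. (16)
    Eprime = MN - Ek                    # energy of the final HES
    omega = np.sqrt(Ek ** 2 + 2)        # omega = sqrt(Ek^2 - M_T^2)
    q_dot_pprime = -1 + Ek / MN         # eq. (18)
    return Ek, Eprime, omega, q_dot_pprime
```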
With this choice one has \[q\cdot p^{\prime}=-1+\frac{E_{k}}{M_{N}} \tag{18}\] In this framework one can easily analyze the non-trivial contributions in (12), (13) and (14): \[V_{n}=(-)^{n+1}\frac{(1+nq\cdot p^{\prime})_{n-1}}{\Gamma(n)}=\frac{(-)^{n+1}\Gamma\left(n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1-n(1-\frac{E_{k}}{M_{N}})\right)} \tag{19}\] This is the oscillating function that generates chaos; in fact it can be written as \[V_{n}=\frac{1}{\pi\,\Gamma(n)}\Gamma\left(n\frac{E_{k}}{M_{N}}\right)\Gamma\left(n-n\frac{E_{k}}{M_{N}}\right)\sin\left(n\pi\frac{E_{k}}{M_{N}}\right) \tag{108}\] The other term to analyze is \[W_{n,m}=\frac{n\,m}{n+m}(1+q\!\cdot\!p^{\prime})\,q\!\cdot\!p^{\prime}\,V_{n}\,V_{m}=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}-1\right)\,V_{n}\,V_{m} \tag{109}\] and similarly \[V^{\prime}_{n}=\frac{(1+nq^{\prime}\!\cdot\!p)_{n-1}}{\Gamma(n)}=\frac{\Gamma\left(-n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1-n(1+\frac{E_{k}}{M_{N}})\right)} \tag{110}\] where both numerator and denominator are oscillating \[V^{\prime}_{n}=\frac{\Gamma\left(n+n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1+n\frac{E_{k}}{M_{N}}\right)}\frac{\sin\left(n\pi+n\pi\frac{E_{k}}{M_{N}}\right)}{\sin\left(n\pi\frac{E_{k}}{M_{N}}\right)}=(-)^{n}\frac{\Gamma\left(n+n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1+n\frac{E_{k}}{M_{N}}\right)} \tag{111}\] then there is \[W^{\prime}_{n,m}=\frac{n\,m}{n+m}(1+q^{\prime}\!\cdot\!p)\,q^{\prime}\!\cdot\!p\,V^{\prime}_{n}\,V^{\prime}_{m}=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}+1\right)\,V^{\prime}_{n}\,V^{\prime}_{m} \tag{112}\] and finally the mixed term \[M_{n,m}=-\frac{nm(1+q\!\cdot\!p^{\prime})}{m+nq\!\cdot\!p^{\prime}}V_{n}V^{\prime}_{m}=-\frac{nm\frac{E_{k}}{M_{N}}}{m-n+n\frac{E_{k}}{M_{N}}}V_{n}V^{\prime}_{m} \tag{113}\] To sum up, one has the following structures \[{\cal V}^{\mu}_{n}={p^{\prime}}^{\mu}V_{n}(E_{k})\,,\quad{\cal V}^{\prime}{}^{\mu}_{n}=p^{\mu}V^{\prime}_{n}(E_{k})\,,\quad M_{n,m}=\mu_{n,m}(E_{k})V_{n}(E_{k})V^{\prime}_{m}(E_{k}) \tag{114}\] \[W_{n,m}=w_{n,m}(E_{k})V_{n}(E_{k})V_{m}(E_{k})\,,\quad W^{\prime}_{n,m}=w^{\prime}_{n,m}(E_{k})V^{\prime}_{n}(E_{k})V^{\prime}_{m}(E_{k}) \tag{115}\] where \[\mu_{n,m}(E_{k})=-\frac{nm\frac{E_{k}}{M_{N}}}{m-n+n\frac{E_{k}}{M_{N}}}\,,\quad w_{n,m}(E_{k})=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}-1\right) \tag{116}\] \[w^{\prime}_{n,m}(E_{k})=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}+1\right) \tag{117}\] Finally, the general decay amplitude can be written as \[{\cal A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}={\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\left(\zeta^{(a)}_{n}\,,\zeta^{{}^{\prime}\!\,(a^{\prime})}_{n^{\prime}}\,,p\,,p^{\prime}\,,w_{n,m}\,,w^{\prime}_{n^{\prime},m^{\prime}}\,,\mu_{n,n^{\prime}}\right)\,\prod_{n}\left(V_{n}\right)^{g_{n}}\prod_{n^{\prime}}\left(V^{\prime}_{n^{\prime}}\right)^{g_{n^{\prime}}} \tag{118}\] where \({\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\) is a polynomial that depends on the partitions of \(N\) and \(N^{\prime}\) and can be obtained from the generating function \[\begin{split}{\cal P}_{gen}=\exp\Bigg{(}\sum_{n;\,a_{n}=1}^{g_{n}}J^{(a_{n})}_{n}\cdot p^{\prime}+\sum_{n;\,a_{n}=1}^{g_{n}}J^{\prime(a_{n})}_{n}\cdot p+\sum_{n,m;\,a_{n},b_{m}}^{g_{n},g_{m}}J^{(a_{n})}_{n}.J^{\prime(b_{m})}_{m}\mu_{m,n}\\ +\sum_{n,m;\,a_{n},b_{m}}^{g_{n},g_{m}}J^{(a_{n})}_{n}.J^{(b_{m})}_{m}w_{m,n}+\sum_{n,m;\,a_{n},b_{m}}^{g_{n},g_{m}}J^{\prime(a_{n})}_{n}.J^{\prime(b_{m})}_{m}w^{\prime}_{m,n}\Bigg{)}\end{split} \tag{100}\] The important thing to note is that the oscillating terms \(V_{n}\) and \(V^{\prime}_{n}\) factorize from the polynomial. These are the same factors that produce chaos, and they contain most of the information about the microstate dependence.

### Decay rate

The general structure of the absolute value squared of the amplitude is given by \[\left|{\cal A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}\right|^{2}=\left|{\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\left(\zeta_{n}\,,\zeta^{\prime}_{n^{\prime}}\,,p\,,p^{\prime}\,,w_{n,m}\,,w^{\prime}_{n^{\prime},m^{\prime}}\,,\mu_{n,n^{\prime}}\right)\right|^{2}\,\prod_{n}\left(V^{2}_{n}\right)^{g_{n}}\prod_{n^{\prime}}\left(V^{\prime}_{n^{\prime}}\,{}^{2}\right)^{g_{n^{\prime}}} \tag{101}\] where the polynomial is generated by \[{\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}=\prod_{n}\prod_{a_{n}=1}^{g_{n}}\zeta^{(a_{n})}_{n}.\frac{d}{dJ^{(a_{n})}_{n}}\prod_{n^{\prime}}\prod_{a_{n^{\prime}}=1}^{g_{n^{\prime}}}\zeta^{\prime(a_{n^{\prime}})}_{n^{\prime}}.\frac{d}{dJ^{\prime(a_{n^{\prime}})}_{n^{\prime}}}\,{\cal P}_{gen}\Bigg{|}_{J=J^{\prime}=0} \tag{102}\] Taking the absolute value squared and using the completeness relations \[\sum_{pol}\zeta^{(a)\mu}_{n}\zeta^{*(a)\nu}_{n}={\cal L}^{(a)\mu\nu}\,,\quad\sum_{pol}\zeta^{\prime(a)\mu}_{n}\zeta^{\prime*(a)\nu}_{n}={\cal L}^{\prime(a)\mu\nu} \tag{103}\] where the superscript \((a)\) refers to different independent polarizations, with the explicit completeness structures \[{\cal L}^{(a)\mu\nu}=\eta^{\mu\nu}-2p^{(\mu}q^{\nu)}+p^{2}q^{\mu}q^{\nu}\,,\quad{\cal L}^{\prime(a)\mu\nu}=\eta^{\mu\nu}-2p^{\prime(\mu}q^{\prime\nu)}+p^{\prime 2}q^{\prime\mu}q^{\prime\nu} \tag{104}\] one can study the relevant contributions of the polynomial. The first one yields \[p^{\prime}\cdot{\cal L}\cdot p^{\prime}={p^{\prime}}^{2}-2p\cdot p^{\prime}\,q\cdot p^{\prime}+p^{2}(q\cdot p^{\prime})^{2}\,,\quad\mbox{with}\ \,q\cdot p^{\prime}=-1+\frac{E_{k}}{M_{N}} \tag{105}\] which can be written as \[p^{\prime}\cdot{\cal L}\cdot p^{\prime}=(p+p^{\prime})^{2}-2p\cdot p^{\prime}\frac{E_{k}}{M_{N}}-2p^{2}\frac{E_{k}}{M_{N}}+O(E_{k}^{2}) \tag{106}\] Using \(p\cdot p^{\prime}=M_{N}E^{\prime}=M_{N}^{2}-M_{N}E_{k}\), \(p^{2}=-M_{N}^{2}\) and \((p+p^{\prime})^{2}=k^{2}\) one has \[p^{\prime}\cdot{\cal L}\cdot p^{\prime}=k^{2}-2M_{N}E_{k}+2E_{k}^{2}+2M_{N}E_{k}=\omega^{2}+O(E_{k}^{2})\simeq-M_{T}^{2}=2 \tag{107}\] The second contribution is given by \[p\cdot{\cal L}^{\prime}\cdot p=p^{2}-2p\cdot p^{\prime}q^{\prime}\cdot p+{p^{\prime}}^{2}(q^{\prime}\cdot p)^{2}\,,\quad\mbox{with}\ \,q^{\prime}\cdot p=-1-\frac{E_{k}}{M_{N}} \tag{108}\] which yields \[p\!\cdot\!{\cal L}^{\prime}\!\cdot\!p=(p+p^{\prime})^{2}+2p\!\cdot\!p^{\prime}\frac{E_{k}}{M_{N}}+2{p^{\prime}}^{2}\frac{E_{k}}{M_{N}} \tag{114}\] Still using \(p\!\cdot\!p^{\prime}=M_{N}E^{\prime}=M_{N}^{2}-M_{N}E_{k}\), \({p^{\prime}}^{2}=-M_{N^{\prime}}^{2}\) and \((p+p^{\prime})^{2}=k^{2}\) one has \[p\!\cdot\!{\cal L}^{\prime}\!\cdot\!p=k^{2}+2M_{N}E_{k}-2E_{k}^{2}-\frac{M_{N^{\prime}}^{2}}{M_{N}^{2}}2M_{N}E_{k} \tag{115}\] and finally, noting that \[\frac{M_{N^{\prime}}^{2}}{M_{N}^{2}}=1-\frac{E_{k}}{M_{N}}+O(1/M_{N}^{2}) \tag{116}\] the expression becomes \[p\!\cdot\!{\cal L}^{\prime}\!\cdot\!p=k^{2}=2 \tag{117}\] Using the same steps, the last term can be written as \[p^{\prime}\!\cdot\!{\cal L}\!\cdot\!{\cal L}^{\prime}\!\cdot\!p=M_{N}E^{\prime}+M_{N}^{2}q\!\cdot\!p^{\prime}+M_{N^{\prime}}^{2}q^{\prime}\!\cdot\!p=2(1+E_{k}/M_{N})+O(E_{k}^{2}) \tag{118}\] This defines a map from \({\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\) to its absolute value squared, where the latter can be recast into a new polynomial \({\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\) \[\left|{\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\right|^{2}={\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\left(p^{\prime}\!\cdot\!{\cal L}^{(a)}\!\cdot\!p^{\prime},\,p\!\cdot\!{\cal L}^{\prime}{}^{(a)}\!\cdot\!p,\,p^{\prime}\!\cdot\!{\cal L}^{(a)}\!\cdot\!{\cal L}^{\prime}{}^{(b)}\!\cdot\!p,\,w_{n,m},\,w^{\prime}_{n^{\prime},m^{\prime}},\,\mu_{n,n^{\prime}}\right) \tag{119}\] Now let us compare the amplitude with its modulus squared \[{\cal A}_{\Pi_{\{g_{n}\}}^{N},\Pi_{\{g_{n^{\prime}}\}}^{N^{\prime}}}={\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\,\prod_{n}\big{(}V_{n}\big{)}^{g_{n}}\prod_{n^{\prime}}\big{(}V^{\prime}_{n^{\prime}}\big{)}^{g_{n^{\prime}}} \tag{120}\] \[\left|{\cal A}_{\Pi_{\{g_{n}\}}^{N},\Pi_{\{g_{n^{\prime}}\}}^{N^{\prime}}}\right|^{2}={\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\,\prod_{n}\big{(}V_{n}^{2}\big{)}^{g_{n}}\prod_{n^{\prime}}\big{(}V^{\prime}_{n^{\prime}}{}^{2}\big{)}^{g_{n^{\prime}}} \tag{121}\] It is clear that if one takes \(n,n^{\prime}=1\) and \(g_{1}=N,\,g_{1}^{\prime}=N^{\prime}\) then \(V_{1}=V_{1}^{\prime}=1\), and even if the polynomial \({\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\) is very complicated there are no chaotic oscillations. This clearly shows that the polynomial \({\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\) does not contain most of the information about the microstate structures. When \({\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\) is considered, there is still a complicated structure, but it consists of a constant universal leading term plus many other suppressed terms. The crucial point is that, if there is thermalization, the only way to generate it is through the chaotic factors \(\{V_{n}\},\{V^{\prime}_{n^{\prime}}\}\). This is evidence of how chaos is a trigger for thermalization. To see how \({\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\) contributes, let us start by considering \[\left|{\cal P}_{\Pi_{N},\Pi_{N^{\prime}}}\right|^{2}={\cal X}_{\Pi_{N},\Pi_{N^{\prime}}}\left(p^{\prime}\!\cdot\!{\cal L}^{(a)}\!\cdot\!p^{\prime},\,p\!\cdot\!{\cal L}^{\prime}{}^{(a)}\!\cdot\!p,\,p^{\prime}\!\cdot\!{\cal L}^{(a)}\!\cdot\!{\cal L}^{\prime}{}^{(b)}\!\cdot\!p,\,w_{n,m},\,w^{\prime}_{n^{\prime},m^{\prime}},\,\mu_{n,n^{\prime}}\right) \tag{122}\] where \[\mu_{n,m}(E_{k})=-\frac{nm\frac{E_{k}}{M_{N}}}{m-n+n\frac{E_{k}}{M_{N}}}\,,\quad w_{n,m}(E_{k})=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}-1\right) \tag{123}\] \[w^{\prime}_{n,m}(E_{k})=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}\left(\frac{E_{k}}{M_{N}}+1\right) \tag{100}\] When \(E_{k}/M_{N}\) is small, which is the region where the emitted radiation is expected to be thermal, one has \[w^{\prime}_{n,m}=-w_{n,m}=\frac{n\,m}{n+m}\frac{E_{k}}{M_{N}}+O\left(\frac{E_{k}^{2}}{M_{N}^{2}}\right) \tag{101}\] and the other terms are given by (101); therefore, if \(E_{k}/M_{N}\) is small one has \[p^{\prime}\!\cdot\!{\cal L}\!\cdot\!p^{\prime}=p\!\cdot\!{\cal L}^{\prime}\!\cdot\!p=p^{\prime}\!\cdot\!{\cal L}\!\cdot\!{\cal L}^{\prime}\!\cdot\!p\simeq 2=-2\alpha^{\prime}M_{T}^{2} \tag{102}\] and the leading term of the polynomial \({\cal X}\), including the microstate normalization, is \[\mathcal{N}_{\{g_{n}\}}\mathcal{N}^{\prime}_{\{g^{\prime}_{n^{\prime}}\}}\mathcal{X}_{\Pi_{N},\Pi_{N^{\prime}}}\simeq\frac{(-2\alpha^{\prime}M_{T}^{2})^{J+J^{\prime}}}{\prod_{n}n^{g_{n}}g_{n}!\,\prod_{n^{\prime}}{n^{\prime}}^{g_{n^{\prime}}}g_{n^{\prime}}!}+O\left(\frac{E_{k}}{M_{N}}\right) \tag{103}\] so one can absorb the polynomial contribution into a new microstate normalization \[\widetilde{\mathcal{N}}_{\{g_{n}\}}\widetilde{\mathcal{N}}^{\prime}_{\{g^{\prime}_{n^{\prime}}\}}=\frac{1}{\prod_{n}(n/2)^{g_{n}}g_{n}!\,\prod_{n^{\prime}}(n^{\prime}/2)^{g_{n^{\prime}}}g_{n^{\prime}}!} \tag{104}\] Now focusing on the chaotic factors \(V_{n}\) and \(V^{\prime}_{n^{\prime}}\): when \(E_{k}/M_{N}\) is small one can write \[V_{n}=\frac{\Gamma\left(n-n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1-n\frac{E_{k}}{M_{N}}\right)} \tag{105}\] Using the Gauss multiplication formula \[\Gamma(nx)=\frac{\prod_{k=0}^{n-1}\Gamma\left(x+\frac{k}{n}\right)}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}}n^{-nx}} \tag{106}\] one has \[\Gamma\left(n\left(1-\frac{E_{k}}{M_{N}}\right)\right)=\frac{\prod_{k=0}^{n-1}\Gamma\left(1-\frac{E_{k}}{M_{N}}+\frac{k}{n}\right)}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-n}}\,e^{-n\frac{E_{k}}{M_{N}}\log n} \tag{107}\] and since \(E_{k}/M_{N}\) is very small one can approximate the function as \[\Gamma\left(n\left(1-\frac{E_{k}}{M_{N}}\right)\right)=\Gamma(n)\,e^{-n\frac{E_{k}}{M_{N}}\log n} \tag{108}\] so the final result is given by \[V_{n}\simeq\frac{\Gamma\left(n-n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1-n\frac{E_{k}}{M_{N}}\right)}\simeq\frac{e^{-n\frac{E_{k}}{M_{N}}\log n}}{\Gamma\left(1-n\frac{E_{k}}{M_{N}}\right)} \tag{109}\] In a similar fashion the other term can be written as \[V^{\prime}_{n}=(-)^{n}\frac{\Gamma\left(n+n\frac{E_{k}}{M_{N}}\right)}{\Gamma(n)\Gamma\left(1+n\frac{E_{k}}{M_{N}}\right)}\simeq(-)^{n}\,\frac{e^{n\frac{E_{k}}{M_{N}}\log n}}{\Gamma\left(1+n\frac{E_{k}}{M_{N}}\right)} \tag{110}\] This is how the chaotic factors play their role in the thermalization of the decay rate.
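As a quick numerical check of the approximation chain above (our addition, valid for moderate \(n\) with \(nE_{k}/M_{N}<1\), where the Gamma functions neither overflow nor hit poles):

```python
import numpy as np
from scipy.special import gamma as Gamma

def V_exact(n, x):
    """Exact chaotic factor in trigonometric form:
    V_n = Gamma(n x) Gamma(n - n x) sin(n pi x) / (pi Gamma(n)), x = E_k/M_N."""
    return Gamma(n * x) * Gamma(n - n * x) * np.sin(n * np.pi * x) / (np.pi * Gamma(n))

def V_approx(n, x):
    """Small-x approximation: V_n ~ exp(-n x log n) / Gamma(1 - n x)."""
    return np.exp(-n * x * np.log(n)) / Gamma(1 - n * x)

# e.g. n = 20, x = 0.01: the two expressions agree to better than a percent.
print(V_exact(20, 0.01), V_approx(20, 0.01))
```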
### Thermal nature of the decay amplitude

Let us consider the general form of the decay amplitude \[\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}=\mathcal{P}_{\Pi_{N},\Pi_{N^{\prime}}}\,\prod_{n}\big{(}V_{n}\big{)}^{g_{n}}\prod_{n^{\prime}}\big{(}V^{\prime}_{n^{\prime}}\big{)}^{g_{n^{\prime}}} \tag{100}\] In the limit of small \(E_{k}/M_{N}\), it was shown that the decay rate, which is the physical observable, has a simple compact dependence on the polynomial \(\mathcal{X}_{\Pi_{N},\Pi_{N^{\prime}}}\), which can be absorbed into the microstate normalization. This simplification can be translated into the structure of the decay amplitude when only circular polarizations are considered; for example, taking \[\lambda_{+}=\lambda^{\prime}_{+}=(0,1,i,\vec{0}) \tag{101}\] one finds \[\zeta_{n}\!\cdot\!p^{\prime}=\zeta^{\prime}_{n^{\prime}}\!\cdot\!p\simeq-\omega\,,\quad\zeta_{n}\!\cdot\!\zeta_{n}=\zeta^{\prime}_{n^{\prime}}\!\cdot\!\zeta^{\prime}_{n^{\prime}}=\zeta_{n}\!\cdot\!\zeta^{\prime}_{n^{\prime}}=0 \tag{102}\] and the decay amplitude simplifies to \[\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}=\mathcal{N}_{\{g_{n}\}}\mathcal{N}^{\prime}_{\{g^{\prime}_{n^{\prime}}\}}\,(-\omega)^{J+J^{\prime}}\,\prod_{n}\big{(}V_{n}\big{)}^{g_{n}}\prod_{n^{\prime}}\big{(}V^{\prime}_{n^{\prime}}\big{)}^{g_{n^{\prime}}} \tag{103}\] Now using (102) and (103) one finds \[\mathcal{A}_{\Pi^{N}_{\{g_{n}\}},\Pi^{N^{\prime}}_{\{g_{n^{\prime}}\}}}=\mathcal{N}_{\{g_{n}\}}\mathcal{N}^{\prime}_{\{g^{\prime}_{n^{\prime}}\}}\,(-\omega)^{J+J^{\prime}}\ \,e^{-\mathcal{C}_{N}(\{g_{n}\},\{g^{\prime}_{n^{\prime}}\})\frac{E_{k}}{2T_{H}}-\mu_{N}\big{(}\{g_{n}\},\{g^{\prime}_{n^{\prime}}\};E_{k}/T_{H}\big{)}} \tag{104}\] and squaring the decay amplitude one recovers the formula for the decay rate (2). The kinematical simplifications of the decay amplitude just reflect the fact that the bilinear polarization terms give subleading contributions to the decay rate in the limit of small \(E_{k}/M_{N}\). The possibility of computing the most general decay amplitude, with explicit dependence on the microstate structure, leads to the identification of the decay amplitude with a Boltzmann factor, the thermal weight of the interaction between microstates.
2303.12072
The QED four-photon amplitudes off-shell: part 2
This is the second one of a series of four papers devoted to a first calculation of the scalar and spinor QED four-photon amplitudes completely off-shell. We use the worldline formalism which provides a gauge-invariant decomposition for these amplitudes as well as compact integral representations. It also makes it straightforward to integrate out any given photon leg in the low-energy limit, and in the present sequel we do this with two of the four photons. For the special case where the two unrestricted photon momenta are equal and opposite the information on these amplitudes is also contained in the constant-field vacuum polarisation tensors, which provides a check on our results. Although these amplitudes are finite, for possible use as higher-loop building blocks we evaluate all integrals in dimensional regularisation. As an example, we use them to construct the two-loop vacuum polarisation tensors in the low-energy approximation, rederive from those the two-loop $\beta$-function coefficients and analyse their anatomy with respect to the gauge-invariant decomposition. As an application to an external-field problem, we provide a streamlined calculation of the Delbr\"uck scattering amplitudes in the low-energy limit. All calculations are done in parallel for scalar and spinor QED.
Naser Ahmadiniaz, Cristhiam Lopez-Arcos, Misha A. Lopez-Lopez, Christian Schubert
2023-03-21T17:58:30Z
http://arxiv.org/abs/2303.12072v1
# The QED four-photon amplitudes off-shell: part 2

###### Abstract

This is the second one of a series of four papers devoted to a first calculation of the scalar and spinor QED four-photon amplitudes completely off-shell. We use the worldline formalism which provides a gauge-invariant decomposition for these amplitudes as well as compact integral representations. It also makes it straightforward to integrate out any given photon leg in the low-energy limit, and in the present sequel we do this with two of the four photons. For the special case where the two unrestricted photon momenta are equal and opposite the information on these amplitudes is also contained in the constant-field vacuum polarisation tensors, which provides a check on our results. Although these amplitudes are finite, for possible use as higher-loop building blocks we evaluate all integrals in dimensional regularisation. As an example, we use them to construct the two-loop vacuum polarisation tensors in the low-energy approximation, rederive from those the two-loop \(\beta\)-function coefficients and analyse their anatomy with respect to the gauge-invariant decomposition. As an application to an external-field problem, we provide a streamlined calculation of the Delbrück scattering amplitudes in the low-energy limit. All calculations are done in parallel for scalar and spinor QED.
## 1 Introduction

The present paper is the second one in a series of four devoted to a first calculation of the scalar and spinor QED one-loop four-photon amplitudes fully off-shell, using the worldline representation of these amplitudes. In part I [1] we already derived these representations, discussed their properties, and calculated them for the simple limiting case where all four photons are taken in the low-energy (or “Euler-Heisenberg”) limit. In the present sequel, we calculate them explicitly for the case where only photons 3 and 4 are taken in that limit, while 1 and 2 carry arbitrary off-shell momenta, see Fig. 1. The results are given in compact form in terms of the hypergeometric function \({}_{2}F_{1}\) for the dimensionally continued case, and in trigonometric form for \(D=4\). Although to our knowledge this special case of the four-photon amplitudes has not been considered by other authors, for \(k_{1}=-k_{2},k_{3}=-k_{4}\) it is straightforward to extract them from the well-studied photon polarization tensors in a constant field [2; 3; 4; 5; 6; 7; 8; 9], and we use this fact as a check on our calculations. Off-shell legs can be used for creating internal propagators by sewing, or for connecting them to external fields. As an example of the former, we use our results to construct, by sewing the legs 1 and 2 that carry full momentum, the two-loop scalar and spinor QED vacuum polarization tensors in the low-energy limit, see Fig. 2. In this limit the vacuum polarization tensors just reduce to the induced Maxwell terms, from which we can rederive the two-loop \(\beta\)-function coefficients. Apart from providing another check, this also improves on previous work [10] where the worldline formalism had already been applied to the calculation of these coefficients. Although these coefficients had been known for decades [11; 12], that work was motivated by the fact that the formalism allows one to obtain parameter-integral representations that unify all the Feynman diagrams contributing to the quenched (single fermion loop) photon propagator at any loop order, see, e.g., Fig. 3 for the three-loop case.

Figure 1: Four-photon box diagram with two low-energy legs \(k_{4}\) and \(k_{3}\) indicated by empty bullets at their ends. A sum over permutations is understood, as well as the inclusion of seagull diagrams in the scalar QED case.

The interesting feature of the worldline formalism is that the sewing results in parameter integrals that represent not some particular Feynman diagram, but the whole set of Feynman diagrams shown in Fig. 3. And it is precisely this type of sum of diagrams that is known for particularly extensive cancellations between diagrams. In 1967 Johnson, Willey and Baker showed [13] that the quenched vacuum polarisation has, at any loop order, only a single overall UV divergence. And the coefficient of this pole, which is essentially the \(\beta\)-function, turned out to be rational up to the four-loop level (see [14] and refs. therein).
therein). The role of gauge invariance in these cancellations is not transparent, and remains an object of active investigation even at the two-loop level [15]. The worldline formalism was applied to the recalculation of the two-loop scalar and spinor QED \(\beta\)-functions by M.G. Schmidt and one of the authors in [10] (see also [16]) to achieve significant cancellations already at the integrand level. However, those calculations were done using a different approach based on two-loop worldline Green's functions [17], which provides a shortcut but obscures the role of gauge invariance. Only after that work was a refinement of the usual homogenising Bern-Kosower integration-by-parts procedure found that led to a decomposition of the four-photon amplitudes into sixteen individually gauge-invariant contributions (see [18; 19] and part I). This gives us a chance to improve on [10] by asking the following two questions: first, can we identify parts of that decomposition that drop out in the construction of the two-loop \(\beta\)-functions? Second, are there cancellations in their computation beyond what is expected from gauge invariance, that is, not only inside each gauge-invariant structure, but also between them? Thus we will study here the distribution of the \(\beta\)-function coefficients, as well as the cancellation of the double pole in the \(\frac{1}{\epsilon}\) expansion, in the gauge-invariant decomposition derived in part I.

As an application to an external-field problem, among the many processes related to the four-photon amplitudes we have chosen here to reanalyse low-energy Delbruck scattering, the deflection of photons in the Coulomb field of nuclei due to the vacuum polarization (partially motivated also by the fact that the worldline formalism has already been applied extensively to calculations involving constant [20; 21; 7; 16; 22; 23], plane-wave [24; 25] and combinations of the two types of fields [26], but not as yet to Coulomb fields). Delbruck introduced this scattering in 1933 in order to explain the discrepancies Meitner and Kosters had found in their experiment for the Compton scattering on heavy atoms [27]. Later, Bethe and Rohrlich computed the angular distribution for small angles and the total cross section for Delbruck scattering [28]. In 1973 DESY reported a first observation of this scattering in the high-energy, small-angle limit [29], which was in agreement with the prediction made in a series of papers by Cheng and Wu [30; 31; 32; 33] and later confirmed in [34; 35]. In 1975 the Göttingen group performed another experiment, which was the first one where exact predictions based on Feynman diagrams were confirmed with high precision after taking into account other background phenomena like atomic and nuclear Rayleigh scattering [36]. The most accurate high-energy experiment done so far on Delbruck scattering is the one carried out at BINP [37; 38]. Here we will use our results for a short and efficient calculation of the Delbruck scattering cross section in the low-energy limit.

Figure 2: Construction of the two-loop photon propagator from the four-photon amplitude by sewing.

Figure 3: The three-loop quenched photon propagator.

This paper is organized as follows. In Section 2 we briefly review the worldline representation of the scalar and spinor QED four-photon amplitudes, and discuss our general computational strategy in this paper.
In particular, we provide all the integral formulas required to integrate out low-energy photons without having to split these multi-photon integrals into ordered sectors. The central Section is 3, where we list our explicit final results for the amplitudes, in \(D\) as well as in four dimensions, and for both scalar and spinor loops. Section 4 contains the above-mentioned comparison with the amplitudes extracted from the vacuum polarization in a constant field. In Section 5 we recalculate the two-loop \(\beta\)-functions, while Section 6 is devoted to low-energy Delbruck scattering. Finally, Section 7 gives a summary and outline of future work. Supplementary information and formulas are provided in the appendices. ## 2 Worldline representation of the four-photon amplitudes In part I, we presented in detail the derivation and structure of the worldline representation of the \(N\)-photon amplitudes. Due to the inherent freedom in the integration-by-parts procedure, this representation comes in various slightly different versions. For easy reference, let us summarize here the representation of the four-photon amplitudes that we will actually use in the present sequel 1: Footnote 1: Since henceforth we are concerned exclusively with the four-photon case we will now omit the subscript \(N\) on the \(Q\)’s. \[\Gamma_{\rm scal}(k_{1},\varepsilon_{1};\cdots;k_{4},\varepsilon_{4})=\frac{ (-ie)^{4}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2 }}\,{\rm e}^{-m^{2}T}\,\int_{0}^{1}\prod_{i=1}^{4}du_{i}\,Q_{\rm scal}\,\,{\rm e }^{(\cdot)}\,. \tag{2.1}\] Here we have already done the usual rescaling \(\tau_{i}=Tu_{i}\) such that the exponential part is \[{\rm e}^{(\cdot)}\equiv\,{\rm e}^{T\sum_{i<j=1}^{4}G_{ij}k_{i}\cdot k_{j}}\,, \tag{2.2}\] the bosonic Green's functions are \[G_{ij}\equiv G(u_{i},u_{j})=|u_{i}-u_{j}|-(u_{i}-u_{j})^{2}\,, \tag{2.3}\] and the polynomial \(Q_{\rm scal}\) is given by 2 Footnote 2: When comparing with [16] note that there a different basis was used for the four-cycle component \(Q^{4}\). The two bases are related by cyclicity and inversion. \[Q_{\rm scal} = Q_{\rm scal}^{4}+Q_{\rm scal}^{3}+Q_{\rm scal}^{2}+Q_{\rm scal}^{ 22}\,,\] \[Q_{\rm scal}^{4} = \dot{G}(1234)+\dot{G}(2314)+\dot{G}(3124)\,,\] \[Q_{\rm scal}^{3} = \dot{G}(123)T(4)+\dot{G}(234)T(1)+\dot{G}(341)T(2)+\dot{G}(412)T( 3)\,,\] \[Q_{\rm scal}^{2} = \dot{G}(12)T_{sh}(34)+\dot{G}(13)T_{sh}(24)+\dot{G}(14)T_{sh}(23) +\dot{G}(23)T_{sh}(14)+\dot{G}(24)T_{sh}(13)+\dot{G}(34)T_{sh}(12)\,,\] \[Q_{\rm scal}^{22} = \dot{G}(12)\dot{G}(34)+\dot{G}(13)\dot{G}(24)+\dot{G}(14)\dot{G}( 23)\,. \tag{2.4}\] The extraordinary compactness of this representation is made possible through the introduction of the "Lorentz-cycle" \(Z_{n}(i_{1}i_{2}\ldots i_{n})\) as \[\begin{split} Z_{2}(ij)&\equiv\frac{1}{2}{\rm tr} \big{(}f_{i}f_{j}\big{)}=\varepsilon_{i}\cdot k_{j}\varepsilon_{j}\cdot k_{i} -\varepsilon_{i}\cdot\varepsilon_{j}k_{i}\cdot k_{j}\,,\\ Z_{n}(i_{1}i_{2}\ldots i_{n})&\equiv{\rm tr}\Big{(} \prod_{j=1}^{n}f_{i_{j}}\Big{)}\,,\quad(n\geq 3)\,,\end{split} \tag{2.5}\] where \(f_{i}^{\mu\nu}=k_{i}^{\mu}\varepsilon_{i}^{\nu}-k_{i}^{\nu}\varepsilon_{i}^ {\mu}\) is the field strength tensor of photon \(i\), and the "bicycle" \[\dot{G}(i_{1}i_{2}\cdots i_{n}) \equiv \dot{G}_{i_{1}i_{2}}\dot{G}_{i_{2}i_{3}}\cdots\dot{G}_{i_{n}i_{1} }Z_{n}(i_{1}\cdots i_{n})\,. \tag{2.6}\] It is the "tails" that exist in various versions. 
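Since everything in \(Q_{\rm scal}\) is built from the Green's function \(G_{ij}\) and its derivative, with the standard convention \(\dot{G}_{ij}\equiv\partial G_{ij}/\partial u_{i}=\mathrm{sgn}(u_{i}-u_{j})-2(u_{i}-u_{j})\), the algebraic identities used below, such as \(\dot{G}_{ij}^{2}=1-4G_{ij}\), are easy to verify numerically. The following minimal Python sketch (our illustration, not part of the original computation) does so:

```python
import numpy as np

def G(ui, uj):
    """Bosonic worldline Green's function G_ij = |u_i - u_j| - (u_i - u_j)^2."""
    d = ui - uj
    return np.abs(d) - d**2

def Gdot(ui, uj):
    """Derivative with respect to u_i: sgn(u_i - u_j) - 2 (u_i - u_j)."""
    d = ui - uj
    return np.sign(d) - 2.0 * d

rng = np.random.default_rng(0)
ui, uj = rng.uniform(0.0, 1.0, size=(2, 100_000))
# Check the identity Gdot_ij^2 = 1 - 4 G_ij, used repeatedly below.
assert np.allclose(Gdot(ui, uj) ** 2, 1.0 - 4.0 * G(ui, uj))
print("identity  Gdot^2 = 1 - 4 G  confirmed")
```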
For the present computation, we use the one-photon tail \(T(i)\) of the original \(Q\)-representation and the "short tail" \(T_{sh}(ij)\), introduced in part I, as the two-photon tail:

\[T(i) \equiv \sum_{r\neq i}\dot{G}_{ir}\varepsilon_{i}\cdot k_{r}\,, \tag{2.7}\]

\[T_{sh}(ij) \equiv \sum_{r,s\neq i,j}\dot{G}_{ri}\dot{G}_{js}\ \frac{k_{r}\cdot f_{i}\cdot f_{j}\cdot k_{s}}{k_{i}\cdot k_{j}}\,. \tag{2.8}\]

The spinor-loop result is obtained by employing the _Bern-Kosower replacement rule_, i.e. replacing simultaneously every closed (full) cycle \(\dot{G}_{i_{1}i_{2}}\dot{G}_{i_{2}i_{3}}\cdots\dot{G}_{i_{n}i_{1}}\) appearing in the integrand of the scalar loop with

\[\dot{G}_{i_{1}i_{2}}\dot{G}_{i_{2}i_{3}}\cdots\dot{G}_{i_{n}i_{1}}-G_{Fi_{1}i_{2}}G_{Fi_{2}i_{3}}\cdots G_{Fi_{n}i_{1}}\,, \tag{2.9}\]

where \(G_{Fij}=\mathrm{sgn}(u_{i}-u_{j})\) is the fermionic Green's function (here it is understood that \(\dot{G}_{ij}=-\dot{G}_{ji}\) may have to be used to achieve the cycle form). We write the spinor-loop amplitude as

\[\Gamma_{\mathrm{spin}}(k_{1},\varepsilon_{1};\cdots;k_{4},\varepsilon_{4}) = -2\frac{(-ie)^{4}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2}}\,\mathrm{e}^{-m^{2}T}\,\int_{0}^{1}\prod_{i=1}^{4}du_{i}\,Q_{\mathrm{spin}}\ \mathrm{e}^{(\cdot)}\,. \tag{2.10}\]

Thus, apart from a global factor of \(-2\), the only difference to the scalar QED formula (2.1) is the replacement of \(Q_{\mathrm{scal}}\) by \(Q_{\mathrm{spin}}\) according to the rule (2.9). Let us also emphasize once more that equations (2.1), (2.10) are valid off-shell, and that the right-hand sides are manifestly finite term-by-term. The well-known spurious UV-divergences of the four-photon diagrams that usually cancel only in the sum of diagrams would show up here as logarithmic divergences of the \(T\)-integration at \(T=0\), but have been eliminated already at the beginning by the IBP procedure that led from the P-representation to the Q-representation, see part I. To avoid carrying common prefactors we define

\[\hat{\Gamma}_{\rm scal/spin}\equiv\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2}}\,\mathrm{e}^{-m^{2}T}\,\int_{0}^{1}\prod_{i=1}^{4}du_{i}\,Q_{\rm scal/spin}\ \mathrm{e}^{(\cdot)}\,. \tag{2.11}\]

### Case distinction for the low-energy limit of photons \(3\) and \(4\)

As has been mentioned above, in the present part II photons \(3\) and \(4\) are taken in the low-energy limit, which means that in the expressions \(Q_{\mathrm{scal/spin}}\,e^{(\cdot)}\) we only consider those contributions which are linear in both \(k_{3}\) and \(k_{4}\) (for more details about the low-energy limit see part I). For the sake of clarity, in this subsection we list the different cases that appear in our calculations:

* **Case 1**: Terms in \(Q_{\mathrm{scal}}\) which are already linear in both \(k_{3}\) and \(k_{4}\) do not need any further factors of \(k_{3,4}\) from the exponential, so that we can simply replace \(\,\mathrm{e}^{(\cdot)}\to\,\mathrm{e}^{(\cdot)}|_{k_{3},k_{4}\to 0}=e^{TG_{12}k_{1}\cdot k_{2}}\). This includes all the "pure cycle" terms, defined by the absence of tails.
For instance, the four-cycle \(Q_{\mathrm{scal}}^{4}\) \[\dot{G}(1234)\ \mathrm{e}^{(\cdot)}\to\dot{G}(1234)\,\mathrm{e}^{ TG_{12}k_{1}\cdot k_{2}}.\] (2.12) Or the two-two cycles \(Q_{\mathrm{scal}}^{22}\), where it is possible to have both low-energy photons in the same cycle \[\dot{G}(12)\dot{G}(34)\ \mathrm{e}^{(\cdot)}\to\dot{G}(12)\dot{G}(34)\ \mathrm{e}^{ TG_{12}k_{1}\cdot k_{2}},\] (2.13) or distributed among different cycles \[\dot{G}(13)\dot{G}(24)\ \mathrm{e}^{(\cdot)}\to\dot{G}(13)\dot{G}(24)\ \mathrm{e}^{ TG_{12}k_{1}\cdot k_{2}}.\] (2.14) * **Case 2**: Terms in \(Q_{\mathrm{scal}}^{3}\) which contain one of the low-energy momenta, say, \(k_{3}\), in the three-cycle, but lack \(k_{4}\). Those require an expansion of the exponential factor to linear order in \(k_{4}\). For instance, the one-tail term \(\dot{G}(123)T(4)\ \mathrm{e}^{(\cdot)}\) gives \[\dot{G}(123)T(4)\ \mathrm{e}^{(\cdot)}\to\dot{G}(123)\Big{[}\dot{G}_{41} \varepsilon_{4}\cdot k_{1}+\dot{G}_{42}\varepsilon_{4}\cdot k_{2}\Big{]}\Big{[}TG _{14}k_{1}\cdot k_{4}+TG_{24}k_{2}\cdot k_{4}\Big{]}\ \mathrm{e}^{ TG_{12}k_{1}\cdot k_{2}}.\] (2.15) * **Case 3**: Terms in \(Q^{3}_{\rm scal}\) which have two low-energy momenta in the three-cycle, for those we neglect terms in the one-tail that are linear in any of the two low-energy momenta. For instance, the one-tail term \(\dot{G}(234)T(1)\ {\rm e}^{(\cdot)}\) gives \[\dot{G}(234)T(1)\ {\rm e}^{(\cdot)}\to\dot{G}(234)\dot{G}_{12}\varepsilon_{1} \cdot k_{2}\,{\rm e}^{TG_{12}k_{1}\cdot k_{2}}.\] (2.16) However, it turns out that the contributions of these particular terms vanish after integration over \(u_{3}\) and \(u_{4}\). * **Case 4**: Terms in \(Q^{2}_{\rm scal}\) which lack both \(k_{3}\) and \(k_{4}\). Such terms occur only in the two-tail term \(\dot{G}(12)T_{\rm sh}(34)\ {\rm e}^{(\cdot)}\). Here the exponential factor must be expanded to linear order in both \(k_{3}\) and \(k_{4}\), \[\dot{G}(12)T_{\rm sh}(34)\ {\rm e}^{(\cdot)}\to\dot{G}(12)T_{\rm sh}(34) \left(TG_{34}k_{3}\cdot k_{4}+T^{2}\sum_{i,j=1}^{2}G_{i4}k_{i}\cdot k_{4}G_{j 3}k_{j}\cdot k_{3}\right)\ {\rm e}^{TG_{12}k_{1}\cdot k_{2}}.\] (2.17) * **Case 5**: Terms in \(Q^{2}_{\rm scal}\) which have only one of the low-energy momenta in the two-cycle. For instance, the two-tail term \(\dot{G}(13)T_{\rm sh}(24)\ {\rm e}^{(\cdot)}\) gives \[\dot{G}(13)T_{\rm sh}(24)\ {\rm e}^{(\cdot)}\to\dot{G}(13)\dot{G}_{12}\dot{G}_{4 1}\frac{k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}}{k_{2}\cdot k_{4}}\Big{[}TG_{ 14}k_{1}\cdot k_{4}+TG_{24}k_{2}\cdot k_{4}\Big{]}\ {\rm e}^{TG_{12}k_{1}\cdot k_{2}}\] (2.18) where we took the terms linear in \(k_{4}\) from the exponential. * **Case 6**: Terms in \(Q^{2}_{\rm scal}\) which are already quadratic in at least one of the low-energy momenta can be discarded. For instance, in the two-tail term \(\dot{G}(34)T_{\rm sh}(12)\ {\rm e}^{(\cdot)}\) we have terms like \[\dot{G}(34)\dot{G}_{31}\dot{G}_{24}\frac{k_{3}\cdot f_{1}\cdot f_{2}\cdot k_{ 4}}{k_{1}\cdot k_{2}}\ {\rm e}^{(\cdot)}\to 0\,.\] (2.19) In particular, all terms in \(\dot{G}(34)T_{\rm sh}(12)\,{\rm e}^{(\cdot)}\) can be neglected. Thus we see that, in the limit of two low-energy photons which is our object of interest in this paper, there are many terms that drop out of the amplitude at the integrand level, and others that drop out after integration. The transition to spinor QED does not lead to any new considerations. 
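In practice, the low-energy projection is just the extraction of the part of \(Q\,{\rm e}^{(\cdot)}\) that is linear in both \(k_{3}\) and \(k_{4}\), and this is easily mechanized. The toy sympy sketch below (our illustration; the scalars `t3`, `t4` standing in for the various contractions with \(k_{3},k_{4}\) are an assumption of the toy model) reproduces the structure of Case 4, where both factors must come from the exponential:

```python
import sympy as sp

t3, t4, a3, a4, b = sp.symbols('t3 t4 a3 a4 b')

# Toy exponential: one power of t3 (t4) for each scalar product containing
# k_3 (k_4); b collects the k_3-, k_4-independent part T G_12 k_1.k_2.
expo = sp.exp(b + a3 * t3 + a4 * t4 + t3 * t4)

# Keep only the part linear in both t3 and t4: one derivative in each,
# then set t3 = t4 = 0.
doubly_linear = sp.diff(expo, t3, t4).subs({t3: 0, t4: 0})
print(sp.expand(doubly_linear))   # -> exp(b) + a3*a4*exp(b)
```

The two surviving pieces mirror the two structures in (2.17): one factor \(k_{3}\cdot k_{4}\) taken directly, plus the product of two single contractions.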
### Low-energy limit for two of the photons: computational examples

Next, let us explain our strategy for computing the four-photon amplitude with two low-energy photons, using some sample terms from both \(Q_{\rm scal}\) and \(Q_{\rm spin}\).

**Example 1**: Let us consider the following one-tail term:

\[Q^{3}_{\rm scal}(123;4)=\dot{G}_{12}\dot{G}_{23}\dot{G}_{31}Z_{3}(123)\left(\dot{G}_{41}\varepsilon_{4}\cdot k_{1}+\dot{G}_{42}\varepsilon_{4}\cdot k_{2}+\dot{G}_{43}\varepsilon_{4}\cdot k_{3}\right). \tag{2.20}\]

In order to take the low-energy limit for leg 4, we define

\[Q^{3}_{{\rm scal}(4)}(123;4)\equiv\int_{0}^{1}du_{4}\ Q^{3}_{\rm scal}(123;4)\ \left.{\rm e}^{(\cdot)}\right|_{{\rm lin}\ k_{4}}\,, \tag{2.21}\]

where in the full exponent we single out those terms which contain \(k_{4}\),

\[{\rm e}^{(\cdot 4)}\equiv\,{\rm e}^{T(G_{14}k_{1}\cdot k_{4}+G_{24}k_{2}\cdot k_{4}+G_{34}k_{3}\cdot k_{4})}\,, \tag{2.22}\]

such that

\[{\rm e}^{(\cdot)}=\,{\rm e}^{(\cdot 4)}\ {\rm e}^{T(G_{12}k_{1}\cdot k_{2}+G_{13}k_{1}\cdot k_{3}+G_{23}k_{2}\cdot k_{3})}\,. \tag{2.23}\]

Here, for convenience, we set the following convention for low-energy legs: on the left-hand side of expression (2.21) the subscript '(4)' indicates that leg number 4, with momentum \(k_{4}\), is to be taken in the low-energy limit and integrated out. In Appendix A we give a list of all the occurring integrals (up to permutations), which allows us to perform the integral over \(u_{4}\) without fixing an ordering for the remaining photon legs (for efficient techniques to derive such formulas see, e.g., [39]). Since in this example the prefactor contains no factors of \(k_{4}\), we take the required linear term from the exponential factor and use Eq. (A.5) to integrate

\[\begin{split} Q^{3}_{\text{scal}(4)}(123;4)&=\frac{T}{3}Z_{3}(123)\,\dot{G}_{12}\dot{G}_{23}\dot{G}_{31}\left(\sum_{i=1}^{3}\dot{G}_{1i}G_{i1}\,k_{i}\cdot k_{4}\,\varepsilon_{4}\cdot k_{1}\right.\\ &\qquad\qquad\left.+\sum_{i=1}^{3}\,\dot{G}_{2i}G_{i2}\,k_{i}\cdot k_{4}\,\varepsilon_{4}\cdot k_{2}+\sum_{i=1}^{3}\dot{G}_{3i}G_{i3}\,k_{i}\cdot k_{4}\,\varepsilon_{4}\cdot k_{3}\right)\,.\end{split} \tag{2.24}\]

Combining terms, we can write the result in a manifestly gauge-invariant form

\[\begin{split} Q^{3}_{\text{scal}(4)}(123;4)&=\frac{T}{3}Z_{3}(123)\dot{G}_{12}\dot{G}_{23}\dot{G}_{31}\Big{(}\dot{G}_{12}G_{21}\,k_{2}\cdot f_{4}\cdot k_{1}\\ &\qquad\qquad+\,\dot{G}_{23}G_{32}\,k_{3}\cdot f_{4}\cdot k_{2}+\dot{G}_{31}G_{13}\,k_{1}\cdot f_{4}\cdot k_{3}\Big{)}\,.\end{split} \tag{2.25}\]

Next we repeat all this with photon number 3. We define

\[\begin{split} Q^{3}_{\text{scal}(34)}(123;4)&\equiv\int_{0}^{1}du_{3}\int_{0}^{1}du_{4}\,\,Q^{3}_{\text{scal}}(123;4)\,\,\text{e}^{(\cdot 3)}\,\text{e}^{(\cdot 4)}\Big{|}_{\text{lin}\ k_{4},k_{3}}\\ &=\int_{0}^{1}du_{3}\,\,Q^{3}_{\text{scal}(4)}(123;4)\,\,\text{e}^{(\cdot 3)}\Big{|}_{\text{lin}\ k_{3}}\,,\end{split} \tag{2.26}\]

where

\[\text{e}^{(\cdot 3)}\equiv\,\text{e}^{T(G_{13}k_{1}\cdot k_{3}+G_{23}k_{2}\cdot k_{3})}\,. \tag{2.27}\]

Similarly to the above, the subscript '(34)' in equation (2.26) indicates that legs number 3 and 4 are both projected onto their low-energy limits. Note that some terms in (2.25) are quadratic in \(k_{3}\) and can be neglected.
We use (A.4) to perform the integral over \(u_{3}\) and, with the aid of the identity \(\dot{G}^{2}_{ij}=1-4G_{ij}\), write the result as \[Q^{3}_{\text{scal}(34)}(123;4)=-\frac{T}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{ 1}\left(G_{12}-10G_{12}^{2}+24G_{12}^{3}\right). \tag{2.28}\] Therefore at this stage the contribution of \(Q^{3}_{\text{scal}}(123;4)\) to the four-photon amplitude with legs 3 and 4 at low energy is given by \[\hat{\Gamma}^{3}_{\text{scal}(34)}(123;4)=\int_{0}^{\infty}\frac{dT}{T}\,T^{ 4-\frac{D}{2}}\,\text{e}^{-m^{2}T}\,\int_{0}^{1}du_{1}du_{2}\,Q^{3}_{\text{scal }(34)}(123;4)\,\,\text{e}^{TG_{12}k_{1}\cdot k_{2}}\,. \tag{2.29}\] This leads us to define \[Y_{nl}\equiv\int_{0}^{\infty}\frac{dT}{T}\,T^{n-D/2}\,\,\int_{0}^{1}du_{1} \int_{0}^{1}du_{2}\,\,G^{l}_{12}\,\text{e}^{-T[m^{2}-G_{12}k_{1}\cdot k_{2}]}\,. \tag{2.30}\] The proper-time integral \(T\) is elementary and, due to the unbroken translation invariance along the loop, one of the two parameters \(u_{1},u_{2}\) can be fixed at some arbitrary value, such as \(u_{2}=0\) and \(u_{1}=u\). To the remaining \(u\) integral we apply a tensor reduction procedure, explained in A.2, to arrive at the following expression, \[Y_{nl}=\frac{\Gamma\left(n-\frac{D}{2}-l\right)}{m^{2n-D}}\,\,\frac{d^{l}}{d \hat{k}^{l}_{12}}\,\,_{2}F_{1}\left(1,n-l-\frac{D}{2};\frac{3}{2};\frac{\hat{ k}_{12}}{4}\right), \tag{2.31}\] where \(\hat{k}_{12}\equiv\frac{k_{1}\cdot k_{2}}{m^{2}}\) and \({}_{2}F_{1}(a,b;c;x)\) is the Gauss hypergeometric function. With these definitions, (2.28) and (2.29) transform into \[\hat{\Gamma}^{3}_{\text{scal}(34)}(123;4)=-\frac{1}{9}Z_{3}(123)\,k_{2}\cdot f_ {4}\cdot k_{1}\left(Y_{51}-10Y_{52}+24Y_{53}\right)\,. \tag{2.32}\] Proceeding to the spinor QED case, since for the term at hand the integration over \(u_{4}\) does not interfere with the application of the Bern-Kosower replacement rule (2.9), the term corresponding to (2.25) reads \[\begin{split} Q^{3}_{\text{spin}(4)}(123;4)&=\frac{T} {3}\Big{(}\dot{G}_{12}\dot{G}_{23}\dot{G}_{31}-G_{F12}G_{F23}G_{F31}\Big{)}Z_{ 3}(123)\\ &\times\Big{(}\dot{G}_{12}G_{21}k_{2}\cdot f_{4}\cdot k_{1}+\dot{ G}_{23}G_{32}k_{3}\cdot f_{4}\cdot k_{2}+\dot{G}_{31}G_{13}k_{1}\cdot f_{4}\cdot k _{3}\Big{)}\,.\end{split} \tag{2.33}\] It would seem that we now have to extend our table of integrals in A to integrals involving both \(\dot{G}_{ij}\) and \(G_{Fij}\), but we can avoid this by eliminating the latter in favor of the former. This can be done using the basic identities \[G^{2}_{Fij}=1,\quad G_{Fij}G_{Fjk}G_{Fki}=-\dot{G}_{ij}-\dot{G}_{jk}-\dot{G}_{ ki} \tag{2.34}\] which transforms (2.33) into \[\begin{split} Q^{3}_{\text{spin}(4)}(123;4)&=\frac{T }{3}\Big{(}\dot{G}_{12}\dot{G}_{23}\dot{G}_{31}+\dot{G}_{12}+\dot{G}_{23}+\dot {G}_{31}\Big{)}Z_{3}(123)\\ &\times\Big{(}\dot{G}_{12}G_{21}k_{2}\cdot f_{4}\cdot k_{1}+\dot {G}_{23}G_{32}k_{3}\cdot f_{4}\cdot k_{2}+\dot{G}_{31}G_{13}k_{1}\cdot f_{4} \cdot k_{3}\Big{)}\,.\end{split} \tag{2.35}\] Proceeding as in the scalar case, one finds that the spinor-QED equivalent of (2.32) becomes \[\hat{\Gamma}^{3}_{\text{spin}(34)}(123;4) = \frac{2}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{1}\left(Y_{51}-Y_{ 52}-12Y_{53}\right)\,, \tag{2.36}\] as the reader can easily verify using formulas of A. 
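The representation (2.31) can be validated numerically: performing the elementary \(T\)-integral in (2.30) first leaves a one-dimensional \(u\)-integral that can be compared directly with the hypergeometric formula. A minimal mpmath sketch of such a cross-check (our illustration; we set \(m^{2}=1\) and pick a value of \(\hat{k}_{12}\) below threshold):

```python
from mpmath import mp, mpf, gamma, hyp2f1, quad, diff, almosteq

mp.dps = 30
D = mpf('3.7')        # dimensionally continued, keeps all Gamma factors finite
khat = mpf('0.5')     # khat_12 = k_1.k_2 / m^2, with m^2 = 1

def Y_direct(n, l):
    # Eq. (2.30) with the T-integral done analytically and u_2 = 0, u_1 = u.
    G = lambda u: u * (1 - u)
    return gamma(n - D/2) * quad(
        lambda u: G(u)**l * (1 - G(u) * khat)**(D/2 - n), [0, 1])

def Y_formula(n, l):
    # Eq. (2.31): l-th khat-derivative of 2F1(1, n-l-D/2; 3/2; khat/4).
    f = lambda x: hyp2f1(1, n - l - D/2, mpf(3)/2, x/4)
    return gamma(n - D/2 - l) * diff(f, khat, l)

for n, l in [(4, 0), (4, 1), (5, 2), (5, 3)]:
    assert almosteq(Y_direct(n, l), Y_formula(n, l), rel_eps=mpf('1e-15'))
print("Y_nl integrals agree with the 2F1 representation")
```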
**Example 2**: Let us consider another one-tail term,

\[Q^{3}_{\text{scal}}(234;1)=Z_{3}(234)\,\dot{G}_{23}\dot{G}_{34}\dot{G}_{42}\left(\dot{G}_{12}\varepsilon_{1}\cdot k_{2}+\dot{G}_{13}\varepsilon_{1}\cdot k_{3}+\dot{G}_{14}\varepsilon_{1}\cdot k_{4}\right)\,. \tag{2.37}\]

Notice that it contains one term quadratic in \(k_{4}\) and two terms linear in \(k_{4}\). Omitting the former and integrating the latter over \(u_{4}\) using (A.4), we obtain

\[Q^{3}_{\text{scal}(4)}(234;1)=Z_{3}(234)\,\dot{G}_{23}\left(\frac{1}{6}-\frac{1}{2}\dot{G}_{32}^{2}\right)\left(\dot{G}_{12}\varepsilon_{1}\cdot k_{2}+\dot{G}_{13}\varepsilon_{1}\cdot k_{3}\right)\,. \tag{2.38}\]

Thus we have a term quadratic in \(k_{3}\) and a term linear in \(k_{3}\), and for the latter the integral over \(u_{3}\) turns out to vanish. Therefore we get nothing,

\[\hat{\Gamma}^{3}_{\text{scal}(34)}(234;1)=0\,. \tag{2.39}\]

We leave it to the reader to check that the additional terms generated by applying the Bern-Kosower replacement rule to the (234) cycle vanish, too. Therefore

\[\hat{\Gamma}^{3}_{\text{spin}(34)}(234;1)=0\,. \tag{2.40}\]

**Example 3**: As our final example, we consider the following 4-cycle term in the spinor QED integrand,

\[Q^{4}_{\text{spin}}(1234)=Z_{4}(1234)\Big{(}\dot{G}_{12}\dot{G}_{23}\dot{G}_{34}\dot{G}_{41}-G_{F12}G_{F23}G_{F34}G_{F41}\Big{)}\,. \tag{2.41}\]

To reexpress the spin contribution \(G_{F12}G_{F23}G_{F34}G_{F41}\), we insert a factor \(1=-G_{F13}G_{F31}\), and once more use the identity \(G_{Fij}G_{Fjk}G_{Fki}=-\dot{G}_{ij}-\dot{G}_{jk}-\dot{G}_{ki}\). This leads to

\[Q^{4}_{\text{spin}(4)}(1234)=Z_{4}(1234)\int_{0}^{1}du_{4}\Big{[}\dot{G}_{12}\dot{G}_{23}\dot{G}_{34}\dot{G}_{41}+\Big{(}\dot{G}_{12}+\dot{G}_{23}+\dot{G}_{31}\Big{)}\left(\dot{G}_{13}+\dot{G}_{34}+\dot{G}_{41}\right)\Big{]}\,. \tag{2.42}\]

Upon integration we find, after some cancellations,

\[Q^{4}_{\text{spin}(4)}(1234)=Z_{4}(1234)\left[\dot{G}_{12}\dot{G}_{23}\left(\frac{1}{6}-\frac{1}{2}\dot{G}_{31}^{2}\right)+\Big{(}\dot{G}_{12}+\dot{G}_{23}+\dot{G}_{31}\Big{)}\,\dot{G}_{13}\right]\,. \tag{2.43}\]

Integrating over \(u_{3}\) we obtain

\[Q^{4}_{\rm spin(34)}(1234)=-\frac{4}{3}Z_{4}(1234)\left(G_{12}+2G_{12}^{2}\right)\,. \tag{2.44}\]

Thus, with the notation (2.30), the contribution of this term to the amplitude reads

\[\hat{\Gamma}^{4}_{\rm spin(34)}(1234)=-\frac{4}{3}Z_{4}(1234)\left(Y_{41}+2Y_{42}\right)\,. \tag{2.45}\]

## 3 Results: low-energy limit for two of the photons

In this section, we present our results for the complete four-photon amplitudes in both scalar and spinor QED, off-shell but under the restriction that photons 3 and 4 are taken in the low-energy limit, i.e., we consider only contributions that are linear in \(k_{3}\) and \(k_{4}\) (see part I). These amplitudes represent the main result of the present article. In the worldline formalism, they appear naturally decomposed as

\[\hat{\Gamma}_{{\rm scal/spin}(34)}=\hat{\Gamma}^{4}_{{\rm scal/spin}(34)}+\hat{\Gamma}^{3}_{{\rm scal/spin}(34)}+\hat{\Gamma}^{2}_{{\rm scal/spin}(34)}+\hat{\Gamma}^{22}_{{\rm scal/spin}(34)} \tag{3.1}\]

with

\[\hat{\Gamma}^{i}_{{\rm scal/spin}(34)}=\int_{0}^{1}du_{1}\int_{0}^{1}du_{2}\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2}}\,{\rm e}^{-T(m^{2}-G_{12}k_{1}\cdot k_{2})}\,Q^{i}_{{\rm scal/spin}(34)}\,,\qquad i=4,3,2,22\,.
\tag{3.2}\]

Recall that the subscript \((34)\) in \(\hat{\Gamma}^{i}_{(34)}\) indicates that the integrals over \(u_{3}\) and \(u_{4}\) were performed taking the low-energy limit for the corresponding photons. The absolute normalizations of the amplitudes are given by

\[\Gamma_{\rm scal(34)}(k_{1},\varepsilon_{1};\cdots;k_{4},\varepsilon_{4}) = \frac{e^{4}}{(4\pi)^{\frac{D}{2}}}\hat{\Gamma}_{\rm scal(34)}\,, \tag{3.3}\]

\[\Gamma_{\rm spin(34)}(k_{1},\varepsilon_{1};\cdots;k_{4},\varepsilon_{4}) = -2\frac{e^{4}}{(4\pi)^{\frac{D}{2}}}\hat{\Gamma}_{\rm spin(34)}\,. \tag{3.4}\]

We give them first in dimensionally regularized form, and then in \(D=4\). Since the worldline representation of these amplitudes is manifestly finite, taking this limit is trivial and will not involve a cancellation of spurious \(1/\epsilon\) poles as would be the case in a Feynman diagram calculation (in our conventions \(\epsilon=D-4\)).

### 3.1 Dimensionally regularized amplitudes

The integrals appearing in (3.2) are all of the form of (2.30), so that we choose to express them in terms of the functions \(Y_{nl}\) that have been given in terms of \({}_{2}F_{1}\) in (2.31). We note that this implies some redundancy since, as shown in A.2, there exist recursion relations between the \(Y_{nl}\) functions that would also allow us to rewrite all our results entirely in terms of the \(Y_{n0}\). However, the resulting representation would be less compact than the one given in the following.

#### 3.1.1 Scalar QED

For \(Q^{4}_{\rm scal}\)

\[\hat{\Gamma}^{4}_{\rm scal(34)}(1234) = \frac{2}{3}Z_{4}(1234)\left(Y_{41}-4Y_{42}\right)\,, \tag{3.5}\]
\[\hat{\Gamma}^{4}_{\rm scal(34)}(2314) = \frac{1}{9}Z_{4}(2314)\left(Y_{40}-12Y_{41}+36Y_{42}\right)\,, \tag{3.6}\]
\[\hat{\Gamma}^{4}_{\rm scal(34)}(3124) = \frac{2}{3}Z_{4}(3124)\left(Y_{41}-4Y_{42}\right)\,. \tag{3.7}\]

For \(Q_{\rm scal}^{3}\)

\[\hat{\Gamma}_{\rm scal(34)}^{3}(123;4) = -\frac{1}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{1}\left(Y_{51}-10Y_{52}+24Y_{53}\right)\,, \tag{3.8}\]
\[\hat{\Gamma}_{\rm scal(34)}^{3}(234;1) = 0\,, \tag{3.9}\]
\[\hat{\Gamma}_{\rm scal(34)}^{3}(341;2) = 0\,, \tag{3.10}\]
\[\hat{\Gamma}_{\rm scal(34)}^{3}(412;3) = -\frac{1}{9}Z_{3}(412)k_{2}\cdot f_{3}\cdot k_{1}\left(Y_{51}-10Y_{52}+24Y_{53}\right)\,. \tag{3.11}\]

For \(Q_{\rm scal}^{2}\)

\[\begin{split}\hat{\Gamma}_{\rm scal(34)}^{2}(12;34) &=-\frac{1}{18}Z_{2}(12)\Bigg{\{}\left[\frac{1}{5}\left(Y_{50}-4Y_{51}\right)-6\left(Y_{52}-4Y_{53}\right)\right]k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{2}\\ &\qquad+\frac{1}{5}\left(Y_{50}-4Y_{51}\right)k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{1}+(1\leftrightarrow 2)\Bigg{\}}\\ &\quad-\frac{1}{9}Z_{2}(12)\left(Y_{62}-8Y_{63}+16Y_{64}\right)k_{1}\cdot f_{3}\cdot k_{2}\,k_{1}\cdot f_{4}\cdot k_{2}\,,\end{split} \tag{3.12}\]

\[\hat{\Gamma}_{\rm scal(34)}^{2}(13;24)+\hat{\Gamma}_{\rm scal(34)}^{2}(23;14)=-\frac{1}{9}Z_{2}(13)k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}\,\left(Y_{51}-4Y_{52}\right)+(1\leftrightarrow 2)\,, \tag{3.13}\]

\[\hat{\Gamma}_{\rm scal(34)}^{2}(14;23)+\hat{\Gamma}_{\rm scal(34)}^{2}(24;13)=-\frac{1}{9}Z_{2}(14)k_{1}\cdot f_{2}\cdot f_{3}\cdot k_{1}\,\left(Y_{51}-4Y_{52}\right)+(1\leftrightarrow 2)\,, \tag{3.14}\]

\[\hat{\Gamma}_{\rm scal(34)}^{2}(34;12)=0\,.
\tag{3.15}\]

And finally for \(Q_{\rm scal}^{22}\)

\[\hat{\Gamma}_{\rm scal(34)}^{22}(12,34) = \frac{1}{3}Z_{2}(12)Z_{2}(34)\left(Y_{40}-4Y_{41}\right)\,, \tag{3.16}\]
\[\hat{\Gamma}_{\rm scal(34)}^{22}(13,24) = \frac{1}{9}Z_{2}(13)Z_{2}(24)Y_{40}\,, \tag{3.17}\]
\[\hat{\Gamma}_{\rm scal(34)}^{22}(14,23) = \frac{1}{9}Z_{2}(14)Z_{2}(23)Y_{40}\,. \tag{3.18}\]

#### 3.1.2 Spinor QED

For \(Q_{\rm spin}^{4}\)

\[\hat{\Gamma}_{\rm spin(34)}^{4}(1234) = -\frac{4}{3}Z_{4}(1234)\left(Y_{41}+2Y_{42}\right)\,, \tag{3.19}\]
\[\hat{\Gamma}_{\rm spin(34)}^{4}(2314) = -\frac{4}{9}Z_{4}(2314)\left(2Y_{40}-6Y_{41}-9Y_{42}\right)\,, \tag{3.20}\]
\[\hat{\Gamma}_{\rm spin(34)}^{4}(3124) = -\frac{4}{3}Z_{4}(3124)\left(Y_{41}+2Y_{42}\right)\,. \tag{3.21}\]

For \(Q_{\rm spin}^{3}\)

\[\hat{\Gamma}_{\rm spin(34)}^{3}(123;4) = \frac{2}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{1}\left(Y_{51}-Y_{52}-12Y_{53}\right)\,, \tag{3.22}\]
\[\hat{\Gamma}_{\rm spin(34)}^{3}(234;1) = 0\,, \tag{3.23}\]
\[\hat{\Gamma}_{\rm spin(34)}^{3}(341;2) = 0\,, \tag{3.24}\]
\[\hat{\Gamma}_{\rm spin(34)}^{3}(412;3) = \frac{2}{9}Z_{3}(412)k_{2}\cdot f_{3}\cdot k_{1}\left(Y_{51}-Y_{52}-12Y_{53}\right)\,. \tag{3.25}\]

For \(Q^{2}_{\rm spin}\)

\[\hat{\Gamma}^{2}_{\rm spin(34)}(12;34) =\frac{2}{9}Z_{2}(12)\left\{\,\left[\frac{1}{5}Y_{51}k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{1}+\left(\frac{1}{5}Y_{51}-6Y_{53}\right)k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{2}+(1\leftrightarrow 2)\right]+\cdots\right\}\,. \tag{3.26}\]

### 3.2 The amplitudes in \(D=4\)

#### 3.2.1 Scalar QED

For \(Q_{\rm scal}^{2}\)

\[\begin{split}\hat{\Gamma}_{\rm scal(34)}^{2}(12;34) &=\frac{-1}{90}Z_{2}(12)\Bigg{\{}\Bigg{[}\frac{2(-\hat{k}_{12}+2)-16p_{0}}{m^{6}(\hat{k}_{12}-4)\hat{k}_{12}}\,k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{1}\\ &\qquad+P_{1}\,k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{2}+(1\leftrightarrow 2)\Bigg{]}+10P_{2}\ k_{2}\cdot f_{4}\cdot k_{1}\,k_{2}\cdot f_{3}\cdot k_{1}\Bigg{\}}\,,\\ \hat{\Gamma}_{\rm scal(34)}^{2}(13;24) &=-\frac{2(\hat{k}_{12}-6)-16p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{2}(13)k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}\,,\\ \hat{\Gamma}_{\rm scal(34)}^{2}(23;14) &=-\frac{2(\hat{k}_{12}-6)-16p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{2}(23)k_{2}\cdot f_{1}\cdot f_{4}\cdot k_{2}\,,\\ \hat{\Gamma}_{\rm scal(34)}^{2}(14;23) &=-\frac{2(\hat{k}_{12}-6)-16p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{2}(14)k_{1}\cdot f_{2}\cdot f_{3}\cdot k_{1}\,,\\ \hat{\Gamma}_{\rm scal(34)}^{2}(24;31) &=-\frac{2(\hat{k}_{12}-6)-16p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{2}(24)k_{2}\cdot f_{1}\cdot f_{3}\cdot k_{2}\,.\end{split} \tag{3.36}\]

And finally, for \(Q_{\rm scal}^{22}\)

\[\begin{split}\hat{\Gamma}_{\rm scal(34)}^{22}(12,34) &=-\frac{2-8p_{0}}{3m^{4}\hat{k}_{12}}Z_{2}(12)Z_{2}(34)\,,\\ \hat{\Gamma}_{\rm scal(34)}^{22}(13,24) &=-\frac{2+8p_{0}}{9m^{4}(\hat{k}_{12}-4)}Z_{2}(13)Z_{2}(24)\,,\\ \hat{\Gamma}_{\rm scal(34)}^{22}(14,23) &=-\frac{2+8p_{0}}{9m^{4}(\hat{k}_{12}-4)}Z_{2}(14)Z_{2}(23)\,.\end{split} \tag{3.37}\]

Here we have introduced two more functions, \(P_{i}=P_{i}(\hat{k}_{12})\) with \(i=1,2\), which are defined as

\[P_{1}\equiv\frac{2(-\hat{k}_{12}^{3}+2\hat{k}_{12}^{2}-210\hat{k}_{12}+900)-32p_{0}(8\hat{k}_{12}^{2}-90\hat{k}_{12}+225)}{m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{3}}\,, \tag{3.38}\]

\[P_{2}\equiv\frac{4(-\hat{k}_{12}^{2}+55\hat{k}_{12}-210)+48p_{0}(3\hat{k}_{12}^{2}-30\hat{k}_{12}+70)}{m^{8}(\hat{k}_{12}-4)\hat{k}_{12}^{4}}\,.
\tag{3.39}\] #### 3.2.2 Spinor QED For \(Q_{\rm spin}^{4}\) \[\hat{\Gamma}_{\rm spin(34)}^{4}(1234) =\frac{16[3+p_{0}(\hat{k}_{12}^{2}+2\hat{k}_{12}-12)]}{3m^{4}( \hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{4}(1234)\,, \tag{3.40}\] \[\hat{\Gamma}_{\rm spin(34)}^{4}(2314) =\frac{4[4\hat{k}_{12}^{2}-3\hat{k}_{12}-54-8p_{0}(\hat{k}_{12}^{ 2}+3\hat{k}_{12}-27)]}{9m^{4}(\hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{4}(2314)\,,\] \[\hat{\Gamma}_{\rm spin(34)}^{4}(3124) =\frac{16[3+p_{0}(\hat{k}_{12}^{2}+2\hat{k}_{12}-12)]}{3m^{4}( \hat{k}_{12}-4)\hat{k}_{12}^{2}}Z_{4}(3124)\,.\] For \(Q_{\rm spin}^{3}\) \[\hat{\Gamma}_{\rm spin(34)}^{3}(123;4) =\frac{4(\hat{k}_{12}^{2}+15\hat{k}_{12}-90)+16p_{0}(\hat{k}_{12}^ {2}-30\hat{k}_{12}+90)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{3}}Z_{3}(123)k_{2} \cdot f_{4}\cdot k_{1}\,, \tag{3.41}\] \[\hat{\Gamma}_{\rm spin(34)}^{3}(412;3) =\frac{4(\hat{k}_{12}^{2}+15\hat{k}_{12}-90)+16p_{0}(\hat{k}_{12}^ {2}-30\hat{k}_{12}+90)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{3}}Z_{3}(412)k_{2} \cdot f_{3}\cdot k_{1}\,.\] For \(Q^{2}_{\rm spin}\) \[\begin{split}\hat{\Gamma}^{2}_{\rm spin(34)}(12;34)&= \frac{1}{3}Z_{2}(12)\Bigg{\{}\Bigg{[}\frac{4(\hat{k}_{12}+2)+32p_{0}(\hat{k}_{12} -1)}{15m^{6}(4-\hat{k}_{12})^{2}\hat{k}_{12}}\,k_{1}\cdot f_{3}\cdot f_{4}\cdot k _{1}\\ &+\frac{2}{15}\tilde{P}_{1}\,k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{ 2}+(1\leftrightarrow 2)\Bigg{]}+\frac{4}{3}\tilde{P}_{2}\,k_{2}\cdot f_{4}\cdot k_{ 1}k_{2}\cdot f_{3}\cdot k_{1}\Bigg{\}}\,,\\ \hat{\Gamma}^{2}_{\rm spin(34)}(13;24)&=\frac{4( \hat{k}_{12}-6)-32p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2 }}Z_{2}(13)k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}\,,\\ \hat{\Gamma}^{2}_{\rm spin(34)}(23;14)&=\frac{4( \hat{k}_{12}-6)-32p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2 }}Z_{2}(23)k_{2}\cdot f_{1}\cdot f_{4}\cdot k_{2}\,,\\ \hat{\Gamma}^{2}_{\rm spin(34)}(14;23)&=\frac{4( \hat{k}_{12}-6)-32p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2 }}Z_{2}(14)k_{1}\cdot f_{2}\cdot f_{3}\cdot k_{1}\,,\\ \hat{\Gamma}^{2}_{\rm spin(34)}(24;31)&=\frac{4( \hat{k}_{12}-6)-32p_{0}(\hat{k}_{12}-3)}{9m^{6}(\hat{k}_{12}-4)\hat{k}_{12}^{2 }}Z_{2}(24)k_{2}\cdot f_{1}\cdot f_{3}\cdot k_{2}\,.\end{split} \tag{3.42}\] And finally, for \(Q^{22}_{\rm spin}\) \[\begin{split}\hat{\Gamma}^{22}_{\rm spin(34)}(12,34)& =-\frac{16[1+2p_{0}(\hat{k}_{12}-2)]}{3m^{4}(\hat{k}_{12}-4)\hat{ k}_{12}}Z_{2}(12)Z_{2}(34)\,,\\ \hat{\Gamma}^{22}_{\rm spin(34)}(13,24)&=-\frac{8( 1+4p_{0})}{9m^{4}(\hat{k}_{12}-4)}Z_{2}(13)Z_{2}(24)\,,\\ \hat{\Gamma}^{22}_{\rm spin(34)}(14,23)&=-\frac{8( 1+4p_{0})}{9m^{4}(\hat{k}_{12}-4)}Z_{2}(14)Z_{2}(23)\,,\end{split} \tag{3.43}\] Here, we have defined \(\tilde{P}_{i}=\tilde{P}_{i}(\hat{k}_{12})\) as \[\tilde{P}_{1}\equiv\frac{2(\hat{k}_{12}^{3}+32\hat{k}_{12}^{2}-390\hat{k}_{12} +900)+16p_{0}(\hat{k}_{12}^{3}-46\hat{k}_{12}^{2}+270\hat{k}_{12}-450)}{m^{6}( \hat{k}_{12}-4)^{2}\hat{k}_{12}^{3}}, \tag{3.44}\] \[\tilde{P}_{2}\equiv\frac{2(-23\hat{k}_{12}^{2}+200\hat{k}_{12}-420)-24p_{0}( \hat{k}_{12}^{3}-18\hat{k}_{12}^{2}+90\hat{k}_{12}-140)}{m^{8}(\hat{k}_{12}-4 )^{2}\hat{k}_{12}^{4}}. \tag{3.45}\] ## 4 Check: the photon propagator in a constant field The kinematical regime of the four-photon amplitudes that we are studying in this paper has, to the best of our knowledge, not been treated in the literature before. 
However, for the special case where \(k_{1}=-k_{2}\) (which then also implies \(k_{3}=-k_{4}\)) they can be easily extracted from the photon polarization tensor in a constant background field, a quantity that has been intensely studied in spinor [2; 4; 5; 6; 7; 8; 9] and to a lesser degree in scalar QED [3; 7]. The field can be treated non-perturbatively using the exact Dirac or Klein-Gordon propagator in the field, indicated by the double line on the left-hand side of Fig. 4 as is customary. Expanding out the full non-perturbative polarization tensor in powers of the background field one obtains the series depicted on the right-hand side of Fig. 4, containing any even number of interactions with the field. Since a constant field cannot inject energy or momentum into the loop, these interactions are mediated by zero-momentum photons. Thus the second diagram on the right-hand side of Fig. 4 corresponds to our case of two arbitrary and two low-energy photons, the two low-energy legs corresponding to the two photons taken from the background field, however under the restriction \(k_{1}=-k_{2}=k\) imposed by the energy-momentum conservation in the constant field. We would now like to use this correspondence for a check on our above results. While such a comparison could be made using the results of any of the references cited above, it will be convenient to compare with [7], where the worldline formalism was already used to compute the constant-field photon polarization tensors for both scalar and spinor QED. Although that calculation is still quite different from our present one (mainly because the integrating out of the low-energy photon legs there was effectively done already at the level of the construction of the generalized worldline Green's functions, shown below in (4.22), (4.44)), it is still close enough to our present one to make it possible to verify the equivalence at an early stage, without the need to perform all the integrals. It will be sufficient to work with the intermediate results of our above calculations after integrating out \(u_{3}\) and \(u_{4}\), which we collect in B. Using suitable integrations-by-part, we identify the resulting set of parameter integrals with the ones obtained by expanding the integral representations given in [7] for the scalar and spinor QED photon propagators in a constant background field to second order in the field strength tensor \(F_{\mu\nu}\). ### Scalar QED The above replacement for the momenta (\(k_{1}=-k_{2}=k\)) actually makes some terms in \(Q_{\rm scal}\) vanish. The remaining terms in the scalar loop case can be compactly written as: [MISSING_PAGE_POST] Figure 4: The first three terms in the diagrammatic expansion of the photon polarization tensor in powers of the constant background field. The second diagram on the right-hand side represents the four-photon amplitude with two low-energy photons and \(k_{1}=-k_{2}\). \[Q_{\rm scal(34)}^{2}(24;13)=-\frac{T}{9}Z_{2}(24)k\cdot f_{1}\cdot f_{3}\cdot k \left(G_{12}-4G_{12}^{2}\right), \tag{4.12}\] \[Q_{\rm scal(34)}^{2}(34;12)=0, \tag{4.13}\] \[Q_{\rm scal(34)}^{22}(12,34)=\frac{1}{3}Z_{2}(12)Z_{2}(34)\left(1-4G_{12}\right), \tag{4.14}\] \[Q_{\rm scal(34)}^{22}(13,24)=\frac{1}{9}Z_{2}(13)Z_{2}(24), \tag{4.15}\] \[Q_{\rm scal(34)}^{22}(14,23)=\frac{1}{9}Z_{2}(14)Z_{2}(23). \tag{4.16}\] Note that with the above replacement \(f_{1}^{\mu\nu}=k^{\mu}\varepsilon_{1}^{\nu}-k^{\nu}\varepsilon_{1}^{\mu}\) and \(f_{2}^{\mu\nu}=-k^{\mu}\varepsilon_{2}^{\nu}+k^{\nu}\varepsilon_{2}^{\mu}\). 
Rewriting the remaining integrals we have

\[\Gamma_{\rm scal}(k,\varepsilon_{1};-k,\varepsilon_{2};k_{3},\varepsilon_{3};k_{4},\varepsilon_{4})=\frac{e^{4}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2}}\,{\rm e}^{-m^{2}T}\int_{0}^{1}du\,Q_{\rm scal(34)}\,{\rm e}^{-Tk^{2}G(u,0)}\,, \tag{4.17}\]

where we have also set \(u_{2}=0\) and \(u_{1}=u\), so that now \(G_{12}=G(u,0)=u(1-u)\). In order to be able to make the comparison at the integrand level, it is important to use IBP, or equivalently the identity

\[T\left(G_{12}-4G_{12}^{2}\right)k^{2}\,\,{\rm e}^{-TG_{12}k^{2}}=(1-6G_{12})\,\,{\rm e}^{-TG_{12}k^{2}}-\partial_{u}\left(\dot{G}_{12}G_{12}\,{\rm e}^{-TG_{12}k^{2}}\right)\,, \tag{4.18}\]

for many of the terms in \(Q_{\rm scal(34)}^{2}\). Note that this identity can also be expressed in terms of the \(Y_{nl}\) as

\[\left(Y_{51}-4Y_{52}\right)k^{2}=Y_{40}-6Y_{41}\,, \tag{4.19}\]

which will be useful in Section 5. Turning our attention now to the photon polarization tensors in a constant background field, for scalar QED the worldline calculation of [7] yielded the following representation:

\[\Pi_{\rm scal}^{\mu\nu}=-\frac{e^{2}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}dT\,T^{1-\frac{D}{2}}\,{\rm e}^{-m^{2}T}\,{\rm det}^{-\frac{1}{2}}\left(\frac{\sin{\cal Z}}{{\cal Z}}\right)\int_{0}^{1}du\,I_{\rm scal}^{\mu\nu}\,{\rm e}^{-Tk\cdot\Phi_{12}\cdot k}\,, \tag{4.20}\]

where

\[I_{\rm scal}^{\mu\nu}=\dot{\cal G}_{B12}^{\mu\nu}\,k\cdot\dot{\cal G}_{B12}\cdot k-\left(\dot{\cal G}_{B11}-\dot{\cal G}_{B12}\right)^{\mu\lambda}\!\!\left(\dot{\cal G}_{B21}-\dot{\cal G}_{B22}\right)^{\nu\kappa}\!\!k^{\lambda}k^{\kappa}\,, \tag{4.21}\]

and \({\cal G}_{Bij}\) is the bosonic worldline Green's function in a constant field background [21],

\[\begin{split}{\cal G}_{Bij}&=\frac{T}{2\bar{\cal Z}}\Big{(}\frac{{\cal Z}}{\sin{\cal Z}}\,{\rm e}^{-i{\cal Z}\dot{G}_{ij}}+i{\cal Z}\dot{G}_{ij}-1\Big{)}\,,\\ {\cal G}_{Bii}&=\frac{T}{2\bar{\cal Z}}\Big{(}{\cal Z}\cot{\cal Z}-1\Big{)}\,,\end{split} \tag{4.22}\]

with \({\cal Z}^{\mu\nu}=eTF^{\mu\nu}\). For the exponent of (4.20), we have

\[-Tk\cdot\Phi_{ij}\cdot k=-k\cdot\left[\frac{T}{2\bar{\cal Z}}\Big{(}\frac{{\rm e}^{-i{\cal Z}\dot{G}_{ij}}-\cos{\cal Z}}{\sin{\cal Z}}+i\dot{G}_{ij}\Big{)}\right]\cdot k\,. \tag{4.23}\]

In order to compare with the results of the present work, we choose \(F^{\mu\nu}=i(f_{3}^{\mu\nu}+f_{4}^{\mu\nu})\) and expand the whole polarization tensor to second order in \(F^{\mu\nu}\), so as to obtain the first two diagrams on the right-hand side of Fig. 4.
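Identity (4.18) is pure calculus and can be confirmed symbolically. A short sympy check (our illustration, using \(G_{12}=u(1-u)\) and \(\dot{G}_{12}=1-2u\), valid for \(0<u<1\)):

```python
import sympy as sp

u, T, k2 = sp.symbols('u T k2', positive=True)
G = u * (1 - u)            # G_12 = G(u, 0)
Gdot = 1 - 2 * u           # dG/du for 0 < u < 1
E = sp.exp(-T * G * k2)    # exponential factor exp(-T G_12 k^2)

lhs = T * (G - 4 * G**2) * k2 * E
rhs = (1 - 6 * G) * E - sp.diff(Gdot * G * E, u)
print(sp.simplify(lhs - rhs))   # -> 0, i.e. (4.18) holds
```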
After some straightforward algebra, we can express the result as \[\varepsilon_{1}\cdot\Pi_{\rm scal}\cdot\varepsilon_{2}=-\frac{e^{4}}{( 4\pi)^{D/2}}\int_{0}^{\infty}dT\,T^{3-D/2}\,{\rm e}^{-m^{2}T}\int_{0}^{1}du\,\,{ \rm e}^{-TG_{12}k^{2}} \tag{4.24}\] \[\qquad\times\bigg{\{}(1-4G_{12})\Big{(}\varepsilon_{1}\cdot \varepsilon_{2}k^{2}-\varepsilon_{1}\cdot k\varepsilon_{2}\cdot k\Big{)}\Big{[} (eT)^{-2}-\frac{2T}{3}G_{12}^{2}k\cdot f_{3}\cdot f_{4}\cdot k+\frac{1}{6}tr (f_{3}f_{4})\Big{]}\] \[\qquad+\frac{2}{3}\Big{(}G_{12}-4G_{12}^{2}\Big{)}\Big{[}2 \varepsilon_{1}\cdot\varepsilon_{2}k\cdot f_{3}\cdot f_{4}\cdot k+k^{2}\,( \varepsilon_{1}\cdot f_{3}\cdot f_{4}\cdot\varepsilon_{2}+\varepsilon_{1} \cdot f_{4}\cdot f_{3}\cdot\varepsilon_{2})\] \[\qquad-\varepsilon_{2}\cdot k\Big{(}\varepsilon_{1}\cdot f_{3} \cdot f_{4}\cdot k+\varepsilon_{1}\cdot f_{4}\cdot f_{3}\cdot k\Big{)}- \varepsilon_{1}\cdot k\Big{(}\varepsilon_{2}\cdot f_{3}\cdot f_{4}\cdot k+ \varepsilon_{2}\cdot f_{4}\cdot f_{3}\cdot k\Big{)}\Big{]}\] \[\qquad-4G_{12}^{2}\Big{(}\varepsilon_{1}\cdot f_{3}\cdot k \varepsilon_{2}\cdot f_{4}\cdot k+\varepsilon_{1}\cdot f_{4}\cdot k \varepsilon_{2}\cdot f_{3}\cdot k\Big{)}\bigg{\}}\,,\] Note that the term proportional to \((eT)^{-2}\) corresponds to the two-point diagram or the vacuum polarization diagram in the absence of the background field. The remaining terms give the sought after four-photon amplitude, and are found to be in complete agreement with (4.17). ### Spinor QED For the spinor QED case we find, in complete analogy, [MISSING_PAGE_POST] \[Q_{\rm spin(34)}^{2}(34;12)=0, \tag{4.37}\] \[Q_{\rm spin(34)}^{22}(12,34)=\frac{8}{3}Z_{2}(12)Z_{2}(34)G_{12}\,, \tag{4.38}\] \[\begin{split} Q^{22}_{\text{spin(34)}}(13,24)&=\frac{4}{9} Z_{2}(13)Z_{2}(24),\end{split} \tag{4.39}\] \[\begin{split} Q^{22}_{\text{spin(34)}}(14,23)&=\frac{4} {9}Z_{2}(14)Z_{2}(23),\end{split} \tag{4.40}\] and the following integral representation \[\begin{split}\Gamma_{\text{spin}}(k,\varepsilon_{1};-k, \varepsilon_{2};k_{3},\varepsilon_{3};k_{4},\varepsilon_{4})=-2\frac{e^{4}}{( 4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}\frac{dT}{T}\,T^{4-\frac{D}{2}}\,\text{e} ^{-m^{2}T}\int_{0}^{1}du\,Q_{\text{spin(34)}}\,\text{e}^{-Tk^{2}G(u,0)}\,.\end{split} \tag{4.41}\] As in the scalar case, to be able to compare at the integrand level it is important to make repeated use of (4.18). 
The spinor-loop result of the vacuum polarization tensor (from [16; 7]) is \[\begin{split}\Pi^{\mu\nu}_{\text{spin}}&=2\frac{e ^{2}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}dTT^{1-\frac{D}{2}}\,\text{e}^{-m ^{2}T}\det^{-\frac{1}{2}}\left(\frac{\tan\mathcal{Z}}{\mathcal{Z}}\right) \int_{0}^{1}du\,I^{\mu\nu}_{\text{spin}}\,\text{e}^{-Tk\cdot\Phi_{12}\cdot k} \,,\end{split} \tag{4.42}\] where \[\begin{split} I^{\mu\nu}_{\text{spin}}&=\hat{\cal G }^{\mu\nu}_{B12}\,k\cdot\hat{\cal G}_{B12}\cdot k-{\cal G}^{\mu\nu}_{F12}\,k \cdot{\cal G}_{F12}\cdot k\\ &-\Big{[}\Big{(}\hat{\cal G}_{B11}-{\cal G}_{F11}-\hat{\cal G}_{ B12}\Big{)}^{\mu\lambda}\Big{(}\hat{\cal G}_{B21}-\hat{\cal G}_{B22}+{\cal G}_{F22} \Big{)}^{\nu\kappa}+{\cal G}^{\mu\lambda}_{F12}{\cal G}^{\nu\kappa}_{F21} \Big{]}k^{\lambda}k^{\kappa}\,,\end{split} \tag{4.43}\] with the fermionic Green's function in a constant field and its coincidence limit [21] \[\begin{split}{\cal G}_{Fij}&=G_{Fij}\frac{\text{e} ^{-iZ\hat{\cal G}_{ij}}}{\cos\mathcal{Z}}\,,\\ {\cal G}_{Fii}&=-i\tan\mathcal{Z}\,.\end{split} \tag{4.44}\] After the expansion, choosing \(\mathcal{Z}\) as in the scalar case, the spinor-loop gives \[\begin{split}\varepsilon_{1}\cdot\Pi_{\text{spin}}\cdot \varepsilon_{2}&=\frac{2e^{4}}{(4\pi)^{\frac{D}{2}}}\int_{0}^{ \infty}dT\,T^{3-\frac{D}{2}}\,\text{e}^{-m^{2}T}\int_{0}^{1}du\,\,\text{e}^{- TG_{12}k^{2}}\\ &\quad\times\bigg{\{}4G_{12}\Big{(}\varepsilon_{1}\cdot \varepsilon_{2}k^{2}-\varepsilon_{1}\cdot k\varepsilon_{2}\cdot k\Big{)} \Big{[}-(eT)^{-2}+\frac{2T}{3}G_{12}^{2}k\cdot f_{3}\cdot f_{4}\cdot k+\frac {1}{3}\text{tr}(f_{3}f_{4})\Big{]}\\ &\quad-\frac{4}{3}\Big{(}G_{12}+2G_{12}^{2}\Big{)}\Big{[}2 \varepsilon_{1}\cdot\varepsilon_{2}k\cdot f_{3}\cdot f_{4}\cdot k+k^{2}( \varepsilon_{1}\cdot f_{3}\cdot f_{4}\cdot\varepsilon_{2}+\varepsilon_{1} \cdot f_{4}\cdot f_{3}\cdot\varepsilon_{2})\\ &\quad-\varepsilon_{2}\cdot k\Big{(}\varepsilon_{1}\cdot f_{3} \cdot f_{4}\cdot k+\varepsilon_{1}\cdot f_{4}\cdot f_{3}\cdot k\Big{)}- \varepsilon_{1}\cdot k\Big{(}\varepsilon_{2}\cdot f_{3}\cdot f_{4}\cdot k+ \varepsilon_{2}\cdot f_{4}\cdot f_{3}\cdot k\Big{)}\Big{]}\\ &\quad-4G_{12}^{2}\Big{(}\varepsilon_{1}\cdot f_{3}\cdot k \varepsilon_{2}\cdot f_{4}\cdot k+\varepsilon_{1}\cdot f_{4}\cdot k \varepsilon_{2}\cdot f_{3}\cdot k\Big{)}\bigg{\}}\,,\end{split} \tag{4.45}\] which again after removing the term proportional to \((eT)^{-2}\) is in perfect agreement with (4.42). Despite of the restriction \(k_{1}=-k_{2}\), this provides an excellent check on many of the results presented in Section 3 above. ## 5 Gauge-invariant decomposition of the two-loop \(\boldsymbol{\beta}\)-functions In this section, we use the results presented in Section 3 to go to two loops by sewing together the unrestricted photons 1 and 2 (Fig. 2). This yields the two-loop photon propagators in the low-energy limit, that is, the induced Maxwell terms, from whose \(1/\epsilon\)-poles we can (up to a contribution from mass renormalization) compute the two-loop \(\beta\)-function coefficients for scalar and spinor QED, recuperating the known results [11; 12; 40; 10; 41; 16]. As explained in the introduction, besides providing another check this is motivated by the following two questions suggested by the highly structured worldline integrand (2.4): first, are there terms that drop out in the sewing procedure? 
Second, does the fact that the sixteen terms in this decomposition are (differently from the usual decomposition into Feynman diagrams) individually gauge invariant have a bearing on the issue of gauge cancellations? Specifically, will the vanishing of the double poles involve only cancellations inside each gauge-invariant structure, or also between them? We use Feynman gauge in the sewing, so that it is done by the replacement

\[\varepsilon_{1}^{\mu}\varepsilon_{2}^{\nu}\to\frac{\eta^{\mu\nu}}{k^{2}}\,, \tag{5.1}\]

(\(k_{1}=k=-k_{2}\)) with \(k\) to be integrated over. The induced Maxwell term in our present set-up will appear as \(\text{tr}(f_{3}f_{4})\). The sewing will now change the argument of the \(Y_{nl}\) functions, such that in this section it is understood that \(Y_{nl}=Y_{nl}(-k^{2},m^{2},D)\). Going back to the results in \(D\) dimensions in Section 3 for the scalar/spinor loop, we have the following expressions which will be used in both cases:

\[\begin{split} Z_{4}(1234)|_{\text{sewing}}=Z_{4}(3124)|_{\text{sewing}}&\Rightarrow\frac{2}{D}(D-1)\text{tr}(f_{3}f_{4})\,,\\ Z_{4}(2314)|_{\text{sewing}}&\Rightarrow\frac{2}{D}\text{tr}(f_{3}f_{4})\,,\\ Z_{3}(123)|_{\text{sewing}}\,k\cdot f_{4}\cdot k&\Rightarrow 0\,,\\ Z_{2}(12)|_{\text{sewing}}\,k\cdot f_{3}\cdot f_{4}\cdot k&\Rightarrow\frac{1}{D}(D-1)k^{2}\text{tr}(f_{3}f_{4})\,,\\ Z_{2}(13)(k\cdot f_{4}\cdot f_{2}\cdot k)|_{\text{sewing}}&\Rightarrow\frac{1}{D}\text{tr}(f_{3}f_{4})k^{2}\,,\\ Z_{2}(12)Z_{2}(34)|_{\text{sewing}}&\Rightarrow\frac{(D-1)}{2}\text{tr}(f_{3}f_{4})\,,\\ Z_{2}(13)Z_{2}(24)|_{\text{sewing}}&=Z_{2}(14)Z_{2}(23)|_{\text{sewing}}\Rightarrow\frac{1}{D}\text{tr}(f_{3}f_{4})\,.\end{split} \tag{5.2}\]

Here we have used the Passarino-Veltman type identity

\[\int d^{D}k\left(k\cdot f_{3}\cdot f_{4}\cdot k\right)F\left(k^{2}\right)\to\int d^{D}k\,\frac{k^{2}}{D}\text{tr}(f_{3}f_{4})\,F\left(k^{2}\right)\,, \tag{5.3}\]

where \(F\) is an arbitrary scalar function. Note that the three-cycle \(Z_{3}(123)\) drops out in the sewing, together with its three permutations. Therefore \(Q_{\text{scal/spin}}^{3}\) does not contribute here, and the total amplitudes can be written as (using a similar convention as in (2.11))

\[\begin{split}\Gamma^{(2)}_{\text{scal}}\big{(}f_{3},f_{4}\big{)}&=\frac{e^{4}}{2(4\pi)^{\frac{D}{2}}}\int\frac{d^{D}k}{(2\pi)^{D}}\left(\hat{\Gamma}^{(2)4}_{\text{scal}}+\hat{\Gamma}^{(2)2}_{\text{scal}}+\hat{\Gamma}^{(2)22}_{\text{scal}}\right)\,,\\ \Gamma^{(2)}_{\text{spin}}\big{(}f_{3},f_{4}\big{)}&=-\frac{e^{4}}{(4\pi)^{\frac{D}{2}}}\int\frac{d^{D}k}{(2\pi)^{D}}\left(\hat{\Gamma}^{(2)4}_{\text{spin}}+\hat{\Gamma}^{(2)2}_{\text{spin}}+\hat{\Gamma}^{(2)22}_{\text{spin}}\right)\,\end{split} \tag{5.4}\]

(note that a symmetry factor of \(\frac{1}{2}\) has to be included).
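The identity (5.3) is nothing but the angular average \(k^{\mu}k^{\nu}\to k^{2}\eta^{\mu\nu}/D\) under a rotationally invariant integral. In its Euclidean form it is a one-line Monte-Carlo check (our illustration, in \(D=4\)):

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 4, 2_000_000
# Normalized Gaussian vectors are uniformly distributed on the unit sphere.
k = rng.standard_normal((N, D))
khat = k / np.linalg.norm(k, axis=1, keepdims=True)

a, b = rng.standard_normal(D), rng.standard_normal(D)
avg = np.mean((khat @ a) * (khat @ b))  # angular average of (k.a)(k.b)/k^2
print(avg, a @ b / D)                   # agree up to Monte-Carlo noise
```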
### Scalar QED \(\beta\)-function In this way, we find the following result for the Maxwell term of the two-loop vacuum polarization tensor in scalar QED, \[\begin{split}\Gamma^{(2)}_{\text{scal}}\big{(}f_{3},f_{4}\big{)}& =\frac{e^{4}}{2(4\pi)^{\frac{D}{2}}}\int\frac{d^{D}k}{(2\pi)^{D}} \Bigg{\{}\left(\frac{4}{9D}+\frac{D-1}{6}\right)Y_{40}-\frac{2}{3D}\big{[}(D- 4)(D-1)+4\big{]}Y_{41}\\ &-\frac{8}{3D}(4D-7)Y_{42}-\frac{4}{9D}(Y_{51}-4Y_{52})k^{2}- \frac{2(D-1)}{3D}(Y_{52}-4Y_{53})k^{2}\Bigg{\}}\text{tr}(f_{3}f_{4})\,.\end{split} \tag{5.5}\] Using the identity (4.19) this can be somewhat simplified, \[\begin{split}\Gamma^{(2)}_{\rm scal}(f_{3},f_{4})&=\frac {e^{4}}{2(4\pi)^{\frac{D}{2}}}\int\frac{d^{D}k}{(2\pi)^{D}}\bigg{[}\frac{D-1}{6 }Y_{40}-\frac{2}{3D}(D-4)(D-1)Y_{41}\\ &\qquad-\frac{8}{3D}(4D-7)Y_{42}-\frac{2(D-1)}{3D}\,k^{2}(Y_{52}- 4Y_{53})\bigg{]}\text{tr}(f_{3}f_{4})\,.\end{split} \tag{5.6}\] Next, we rewrite this as \[\Gamma^{(2)}_{\rm scal}(f_{3},f_{4})\equiv\frac{e^{4}}{2(4\pi)^{4}}\gamma_{\rm scal }(D,m^{2})\text{tr}(f_{3}f_{4})\,. \tag{5.7}\] We then use (A.10) to eliminate the \(k\) - integration, which leads to \[\begin{split}\gamma_{\rm scal}(D,m^{2})&=\frac{(4 \pi)^{4}\Gamma(4-D)}{4(4\pi)^{D}m^{2(4-D)}}\int_{0}^{1}\,du\,\frac{1}{3D}\Big{\{} 2(D-1)D\Big{[}G(u,0)\Big{]}^{-\frac{D}{2}}\\ &\quad-4(D-1)(3D-8)\Big{[}G(u,0)\Big{]}^{1-\frac{D}{2}}+16(D-7)(D -2)\Big{[}G(u,0)\Big{]}^{2-\frac{D}{2}}\Big{\}}.\end{split} \tag{5.8}\] The remaining \(u\)-integral can be expressed in terms of the Euler Beta-function as \[\int_{0}^{1}du\Big{[}G(u,0)\Big{]}^{x}=B(x+1,x+1),\qquad B(x,y)\equiv\frac{ \Gamma(x)\Gamma(y)}{\Gamma(x+y)}\,, \tag{5.9}\] which leads to our final result for the coefficient of the induced Maxwell term, \[\gamma_{\rm scal}(D,m^{2})=-\frac{(4\pi)^{4}\,\Gamma\,(4-D)}{(4\pi)^{D}\,m^{2( 4-D)}}\,B\left(1-\frac{D}{2},1-\frac{D}{2}\right)\frac{(D-4)[(D-3)D+8]}{12(D- 5)(D-3)D}\,. \tag{5.10}\] Note the global factor of \((D-4)\), which makes it clear that, as expected, the \(1/\epsilon\)-expansion has only a single pole, \[\gamma_{\rm scal}(\epsilon,m^{2}) = \frac{2}{\epsilon}+\mathcal{O}(\epsilon^{0})\,. \tag{5.11}\] After substituting (5.11) into (5.7) one arrives at \[\Gamma^{(2)}_{\rm scal}(f_{3},f_{4})=\frac{e^{4}}{(4\pi)^{4}\epsilon}\text{tr} (f_{3}f_{4})+\mathcal{O}(\epsilon^{0})\,, \tag{5.12}\] which is in agreement with previous calculations of the two-loop dimensionally regularized effective action in Scalar QED [10; 42]3. Footnote 3: Note that there is an overall sign difference between the above results and the one obtained in [16] which comes from the fact that the two external photons in Fig. 2 have equal and opposite momenta (\(k_{3}=-k_{4}\)) which leads to an extra sign in \(\text{tr}(f_{3}f_{4})\). For the full two-loop renormalization of the photon propagator, one still needs to add an equal contribution \(\Delta\Gamma^{(2)}_{\rm scal}(f_{3},f_{4})\) from mass renormalization [10], \[\Gamma^{(2)}_{\rm scal}(f_{3},f_{4})+\Delta\Gamma^{(2)}_{\rm scal}(f_{3},f_{4}) \sim\frac{2e^{4}}{(4\pi)^{4}\epsilon}\text{tr}(f_{3}f_{4})\,. \tag{5.13}\] This then gives the two-loop photon wave-function renormalization factor and leads to the standard result for the two-loop \(\beta\)-function coefficient in Scalar QED [12; 10] \[\beta^{(2)}_{\rm scal}(\alpha)=\frac{\alpha^{3}}{2\pi^{2}}\quad\,,\quad\alpha= \frac{e^{2}}{4\pi}\,. 
\tag{5.14}\] More interesting is the distribution of the double and single - pole coefficients \(a_{\rm scal}\) and \(b_{\rm scal}\) in the decomposition (2.4), shown in Table 1. Although all terms here are already individually gauge invariant, the vanishing of the double pole still happens through an intricate cancellation between them. Similarly, apart from the vanishing of the three-cycle contributions the other terms all contribute to the single pole, without any clear pattern emerging as one might have hoped. ### Spinor QED \(\beta\)-function For the more relevant spinor QED case, let us also write down the individual contributions in terms of the \(Y_{nl}\): \[\begin{split}\hat{\Gamma}_{\text{spin}}^{(2)4}(1234)=\hat{\Gamma}_ {\text{spin}}^{(2)4}(3124)&=-\frac{8(D-1)}{3D}\left(Y_{41}+2Y_{42 }\right)\text{tr}(f_{3}f_{4})\,,\\ \hat{\Gamma}_{\text{spin}}^{(2)4}(2314)&=-\frac{8}{9D }\left(2Y_{40}-6Y_{41}-9Y_{42}\right)\text{tr}(f_{3}f_{4})\,,\\ \hat{\Gamma}_{\text{spin}}^{(2)2}(12;34)&=\frac{8}{3D }(D-1)\,k^{2}\,Y_{53}\,\text{tr}(f_{3}f_{4})\,,\\ \hat{\Gamma}_{\text{spin}}^{(2)2}(13;24)=\hat{\Gamma}_{\text{spin} }^{(2)2}(23;14)&=\frac{2}{9D}\left(Y_{40}-6Y_{41}\right)\text{tr}( f_{3}f_{4})\,,\\ \hat{\Gamma}_{\text{spin}}^{(2)2}(14;23)=\hat{\Gamma}_{\text{spin} }^{(2)2}(24;13)&=\frac{2}{9D}\left(Y_{40}-6Y_{41}\right)\text{tr}( f_{3}f_{4})\,,\\ \hat{\Gamma}_{\text{spin}}^{(2)22}(12,34)=\hat{\Gamma}_{\text{ spin}}^{(2)22}(14,23)&=\frac{4}{9D}\,Y_{40}\,\text{tr}(f_{3}f_{4})\,. \end{split} \tag{5.15}\] Collecting all terms, we obtain \[\begin{split}\Gamma_{\text{spin}}^{(2)}(f_{3},f_{4})=-\frac{e^{4} }{(4\pi)^{\frac{D}{2}}}\int\frac{d^{D}k}{(2\pi)^{D}}\bigg{[}&\frac{ 4}{3D}(D-4)(D-1)Y_{41}-\frac{8}{3D}(4D-7)Y_{42}\\ &+\frac{8}{3D}(D-1)\,k^{2}\,Y_{53}\bigg{]}\text{tr}(f_{3}f_{4}) \,.\end{split} \tag{5.16}\] As in the scalar case, we define \[\Gamma_{\text{spin}}^{(2)}(f_{3},f_{4})\equiv-\frac{e^{4}}{(4\pi)^{4}}\,\gamma _{\text{spin}}(D,m^{2})\,\text{tr}(f_{3}f_{4})\,. \tag{5.17}\] Application of (A.10) leads to the following representation for \(\gamma_{\text{spin}}(D,m^{2})\), \[\begin{split}\gamma_{\text{spin}}(D,m^{2})&=\frac{ (4\pi)^{4}\Gamma(4-D)}{4(4\pi)^{D}m^{2(4-D)}}\,\int_{0}^{1}\,du\,\frac{16}{3D} \Big{\{}(D-4)(D-1)\Big{[}G(u,0)\Big{]}^{1-\frac{D}{2}}\\ &\quad+(D-7)(D-2)\Big{[}G(u,0)\Big{]}^{2-\frac{D}{2}}\Big{\}}. \end{split} \tag{5.18}\] This result agrees already at the integrand level with Eq. (36) of [10], providing another excellent check on many of the equations of the present paper. Performing the \(u\) - integral brings us to our final result for the induced Maxwell term, \[\gamma_{\text{spin}}(D,m^{2})=\frac{(4\pi)^{4}\Gamma\left(4-D\right)}{(4\pi)^{ D}\,m^{2(4-D)}}\,B\left(2-\frac{D}{2},2-\frac{D}{2}\right)\frac{(D-4)[34+D(5D-3 3)]}{3D(D-5)}\,. 
\tag{5.19}\] \begin{table} \begin{tabular}{|c|c|c||c|c|c|c|c|} \hline \(\gamma_{\text{scal}}\) & \(a_{\text{scal}}\) & \(b_{\text{scal}}\) & \(\gamma_{\text{scal}}\) & \(a_{\text{scal}}\) & \(b_{\text{scal}}\) & \(\gamma_{\text{scal}}\) & \(a_{\text{scal}}\) & \(b_{\text{scal}}\) \\ \hline \(\gamma_{\text{scal}}^{4}(1234)\) & \(4\) & \(\frac{13}{3}\) & \(\gamma_{\text{scal}}^{2}(13;24)\) & \(\frac{4}{9}\) & \(-\frac{2}{9}\) & \(\gamma_{\text{scal}}^{22}(12,34)\) & \(-4\) & \(\frac{2}{3}\) \\ \hline \(\gamma_{\text{scal}}^{4}(1243)\) & \(4\) & \(\frac{13}{3}\) & \(\gamma_{\text{scal}}^{2}(14;23)\) & \(\frac{4}{9}\) & \(-\frac{2}{9}\) & \(\gamma_{\text{scal}}^{22}(13,24)\) & \(\frac{2}{9}\) & \(\frac{1}{18}\) \\ \hline \(\gamma_{\text{scal}}^{4}(1324)\) & \(-\frac{20}{9}\) & \(-\frac{11}{9}\) & \(\gamma_{\text{scal}}^{2}(24;13)\) & \(\frac{4}{9}\) & \(-\frac{2}{9}\) & \(\gamma_{\text{scal}}^{22}(14,23)\) & \(\frac{2}{9}\) & \(\frac{1}{18}\) \\ \hline \(\gamma_{\text{scal}}^{2}(12;34)\) & \(-4\) & \(-\frac{16}{3}\) & \(\gamma_{\text{scal}}^{2}(23;14)\) & \(\frac{4}{9}\) & \(-\frac{2}{9}\) & & & & \\ \hline \end{tabular} \end{table} Table 1: Contributions of individual terms in the worldline integrand to the double and single poles of the two-loop Maxwell term in Scalar QED. The \(a_{\text{scal}}\) sum to zero, the \(b_{\text{scal}}\) to \(2\), see (5.11). Note once more the global factor of \((D-4)\), which reduces the pole in dimensional regularization to a single one, \[\gamma_{\rm spin}(\epsilon,m^{2})=\frac{6}{\epsilon}+\mathcal{O}( \epsilon^{0})\,. \tag{5.20}\] Adding the appropriate term from mass renormalization [10] \[\Delta\Gamma^{(2)}_{\rm spin}(f_{3},f_{4})\sim\frac{8e^{4}}{(4 \pi)^{4}\epsilon}{\rm tr}(f_{3}f_{4})\,, \tag{5.21}\] we get the total pole of the induced Maxwell term, \[\Gamma^{(2)}_{\rm spin}(f_{3},f_{4})+\Delta\Gamma^{(2)}_{\rm spin }(f_{3},f_{4})\sim\frac{2e^{4}}{(4\pi)^{4}\epsilon}{\rm tr}(f_{3}f_{4})\,, \tag{5.22}\] which finally gives the correct two-loop spinor QED \(\beta\)-function [11; 40], \[\beta^{(2)}_{\rm spin}(\alpha)=\frac{\alpha^{3}}{2\pi^{2}}\,. \tag{5.23}\] Finally, in Table 2 we list the double and single pole contributions of the individual terms in the worldline integrand. The picture that emerges is similar to the scalar QED case above. ### Four-dimensional computation of the Spinor QED \(\beta\)-function Although the calculation of corresponding quantities in Scalar and Spinor QED in the worldline formalism usually proceeds with only minor differences, a surprising finding of [10] had been that, in the case of the two-loop QED \(\beta\) functions, this holds true when employing dimensional regularization but not for strictly four-dimensional regularization schemes such as the use of a proper-time cutoff. In [16], this observation was explained as a consequence of the above-mentioned theorem by Johnson et al. [13] that the total induced effective action can have only a global divergence. Since, unlike Scalar QED, Spinor QED does not possess true quadratic divergences, the required cancellation of subdivergences here leads to two constraint equations. 
And since, by the structure of the worldline representation of the induced Maxwell term, the integrand before the final integration over \(u\) in \(D=4\) must be of the form (compare (5.18))

\[\left(\frac{A}{G_{12}^{2}}+\frac{B}{G_{12}}+C\right){\rm tr}(F^{2})\,, \tag{5.24}\]

one can first conclude from the absence of a quadratic subdivergence that \(A=0\), and then from the absence of a logarithmic subdivergence that \(B=0\), leaving only the coefficient \(C\) non-vanishing and making the final \(u\)-integral trivial (this argument does not work in dimensional regularization, due to the suppression of quadratic subdivergences that is built into that scheme). Thus one would expect the present recalculation, too, to simplify in the Spinor QED case when a four-dimensional scheme is used. And indeed, if we return to (5.16) and put \(D=4\) there, we find

\[\gamma_{\rm spin}(D,m^{2}) \xrightarrow{D=4} (4\pi)^{2}\int\frac{d^{4}k}{(2\pi)^{4}}\bigg{[}-6Y_{42}+2Y_{53}k^{2}\bigg{]}\,, \tag{5.25}\]

and then using any four-dimensional method for the regularization of the global divergence that is still contained in the global \(T\)-integration will lead to a trivial \(u\)-integration.

\begin{table} \begin{tabular}{|c|c|c||c|c|c||c|c|c|} \hline \(\gamma_{\rm spin}\) & \(a_{\rm spin}\) & \(b_{\rm spin}\) & \(\gamma_{\rm spin}\) & \(a_{\rm spin}\) & \(b_{\rm spin}\) & \(\gamma_{\rm spin}\) & \(a_{\rm spin}\) & \(b_{\rm spin}\) \\ \hline \(\gamma_{\rm spin}^{4}(1234)\) & \(-8\) & \(\frac{10}{3}\) & \(\gamma_{\rm spin}^{2}(13;24)\) & \(-\frac{8}{9}\) & \(\frac{4}{9}\) & \(\gamma_{\rm spin}^{22}(12,34)\) & \(16\) & \(\frac{16}{3}\) \\ \hline \(\gamma_{\rm spin}^{4}(1243)\) & \(-8\) & \(\frac{10}{3}\) & \(\gamma_{\rm spin}^{2}(14;23)\) & \(-\frac{8}{9}\) & \(\frac{4}{9}\) & \(\gamma_{\rm spin}^{22}(13,24)\) & \(\frac{8}{9}\) & \(\frac{2}{9}\) \\ \hline \(\gamma_{\rm spin}^{4}(1324)\) & \(\frac{16}{9}\) & \(-\frac{38}{9}\) & \(\gamma_{\rm spin}^{2}(24;13)\) & \(-\frac{8}{9}\) & \(\frac{4}{9}\) & \(\gamma_{\rm spin}^{22}(14,23)\) & \(\frac{8}{9}\) & \(\frac{2}{9}\) \\ \hline \(\gamma_{\rm spin}^{2}(12;34)\) & \(0\) & \(-4\) & \(\gamma_{\rm spin}^{2}(23;14)\) & \(-\frac{8}{9}\) & \(\frac{4}{9}\) & & & \\ \hline \end{tabular} \end{table} Table 2: Contributions of individual terms in the worldline integrand to the double and single poles of the two-loop Maxwell term in Spinor QED. The \(a_{\rm spin}\) sum to zero, the \(b_{\rm spin}\) to \(6\), see (5.20).

## 6 Delbrück scattering at low energies

In this section, we compute the differential cross section of Delbrück scattering in scalar and spinor QED under the assumption that the photon that interacts with the Coulomb field has low energy. For the spinor QED case, this quantity was computed in detail in [43]; therefore, we will follow their conventions for easy comparison. Here we use our above results for the four-photon amplitudes with two low-energy photons, and replace the two unrestricted legs \(1\) and \(2\) with two Coulomb photons. Furthermore, we now take the two low-energy photons (\(3\) and \(4\)) on-shell. The vector potential for the photons from the Coulomb field is given by

\[A_{\mu}(x)=\left(-\frac{Ze}{4\pi r},\,\mathbf{0}\right)\,, \tag{6.1}\]

and has the following Fourier representation,

\[A_{\mu}(x)=\varepsilon_{\mu}\int\frac{d^{4}k}{(2\pi)^{4}}\,\frac{Ze}{k^{2}}\,2\pi\,\delta(k_{0})\,\,\mathrm{e}^{ik\cdot x}\,.
\tag{6.2}\]

From here the new vertex operator for photons from the Coulomb field reads as

\[\begin{split} V_{\text{Nucl}}^{\gamma}[k,\varepsilon]&=\frac{Ze}{(2\pi)^{3}}\int d^{4}k\frac{\delta(k_{0})}{k^{2}}\int_{0}^{T}d\tau\,\varepsilon\cdot\dot{x}\,\mathrm{e}^{ik\cdot x}\\ &=\frac{Ze}{(2\pi)^{3}}\int\frac{d^{3}\mathbf{k}}{\mathbf{k}^{2}}\int_{0}^{T}d\tau\,\varepsilon\cdot\dot{x}\,\,\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}\,.\end{split} \tag{6.3}\]

Thus in the present formalism, we have

\[\Gamma_{\left\{{\text{scal}(34)\atop\text{spin}(34)}\right\}}=\left\{\begin{array}{l}1\\ -2\end{array}\right\}\frac{1}{2}\frac{e^{4}(Ze)^{2}}{(4\pi)^{2}(2\pi)^{6}}\int\frac{d^{3}\mathbf{k}_{1}}{\mathbf{k}_{1}^{2}}\int\frac{d^{3}\mathbf{k}_{2}}{\mathbf{k}_{2}^{2}}\,(2\pi)^{4}\,\delta(k_{3}^{0}+k_{4}^{0})\,\delta^{3}(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}+\mathbf{k}_{4})\,\hat{\Gamma}_{\left\{{\text{scal}(34)\atop\text{spin}(34)}\right\}}\,. \tag{6.4}\]

We choose the kinematics

\[k_{1}=(0,\mathbf{q}-\mathbf{q}^{\prime})\,,\qquad k_{2}=(0,-\mathbf{q}-\mathbf{q}^{\prime})\,,\qquad k_{3}=(\omega,\mathbf{q}^{\prime}+\mathbf{k})\,,\qquad k_{4}=(-\omega,\mathbf{q}^{\prime}-\mathbf{k})\,, \tag{6.5}\]
where

\[\begin{split}\mathbf{q}^{\prime}&=(\omega\sin\theta/2,0,0),\\ \mathbf{k}&=(0,0,\omega\cos\theta/2),\\ \mathbf{q}&=(q_{1},q_{2},q_{3}),\end{split} \tag{6.6}\]

with \(\omega\) as the energy and \(\theta\) the scattering angle. The polarizations are chosen as

\[\begin{split}\varepsilon_{1}^{\mu}&=\varepsilon_{2}^{\mu}=(i,0,0,0),\\ \varepsilon_{3}^{\mu}&=\frac{1}{\sqrt{2}}(0,-i\lambda_{3}\cos\theta/2,1,+i\lambda_{3}\sin\theta/2),\\ \varepsilon_{4}^{\mu}&=\frac{1}{\sqrt{2}}(0,-i\lambda_{4}\cos\theta/2,1,-i\lambda_{4}\sin\theta/2),\end{split} \tag{6.7}\]

where \(\lambda_{i}=\pm 1\) for right- and left-handed circular polarization, respectively. In the following it is understood that \(\hat{\Gamma}^{+-}_{\text{scal}(34)}=\hat{\Gamma}_{\text{scal}(34)}|_{\lambda_{3}=1,\,\lambda_{4}=-1}\) etc. and we will also use the abbreviations

\[P_{0}\equiv\frac{\text{arcsinh}\left(\frac{q}{2m}\right)}{q\sqrt{4m^{2}+q^{2}}},\qquad S\equiv\sin\frac{\theta}{2},\qquad C\equiv\cos\frac{\theta}{2}\,. \tag{6.8}\]

With the kinematics of (6.5) and using conservation of momentum we can write (6.4) as

\[\Gamma_{\left\{{\text{scal}(34)\atop\text{spin}(34)}\right\}}=\left\{\begin{array}{l}1\\ -2\end{array}\right\}\frac{1}{2}\frac{e^{4}(Ze)^{2}}{(4\pi)^{2}(2\pi)^{6}}(2\pi)^{4}\delta(k_{3}^{0}+k_{4}^{0})\int\frac{d^{3}\mathbf{q}}{|\mathbf{q}-\mathbf{q}^{\prime}|^{2}|\mathbf{q}+\mathbf{q}^{\prime}|^{2}}\,\hat{\Gamma}_{\left\{{\text{scal}(34)\atop\text{spin}(34)}\right\}}. \tag{6.9}\]

For convenience, let us further define

\[\tilde{\Gamma}_{\left\{{\text{scal}\atop\text{spin}}\right\}}\equiv\int\frac{d^{3}\mathbf{q}}{|\mathbf{q}-\mathbf{q}^{\prime}|^{2}|\mathbf{q}+\mathbf{q}^{\prime}|^{2}}\,\hat{\Gamma}_{\left\{{\text{scal}(34)\atop\text{spin}(34)}\right\}}.
\tag{6.10}\]

Since we are considering the low-energy case, \(\omega\ll m\), we neglect contributions of order higher than \(\omega^{2}\). We notice that

\[\begin{split}\int\frac{d^{3}\mathbf{q}}{|\mathbf{q}-\mathbf{q}^{\prime}|^{2}|\mathbf{q}+\mathbf{q}^{\prime}|^{2}}&=\int_{0}^{\infty}\int_{0}^{\pi}\int_{0}^{2\pi}\frac{q^{2}\sin\theta^{\prime}dq\,d\theta^{\prime}\,d\phi^{\prime}}{\left(q^{2}+\omega^{2}\sin^{2}\frac{\theta}{2}\right)^{2}-4q_{1}^{2}\omega^{2}\sin^{2}\frac{\theta}{2}}\\ &=\int_{0}^{\infty}\int_{0}^{\pi}\int_{0}^{2\pi}\frac{\sin\theta^{\prime}dq\,d\theta^{\prime}\,d\phi^{\prime}}{q^{2}}+\mathcal{O}(\omega)\,.\end{split} \tag{6.11}\]

Using the kinematics of (6.5), we find

\[\begin{split}\hat{\Gamma}^{+-}_{\text{scal}(34)}&=\frac{4\omega^{2}}{3m^{2}q^{4}(4m^{2}+q^{2})}\Big{\{}3m^{2}[(6m^{2}+q^{2})-8m^{2}(3m^{2}+q^{2})P_{0}](q_{2}^{2}-q_{1}^{2})\\ &\quad+[q^{4}(2m^{2}+q^{2})-3m^{2}(6m^{2}+q^{2})q_{2}^{2}]S^{2}-8m^{4}[q^{4}-3(3m^{2}+q^{2})q_{2}^{2}]S^{2}P_{0}\Big{\}}+\mathcal{O}(\omega^{3})\,,\end{split} \tag{6.12}\]

\[\begin{split}\hat{\Gamma}^{+-}_{\text{spin}(34)}&=\frac{4\omega^{2}}{3q^{4}(4m^{2}+q^{2})^{2}}\Big{\{}3(4m^{2}+q^{2})[(6m^{2}+q^{2})-8m^{2}(3m^{2}+q^{2})P_{0}](q_{2}^{2}-q_{1}^{2})\\ &\quad+[q^{4}(2m^{2}-q^{2})-3(4m^{2}+q^{2})(6m^{2}+q^{2})q_{2}^{2}]S^{2}\\ &\quad-8m^{2}[q^{4}(m^{2}+q^{2})-3(4m^{2}+q^{2})(3m^{2}+q^{2})q_{2}^{2}]S^{2}P_{0}\Big{\}}+\mathcal{O}(\omega^{3})\,,\end{split} \tag{6.13}\]

for the helicity non-conserving component, and

\[\begin{split}\hat{\Gamma}^{++}_{\text{scal}(34)}&=\frac{4\omega^{2}}{3q^{4}(4m^{2}+q^{2})}\Big{\{}4[(6m^{2}+q^{2})-8m^{2}(3m^{2}+q^{2})P_{0}](q_{2}^{2}-q_{3}^{2})\\ &\quad-[3q^{2}(2m^{2}+q^{2})+4(6m^{2}+q^{2})q_{2}^{2}]C^{2}\\ &\quad+4[q^{2}(6m^{4}+4m^{2}q^{2}+q^{4})+8m^{2}(3m^{2}+q^{2})q_{2}^{2}]C^{2}P_{0}\Big{\}}+\mathcal{O}(\omega^{3})\,,\end{split} \tag{6.14}\]

\[\begin{split}\hat{\Gamma}^{++}_{\rm spin(34)}&=\frac{4\omega^{2}}{3q^{4}(4m^{2}+q^{2})^{2}}\Big{\{}(6m^{2}+q^{2})(16m^{2}+7q^{2})(q_{2}^{2}-q_{3}^{2})\\ &\quad+8[(3m^{2}+q^{2})(4m^{2}+q^{2})^{2}-3m^{4}q^{2}](q_{3}^{2}-q_{2}^{2})P_{0}\\ &\quad-[3q^{2}(8m^{4}-2m^{2}q^{2}-q^{4})+(6m^{2}+q^{2})(16m^{2}+7q^{2})q_{2}^{2}]C^{2}\\ &\quad+8q^{2}(12m^{6}-m^{4}q^{2}-5m^{2}q^{4}-q^{6})C^{2}P_{0}\\ &\quad+8[(3m^{2}+q^{2})(4m^{2}+q^{2})^{2}-3m^{4}q^{2}]q_{2}^{2}C^{2}P_{0}\Big{\}}+\mathcal{O}(\omega^{3})\,,\end{split} \tag{6.15}\]

for the conserving one. To perform the integral over \(\mathbf{q}\) we use spherical coordinates:

\[q_{1}=q\cos\theta^{\prime},\qquad q_{2}=q\sin\theta^{\prime}\cos\phi^{\prime},\qquad q_{3}=q\sin\theta^{\prime}\sin\phi^{\prime}\,.
\tag{6.16}\]

The integrals over \(\theta^{\prime}\) and \(\phi^{\prime}\) are trivial, and what remains to be calculated is

\[\tilde{\Gamma}^{+-}_{\rm scal}=\frac{16\pi S^{2}\omega^{2}}{3}\int_{0}^{\infty}\frac{dq}{q^{2}}\,\frac{-6m^{4}+m^{2}q^{2}+q^{4}+24m^{6}P_{0}}{m^{2}q^{2}(4m^{2}+q^{2})}\,, \tag{6.17}\]

\[\tilde{\Gamma}^{+-}_{\rm spin}=\frac{32\pi S^{2}\omega^{2}}{3}\int_{0}^{\infty}\frac{dq}{q^{2}}\,\frac{-12m^{4}-4m^{2}q^{2}-q^{4}+24m^{4}(2m^{2}+q^{2})P_{0}}{q^{2}(4m^{2}+q^{2})^{2}}\,, \tag{6.18}\]

\[\tilde{\Gamma}^{++}_{\rm scal}=\frac{16\pi C^{2}\omega^{2}}{9}\int_{0}^{\infty}\frac{dq}{q^{2}}\,\frac{-42m^{2}-13q^{2}+4(42m^{4}+20m^{2}q^{2}+3q^{4})P_{0}}{q^{2}(4m^{2}+q^{2})}\,, \tag{6.19}\]

\[\tilde{\Gamma}^{++}_{\rm spin}=\frac{32\pi C^{2}\omega^{2}}{9}\int_{0}^{\infty}\frac{dq}{q^{2}}\,\frac{-84m^{4}-20m^{2}q^{2}+q^{4}+8(42m^{6}+17m^{4}q^{2}-2m^{2}q^{4}-q^{6})P_{0}}{q^{2}(4m^{2}+q^{2})^{2}}\,. \tag{6.20}\]

Performing the integral over \(q\), we get

\[\tilde{\Gamma}^{+-}_{\rm scal}=\frac{15\pi^{3}S^{2}\omega^{2}}{32m^{3}},\qquad\tilde{\Gamma}^{+-}_{\rm spin}=-\frac{5\pi^{3}S^{2}\omega^{2}}{32m^{3}}\,, \tag{6.21}\]

\[\tilde{\Gamma}^{++}_{\rm scal}=\frac{3\pi^{3}C^{2}\omega^{2}}{32m^{3}},\qquad\tilde{\Gamma}^{++}_{\rm spin}=-\frac{73\pi^{3}C^{2}\omega^{2}}{288m^{3}}\,. \tag{6.22}\]

Finally, the differential cross sections are

\[d\sigma_{\rm scal(\lambda_{3}\lambda_{4})}=\frac{(Z\alpha)^{4}\alpha^{2}}{4(2\pi)^{6}}\,|\tilde{\Gamma}^{\lambda_{3}\lambda_{4}}_{\rm scal}|^{2}\,d\Omega\,, \tag{6.23}\]

\[d\sigma_{\rm spin(\lambda_{3}\lambda_{4})}=\frac{(Z\alpha)^{4}\alpha^{2}}{(2\pi)^{6}}\,|\tilde{\Gamma}^{\lambda_{3}\lambda_{4}}_{\rm spin}|^{2}\,d\Omega\,. \tag{6.24}\]

For scalar QED, we have

\[d\sigma_{\rm scal(++)} = d\sigma_{\rm scal(--)}=(Z\alpha)^{4}\left(\frac{\alpha}{m}\right)^{2}\left(\frac{3}{16}\right)^{2}\left(\frac{1}{32}\right)^{2}\left(\frac{\omega}{m}\right)^{4}\cos^{4}\frac{\theta}{2}d\Omega\,, \tag{6.25}\]

\[d\sigma_{\rm scal(+-)} = d\sigma_{\rm scal(-+)}=(Z\alpha)^{4}\left(\frac{\alpha}{m}\right)^{2}\left(\frac{15}{16}\right)^{2}\left(\frac{1}{32}\right)^{2}\left(\frac{\omega}{m}\right)^{4}\sin^{4}\frac{\theta}{2}d\Omega\,. \tag{6.26}\]

For spinor QED, we find

\[d\sigma_{\rm spin(++)} = d\sigma_{\rm spin(--)}=(Z\alpha)^{4}\left(\frac{\alpha}{m}\right)^{2}\left(\frac{73}{72}\right)^{2}\left(\frac{1}{32}\right)^{2}\left(\frac{\omega}{m}\right)^{4}\cos^{4}\frac{\theta}{2}d\Omega\,, \tag{6.27}\]

\[d\sigma_{\rm spin(+-)} = d\sigma_{\rm spin(-+)}=(Z\alpha)^{4}\left(\frac{\alpha}{m}\right)^{2}\left(\frac{5}{8}\right)^{2}\left(\frac{1}{32}\right)^{2}\left(\frac{\omega}{m}\right)^{4}\sin^{4}\frac{\theta}{2}d\Omega\,, \tag{6.28}\]

in agreement with [43].
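The \(q\)-integrals (6.17)–(6.20) are straightforward to verify numerically. As a quick, purely illustrative cross-check with SciPy, the following sketch tests (6.17) against the closed form in (6.21) at \(m=1\); stripping the common prefactor \(16\pi S^{2}\omega^{2}/3\), the target value of the bare integral is \(45\pi^{2}/512\):

```python
# Numerical check of the q-integral in (6.17) against the closed form (6.21), m = 1.
import numpy as np
from scipy.integrate import quad

m = 1.0
P0 = lambda q: np.arcsinh(q / (2 * m)) / (q * np.sqrt(4 * m**2 + q**2))

def integrand(q):
    num = -6 * m**4 + m**2 * q**2 + q**4 + 24 * m**6 * P0(q)
    return num / (q**4 * m**2 * (4 * m**2 + q**2))   # includes the dq/q^2 measure

val, _ = quad(integrand, 0, np.inf, limit=200)
target = (15 * np.pi**3 / 32) * 3 / (16 * np.pi)     # = 45 pi^2 / 512
assert np.isclose(val, target)
print(val, target)
```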
## 7 Summary and outlook

In this second part of our series of papers on the off-shell four-photon amplitudes in scalar and spinor QED, we have obtained these amplitudes for the case where two of the photon momenta were taken in the low-energy limit, as defined in part I. We have used the worldline representation of these amplitudes which, as outlined in part I, allows one to treat the scalar and spinor cases in parallel, and to arrive at a permutation- and gauge-invariant decomposition of these amplitudes that is closely related to the one previously obtained by [43]. The coefficient functions in this decomposition are written in terms of Feynman-Schwinger type parameter integrals, and as our main result we have explicitly evaluated them both in four and in \(D\) dimensions. Since in our formalism the four-photon amplitudes are manifestly free of UV divergences, the former results are sufficient for one-loop applications. The latter formulas serve the double purpose of making the off-shell four-photon amplitudes useful as input for higher-loop calculations in dimensional regularization, and of giving the correct results for them in other physical dimensions (arbitrary \(D\) for scalar QED, even \(D\) for spinor QED). After integrating out the two low-energy legs, these integrals are of two-point type, and can therefore be written in terms of \({}_{2}F_{1}\) for general \(D\), and for \(D=4\) in terms of trigonometric functions. To the best of our knowledge, for this momentum configuration the four-photon amplitudes have not been obtained before. However, for the special case \(k_{1}+k_{2}=k_{3}+k_{4}=0\) they can be extracted from the low-energy limit of the known vacuum polarization tensors in a constant field, and performing this comparison provided an excellent check on our results for both scalar and spinor QED.

From a technical point of view, probably the most important aspect of our calculation is that it demonstrates how to integrate out a low-energy photon leg without fixing an ordering for the remaining legs. As discussed in part I, when using the light-by-light diagram as a subdiagram in higher-loop calculations this property effectively allows one to unify the calculation of Feynman diagrams of different topologies, and considering the proliferation of diagrams in higher-loop QED calculations it is obviously of great interest to explore what level of simplification can be achieved along these lines. Off-shell photon legs can be used for creating internal photons by sewing, or for connecting to external fields. As an example of sewing, we have used our results for a construction of the two-loop vacuum polarization tensors in the low-energy limit, which allowed us to recover the two-loop \(\beta\)-function coefficients for scalar and spinor QED. Although by present-day standards this is easy enough to do using Feynman diagrams, the worldline calculation in the version presented here has the advantage that it replaces the non-gauge-invariant decomposition into diagrams by a gauge-invariant decomposition into sixteen substructures. This leads not only to a more compact parameter integral representation, but has also given us an opportunity to probe into the nature of the extensive cancellations that one generally finds in perturbative QED calculations [13; 44; 14; 45]. If, as is sometimes assumed, these were entirely due to gauge invariance, one might have expected the cancellation of the double poles in the scalar and spinor QED \(\beta\)-function calculations to have happened already at the level of each of the gauge-invariant partial structures; instead, we have seen an intricate pattern of cancellations between the various structures. Another interesting result of this calculation has been that four of the sixteen partial structures, namely the ones involving a three-cycle, drop out in the sewing. It remains to be seen whether this fact admits a generalization to larger numbers of external photons and multiple sewing.

As an example of an external-field calculation, we have applied our formulas to a calculation of the low-energy Delbrück scattering cross sections for both scalar and spinor QED.
For the spinor QED case this served just as another check of efficiency and correctness, while the scalar QED result is new, to the best of our knowledge. Both the scalar and spinor QED calculations have been presented in a way that would be easy to adapt to other external fields.

The forthcoming third part of this series is devoted to the explicit, \(D\)-dimensional calculation of the off-shell four-photon amplitudes with only one leg taken in the low-energy limit and its applications.

_Acknowledgments:_ We would like to thank D. Broadhurst, F. Karbstein and D. Kreimer for discussions and correspondence. C. López-Arcos and M. A. López-López thank CONACYT for financial support. N. Ahmadiniaz and M. A. López-López would like to thank R. Schützhold for his support.

## Appendix A Collection of integral formulas

Here we collect a number of results for the parameter integrals appearing in one-loop worldline calculations, mostly taken from [16]. All Green's functions have been rescaled to the unit circle, \(T=1\).

### Integrating out a low-energy leg

All integrals encountered in integrating out a low-energy photon can, using the identities (2.34), be reduced to integrals over polynomials in \(\dot{G}_{ij}\). For this type of integral a closed-form master formula is available even for the most general (abelian) case, integrating an arbitrary monomial in the \(\dot{G}_{ij}\)'s involving an arbitrary number of variables in one of the variables, and giving the result as a polynomial in the remaining \(\dot{G}_{ij}\)'s [39]:

\[\begin{split}\int_{0}^{1}du\,\dot{G}(u,u_{1})^{k_{1}}\dot{G}(u,u_{2})^{k_{2}}\cdots\dot{G}(u,u_{n})^{k_{n}}&=\frac{1}{2n}\sum_{i=1}^{n}\prod_{j\neq i}\sum_{l_{j}=0}^{k_{j}}\binom{k_{j}}{l_{j}}\dot{G}_{ij}^{k_{j}-l_{j}}\sum_{l_{i}=0}^{k_{i}}\binom{k_{i}}{l_{i}}\\ &\times\frac{(-1)^{\sum_{j=1}^{n}l_{j}}}{(1+\sum_{j=1}^{n}l_{j})n^{\sum_{j=1}^{n}l_{j}}}\bigg{\{}\Bigl{(}\sum_{j\neq i}\dot{G}_{ij}+1\Bigr{)}^{1+\sum_{j=1}^{n}l_{j}}-(-1)^{k_{i}-l_{i}}\Bigl{(}\sum_{j\neq i}\dot{G}_{ij}-1\Bigr{)}^{1+\sum_{j=1}^{n}l_{j}}\bigg{\}}\,.\end{split}\] (A.1)

At the four-photon level, apart from the single four-point integral

\[\int_{0}^{1}du_{4}\,\dot{G}_{41}\dot{G}_{42}\dot{G}_{43}=-\frac{1}{6}(\dot{G}_{12}-\dot{G}_{23})(\dot{G}_{23}-\dot{G}_{31})(\dot{G}_{31}-\dot{G}_{12})\,,\] (A.2)

only three-point integrals appear, for which the master formula (A.1) reduces to

\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{k}\dot{G}_{32}^{l}=\frac{k!l!}{2}\sum_{m=0}^{l}\frac{(1-(-1)^{k+l-m+1})\dot{G}_{12}^{m}-(1-(-1)^{m})\dot{G}_{12}^{k+l-m+1}}{m!(k+l-m+1)!}\,.\] (A.3)

However, the latter formula generally does not provide the most compact way of writing the result in terms of the \(\dot{G}_{ij}\); therefore, for easy reference, we provide below a list of optimized versions for all the integrals that appeared in our calculations.
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}=0\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{2}=\frac{1}{3}\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{4}=\frac{1}{5}\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}\dot{G}_{32}=\frac{1}{6}-\frac{1}{2}\dot{G}_{12}^{2}\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}\dot{G}_{32}^{2}=\frac{1}{3}(\dot{G}_{12}-\dot{G}_{12}^{3})\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{23}^{3}\dot{G}_{31}=\frac{3!}{5!}-\frac{\dot{G}_{12}^{4}}{4}\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{2}\dot{G}_{32}^{2}=\frac{4}{5!}-\frac{\dot{G}_{12}^{4}}{3!}+\frac{\dot{G}_{12}^{2}}{3!}\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{4}\dot{G}_{32}=\frac{1}{5}(\dot{G}_{12}-\dot{G}_{12}^{5})\,,\]
\[\int_{0}^{1}du_{3}\,\dot{G}_{13}^{3}\dot{G}_{32}^{2}=\frac{1}{10}(\dot{G}_{12}-\dot{G}_{12}^{5})\,,\] (A.4)
\[\int_{0}^{1}du_{3}\,G_{13}=\frac{1}{6}\,,\]
\[\int_{0}^{1}du_{3}\,G_{13}G_{32}=\frac{1}{30}-\frac{1}{6}G_{12}^{2}\,,\]
\[\int_{0}^{1}du_{3}\,G_{23}\dot{G}_{31}=\frac{1}{3}\dot{G}_{12}G_{12}\,.\] (A.5)

### The functions \(Y_{nl}\) and tensor reduction

Here we study some properties of the functions \(Y_{nl}\) defined in (2.30),

\[Y_{nl}=\int_{0}^{\infty}\frac{dT}{T}T^{n-D/2}\ \int_{0}^{1}du_{1}\int_{0}^{1}du_{2}\ G_{12}^{l}\,\mathrm{e}^{-T[m^{2}-G_{12}k_{1}\cdot k_{2}]}\,.\] (A.6)

In terms of \(\hat{k}_{12}=\frac{k_{1}\cdot k_{2}}{m^{2}}\) and using the translational invariance to fix \(u_{2}=0\) and \(u_{1}=u\), so that \(G_{12}=u(1-u)\), we can perform the following tensor reduction

\[\begin{split}Y_{nl}&=\int_{0}^{\infty}\frac{dT}{T}T^{n-D/2}\ \int_{0}^{1}du\ [u(1-u)]^{l}\,\mathrm{e}^{-Tm^{2}[1-u(1-u)\hat{k}_{12}]}\\ &=\frac{\Gamma\left(n-\frac{D}{2}\right)}{m^{2n-D}}\ \int_{0}^{1}du\ \frac{[u(1-u)]^{l}}{[1-u(1-u)\hat{k}_{12}]^{n-\frac{D}{2}}}\\ &=\frac{\Gamma\left(n-\frac{D}{2}-l\right)}{m^{2n-D}}\ \frac{d^{l}}{d\hat{k}_{12}^{l}}\int_{0}^{1}du\ \frac{1}{[1-u(1-u)\hat{k}_{12}]^{n-l-\frac{D}{2}}}\,.\end{split}\] (A.7)

This leaves us with the single integral

\[\int_{0}^{1}du\ \frac{1}{[1-u(1-u)\hat{k}_{12}]^{n-l-\frac{D}{2}}}={}_{2}F_{1}\left(1,n-l-\frac{D}{2};\frac{3}{2};\frac{\hat{k}_{12}}{4}\right)\,.\] (A.8)

So, in terms of the Gaussian hypergeometric function \({}_{2}F_{1}\),

\[Y_{nl}=\frac{\Gamma\left(n-\frac{D}{2}-l\right)}{m^{2n-D}}\ \frac{d^{l}}{d\hat{k}_{12}^{l}}\ {}_{2}F_{1}\left(1,n-l-\frac{D}{2};\frac{3}{2};\frac{\hat{k}_{12}}{4}\right)\,.\] (A.9)

In Section 5 we need the elementary integral

\[\begin{split}\int\frac{d^{D}k}{(2\pi)^{D}}\,(k^{2})^{\lambda}\,Y_{nl}&=\frac{\Gamma\left(\frac{D}{2}+\lambda\right)\,\Gamma\left(n-\lambda-D\right)}{(4\pi)^{\frac{D}{2}}\Gamma\left(\frac{D}{2}\right)m^{2(n-\lambda-D)}}\,\int_{0}^{1}\,du\big{[}u(1-u)\big{]}^{l-\lambda-\frac{D}{2}}\\ &=\frac{\Gamma\left(\frac{D}{2}+\lambda\right)\,\Gamma\left(n-\lambda-D\right)}{(4\pi)^{\frac{D}{2}}\Gamma\left(\frac{D}{2}\right)m^{2(n-\lambda-D)}}\ B\Big{(}l-\lambda-\frac{D}{2}+1,l-\lambda-\frac{D}{2}+1\Big{)}.\end{split}\] (A.10)

The following identities are not used in the present paper, but let us include them here for whatever they may be worth:

\[Y_{nl}=\Gamma\left(n-\frac{D}{2}\right)\hat{k}_{12}^{-l}\sum_{j=0}^{l}(-1)^{j}\binom{l}{j}\left[\Gamma\left(n-j-\frac{D}{2}\right)\right]^{-1}m^{-2j}\,Y_{(n-j)0}\,,\] (A.11)

which can be used to recursively eliminate all \(Y_{nl}\)'s with \(l\neq 0\).
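The reduction (A.8) is also easy to spot-check numerically; the following minimal sketch compares both sides with SciPy for a few illustrative values of \(a=n-l-\frac{D}{2}\) and \(\hat{k}_{12}\) (chosen arbitrarily, with \(\hat{k}_{12}/4<1\) so that both sides converge):

```python
# Numerical spot-check of (A.8):
#   int_0^1 du [1 - u(1-u) khat]^(-a)  =  2F1(1, a; 3/2; khat/4).
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

for a in [0.5, 1.0, 2.3]:          # a = n - l - D/2 (sample values)
    for khat in [-3.0, 0.5, 2.0]:  # khat = k1.k2 / m^2 (sample values)
        lhs, _ = quad(lambda u: (1.0 - u * (1.0 - u) * khat) ** (-a), 0.0, 1.0)
        rhs = hyp2f1(1.0, a, 1.5, khat / 4.0)
        assert np.isclose(lhs, rhs, rtol=1e-6), (a, khat, lhs, rhs)
print("(A.8) verified at all sample points")
```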
A further three-term recursion is

\[(2n-D)(2n-D-1)\,Y_{n0}=\left[2-(2n-D)\left(\hat{k}_{12}^{2}-8\right)\right]\,m^{2}\,Y_{(n+1)0}+\left(\hat{k}_{12}^{2}-4\right)m^{4}\,Y_{(n+2)0}\,,\] (A.12)

which follows from the following identity of contiguous functions [46]:

\[(c-b)\,{}_{2}F_{1}(a,b-1;c;z)+(2b-c-bz+az)\,{}_{2}F_{1}(a,b;c;z)+b(z-1)\,{}_{2}F_{1}(a,b+1;c;z)=0\,.\] (A.13)

## Appendix B Explicit results for \(Q_{\text{scal}(34)}\) and \(Q_{\text{spin}(34)}\)

In this appendix, we explicitly write down the results for \(Q_{\text{scal}}\) and \(Q_{\text{spin}}\) after integration over \(u_{4}\) and \(u_{3}\), under the assumption that photons 3 and 4 are taken in the low-energy limit.

### Scalar QED

The explicit form of \(Q_{\text{scal}(34)}\) is written as

\[Q_{\text{scal}(34)}^{4}(1234)=\frac{2}{3}Z_{4}(1234)\left(G_{12}-4G_{12}^{2}\right), \tag{B.1}\]

\[Q_{\text{scal}(34)}^{4}(2314)=\frac{1}{9}Z_{4}(2314)\left(1-12G_{12}+36G_{12}^{2}\right), \tag{B.2}\]

\[Q_{\text{scal}(34)}^{4}(3124)=\frac{2}{3}Z_{4}(3124)\left(G_{12}-4G_{12}^{2}\right), \tag{B.3}\]

\[Q_{\text{scal}(34)}^{3}(123;4)=-\frac{T}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{1}\left(G_{12}-10G_{12}^{2}+24G_{12}^{3}\right), \tag{B.4}\]

\[Q_{\text{scal}(34)}^{3}(234;1)=0, \tag{B.5}\]

\[Q_{\text{scal}(34)}^{3}(341;2)=0, \tag{B.6}\]

\[Q_{\text{scal}(34)}^{3}(412;3)=-\frac{T}{9}Z_{3}(412)k_{2}\cdot f_{3}\cdot k_{1}\left(G_{12}-10G_{12}^{2}+24G_{12}^{3}\right), \tag{B.7}\]

\[\begin{split}Q_{\text{scal}(34)}^{2}(12;34)=-\frac{T}{18}Z_{2}(12)\left(1-4G_{12}\right)&\left\{\left[\frac{1}{5}k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{1}+\left(\frac{1}{5}-6G_{12}^{2}\right)k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{2}+\left(1\leftrightarrow 2\right)\right]\right.\\ &\left.+2T\left(1-4G_{12}\right)G_{12}^{2}\,k_{1}\cdot f_{3}\cdot k_{2}\,k_{1}\cdot f_{4}\cdot k_{2}\right\},\end{split} \tag{B.8}\]

\[Q_{\text{scal}(34)}^{2}(13;24)+Q_{\text{scal}(34)}^{2}(23;14)=-\frac{T}{9}Z_{2}(13)k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}\,\left(1-4G_{12}\right)G_{12}+\left(1\leftrightarrow 2\right), \tag{B.9}\]

\[Q_{\text{scal}(34)}^{2}(14;23)+Q_{\text{scal}(34)}^{2}(24;13)=-\frac{T}{9}Z_{2}(14)k_{1}\cdot f_{2}\cdot f_{3}\cdot k_{1}\,\left(1-4G_{12}\right)G_{12}+\left(1\leftrightarrow 2\right), \tag{B.10}\]

\[Q_{\text{scal}(34)}^{2}(34;12)=0, \tag{B.11}\]

\[Q_{\text{scal}(34)}^{22}(12,34)=\frac{1}{3}Z_{2}(12)Z_{2}(34)\left(1-4G_{12}\right), \tag{B.12}\]

\[Q_{\text{scal}(34)}^{22}(13,24)=\frac{1}{9}Z_{2}(13)Z_{2}(24), \tag{B.13}\]

\[Q_{\text{scal}(34)}^{22}(14,23)=\frac{1}{9}Z_{2}(14)Z_{2}(23).
\tag{B.14}\]

### Spinor QED

The explicit form of \(Q_{\rm spin(34)}\) is written as

\[Q_{\rm spin(34)}^{4}(1234)=-\frac{4}{3}Z_{4}(1234)\left(G_{12}+2G_{12}^{2}\right), \tag{B.15}\]

\[Q_{\rm spin(34)}^{4}(2314)=-\frac{4}{9}Z_{4}(2314)\left(2-6G_{12}-9G_{12}^{2}\right), \tag{B.16}\]

\[Q_{\rm spin(34)}^{4}(3124)=-\frac{4}{3}Z_{4}(3124)\left(G_{12}+2G_{12}^{2}\right), \tag{B.17}\]

\[Q_{\rm spin(34)}^{3}(123;4)=\frac{2T}{9}Z_{3}(123)k_{2}\cdot f_{4}\cdot k_{1}\left(G_{12}-G_{12}^{2}-12G_{12}^{3}\right), \tag{B.18}\]

\[Q_{\rm spin(34)}^{3}(234;1)=0, \tag{B.19}\]

\[Q_{\rm spin(34)}^{3}(341;2)=0, \tag{B.20}\]

\[Q_{\rm spin(34)}^{3}(412;3)=\frac{2T}{9}Z_{3}(412)k_{2}\cdot f_{3}\cdot k_{1}\left(G_{12}-G_{12}^{2}-12G_{12}^{3}\right), \tag{B.21}\]

\[\begin{split}Q_{\rm spin(34)}^{2}(12;34)=\frac{2T}{9}Z_{2}(12)\,G_{12}&\Biggl{\{}\left[\frac{1}{5}k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{1}+\left(\frac{1}{5}-6G_{12}^{2}\right)k_{1}\cdot f_{3}\cdot f_{4}\cdot k_{2}+(1\leftrightarrow 2)\right]\\ &+2T\,\left(1-4G_{12}\right)G_{12}^{2}\,k_{1}\cdot f_{3}\cdot k_{2}\,k_{1}\cdot f_{4}\cdot k_{2}\Biggr{\}}\,,\end{split} \tag{B.22}\]

\[Q_{\rm spin(34)}^{2}(13;24)+Q_{\rm spin(34)}^{2}(23;14)=\frac{2T}{9}Z_{2}(13)k_{1}\cdot f_{2}\cdot f_{4}\cdot k_{1}\,\left(1-4G_{12}\right)G_{12}+(1\leftrightarrow 2)\,, \tag{B.23}\]

\[Q_{\rm spin(34)}^{2}(14;23)+Q_{\rm spin(34)}^{2}(24;13)=\frac{2T}{9}Z_{2}(14)k_{1}\cdot f_{2}\cdot f_{3}\cdot k_{1}\,\left(1-4G_{12}\right)G_{12}+(1\leftrightarrow 2)\,, \tag{B.24}\]

\[Q_{\rm spin(34)}^{2}(34;12)=0, \tag{B.25}\]

\[Q_{\rm spin(34)}^{22}(12,34)=\frac{8}{3}Z_{2}(12)Z_{2}(34)G_{12}\,, \tag{B.26}\]

\[Q_{\rm spin(34)}^{22}(13,24)=\frac{4}{9}Z_{2}(13)Z_{2}(24), \tag{B.27}\]

\[Q_{\rm spin(34)}^{22}(14,23)=\frac{4}{9}Z_{2}(14)Z_{2}(23). \tag{B.28}\]
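Finally, the building-block integrals tabulated in (A.4) and (A.5), which feed directly into the \(Q\)-functions listed above, can be spot-checked in the same spirit. The sketch below assumes the standard worldline Green's function on the unit circle, \(G(u_{1},u_{2})=|u_{1}-u_{2}|-(u_{1}-u_{2})^{2}\) with \(\dot{G}_{12}=\operatorname{sign}(u_{1}-u_{2})-2(u_{1}-u_{2})\), and tests two representative entries at arbitrary points:

```python
# Spot-check of two entries in (A.4)/(A.5), assuming the unit-circle worldline
# Green's function G(u,v) = |u-v| - (u-v)^2, Gdot(u,v) = sign(u-v) - 2(u-v).
import numpy as np
from scipy.integrate import quad

G = lambda u, v: abs(u - v) - (u - v) ** 2
Gd = lambda u, v: np.sign(u - v) - 2.0 * (u - v)

u1, u2 = 0.73, 0.21  # arbitrary test points

# int_0^1 du3 Gdot(u1,u3) Gdot(u3,u2) = 1/6 - Gdot(u1,u2)^2 / 2
lhs, _ = quad(lambda u3: Gd(u1, u3) * Gd(u3, u2), 0, 1, points=[u1, u2])
assert np.isclose(lhs, 1 / 6 - Gd(u1, u2) ** 2 / 2)

# int_0^1 du3 G(u1,u3) G(u3,u2) = 1/30 - G(u1,u2)^2 / 6
lhs, _ = quad(lambda u3: G(u1, u3) * G(u3, u2), 0, 1, points=[u1, u2])
assert np.isclose(lhs, 1 / 30 - G(u1, u2) ** 2 / 6)
print("checks passed")
```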
2305.17284
GC-Flow: A Graph-Based Flow Network for Effective Clustering
Graph convolutional networks (GCNs) are \emph{discriminative models} that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data. While being effective, as a representation learning approach, the node representations extracted from a GCN often miss useful information for effective clustering, because the objectives are different. In this work, we design normalizing flows that replace GCN layers, leading to a \emph{generative model} that models both the class conditional likelihood $p(\mathbf{x}|y)$ and the class prior $p(y)$. The resulting neural network, GC-Flow, retains the graph convolution operations while being equipped with a Gaussian mixture representation space. It enjoys two benefits: it not only maintains the predictive power of GCN, but also produces well-separated clusters, due to the structuring of the representation space. We demonstrate these benefits on a variety of benchmark data sets. Moreover, we show that additional parameterization, such as that on the adjacency matrix used for graph convolutions, yields additional improvement in clustering.
Tianchun Wang, Farzaneh Mirzazadeh, Xiang Zhang, Jie Chen
2023-05-26T22:11:38Z
http://arxiv.org/abs/2305.17284v1
# GC-Flow: A Graph-Based Flow Network for Effective Clustering

###### Abstract

Graph convolutional networks (GCNs) are _discriminative models_ that directly model the class posterior \(p(y|\mathbf{x})\) for semi-supervised classification of graph data. While being effective, as a representation learning approach, the node representations extracted from a GCN often miss useful information for effective clustering, because the objectives are different. In this work, we design normalizing flows that replace GCN layers, leading to a _generative model_ that models both the class conditional likelihood \(p(\mathbf{x}|y)\) and the class prior \(p(y)\). The resulting neural network, GC-Flow, retains the graph convolution operations while being equipped with a Gaussian mixture representation space. It enjoys two benefits: it not only maintains the predictive power of GCN, but also produces well-separated clusters, due to the structuring of the representation space. We demonstrate these benefits on a variety of benchmark data sets. Moreover, we show that additional parameterization, such as that on the adjacency matrix used for graph convolutions, yields additional improvement in clustering.

## 1 Introduction

Semi-supervised learning (Zhu, 2008) refers to the learning of a classification model by using typically a small amount of labeled data with possibly a large amount of unlabeled data. The presence of the unlabeled data, together with additional assumptions (such as the manifold and smoothness assumptions), may significantly improve the accuracy of a classifier learned even with few labeled data. A typical example of such a model in the recent literature is the graph convolutional network (GCN) of Kipf and Welling (2017), which capitalizes on the graph structure (considered as an extension of a discretized manifold) underlying data to achieve effective classification. GCN, together with other pioneering work on parameterized models, has formed a flourishing literature of graph neural networks (GNNs), which excel at node classification (Zhou et al., 2020; Wu et al., 2021).

However, driven by the classification task, GCN and other GNNs may not produce node representations with useful information for goals different from classification. For example, the representations do not cluster well in some cases. Such a phenomenon is of no surprise. For instance, when one treats the penultimate activations as the data representations and uses the last dense layer as a linear classifier, the representations need only be close to linearly separable for an accurate classification; they do not necessarily form well-separated clusters. This observation leads to a natural question: can one build a graph representation model that is effective for not only classification but also clustering?

The answer is affirmative. One idea is to, rather than construct a discriminative model \(p(y|\mathbf{x})\) as all GNNs do, build a generative model \(p(\mathbf{x}|y)p(y)\) whose class conditional likelihood is defined by explicitly modeling the representation space, for example by using a mixture of well-separated unimodal distributions. Indeed, the recently proposed FlowGMM model (Izmailov et al., 2020) uses a normalizing flow to map the distribution of input features to a Gaussian mixture, resulting in well-structured clusters.
This model, however, does not leverage the graph structure. In this work, we present _graph convolutional normalizing flows_ (GC-Flows), a generative model that not only classifies well, but also yields node representations that capture the inherent structure of data, as a result forming high-quality clusters.

We can relate GC-Flows to both GCNs and FlowGMMs. On the one hand, GC-Flows equip each GCN layer with an invertible flow. Such a flow parameterization allows training a model through maximizing the likelihood of data representations being a Gaussian mixture, mitigating the poor clustering effect of GCNs. On the other hand, GC-Flows augment a usual normalizing flow model (such as FlowGMM) that is trained on independent data, with one that incorporates graph convolutions as an inductive bias in the parameterization, boosting the classification accuracy. In Figure 1, we visualize for a graph data set the nodes in the representation space. It suggests that GC-Flow inherits the clustering effect of FlowGMM, while being similarly accurate to GCN for classification.

Standard GNNs' inefficiency in clustering is well recognized in the literature and several efforts exist for improvement (Zhu et al., 2021; Li et al., 2022; Jing et al., 2022; Fettal et al., 2022). These methods are typically loss-driven, through using a clustering loss or a contrastive loss to train the GNN, possibly without using labels. What distinguishes GC-Flow from these efforts is the direct modeling of the representation space, which prior work lacks. Moreover, we directly build a new GNN architecture, beyond using merely the loss to drive the node representations. All of this is made possible by the normalizing flow, a generative and invertible modeling technique that allows density estimation and likelihood training. Normalizing flows are as powerful as feed-forward networks (a component of GCN besides the graph convolution) in terms of expressivity; and their training costs scale similarly to those of feed-forward networks/GCNs. The benefit of GC-Flow is best seen in cluster separation, empirically verified by a comprehensive set of experiments we demonstrate in §5.

We make the following contributions:

1. We develop a generative model GC-Flow, with specification of the class conditional \(p(\mathbf{x}|y)\) and the class prior \(p(y)\). GC-Flow models the representation space (with variables denoted by \(\mathbf{z}\)) by using Gaussian mixture, leading to an anticipated high quality of node clustering.

2. We establish a determinant lemma that reveals the role of the determinant of the adjacency matrix in density estimation, enabling the efficient training of GC-Flow.

3. We demonstrate that empirically, the node representations learned by GC-Flow admit cluster separations severalfold higher under the silhouette score, compared with standard GNNs and those trained by using contrastive losses. We also show that parameterizing the graph convolution operator further improves clustering.

## 2 Related Work

Graph neural networks (GNNs) are machinery for producing node- and graph-level representations, given graph-structured data as input (Zhou et al., 2020; Wu et al., 2021). A popular class of GNNs are message passing neural networks (MPNNs) (Gilmer et al., 2017), which treat information from the neighborhood of a node as messages and recursively update the node representation through aggregating messages and combining the result with the past node representation.
Many popular GNNs can be considered MPNNs, such as GG-NN (Li et al., 2016), GCN (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), and GIN (Xu et al., 2019).

Normalizing flows are invertible neural networks that can transform a data distribution to a typically simple one, such as the normal distribution (Rezende and Mohamed, 2015; Kobyzev et al., 2021; Papamakarios et al., 2021). The invertibility allows estimating densities and sampling new data from the otherwise intractable input distribution. The densities of the two distributions are related by the change-of-variable formula, which involves the Jacobian determinant of the flow. Computing the Jacobian determinant is costly in general; thus, many proposed neural networks exploit constrained structures, such as the triangular pattern of the Jacobian, to reduce the computational cost. Notable examples include NICE (Dinh et al., 2015), IAF (Kingma et al., 2016), MAF (Papamakarios et al., 2017), RealNVP (Dinh et al., 2017), Glow (Kingma and Dhariwal, 2018), and NSF (Durkan et al., 2019). While these network mappings are composed of discrete steps, another class of normalizing flows with continuous mappings have also been developed, which use parameterized versions of differential equations (Chen et al., 2018; Grathwohl et al., 2019).

Normalizing flows can be used for processing or creating graph-structured data in different ways. For example, GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020) are graph generative models that use normalizing flows to generate a graph and its node features. GANF (Dai and Chen, 2022) uses an acyclic directed graph to factorize the joint distribution of time series data and uses the estimated data density to detect anomalies. GNF (Liu et al., 2019) is both a graph generative model and a graph neural network. For the latter functionality, GNF is relevant to our model, but its purpose is to classify rather than to cluster. Furthermore, the architecture of GNF differs from ours in the role the graph plays, incurring no determinant calculation with respect to the graph adjacency matrix (cf. our Lemma 4.1). CGF (Deng et al., 2019) extends the continuous version of normalizing flows to graphs, where the dynamics of the differential equation is parameterized as a message passing layer. The difference between our model and CGF inherits the difference between discrete and continuous flows in how parameterizations transform distributions.

Figure 1: Representation space of the data set Cora under different models, visualized by t-SNE. Coloring indicates ground-truth labeling. Silhouette coefficients measure cluster separation. Micro-F1 scores measure classification accuracy.

For clustering, several graph-based methods were developed based on the use of GNNs for feature extraction. For example, Fettal et al. (2022) use a combination of reconstruction and clustering losses to train the GNN; whereas Zhu et al. (2021); Li et al. (2022); Jing et al. (2022) use contrastive losses. Different from ours, these methods do not model the data (or representation) space with distributions as generative methods do. We empirically compare with several contrastive methods and demonstrate that our model significantly outperforms them in cluster separation.

## 3 Preliminaries

In this section, we review a few key concepts and familiarize the reader with notations to be used throughout the paper.

### Normalizing Flow

Let \(\mathbf{x}\in\mathbb{R}^{D}\) be a \(D\)-dimensional random variable.
A _normalizing flow_ is a vector-valued invertible mapping \(\mathbf{f}(\mathbf{x}):\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) that normalizes the distribution of \(\mathbf{x}\) to some base distribution, whose density is easy to evaluate. Let such a base distribution have density \(\pi(\mathbf{z})\), where \(\mathbf{z}=\mathbf{f}(\mathbf{x})\). With the change-of-variable formula, the density of \(\mathbf{x}\), \(p(\mathbf{x})\), can be computed as \[p(\mathbf{x})=\pi(\mathbf{f}(\mathbf{x}))|\det\nabla\mathbf{f}(\mathbf{x})|, \tag{1}\] where \(\nabla\mathbf{f}\) denotes the Jacobian of \(\mathbf{f}\). In general, such a flow \(\mathbf{f}\) may be the composition of \(T\) constituent flows, all of which are invertible. In notation, we write \(\mathbf{f}=\mathbf{f}_{T}\circ\mathbf{f}_{T-1}\circ\cdots\circ\mathbf{f}_{1}\), where \(\mathbf{f}_{i}(\mathbf{x}^{(i-1)})=\mathbf{x}^{(i)}\) for all \(i\), and \(\mathbf{x}^{(0)}\equiv\mathbf{x}\) and \(\mathbf{x}^{(T)}\equiv\mathbf{z}\). Then, the chain rule expresses the Jacobian determinant as a product of the Jacobian determinants of each constituent flow: \(\det\nabla\mathbf{f}(\mathbf{x})=\prod_{i=1}^{T}\det\nabla\mathbf{f}_{i}( \mathbf{x}^{(i-1)})\). In practical uses, the Jacobian determinant of each constituent flow needs be easy to compute, so that the density \(p(\mathbf{x})\) in (1) can be evaluated. One example that serves such a purpose is the _affine coupling layer_ of Dinh et al. (2017). For notational simplicity, we denote such a coupling layer by \(\mathbf{g}(\mathbf{x})=\mathbf{y}\), which in effect computes \[\mathbf{y}_{1:d} =\mathbf{x}_{1:d},\] \[\mathbf{y}_{d+1:D} =\mathbf{x}_{d+1:D}\odot\exp(\mathbf{s}(\mathbf{x}_{1:d}))+ \mathbf{t}(\mathbf{x}_{1:d}),\] where \(d=\lfloor D/2\rfloor\) and \(\mathbf{s},\mathbf{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D-d}\) are any neural networks. It is simple to see that the Jacobian is a triangular matrix, whose diagonal has value \(1\) in the first \(d\) entries and \(\exp(\mathbf{s})\) in the remaining \(D-d\) entries. Hence, the Jacobian determinant is simply the product of the exponential of the outputs of the \(\mathbf{s}\)-network; that is, \(\det\nabla\mathbf{g}(\mathbf{x})=\prod_{i=1}^{D-d}\exp(s_{i})\). ### Gaussian Mixture and FlowGMM Different from a majority of work that take the base distribution in a normalizing flow to be a single Gaussian, we consider it to be a Gaussian mixture, which naturally induces clustering. Using \(k\) to index mixture components (\(K\) in total), we express the base density \(\pi(\mathbf{z})\) as \[\pi(\mathbf{z}) =\sum_{k=1}^{K}\phi_{k}\mathcal{N}(\mathbf{z};\boldsymbol{\mu}_{ k},\boldsymbol{\Sigma}_{k})\quad\text{with}\] \[\mathcal{N}(\mathbf{z};\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_ {k}) =\frac{\exp(-\frac{1}{2}(\mathbf{z}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{ \Sigma}_{k}^{-1}(\mathbf{z}-\boldsymbol{\mu}_{k}))}{(2\pi)^{D/2}(\det \boldsymbol{\Sigma}_{k})^{1/2}}, \tag{2}\] where \(\phi_{k}\geq 0\) are mixture weights that sum to unity and \(\boldsymbol{\mu}_{k}\) and \(\boldsymbol{\Sigma}_{k}\) are the mean vector and the covariance matrix of the \(k\)-th component, respectively. A broad class of semi-supervised learning models specifies a generative process for each data point \(\mathbf{x}\) through defining \(p(\mathbf{x}|y)p(y)\), where \(p(y)\) is the prior class distribution and \(p(\mathbf{x}|y)\) is the class conditional likelihood for data. 
Then, by Bayes' theorem, the class prediction model \(p(y|\mathbf{x})\) is proportional to \(p(\mathbf{x}|y)p(y)\). Among them, FlowGMM (Izmailov et al., 2020) makes use of the flow transform \(\mathbf{z}=\mathbf{f}(\mathbf{x})\) and defines \(p(\mathbf{x}|y=k)=\mathcal{N}(\mathbf{f}(\mathbf{x});\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})|\det\nabla\mathbf{f}(\mathbf{x})|\) with \(p(y=k)=\phi_{k}\). This definition is valid, because marginalizing over the class variable \(y\), one may verify that \(p(\mathbf{x})=\sum_{y}p(\mathbf{x}|y)p(y)\) is consistent with the density formula (1), when the base distribution follows (2).

### Graph Convolutional Network

The GCNs (Kipf and Welling, 2017) are a class of parameterized neural network models that specify the probability of class \(y\) of a node \(\mathbf{x}\), \(p(y|\mathbf{x})\), collectively for all nodes \(\mathbf{x}\) in a graph, without defining the data generation process as in FlowGMM. To this end, we let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) be the adjacency matrix of the graph, which has \(n\) nodes, and let \(\mathbf{X}=[\mathbf{x}_{1},\cdots,\mathbf{x}_{n}]^{T}\in\mathbb{R}^{n\times D}\) be the input feature matrix, with \(\mathbf{x}_{i}\) being the feature vector for the \(i\)-th node. We further let \(\mathbf{P}\in\mathbb{R}^{n\times K}\) be the output probability matrix, where \(K\) is the number of classes and \(\mathbf{P}_{ik}\equiv p(y=k|\mathbf{x}_{i})\). An \(L\)-layer GCN is written as

\[\mathbf{X}^{(i)}=\sigma_{i}(\widehat{\mathbf{A}}\mathbf{X}^{(i-1)}\mathbf{W}^{(i-1)}),\quad i=1,\ldots,L, \tag{3}\]

where \(\mathbf{X}\equiv\mathbf{X}^{(0)}\) and \(\mathbf{P}\equiv\mathbf{X}^{(L)}\). Here, \(\sigma_{i}\) is an element-wise activation function, such as ReLU, for the intermediate layers \(i<L\), while \(\sigma_{L}\) is the row-wise softmax activation function for the final layer. The matrices \(\mathbf{W}^{(i)}\), \(i=0,\ldots,L-1\), are learnable parameters and \(\widehat{\mathbf{A}}\) denotes a certain normalized version of the adjacency matrix \(\mathbf{A}\). The standard definition of \(\widehat{\mathbf{A}}\) for an undirected graph is \(\widehat{\mathbf{A}}=\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}\), where \(\widetilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) and \(\widetilde{\mathbf{D}}=\operatorname{diag}\left(\sum_{j}\widetilde{\mathbf{A}}_{ij}\right)\), but we note that many other variants of \(\widehat{\mathbf{A}}\) are used in practice as well (such as \(\widehat{\mathbf{A}}=\widetilde{\mathbf{D}}^{-1}\widetilde{\mathbf{A}}\)).

## 4 GC-Flow

We propose _graph convolutional normalizing flow_ (GC-Flow), which extends a usual normalizing flow acting on data points separately to one that acts on all graph nodes collectively. Following the notations used in Section 3.3, starting with \(\mathbf{X}^{(0)}\equiv\mathbf{X}\), where \(\mathbf{X}\) is an \(n\times D\) input feature matrix for all \(n\) nodes in the graph, we define a GC-Flow \(\mathbf{F}(\mathbf{X}):\mathbb{R}^{n\times D}\rightarrow\mathbb{R}^{n\times D}\) that is a composition of \(T\) constituent flows \(\mathbf{F}=\mathbf{F}_{T}\circ\mathbf{F}_{T-1}\circ\cdots\circ\mathbf{F}_{1}\), where each constituent flow \(\mathbf{F}_{i}\) has parameter \(\mathbf{W}^{(i-1)}\) and computes

\[\mathbf{X}^{(i)}=\mathbf{F}_{i}(\underbrace{\widehat{\mathbf{A}}\mathbf{X}^{(i-1)}}_{\widetilde{\mathbf{X}}^{(i)}};\mathbf{W}^{(i-1)}),\quad i=1,\ldots,T.
\tag{4}\]

The final node representation is \(\mathbf{Z}\equiv\mathbf{X}^{(T)}\in\mathbb{R}^{n\times D}\).

### GC-Flow is Both a Generative Model and a GNN

_GC-Flow is a normalizing flow._ Similar to other normalizing flows, each constituent flow preserves the feature dimension; that is, each \(\mathbf{F}_{i}\) is an \(\mathbb{R}^{n\times D}\rightarrow\mathbb{R}^{n\times D}\) function. Furthermore, we let \(\mathbf{F}_{i}\) act on each row of the input argument \(\widetilde{\mathbf{X}}^{(i)}\) separately and identically. In other words, from the functionality perspective, \(\mathbf{F}_{i}\) can be equivalently replaced by some function \(\mathbf{f}_{i}:\mathbb{R}^{1\times D}\rightarrow\mathbb{R}^{1\times D}\) that computes \(\mathbf{x}_{j}^{(i)}=\mathbf{f}_{i}(\widetilde{\mathbf{x}}_{j}^{(i)})\) for a node \(j\). The main difference between GC-Flow and a usual flow is that the input argument of \(\mathbf{f}_{i}\) contains not only the information of node \(j\) but also that of its neighbors. One may consider a usual flow to be a special case of GC-Flows, when \(\widehat{\mathbf{A}}=\mathbf{I}\) (e.g., the graph contains no edges).

_Moreover, GC-Flow is a GNN._ In particular, a constituent flow \(\mathbf{F}_{i}\) of (4) resembles a GCN layer of (3) by making use of graph convolutions--multiplying \(\widehat{\mathbf{A}}\) to the flow/layer input \(\mathbf{X}^{(i-1)}\). When \(\widehat{\mathbf{A}}\) results from the normalization defined by GCN, such a graph convolution approximates a low-pass filter (Kipf & Welling, 2017). In a sense, the GC-Flow architecture is more general than a GCN architecture, because one may interpret the dense layer (represented by the parameter matrix \(\mathbf{W}^{(i-1)}\)) followed by a nonlinear activation \(\sigma_{i}\) in (3) as an example of the constituent flow \(\mathbf{F}_{i}\) in (4). However, such a conceptual connection does not make a GC-Flow and a GCN mathematically equivalent, because \(\mathbf{W}^{(i-1)}\) in GCN is not required to preserve the feature dimension and the ReLU activation has a zero derivative on the negative axis, compromising invertibility. The nearest adjustment to make the two equivalent is perhaps the _Sylvester flow_ (van den Berg et al., 2018), which adds a residual connection and uses an additional parameter matrix \(\mathbf{U}^{(i-1)}\) to preserve the feature dimension:\({}^{1}\) \(\mathbf{X}^{(i)}=\mathbf{X}^{(i-1)}+\sigma_{i}(\widehat{\mathbf{A}}\mathbf{X}^{(i-1)}\mathbf{W}^{(i-1)})\mathbf{U}^{(i-1)}\). However, the Sylvester flow generally has a high computational complexity (Kobyzev et al., 2021) and a more economical flow is instead used as \(\mathbf{F}_{i}\), such as the affine coupling layer in §3.1.

Footnote 1: For notational convenience and consistency with the GNN literature, here we omit the often-used bias term.
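To make the constituent flow (4) concrete, the following is a minimal PyTorch-style sketch of one such block, combining the graph convolution with the affine coupling layer of §3.1. The architecture of the \(\mathbf{s}\)- and \(\mathbf{t}\)-networks (small two-layer MLPs here) is an illustrative assumption, since any neural networks may be used; `A_hat` is assumed to be a precomputed, nonsingular normalized adjacency matrix.

```python
import torch
import torch.nn as nn

class GCFlowBlock(nn.Module):
    """One constituent flow of Eq. (4): Y = F_i(A_hat X), where F_i acts on
    each row via a RealNVP-style affine coupling (Sec. 3.1)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))
        self.s = mlp(dim - self.d)   # log-scale network
        self.t = mlp(dim - self.d)   # translation network

    def forward(self, X, A_hat):
        Xt = A_hat @ X                          # graph convolution, shared by all rows
        x1, x2 = Xt[:, :self.d], Xt[:, self.d:]
        s = self.s(x1)
        y2 = x2 * torch.exp(s) + self.t(x1)     # row-wise (per-node) coupling
        # Triangular Jacobian: per-node log|det grad f_i| is simply sum(s).
        # By Lemma 4.1 (stated in Sec. 4.3 below), the A_hat-multiplication adds
        # D * log|det A_hat|, a constant for fixed A_hat, so it is not tracked here.
        return torch.cat([x1, y2], dim=1), s.sum(dim=1)
```

Stacking \(T\) such blocks (alternating which half of the coordinates is transformed, so that all coordinates get updated) and accumulating the returned per-node terms yields the \(\sum_{j}\log|\det\nabla\mathbf{f}_{j}(\widetilde{\mathbf{x}}_{i}^{(j)})|\) needed for the density computations that follow.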
### Training Objective

A major distinction between GC-Flow and a usual GNN lies in the training objective. To encourage a good clustering structure of the representation \(\mathbf{Z}\), we use a maximum-likelihood kind of objective for all graph nodes, because it is equivalent to maximizing the likelihood that \(\mathbf{Z}\) forms a Gaussian mixture:

\[\max\ \mathcal{L}:=\frac{1-\lambda}{|\mathcal{D}_{l}|}\sum_{(\mathbf{x},y=k)\in\mathcal{D}_{l}}\log p(\mathbf{x},y=k)+\frac{\lambda}{|\mathcal{D}_{u}|}\sum_{\mathbf{x}\in\mathcal{D}_{u}}\log p(\mathbf{x}), \tag{5}\]

where \(\mathcal{D}_{l}\) denotes the set of labeled nodes, \(\mathcal{D}_{u}\) denotes the set of unlabeled nodes, and \(\lambda\in(0,1)\) is a tunable hyperparameter balancing labeled and unlabeled information.

It is useful to compare \(\mathcal{L}\) with the usual (negative) cross-entropy loss for training GNNs. First, for training a usual GNN, no loss is incurred on the unlabeled nodes, because their likelihoods are not modeled. Second, for a labeled node \(\mathbf{x}\) with true label \(k\), the negative cross-entropy is \(\log p(y=k|\mathbf{x})\), while the likelihood term over labeled data in (5) is a joint probability of \(\mathbf{x}\) and \(y\): \(\log p(\mathbf{x},y=k)=\log\ p(y=k|\mathbf{x})\ +\ \log p(\mathbf{x})\).

Fundamentally, GC-Flow belongs to the class of generative classification models, while GNNs belong to the class of discriminative models. Under the Bayesian paradigm, the former models the class prior and the class conditional likelihood, while the latter models only the posterior. In what follows, we will define the proposed probability model \(p(\mathbf{x}|y)p(y)\) for a node \(\mathbf{x}\), so that the loss can be computed and the label \(y\) can be predicted via \(\operatorname*{argmax}_{k}p(y=k|\mathbf{x})\). We first need an important lemma on the Jacobian determinant when a graph convolution is involved in the flow.

### Determinant Lemma

The Jacobian determinant of each constituent flow \(\mathbf{F}_{i}\) defined in (4) is needed for training a GC-Flow. The Jacobian is an \(nD\times nD\) matrix, but it admits a special block structure that allows the determinant to be computed as a product of determinants on \(D\) matrices of size \(n\times n\), after rearrangement of the QR factorization factors of the Jacobians of \(\mathbf{f}_{i}\). The following lemma summarizes this finding; the proof is given in Appendix C. For notational convenience, we remove the flow index and use \(\mathbf{G}\) to denote a generic constituent flow.

**Lemma 4.1**.: _Let \(\mathbf{X}\in\mathbb{R}^{n\times D}\) and \(\widehat{\mathbf{A}}\in\mathbb{R}^{n\times n}\). Let \(\mathbf{Y}=\mathbf{G}(\widetilde{\mathbf{X}})\), where \(\widetilde{\mathbf{X}}\equiv\widehat{\mathbf{A}}\mathbf{X}\) and \(\mathbf{G}:\mathbb{R}^{n\times D}\to\mathbb{R}^{n\times D}\) acts on each row of the input matrix independently and identically. Let \(\mathbf{g}:\mathbb{R}^{D}\to\mathbb{R}^{D}\) be functionally equivalent to \(\mathbf{G}\); that is, \(\mathbf{y}_{i}=\mathbf{g}(\widetilde{\mathbf{x}}_{i})\) where \(\mathbf{y}_{i}\) and \(\widetilde{\mathbf{x}}_{i}\) are the \(i\)-th row of \(\mathbf{Y}\) and \(\widetilde{\mathbf{X}}\), respectively._
Then, \(\left|\det\left(\frac{d\mathbf{Y}}{d\mathbf{X}}\right)\right|=|\det\widehat{\mathbf{A}}|^{D}\prod_{i=1}^{n}|\det\nabla\mathbf{g}(\widetilde{\mathbf{x}}_{i})|\)._

Putting back the flow index, the above lemma suggests that, by the chain rule, the Jacobian determinant of the entire GC-Flow \(\mathbf{F}\) is

\[|\det\nabla\mathbf{F}(\mathbf{X})|=|\det\widehat{\mathbf{A}}|^{TD}\prod_{j=1}^{T}\prod_{i=1}^{n}|\det\nabla\mathbf{f}_{j}(\widetilde{\mathbf{x}}_{i}^{(j)})|. \tag{6}\]

Note that to maintain invertibility of the flow, the matrix \(\widehat{\mathbf{A}}\) must be nonsingular. We next define the probability model for GC-Flow based on equality (6).

### Probability Model

Different from a usual normalizing flow, where the representation \(\mathbf{z}_{i}\) for the \(i\)-th data point depends on its input feature vector \(\mathbf{x}_{i}\), in a GC-Flow, \(\mathbf{z}_{i}\) depends on (a possibly substantial portion of) the entire node set \(\mathbf{X}\), because of the \(\widehat{\mathbf{A}}\)-multiplication. To this end, we use \(p(\mathbf{X})\) and \(\pi(\mathbf{Z})\) to denote the joint distribution of the node feature vectors and that of the representations, respectively. We still have, by the change-of-variable formula,

\[p(\mathbf{X})=\pi(\mathbf{Z})|\det\nabla\mathbf{F}(\mathbf{X})|, \tag{7}\]

where the Jacobian determinant has been derived in (6). Under the freedom of modeling and for convenience, we opt to let \(\pi(\mathbf{Z})\) be expressed as \(\pi(\mathbf{Z})=\pi(\mathbf{z}_{1})\pi(\mathbf{z}_{2})\cdots\pi(\mathbf{z}_{n})\), where each \(\pi(\mathbf{z}_{i})\) is an independent and identically distributed Gaussian mixture (2). Similarly, we assume the nodes to be independent to start with; that is, \(p(\mathbf{X})=p(\mathbf{x}_{1})p(\mathbf{x}_{2})\cdots p(\mathbf{x}_{n})\).

For generative modeling, a task is to model the class prior \(p(y)\) and the class conditional likelihood \(p(\mathbf{x}|y)\), such that the posterior prediction model \(p(y|\mathbf{x})\) can be easily obtained as proportional to \(p(\mathbf{x}|y)p(y)\), by Bayes' theorem. To this end, we define

\[\begin{split} p(\mathbf{x}_{i}|y_{i}=k)&:=\mathcal{N}(\mathbf{z}_{i};\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})|\det\widehat{\mathbf{A}}|^{TD/n}\times\prod_{j=1}^{T}|\det\nabla\mathbf{f}_{j}(\widetilde{\mathbf{x}}_{i}^{(j)})|\\ p(y_{i}=k)&:=\phi_{k}. \end{split} \tag{8}\]

Such a definition is self-consistent. First, marginalizing over the label \(y_{i}\) and using the Gaussian mixture definition (2) for \(\pi(\mathbf{z}_{i})\), we obtain the marginal likelihood

\[p(\mathbf{x}_{i})=\pi(\mathbf{z}_{i})|\det\widehat{\mathbf{A}}|^{TD/n}\prod_{j=1}^{T}|\det\nabla\mathbf{f}_{j}(\widetilde{\mathbf{x}}_{i}^{(j)})|. \tag{9}\]

Then, by the modeling of \(\pi(\mathbf{Z})\) and \(p(\mathbf{X})\), taking the product for all nodes and using the Jacobian determinant formula derived in (6), we exactly recover the density formula (7). We will use (8) and (9) to compute the labeled part and the unlabeled part of the loss (5), respectively.
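As an illustration of how (5), (8), and (9) fit together computationally, here is a hedged sketch of the objective in PyTorch. It assumes diagonal covariance matrices \(\boldsymbol{\Sigma}_{k}\) for brevity (the model permits full covariances) and drops the constant \(|\det\widehat{\mathbf{A}}|^{TD/n}\) factor, which does not affect optimization when \(\widehat{\mathbf{A}}\) is fixed (cf. §4.6).

```python
import math
import torch

def gc_flow_loss(Z, logdet, y, labeled, mu, log_var, log_phi, lam=0.5):
    """Semi-supervised objective (5) built from (8) and (9).

    Z:       (n, D) flow outputs z_i;  logdet: (n,) summed log|det grad f_j| per node
    y:       (n,) class labels (used where `labeled` is True)
    mu:      (K, D) component means;   log_var: (K, D) diagonal log-covariances
    log_phi: (K,) log mixture weights. The constant TD/n * log|det A_hat| of
    (8)-(9) is dropped: it is irrelevant to training for a fixed A_hat (Sec. 4.6).
    """
    n, D = Z.shape
    diff2 = (Z.unsqueeze(1) - mu.unsqueeze(0)) ** 2                  # (n, K, D)
    log_gauss = -0.5 * ((diff2 / log_var.exp() + log_var).sum(-1)
                        + D * math.log(2 * math.pi))                 # log N(z_i; mu_k, Sigma_k)
    log_joint = log_phi + log_gauss + logdet.unsqueeze(1)            # log p(x_i, y_i=k), Eq. (8)
    log_marginal = torch.logsumexp(log_joint, dim=1)                 # log p(x_i), Eq. (9)
    sup = log_joint[labeled, y[labeled]].mean()                      # labeled term of (5)
    unsup = log_marginal[~labeled].mean()                            # unlabeled term of (5)
    return -((1 - lam) * sup + lam * unsup)                          # maximize L = minimize -L
```

At prediction time, `log_joint.argmax(dim=1)` realizes \(\operatorname*{argmax}_{k}p(y=k|\mathbf{x})\), since the per-node factors common to all classes cancel.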
However, nothing prevents the convolution results from being independent, just as a usual normalizing flow can decorrelate the input features and make each transformed feature independent when a standard normal output distribution is postulated. It is precisely this independence of the \(\mathbf{z}_{i}\)'s that makes it possible to find the most probable GC-Flow. ### Training Costs Despite inheriting the generative characteristics of FlowGMMs (including the training loss), GC-Flow is by nature a GNN, because the graph convolution operation (\(\widehat{\mathbf{A}}\)-multiplication) involves a node's neighbor set when computing the output of a constituent flow for this node. Due to space limitations, we discuss the complications of training and inference owing to neighborhood explosion in Appendix D; these discussions share great similarities with the GNN case. Additionally, we compare the full-batch training costs of GC-Flow and GCN in Appendix E, which suggests that they are comparable and admit the same scaling behavior. ### Variants and Improvement So far, we have treated \(\widehat{\mathbf{A}}\) as the normalization of the graph adjacency matrix \(\mathbf{A}\) defined by GCN (see §3.3). One convenience of doing so is that \(\det\widehat{\mathbf{A}}\) is a constant and can be safely omitted in the loss calculation. One may improve the quality of GC-Flow by introducing parameterizations of \(\widehat{\mathbf{A}}\). One approach, which we call GC-Flow-p, is to parameterize the edge weights. This approach is similar to GAT (Velickovic et al., 2018), which uses attention weights to redefine \(\widehat{\mathbf{A}}\). Another approach, which we call GC-Flow-l, is to learn \(\widehat{\mathbf{A}}\) in its entirety without resorting to the (possibly unknown) graph structure. For this purpose, several approaches have been developed; see, e.g., Franceschi et al. (2019); Wu et al. (2020); Shang et al. (2021); Fatemi et al. (2021); Dai and Chen (2022). In a later experiment, we will give examples for GC-Flow-p and GC-Flow-l (see Appendix F for details) and investigate the performance improvement over GC-Flow. Note that the parameterization may lead to a different \(\widehat{\mathbf{A}}\) for each constituent flow. See the same appendix for the simple adaptation of the mathematical details. ### What is GC-Flow Good for? GC-Flow is designed to augment the representation quality of GCN, with an emphasis on clustering. GC-Flow achieves this by using a Gaussian mixture representation space, which offers interpretability that is otherwise absent in the vanilla form of GCN. From the derivation, we have seen that GC-Flow shares many similarities with GCN (§4.1), but the use of invertible flows in place of the feed-forward layers of GCN induces a probability model that allows training the feature transformations toward separate Gaussian clusters (§4.2-§4.4). Just as feed-forward layers can compose a universal approximator, so can flows; hence no expressive power is sacrificed, nor extra learning cost incurred (§4.5). Moreover, one may improve the practical performance of GC-Flows in a manner similar to improving GCNs, by parameterizing the convolution operator \(\widehat{\mathbf{A}}\) (§4.6). ## 5 Experiments In this section, we conduct a comprehensive set of experiments to evaluate the performance of GC-Flow on graph data and demonstrate that it is competitive with GNNs for classification, while being advantageous in learning representations that extract the clustering structure of the data.
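Before turning to the data sets, a minimal NumPy sketch may help make the training objective concrete. It evaluates the loss (5) from the flow outputs \(\mathbf{z}_{i}\), the per-node log-determinant terms of (8)-(9), and the Gaussian mixture parameters. The array names and the helper function are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gcflow_loss(Z, log_det, phi, mus, Sigmas, y, labeled, lam=0.5):
    """Evaluate the objective (5) from the probability model (8)-(9).

    Z       : (n, D) flow outputs z_i = F(X)_i
    log_det : (n,)  per-node log-determinant terms, i.e.
              (T*D/n)*log|det(A_hat)| + sum_j log|det grad f_j|
    phi     : (K,)  class priors;  mus: (K, D);  Sigmas: (K, D, D)
    y       : (n,)  integer labels (used only where `labeled` is True)
    """
    K = len(phi)
    # log N(z_i; mu_k, Sigma_k) for all nodes i and classes k -> (n, K)
    log_gauss = np.stack([multivariate_normal.logpdf(Z, mus[k], Sigmas[k])
                          for k in range(K)], axis=1)
    # (8): log p(x_i, y_i=k) = log phi_k + log N(z_i; mu_k, Sigma_k) + log-dets
    log_joint = np.log(phi)[None, :] + log_gauss + log_det[:, None]
    # (9): log p(x_i) = logsumexp over k of log p(x_i, y_i=k)
    log_marg = logsumexp(log_joint, axis=1)
    labeled_term = log_joint[labeled, y[labeled]].mean()
    unlabeled_term = log_marg[~labeled].mean()
    return (1 - lam) * labeled_term + lam * unlabeled_term  # to be maximized
```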
**Data sets.** We use six benchmark GNN data sets. Data sets **Cora**, **Citeseer**, and **Pubmed** are citation graphs, where each node is a document and each edge represents the citation relation between two documents. We follow the predefined splits in Kipf and Welling (2017). Data sets **Computers** and **Photo** are subgraphs of the Amazon co-purchase graph (McAuley et al., 2015). They do not have a predefined split. We randomly sample 200/1300/1000 nodes for training/validation/testing for Computers and 80/620/1000 for Photo. The data set **Wiki-CS** is a web graph where nodes are Wikipedia articles and edges are hyperlinks (Mernyei and Cangea, 2020). We use one of the predefined splits. For statistics of the data sets, see Table 5 in Appendix G. **Baselines.** We compare GC-Flow with both discriminative and generative models. For discriminative models, we use three widely used GNNs: GCN, GraphSAGE, and GAT. For generative models, besides FlowGMM, we use the basic Gaussian mixture model (GMM). Note that GMM is not parameterized; it takes either the node features \(\mathbf{X}\) or the graph-transformed features \(\widehat{\mathbf{X}}=\widehat{\mathbf{A}}\mathbf{X}\) as input. **Metrics.** For measuring classification quality, we use the standard micro-averaged F1 score. For evaluating clustering, we mainly use the silhouette coefficient. This metric does not require ground-truth cluster labels. We additionally use NMI (normalized mutual information) and ARI (adjusted rand index) to measure clustering quality when ground truths are known. Note that these metrics evaluate different aspects of the result: silhouette coefficient measures the separation of clusters, NMI measures agreement between the cluster assignment and the label assignment, while ARI measures the similarity of the two assignments. Most of the existing works use the latter two for evaluation, but the first one is more practical because of the decoupling with labeling ground truths. As will be seen, our method is particularly attractive under this metric. **Implementation details and hyperparameter information** may be found in Appendix G. **Classification and clustering performance.** Table 1 lists the F1 scores and the silhouette coefficients for all data sets and all compared models. We observe that GNNs are always better than GMMs for classification; while the flow version of GMM, FlowGMM, beats all GNNs on cluster separation. Our model, GC-Flow, is competitive with the better of the two and is always the best or the second best. When being the best, some of the improvements are rather substantial, such as the F1 score for Computers and the silhouette coefficient for Wiki-CS. It is interesting to note that the basic GMMs perform rather poorly. This phenomenon is not surprising, because without any neural network parameterization, they cannot compete with other models that allow learnable feature transformations to encourage class separation or cluster separation. **Comparison with more clustering methods.** To further illustrate the clustering quality of GC-Flow, we compare it with several contrastive-based methods that produce competitive clusterings: DGI (Velickovic et al., 2019), GRACE (Zhu et al., 2020), GCA (Zhu et al., 2021), GraphCL (You et al., 2020), and MVGRL (Hassani and Khasahmadi, 2020); as well as an unsupervised VAE approach R-GMM-VGAE (Mrabah et al., 2022). Table 2 lists the results for Cora and Table 6 in Appendix H includes more data sets. 
For Cora, GC-Flow delivers the best performance on all metrics, with a silhouette score nearly double of the second best. Compared with NMI and ARI, silhouette is a metric that takes no knowledge of the ground truth but measures solely the cluster separation in space. This result suggests that the clusters obtained from GC-Flow are more structurally separated, albeit improving less the cluster agreement. **Training behavior.** Figure 2 plots the convergence behavior of the training loss for FlowGMM, GCN, and GC-Flow. The loss for GCN is the cross-entropy while that for the other two is the likelihood. All methods converge smoothly, with GCN reaching the plateau earlier, while FlowGMM and GC-Flow converge at a rather similar speed. **Visualization of the representation space.** To complement the numerical metrics, we visualize the representation space of FlowGMM, GCN, and GC-Flow by using a t-SNE plot (Van der Maaten & Hinton, 2008), for qualitative evaluation. The representations for FlowGMM and GC-Flow are the \(\mathbf{z}_{i}\)'s, while those for GCN are extracted from the penultimate activations. The results for Cora are given earlier in Figure 1; we additionally give the results for Pubmed in Figure 3. From both figures, one sees that similar to FlowGMM, GC-Flow exhibits a better clustering structure than does GCN, which produces little separation for the data. More visualizations are provided in Appendix H. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Cora**} & \multicolumn{2}{c|}{**Citeseer**} & \multicolumn{2}{c}{**Pubmed**} \\ & Silhouette & Micro-F1 & Silhouette & Micro-F1 & Silhouette & Micro-F1 \\ \hline FlowGMM & **0.739 \(\pm\) 0.015** & 0.504 \(\pm\) 0.021 & **0.609 \(\pm\) 0.034** & 0.512 \(\pm\) 0.044 & **0.653 \(\pm\) 0.031** & 0.734 \(\pm\) 0.014 \\ GMM on **X** & 0.162 \(\pm\) 0.000 & 0.163 \(\pm\) 0.000 & 0.071 \(\pm\) 0.000 & 0.085 \(\pm\) 0.000 & 0.062 \(\pm\) 0.000 & 0.581 \(\pm\) 0.000 \\ GMM on **A**X & 0.144 \(\pm\) 0.000 & 0.173 \(\pm\) 0.000 & 0.089 \(\pm\) 0.000 & 0.182 \(\pm\) 0.000 & 0.183 \(\pm\) 0.000 & 0.411 \(\pm\) 0.000 \\ \hline GCN & 0.340 \(\pm\) 0.003 & 0.813 \(\pm\) 0.007 & 0.314 \(\pm\) 0.016 & 0.700 \(\pm\) 0.025 & 0.453 \(\pm\) 0.006 & **0.791 \(\pm\) 0.004** \\ GraphSAGE & 0.346 \(\pm\) 0.004 & 0.801 \(\pm\) 0.005 & 0.278 \(\pm\) 0.007 & 0.697 \(\pm\) 0.007 & 0.440 \(\pm\) 0.018 & 0.769 \(\pm\) 0.011 \\ GAT & 0.383 \(\pm\) 0.003 & **0.825 \(\pm\) 0.005** & 0.304 \(\pm\) 0.003 & **0.702 \(\pm\) 0.007** & 0.435 \(\pm\) 0.010 & 0.774 \(\pm\) 0.005 \\ \hline GC-Flow & **0.734 \(\pm\) 0.006** & **0.815 \(\pm\) 0.011** & **0.538 \(\pm\) 0.022** & **0.714 \(\pm\) 0.011** & **0.669 \(\pm\) 0.021** & **0.791 \(\pm\) 0.009** \\ \hline \hline & \multicolumn{2}{c|}{**Computers**} & \multicolumn{2}{c|}{**Photo**} & \multicolumn{2}{c}{**Wiki-CS**} \\ & Silhouette & Micro-F1 & Silhouette & Micro-F1 & Silhouette & Micro-F1 \\ \hline FlowGMM & **0.540 \(\pm\) 0.024** & 0.614 \(\pm\) 0.026 & **0.704 \(\pm\) 0.027** & 0.599 \(\pm\) 0.089 & **0.677 \(\pm\) 0.011** & 0.671 \(\pm\) 0.011 \\ GMM on **X** & -0.018 \(\pm\) 0.00 & 0.102 \(\pm\) 0.000 & -0.024 \(\pm\) 0.00 & 0.120 \(\pm\) 0.000 & 0.088 \(\pm\) 0.000 & 0.124 \(\pm\) 0.000 \\ GMM on **A**X & -0.021 \(\pm\) 0.00 & 0.062 \(\pm\) 0.000 & -0.041 \(\pm\) 0.00 & 0.098 \(\pm\) 0.000 & 0.026 \(\pm\) 0.000 & 0.188 \(\pm\) 0.000 \\ \hline GCN & 0.357 \(\pm\) 0.026 & 0.812 \(\pm\) 0.016 & 0.388 \(\pm\) 0.003 & 0.891 \(\pm\) 0.012 & 0.264 \(\pm\) 0.005 & **0.775 \(\pm\) 0.005** \\ 
GraphSAGE & 0.434 \(\pm\) 0.030 & 0.761 \(\pm\) 0.024 & 0.386 \(\pm\) 0.007 & 0.839 \(\pm\) 0.020 & 0.233 \(\pm\) 0.009 & 0.771 \(\pm\) 0.003 \\ GAT & 0.431 \(\pm\) 0.015 & **0.814 \(\pm\) 0.023** & 0.425 \(\pm\) 0.020 & **0.900 \(\pm\) 0.009** & 0.278 \(\pm\) 0.008 & 0.773 \(\pm\) 0.003 \\ \hline GC-Flow & **0.487 \(\pm\) 0.012** & **0.847 \(\pm\) 0.007** & **0.655 \(\pm\) 0.013** & **0.917 \(\pm\) 0.004** & **0.717 \(\pm\) 0.010** & **0.775 \(\pm\) 0.002** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of GMM-based generative models, GNN-based discriminative models, and GC-Flow for semi-supervised clustering and classification. Standard deviations are obtained with ten repetitions. For each data set and metric, the two best cases are boldfaced. \begin{table} \begin{tabular}{c|c c c} \hline \hline & NMI & ARI & Silhouette \\ \hline DGI & 0.592 \(\pm\) 0.001 & 0.570 \(\pm\) 0.002 & 0.330 \(\pm\) 0.000 \\ GRACE & 0.475 \(\pm\) 0.028 & 0.394 \(\pm\) 0.047 & 0.153 \(\pm\) 0.011 \\ GCA & 0.418 \(\pm\) 0.053 & 0.259 \(\pm\) 0.058 & 0.301 \(\pm\) 0.005 \\ GraphCL & 0.577 \(\pm\) 0.002 & 0.482 \(\pm\) 0.003 & 0.297 \(\pm\) 0.002 \\ MVGRL & **0.612 \(\pm\) 0.026** & **0.576 \(\pm\) 0.062** & 0.369 \(\pm\) 0.013 \\ R-GMM-VGAE & 0.559 \(\pm\) 0.006 & 0.557 \(\pm\) 0.009 & **0.430 \(\pm\) 0.013** \\ GC-Flow & **0.621 \(\pm\) 0.013** & **0.631 \(\pm\) 0.008** & **0.734 \(\pm\) 0.006** \\ \hline \hline \end{tabular} \end{table} Table 2: Clustering performance of various GNN methods. The two best cases are boldfaced. Data set: Cora. Figure 3: Representation space of the data set Wiki-CS under different models. Figure 2: Convergence of the training loss (Cora). **Analysis on depth.** Figure 5 plots the performance of FlowGMM, GCN, and GC-Flow as the number of layers/flows increases. One sees that the classification performance of GCN deteriorates with more layers, in agreement with the well-known oversmoothing phenomenon (Li et al., 2018), while the clustering performance is generally stable. On the other hand, the classification performance of FlowGMM and GC-Flow does not show a unique pattern: for Cora, it degrades, while for Pubmed, it stabilizes. The clustering performance of FlowGMM and GC-Flow generally degrades, except for the curious case of GC-Flow on Cora, where the silhouette coefficient shows a V-shape. Overall, a smaller depth is preferred for all models. **Analysis on labeling rate.** Figure 5 plots the performance of FlowGMM, GCN, and GC-Flow as the number of training labels per class increases. One sees that for all models, the performance generally improves with more labeled data. The improvement is more steady and noticeable for classification, while being less significant for clustering. Additionally, GC-Flow classifies significantly better than does GCN at the low-labeling rate regime, achieving a 10.04% relative improvement in the F1 score when there are only two labeled nodes per class. **Improving performance with additional parameterization.** We experiment with two variants of GC-Flow by introducing parameterizations to \(\widehat{\mathbf{A}}\). The variant GC-Flow-p uses an idea similar to GAT, through computing an additive attention on the graph edges to redefine their weights. Another variant GC-Flow-l also computes weights, but rather than using them to define \(\widehat{\mathbf{A}}\), it treats each weight as a probability of edge presence and samples the corresponding Bernoulli distribution to obtain a binary sample \(\widehat{\mathbf{A}}\). 
The details are given in Appendix F. Table 3 lists the performance of GC-Flow-p and GC-Flow-l on three selected data sets, where the improvement over GC-Flow is notable. The improvement predominantly appears for clustering, with the most striking increase from 0.487 to 0.706. The increase in silhouette coefficient generally comes with a marginal decrease in the F1 score, but the decrease is within one standard deviation. In one case (Computers), the F1 score even increases, though only marginally. ## 6 Conclusions We have developed a generative GNN model which, rather than directly computing the class posterior \(p(y|\mathbf{x})\), computes the class conditional likelihood \(p(\mathbf{x}|y)\) and applies Bayes' rule together with the class prior \(p(y)\) for prediction. A benefit of such a model is that one may control the representation of data (e.g., a clustering structure) through modeling the representation distribution (e.g., optimizing it toward a mixture of well-separated unimodal distributions). We achieve this by designing the GNN as a normalizing flow that incorporates graph convolutions. Interestingly, the adjacency matrix appears in the density computation of the normalizing flow as a stand-alone term, which can be ignored if it is a constant, or easily optimized if it is parameterized. We demonstrate that the proposed model not only maintains the predictive power of past GNNs, but also produces high-quality clusters in the representation space.
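As a practical footnote to the evaluation protocol of §5, the three clustering metrics are available in scikit-learn. The sketch below is illustrative (array names are placeholders); deriving the cluster assignment with k-means on the representations is one reasonable protocol choice among several.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             normalized_mutual_info_score,
                             adjusted_rand_score)

def evaluate_clustering(Z, y_true, n_classes, seed=0):
    """Z: (n, D) learned representations; y_true: (n,) ground-truth labels."""
    y_pred = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(Z)
    return {
        "silhouette": silhouette_score(Z, y_pred),  # needs no ground truth
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "ARI": adjusted_rand_score(y_true, y_pred),
    }
```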
2301.05377
Dephasing of ultracold cesium $80D_{5/2}$-Rydberg Electromagnetically Induced Transparency
We study Rydberg electromagnetically induced transparency (EIT) of a cascade three-level atom involving 80$D_{5/2}$ state in a strong interaction regime employing a cesium ultracold cloud. In our experiment, a strong coupling laser couples 6$P_{3/2}$ to 80$D_{5/2}$ transition, while a weak probe, driving 6$S_{1/2}$ to 6$P_{3/2}$ transition, probes the coupling induced EIT signal. At the two-photon resonance, we observe that the EIT transmission decreases slowly with time, which is a signature of interaction induced metastability. The dephasing rate $\gamma_{\rm OD}$ is extracted with optical depth OD = $\gamma_{\rm OD}t$. We find that the optical depth linearly increases with time at onset for a fixed probe incident photon number $R_{\rm in}$ before saturation. The dephasing rate shows a nonlinear dependence on $R_{\rm in}$. The dephasing mechanism is mainly attributed to the strong dipole-dipole interactions, which leads to state transfer from $nD_{5/2}$ to other Rydberg states. We demonstrate that the typical transfer time $\tau_{0(80D)}$ obtained by the state selective field ionization technique is comparable with the decay time of EIT transmission $\tau_{0({\rm EIT})}$. The presented experiment provides a useful tool for investigating the strong nonlinear optical effects and metastable state in Rydberg many-body systems.
Yuechun Jiao, Liping Hao, Jingxu Bai, Jiabei Fan, Zhengyang Bai, Weibin Li, Jianming Zhao, Suotang Jia
2023-01-13T03:24:20Z
http://arxiv.org/abs/2301.05377v1
# Dephasing of ultracold cesium \(80D_{5/2}\)-Rydberg Electromagnetically Induced Transparency ###### Abstract We study Rydberg electromagnetically induced transparency (EIT) of a cascade three-level atom involving \(80D_{5/2}\) state in a strong interaction regime employing a cesium ultracold cloud. In our experiment, a strong coupling laser couples \(6P_{3/2}\) to \(80D_{5/2}\) transition, while a weak probe, driving \(6S_{1/2}\) to \(6P_{3/2}\) transition, probes the coupling induced EIT signal. At the two-photon resonance, we observe that the EIT transmission decreases slowly with time, which is a signature of interaction induced metastability. The dephasing rate \(\gamma_{\mathrm{OD}}\) is extracted with optical depth \(\mathrm{OD}=\gamma_{\mathrm{OD}}t\). We find that the optical depth linearly increases with time at onset for a fixed probe incident photon number \(R_{\mathrm{in}}\) before saturation. The dephasing rate shows a nonlinear dependence on \(R_{\mathrm{in}}\). The dephasing mechanism is mainly attributed to the strong dipole-dipole interactions, which leads to state transfer from \(nD_{5/2}\) to other Rydberg states. We demonstrate that the typical transfer time \(\tau_{0(80D)}\) obtained by the state selective field ionization technique is comparable with the decay time of EIT transmission \(\tau_{0(\mathrm{EIT})}\). The presented experiment provides a useful tool for investigating the strong nonlinear optical effects and metastable state in Rydberg many-body systems. ## I Introduction Due to their strong interactions (\(\propto n^{11}\), with \(n\) the principal quantum number) [1], Rydberg atoms provide an ideal platform for implementing quantum information processing and quantum simulation [2; 3; 4; 5] and for investigating interaction-induced cooperative optical nonlinearities [6]. These nonlinear optical effects are induced by Rydberg-atom interactions [7; 8], i.e., van der Waals (vdW) interactions [9; 10; 11] and dipole-dipole interactions [12; 13]. Strong atomic interactions can be effectively mapped onto photon-photon interactions via electromagnetically induced transparency (EIT) [6], and, due to the cooperative effects, the optical nonlinearity can be greatly enhanced [6; 14; 15; 16]. Based on this, Rydberg EIT experiments have been employed to measure radio-frequency electric fields [17] with a room-temperature cell, and to realize few-photon optical nonlinearities [18; 19; 20; 21] with ultracold samples, such as efficient single-photon generation [18], entanglement generation between light and atomic excitations [20], single-photon switches [19; 16; 22] and transistors [23; 21; 24]. The resonant dipole-dipole interaction between two individual Rydberg atoms [25] is angle dependent. Recently, this anisotropic Rydberg interaction has been exploited to investigate Rydberg polaritons using Rydberg \(nD\) states [26]. In this work, we present a Rydberg EIT spectrum of a cascade three-level cesium atom involving the \(80D_{5/2}\) state in a dipole trap. Under the two-photon resonance condition, we observe that the EIT transmission decreases slowly with time, which could be a signature of interaction-induced metastability [27; 28]. Owing to the large dipole matrix elements, cesium \(nD\)-state atoms interact strongly with the energetically close \((n+1)P\) state. We find that the strong dipole interactions between the \(nD\) and \((n+1)P\) states lead to a fast decay from \(nD\) to \((n+1)P\) and to dephasing of the transmission spectrum.
A theoretical model is built to understand the EIT dephasing mechanism, and the nonlinear dependence of the dephasing rate is also investigated. The remainder of the article is arranged as follows. In Sec. II, we introduce our experimental setup. In Sec. III, we present the Rydberg EIT spectrum and its dephasing experimentally. In Sec. IV, we present a simple theoretical model to reveal the EIT dephasing mechanism. In Sec. V, we measure the fast decay of the \(80D_{5/2}\) state. Finally, we summarize the main results obtained in this work. ## II Experimental setup Our experiment is performed in a cesium magneto-optical trap (MOT) with an optical dipole trap (ODT). The beam waist of the dipole trap is \(45\ \mu\)m. A schematic of the relevant levels and of the experimental setup is shown in Fig. 1 (a) and (b). The three-level system, shown in Fig. 1(a), consists of a ground state \(|6S_{1/2},F=4\rangle\) (\(|g\rangle\)), an intermediate state \(|6P_{3/2},F^{\prime}=5\rangle\) (\(|e\rangle\)) and a Rydberg state \(|80D_{5/2}\rangle\) (\(|r\rangle\)). A weak probe beam (Rabi frequency \(\Omega_{p}\); 852-nm laser with a 100-kHz linewidth), provided by an external cavity diode laser (Toptica, DL-pro), drives the lower transition; its frequency is stabilized to the \(|6S_{1/2},F=4\rangle\rightarrow|6P_{3/2},F^{\prime}=5\rangle\) transition using the polarization spectroscopy method. The coupling beam (Rabi frequency \(\Omega_{c}\)), provided by a commercial laser (Toptica, TA-SHG110) with a 1-MHz linewidth, drives the Rydberg transition, \(|6P_{3/2},F^{\prime}=5\rangle\rightarrow|80D_{5/2}\rangle\). The coupling laser frequency is stabilized to the Rydberg transition using a Rydberg EIT reference signal obtained from a cesium room-temperature vapor cell [29]. The weak probe laser and the strong coupling laser, with respective Gaussian radii of 9 \(\mu\)m and 30 \(\mu\)m, are overlapped and counter-propagate through the MOT center, see Fig. 1(b). The probe laser is scanned using a double-passed acousto-optic modulator (AOM) that covers the lower transition. The transmission of the probe laser, i.e., the Rydberg EIT spectrum, is detected with a single photon counting module (SPCM) and processed with a LabVIEW program. The glass MOT is surrounded by three pairs of field-compensation Helmholtz coils, which allow us to reduce stray magnetic fields, monitored via the EIT Zeeman splitting, to less than 5 mG. In our experiment, the peak density of the atomic cloud, about \(10^{11}\) cm\({}^{-3}\), is measured by shadow imaging, and the temperature of the atomic cloud is about 100 \(\mu\)K. The estimated Rydberg density is 2.4 \(\times\)\(10^{8}\) cm\({}^{-3}\). The experimental timing is shown in Fig. 1(c); each cycle lasts 200 ms, corresponding to a repetition rate of 5 Hz. In each cycle, after turning off the trap beams, we switch on the coupling and probe lasers for 20 \(\mu\)s, during which the probe-laser frequency is swept across the \(|6S_{1/2},F=4\rangle\rightarrow|6P_{3/2},F^{\prime}=5\rangle\) transition while its power is kept fixed using a proportional-integral-derivative (PID) feedback loop that controls the radio-frequency power supplied to the 852-nm AOM. The data shown in Fig. 1(b) are the sum of 3000 cycles. ## III Experimental observation of Rydberg EIT and transmission dephasing Due to the quantum interference effect [30], the probe transmission \(T\) increases when the coupling and probe laser frequencies satisfy the two-photon resonance [see Fig. 1(a)].
Using the rotating-wave approximation, in the interaction picture [31] the dark eigenstate of the three-level Hamiltonian at two-photon resonance is \(|D\rangle\propto\Omega_{c}^{*}|g\rangle-\Omega_{p}|r\rangle\). For conventional EIT, the system works in this dark state and the probe pulse suffers little optical absorption. For the three-level scheme, all atoms are initially prepared in state \(|g\rangle\), and the EIT system evolves to the dark state \(|D\rangle\) on a time scale \(1/\gamma_{2}\). However, as shown in Fig. 1(b), in the \(nD_{5/2}\)-Rydberg EIT system the EIT transmission slowly decreases with time. Interestingly, the time scale of this dephasing is much longer than the lifetime of the intermediate state \(|e\rangle\). The dipolar interaction can lead to many-body dephasing [26; 32] in the case of Rydberg \(nD\) states.

Figure 1: (color online) (a) Atomic level scheme. A weak probe laser field with Rabi frequency \(\Omega_{p}\) drives the lower transition, \(|g\rangle=|6S_{1/2},F=4\rangle\rightarrow|e\rangle=|6P_{3/2},F^{\prime}=5\rangle\). The strong coupling laser (Rabi frequency \(\Omega_{c}\)) couples the transition \(|e\rangle\rightarrow|r\rangle=|80D_{5/2}\rangle\). (b) Experimental setup. The coupling and probe beams counter-propagate through the MOT center and overlap with the dipole-trap beam. The transmission of the probe beam is detected with a single photon counting module (SPCM). The inset shows the EIT dephasing behavior (red) and the probe absorption (black; without coupling beam). The data are the sum of 3000 experimental cycles. (c) Experimental timing. After switching off the MOT and dipole-trap beams, the Rydberg-EIT coupling and probe lasers are turned on for 20 \(\mu\)s, during which the probe laser frequency is ramped through the \(|g\rangle\rightarrow|e\rangle\) transition over \(\pm\)15 MHz by a double-passed AOM.

Figure 2: (color online) (a) Rydberg-EIT spectra with a coupling laser, \(\Omega_{c}/2\pi=\)10.6 MHz, resonant with the \(|6P_{3/2},F^{\prime}=5\rangle\rightarrow|80D_{5/2}\rangle\) transition, and the probe frequency scanned across the lower transition, \(|6S_{1/2},F=4\rangle\rightarrow|6P_{3/2},F^{\prime}=5\rangle\), at probe-photon rates \(R_{\rm in}=21\) photons/\(\mu\)s and 170 photons/\(\mu\)s, respectively. The solid lines are fits using the density-matrix equations of a three-level atom. The EIT transmission displays a strong suppression and a blue shift with increasing probe \(R_{\rm in}\). (b) EIT transmission for the indicated probe photon input rates at the two-photon resonance condition. The transmission remains constant for \(R_{\rm in}=21\) photons/\(\mu\)s, whereas for the larger \(R_{\rm in}\) it first drops and then decays over the EIT holding time. The solid lines denote exponential fits; for the higher \(R_{\rm in}\) the characteristic times are \(\tau_{0({\rm EIT})}\) = 4.04 \(\pm\) 0.10 \(\mu\)s, 4.50 \(\pm\) 0.06 \(\mu\)s and 4.49 \(\pm\) 0.05 \(\mu\)s, respectively.

We present EIT spectra of the \(80D_{5/2}\) Rydberg state in Fig. 2(a). The probe field is applied at incident photon rates \(R_{\text{in}}\) = 21 photons/\(\mu\)s (black dashed line) and 170 photons/\(\mu\)s (red dashed line) with \(\Omega_{c}\) = \(2\pi\times 10.6\) MHz. The transmission decreases with increasing \(R_{\text{in}}\), accompanied by a shift of the EIT peak. The EIT transmission is around 40% for \(R_{\text{in}}\) = 21 photons/\(\mu\)s, and decreases to 20% when \(R_{\text{in}}\) increases to 170 photons/\(\mu\)s. We attribute the reduction of the optical transmission to the interactions between Rydberg states. Moreover, for \(R_{\text{in}}\) = 170 photons/\(\mu\)s the EIT peak is blue shifted by about 2.5 MHz compared with the case \(R_{\text{in}}\) = 21 photons/\(\mu\)s. For Rydberg EIT, the dark-state polariton is very sensitive to other Rydberg excitations in the strong-interaction regime: when two Rydberg polaritons propagate inside the medium at a distance \(r\), they experience an interaction-induced energy shift \(U(r)\). This gives rise to a non-vanishing \(\text{Im}(\chi)\) for the probe beam, where \(\chi\) is the optical susceptibility of the system, and therefore leads to strong absorption and a shift of the EIT spectrum [6; 14; 15]. To further investigate the dephasing of the \(nD\) state, we vary the probe-photon rate \(R_{\text{in}}\) at fixed coupling field \(\Omega_{c}/2\pi\) = 10.6 MHz, with both the coupling and probe laser frequencies on resonance. As shown in Fig. 2(b), the time dependence of the transmission is plotted for different \(R_{\text{in}}\). For low probe photon rates (i.e., \(R_{\text{in}}\) = 21 photons/\(\mu\)s) the transmission is almost constant, but with increasing \(R_{\text{in}}\) the transmission exhibits a slow decrease with time \(t\). By fitting the data in panel (b) with the exponential function \(T\) = \(A\exp(-t/\tau_{0(\text{EIT})})+T_{0}\), the decay times \(\tau_{0(\text{EIT})}\) = 4.04 \(\pm\) 0.10 \(\mu\)s, 4.50 \(\pm\) 0.06 \(\mu\)s and 4.49 \(\pm\) 0.05 \(\mu\)s are extracted for the different \(R_{\text{in}}\). These decay times are much longer than the lifetime of state \(|e\rangle\) (i.e., \(1/\gamma_{2}\sim 0.03\ \mu\)s).
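The decay-time extraction just described is an ordinary nonlinear least-squares fit. Below is a minimal SciPy sketch; it assumes the time trace is available as arrays `t_us` (in \(\mu\)s) and `T_meas` (illustrative names), and the initial guess `p0` is likewise illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def transmission(t, A, tau0, T0):
    # T(t) = A * exp(-t / tau0) + T0, the fit function used for Fig. 2(b)
    return A * np.exp(-t / tau0) + T0

# t_us: times in microseconds; T_meas: measured on-resonance transmission
popt, pcov = curve_fit(transmission, t_us, T_meas, p0=(0.2, 4.0, 0.2))
A_fit, tau0_fit, T0_fit = popt
tau0_err = np.sqrt(np.diag(pcov))[1]   # 1-sigma uncertainty on tau0

# Dephasing rate of the optical depth, from a linear fit to OD = -ln(T)
# in the early-time window (before saturation, cf. Sec. III below)
early = t_us < 5.0
gamma_OD = np.polyfit(t_us[early], -np.log(T_meas[early]), 1)[0]
```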
In order to reveal the dephasing mechanism, we define the effective optical depth (OD) of the medium as the negative logarithm of the transmission [i.e., \(\mathrm{OD}=-\ln(T)\)] [26]. The time evolution of the OD is shown in Fig. 3(a) [corresponding to the results in Fig. 2(b)]. One sees that the OD increases approximately linearly with time before t = 5 \(\mu\)s. Neglecting saturation effects, we can write \(\mathrm{OD}=\gamma_{\text{OD}}t\), where \(\gamma_{\text{OD}}\) reflects the creation rate of optical density by decoupled impurities [26]. At small probe photon rates (i.e., \(R_{\text{in}}\lesssim 340\)\(\mu\)s\({}^{-1}\)), the extracted rate \(\gamma_{\text{OD}}\) increases linearly with \(R_{\text{in}}\) and then saturates for large \(R_{\text{in}}\) [see Fig. 3(b)]. This is because the system is not fully in the blockade regime at small probe photon rates: with increasing photon number, more Rydberg atoms are excited, which increases the dipole-dipole interaction and hence \(\gamma_{\text{OD}}\). Once the system enters the fully blockaded regime, however, no additional Rydberg atoms can be excited, so the dephasing rate \(\gamma_{\text{OD}}\) saturates. This is a signature of many-body dephasing. We calculate the group velocity of the probe photons to be around 3920 m/s under our experimental conditions with \(\Omega_{c}/2\pi\) = 10.6 MHz. Considering the length of our atomic cloud, 1 mm, the propagation time of a photon through the cloud is around 0.26 \(\mu\)s. The Rydberg blockade radius is about 10 \(\mu\)m for \(80D_{5/2}\); under this condition, almost 100 atoms can be excited to the Rydberg state.
Thus we can calculate that, in one microsecond, the maximum number of Rydberg excitations is around 385, since 100 atoms/0.26 \(\mu\)s\(\simeq\) 385 \(\mu\)s\({}^{-1}\). The critical value of \(R_{\text{in}}\) is therefore around 385 \(\mu\)s\({}^{-1}\). When the photon incidence rate \(R_{\text{in}}\) is smaller than 385 \(\mu\)s\({}^{-1}\), the system is not in the blockade regime; by increasing the intensity of the probe laser, the system can be driven into the fully blockaded regime. This estimate agrees with the dephasing rates \(\gamma_{\text{OD}}\) in Fig. 3(b). We also vary \(\Omega_{c}\) and measure the EIT dephasing rate \(\gamma_{\text{OD}}\) versus \(R_{\text{in}}\). The results show a similar nonlinear dependence on \(R_{\text{in}}\) [see Fig. 3(b)]. ## IV Analysis of EIT dephasing mechanism The vdW interaction between \(nD_{5/2}\) pairs leads to energy-level shifts, while the dipole interaction between \(nD_{5/2}\) and the nearest Rydberg states yields state transfer from \(nD_{5/2}\) to other Rydberg states. As no microwave field is present in the experiment, the other states are populated via spontaneous decay from the \(nD_{5/2}\) state. We have found [33] that the \(nD\)-\((n+1)P\) transition is the strongest under our experimental conditions (see Sec. V for details). Hence this leads to a two-stage process: Rydberg \(nD_{5/2}\) atoms first decay to \((n+1)P_{3/2}\) by spontaneous emission; dephasing is then induced in the regime where dipole-dipole interactions couple nearly degenerate Rydberg pair states [26; 32].

Figure 3: (color online) (a) The optical depth of the \(80D_{5/2}\) EIT transmission, obtained by taking the logarithm of the spectra in Fig. 2(b), for the indicated photon incidence rates \(R_{\text{in}}\) at fixed \(\Omega_{c}\) = 2\(\pi\times\)10.6 MHz. The solid lines are linear fits to the data before t = 5 \(\mu\)s, used to extract the dephasing rates \(\gamma_{\text{OD}}\). (b) The dephasing rates \(\gamma_{\text{OD}}\) as a function of \(R_{\text{in}}\) for coupling Rabi frequencies \(\Omega_{c}/2\pi\) = 10.6, 8.2 and 5.2 MHz, respectively.

A full model describing this dephasing is rather complicated; in this section, we focus on the dephasing effects within a simplified model, which nonetheless captures the main effects. Considering the three-level scheme in Fig. 1(a), the dephasing rate of the Rydberg state is \(\gamma_{3}\approx\gamma_{r}+\Gamma_{re}\approx\gamma_{r}\): Rydberg atoms have long lifetimes (\(1/\Gamma_{re}\sim n^{3}\)), on the order of 100 \(\mu\)s, while \(\gamma_{r}\) represents the dephasing of the atomic coherence (originating from atomic collisions, the residual Doppler effect, dipole-dipole interactions between the Rydberg atoms, and the finite laser linewidth). The dephasing rate of the intermediate state \(|e\rangle\) is \(\gamma_{2}=\gamma_{e}\) + \(\Gamma_{eg}\), with spontaneous decay rate \(\Gamma_{eg}\) and interaction-induced decay \(\gamma_{e}\). In our physical system, \(\gamma_{e}\) is much smaller than \(\Gamma_{eg}\simeq 2\pi\times 5.2\) MHz, thus \(\gamma_{2}\approx\Gamma_{eg}\). Collective dissipation can emerge in dense atomic gases, typically through two-body dipolar couplings [32]. Here we adopt the effective dephasing \(\gamma_{3}^{\rm eff}\) (i.e., \(\gamma_{r}=\gamma_{3}^{\rm eff}\)) and seek the relation between the transmission and the dipolar-interaction-induced dephasing.
The dynamics of the effective three-level system can be modeled by the quantum master equation for the many-atom density operator \(\rho\): \[\dot{\rho}=-i[\hat{H}_{\rm eff},\rho]+D_{1}(\rho)+D_{\rm eff}(\rho), \tag{1}\] The effective Hamiltonian in the equation is given by \[\hat{H}_{\rm eff}= \sum_{j=1}^{N}\left[-\Delta_{p}\hat{\sigma}_{ee}^{j}({\bf r},t)-( \Delta_{p}+\Delta_{c})\hat{\sigma}_{rr}^{j}({\bf r},t)+\frac{\Omega_{p}}{2} \hat{\sigma}_{eg}^{j}({\bf r},t)\right.\] \[\left.+\frac{\Omega_{c}}{2}\hat{\sigma}_{re}^{j}({\bf r},t)+\sum_ {k\neq j}^{N}\frac{V_{jk}}{2}\hat{\sigma}_{rr}^{j}\hat{\sigma}_{rr}^{k}+{\rm H.c.}\right], \tag{2}\] with \(\hat{\sigma}_{ab}(z_{j})\equiv|a_{j}\rangle\langle b_{j}|\) (\(z_{j}\) is the position of the \(j\)th atom in the ensemble) and H.c. denoting the Hermitian conjugate of the preceding terms. \(V_{jk}=C_{6}/|{\bf r}_{j}-{\bf r}_{k}|^{6}\) is the vdW potential with dispersion coefficient \(C_{6}\propto n^{11}\). The dissipative effects are described by the Lindblad form \(D_{1}(\rho)\), \[D_{1}(\rho)= \sum_{j=1}^{N}\Gamma_{eg}\left(\hat{\sigma}_{ge}^{j}\rho\hat{ \sigma}_{eg}^{j}-\frac{1}{2}\{\hat{\sigma}_{ee}^{j},\rho\}\right), \tag{3}\] where \(D_{1}(\rho)\) describes the decay from state \(|e\rangle\) to \(|g\rangle\). The effective dephasing term \(D_{\rm eff}(\rho)\) is introduced as \[D_{\rm eff}(\rho)= \sum_{j=1}^{N}\gamma_{3}^{\rm eff}\left(\hat{\sigma}_{33}^{j} \rho\hat{\sigma}_{33}^{j}-\frac{1}{2}\{\hat{\sigma}_{33}^{j},\rho\}\right), \tag{4}\] Due to the dephasing and the spectral shift of the transparency resonance [see Fig. 2(a)], we employ a description of individual atoms coupled to a mean field (MF) to analyze the EIT spectrum with strong Rydberg interactions [34; 35; 36]. In the MF approximation, the many-body density matrix \(\rho\) is decoupled into individual ones through \(\rho\approx\Pi_{i}\,\hat{\rho}_{i}\). In the thermodynamic limit, the optical Bloch equations for the three-level system are obtained, with the elements of the density matrix represented by \(\rho_{ab}=N^{-1}\sum_{j}\langle\hat{\sigma}_{ab}^{j}\rangle\). As indicated by our experiment, over a long time evolution the interactions between Rydberg states lead to a MF shift \(\Delta_{c}\rightarrow\Delta_{c}+V\rho_{rr}\), with MF interaction energy \(V=N^{-1}\sum_{k\neq j}V_{jk}\). By numerically solving the MF Bloch equations, one obtains the EIT transmission as a function of \(\Delta_{p}\). As shown in Fig. 4(a), there is an increasing blue shift of the transparency away from the non-interacting EIT resonance, because the shifted Rydberg state detunes the EIT window. A similar EIT signal is also observed experimentally [see Fig. 2(a)]. To obtain the dependence of the EIT transmission on the effective dephasing induced by Rydberg interactions, we calculate EIT spectra for a series of effective \(\gamma_{3}^{\rm eff}\), accounting for the many-body dephasing by Rydberg atoms. In Fig. 4(b), the EIT transmission as a function of \(\gamma_{3}^{\rm eff}\) is plotted. The system works at two-photon detuning \(\delta=0\) with probe \(\Omega_{p}/2\pi=\)1.04 MHz, \(\Omega_{c}/2\pi=\)10.6 MHz and atomic density \(N_{a}=1\times 10^{11}\) cm\({}^{-3}\). The EIT transmission decreases with \(\gamma_{3}^{\rm eff}\), dropping to 50% when \(\gamma_{3}^{\rm eff}=2\pi\times 1.5\) MHz. This is consistent with the trend of our experimental data [see Fig. 2(a)]. For comparison, we also plot the corresponding OD in Fig. 4(b); as expected, the calculated OD of the probe beam increases with \(\gamma_{3}^{\rm eff}\). We note that the presented theoretical model is based on the MF equations, with the effective decay rate \(\gamma_{3}^{\rm eff}\) simply varied by hand. Beyond the present model, a many-body quantum treatment needs to be developed to gain a better understanding of the metastable dynamics [27; 28]; we will discuss this elsewhere.

Figure 4: (color online) (a) Theoretical EIT transmission as a function of \(\Delta_{p}\) for \(V=0\) and \(V\neq 0\). (b) EIT transmission and the corresponding OD versus the effective dephasing rate \(\gamma_{3}^{\rm eff}\) of the Rydberg state, which accounts for the interaction between Rydberg atoms, with \(\Omega_{p}/2\pi=\)1.04 MHz, \(\Omega_{c}/2\pi=\)10.6 MHz and atomic density 1\(\times 10^{11}\) cm\({}^{-3}\).
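As an aside, the mean-field steady state used above can be sketched in a few lines. The following is a minimal illustration, not the code behind Fig. 4; all parameter values are placeholders in units of \(2\pi\times\)MHz, and the self-consistency loop is a simple damped fixed-point iteration.

```python
import numpy as np

# Three-level ladder {|g>, |e>, |r>}: steady state of the Lindblad equation,
# with the mean-field shift Delta_c -> Delta_c + V*rho_rr.
Op, Oc = 1.04, 10.6            # probe and coupling Rabi frequencies
Gamma_eg, gamma3 = 5.2, 1.0    # decay of |e>, effective dephasing of |r>
Dp = Dc = 0.0                  # detunings (two-photon resonance)
V = 3.0                        # mean-field interaction energy (illustrative)

g, e, r = np.eye(3)

def steady_state(rho_rr):
    H = (-Dp * np.outer(e, e) - (Dp + Dc + V * rho_rr) * np.outer(r, r)
         + 0.5 * Op * (np.outer(e, g) + np.outer(g, e))
         + 0.5 * Oc * (np.outer(r, e) + np.outer(e, r)))
    jumps = [np.sqrt(Gamma_eg) * np.outer(g, e),   # |e> -> |g> decay
             np.sqrt(gamma3) * np.outer(r, r)]     # pure dephasing of |r>
    I = np.eye(3)
    # Liouvillian on row-major vec(rho): vec(A rho B) = kron(A, B.T) vec(rho)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for C in jumps:
        CdC = C.conj().T @ C
        L += np.kron(C, C.conj()) - 0.5 * (np.kron(CdC, I) + np.kron(I, CdC.T))
    M = L.copy()
    M[0, :] = np.eye(3).reshape(-1)        # replace one row by Tr(rho) = 1
    b = np.zeros(9, dtype=complex); b[0] = 1.0
    return np.linalg.solve(M, b).reshape(3, 3)

rr = 0.0
for _ in range(200):                       # damped self-consistency loop
    rho = steady_state(rr)
    rr = 0.5 * rr + 0.5 * rho[2, 2].real
print("rho_rr =", rr, "  Im(rho_ge) =", rho[0, 1].imag)  # absorption ~ Im(rho_ge)
```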
## V Test of the decay of the \(80D_{5/2}\) state The dephasing effects can arise from several physical mechanisms, e.g., atomic collisions, the residual Doppler effect, depopulation between Rydberg states, or the finite laser linewidth. To test the collective dephasing process, we also conducted an experiment measuring the fast population transfer from \(80D_{5/2}\) to \(81P_{3/2}\). For the \(80D_{5/2}\) state used in this work, the spacing to the nearest \(81P_{3/2}\) state is 1.3124 GHz, with a corresponding dipole matrix element of 5649.1 \(ea_{0}\) (\(e\) the electron charge and \(a_{0}\) the Bohr radius). Therefore, the \(80D_{5/2}\) Rydberg state displays strong dipole interactions with \(81P_{3/2}\), resulting in the state transfer \(80D_{5/2}\rightarrow 81P_{3/2}\) [33]. This transfer depletes the \(80D_{5/2}\) Rydberg population and further decreases the EIT transmission. In order to verify this conjecture, we carried out additional tests in a second MOT [not shown in Fig. 1(b)], in which Rydberg atoms are detected with state-selective field ionization. The temperature of the atomic cloud is almost the same as in the main setup, and the peak density of the atomic cloud is \(8.0\times 10^{10}\) cm\({}^{-3}\), comparable with the \(1.0\times 10^{11}\) cm\({}^{-3}\) of the main setup. In addition, we keep the probe and coupling Rabi frequencies almost the same as in the main setup, so that the Rydberg population is comparable with that of the EIT in the main apparatus. We therefore obtain similar EIT spectra and field-ionization signals simultaneously in the test setup. Details of the setup can be found in our previous work [37; 33].

Figure 5: (color online) (a) Normalized time-of-flight (TOF) signals for laser excitation to the \(80D_{5/2}\) state for the indicated interaction times, \(t_{\rm INT}\). The three traces are vertically offset for clarity, with the respective zero levels shown as horizontal dashed lines. The gates for the \(80D_{5/2}\) and \(81P_{3/2}\) signals are shown as light pink and gray shaded regions, respectively. (b) Measurements of the \(80D_{5/2}\) state as a function of the interaction time \(t_{\rm INT}\). Initially prepared \(80D_{5/2}\) Rydberg atoms transfer to nearby Rydberg states \(|r^{\prime}\rangle\) during \(t_{\rm INT}\) due to the strong resonant dipole interaction. The solid line shows an exponential fit with characteristic time \(\tau_{0(80D)}=\) 3.15 \(\pm\) 0.38 \(\mu\)s.

In the test experiments, after switching off the MOT beams, we apply a two-photon excitation pulse with a duration of 4 \(\mu\)s to prepare \(80D_{5/2}\) Rydberg atoms; an optional interaction time \(t_{\rm INT}\) before the ionization detection allows us to study the decay of the \(80D_{5/2}\) state and the state transfer. In Fig. 5(a), we present the time-of-flight (TOF) signals for laser excitation to the \(80D_{5/2}\) state. Initially, the atoms are in the \(80D_{5/2}\) state (see the black curve) of the TOF signal. Due to the resonant dipole interaction, atoms in the \(80D_{5/2}\) state (light pink shaded region) partly transfer to the nearby \(81P_{3/2}\) state (gray shaded region) [see the red curve for \(t_{\rm INT}\) = 3 \(\mu\)s]. With further increase of \(t_{\rm INT}\), most of the \(80D_{5/2}\) Rydberg atoms transfer to the \(81P_{3/2}\) state (see the blue curve for \(t_{\rm INT}\) = 6 \(\mu\)s). The population in state \(81P_{3/2}\) reaches its maximum at \(t_{\rm INT}\) = 6 \(\mu\)s and then decays to other states. For a better understanding of the transfer process, we perform a series of measurements for different \(t_{\rm INT}\). Figure 5(b) displays the measured \(80D_{5/2}\) population as a function of \(t_{\rm INT}\); a fast decay of the \(80D_{5/2}\) state within 8 \(\mu\)s is clearly observed. To quantify the decay of the \(80D_{5/2}\) state, we fit the experimental data with an exponential function [see the solid line of Fig. 5(b)]. The fitted decay time \(\tau_{0(80D)}\) = 3.15 \(\pm\) 0.38 \(\mu\)s is close to the decay time of the EIT transmission, \(\tau_{0(EIT)}\). We therefore conclude that the dipole-interaction-induced state transfer is a likely mechanism for the decay of the \(80D_{5/2}\) Rydberg three-level EIT transmission. ## VI Conclusion We have presented Rydberg EIT spectra of a cascade three-level scheme involving the \(80D_{5/2}\) state of Cs atoms. The Rydberg EIT spectrum shows a strong dependence on the probe incident photon number, and the optical transmission decays with time. An optical depth of the medium is defined to characterize the transmission. We have shown that the optical depth increases linearly with time at onset for fixed probe \(R_{\rm in}\), and we have further extracted the dephasing rate \(\gamma_{\rm OD}\) by writing OD=\(\gamma_{\rm OD}t\). The dephasing rate increases linearly with a weak probe field and then saturates for large \(R_{\rm in}\). We have shown that the dephasing mechanism is mainly attributed to the strong dipole-dipole interactions, which lead to a strong population decay of the \(|nD_{5/2}\rangle\) state. The experimental setting provides a platform to explore quantum nonlinear optics and quantum information processing, and to create metastable states in quantum many-body systems.
2310.04728
Baxterization for the dynamical Yang-Baxter equation
The Baxterization process for the dynamical Yang-Baxter equation is studied. We introduce the local dynamical Hecke, Temperley-Lieb and Birman-Murakami-Wenzl operators; then, by inserting spectral parameters, from each representation of these operators we obtain a dynamical R matrix under some conditions. As applications, we reformulate the trigonometric degeneration of elliptic quantum group representations and also obtain dynamical R matrices for the critical ADE integrable lattice models. Through Baxterization, we construct one-dimensional integrable systems that are dynamical versions of the Heisenberg spin chain.
Muze Ren
2023-10-07T07:55:34Z
http://arxiv.org/abs/2310.04728v2
# Baxterization for the dynamical Yang-Baxter equation ###### Abstract The Baxterization process for the dynamical Yang-Baxter equation is studied. We introduce the local dynamical Hecke, Temperley-Lieb and Birman-Murakami-Wenzl operators; then, by inserting spectral parameters, from each representation of these operators we obtain a dynamical R matrix under some conditions. As applications, we reformulate the trigonometric dynamical R matrices of elliptic quantum group representations and also obtain dynamical R matrices for the critical ADE integrable lattice models. Through Baxterization, we construct one-dimensional integrable systems that are dynamical versions of the Heisenberg spin chain. ###### Contents * 1 Introduction * 1.1 Outline of the paper * 1.2 Acknowledgment * 2 Main constructions * 2.1 Yang-Baxter operator and other local operators * 2.2 Local dynamical operators * 2.3 Relation between the usual and dynamical operators * 2.4 Baxterization * 2.5 Example: trigonometric degeneration of elliptic quantum group * 2.6 Example: critical ADE lattice model * 2.7 Transfer matrix and Hamiltonian of spin chain * A Perron-Frobenius theorem ## 1. Introduction In 1990, Jones summarized and proposed the Baxterization procedure in [32], where he discussed the relation between the braid relations (1.2) and the Yang-Baxter equation (1.1), \[R_{i}R^{{}^{\prime}}_{i+1}R^{{}^{\prime\prime}}_{i}=R^{{}^{\prime\prime}}_{i+1 }R^{{}^{\prime}}_{i}R_{i+1}, \tag{1.1}\] The idea, as he wrote, is to "take a knot invariant, turn it into a coherent sequence of braid group representations, and then Baxterize (if possible) by inserting a spectral parameter so that YBE is satisfied....". Jones then gave two universal Baxterization examples. Suppose that \(\sigma_{i},i=1,\ldots,n\) satisfy the braid relations \[\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1} \tag{1.2a}\] \[\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i},\quad|i-j|>1 \tag{1.2b}\] 1. Hecke case; see also [30]. If \(\sigma_{i}+\sigma_{i}^{-1}=x1\) and we define \(R_{i}(\lambda)=e^{\lambda}\sigma_{i}+e^{-\lambda}\sigma_{i}^{-1}\), then it satisfies the Yang-Baxter equation (1.1), with the relation \(\lambda^{\prime\prime}=\lambda^{\prime}-\lambda\). 2. Birman-Murakami-Wenzl case, studied in [5] and [35]. Let \(E_{i}=\frac{1}{x}(\sigma_{i}+\sigma_{i}^{-1})-1\), and assume they satisfy the relations \[E_{i}^{2}=(a+a^{-1}-x)^{-1}E_{i}\] \[E_{i}\sigma_{i-1}^{\pm}E_{i}=a^{\pm 1}E_{i},\quad E_{i}\sigma_{i+1}^{ \pm}E_{i}=a^{\mp 1}E_{i}\] \[E_{i}\sigma_{i\mp 1}\sigma_{i}=E_{i}E_{i\pm 1}\] Then \(R_{i}(\lambda)=(e^{\lambda}-1)k\sigma_{i}+x(k+k^{-1})1+(e^{-\lambda}-1)k^{- 1}\sigma_{i}^{-1}\) satisfies (1.1) with \(\lambda^{\prime\prime}=\lambda^{\prime}-\lambda\). We want to apply similar ideas to study the dynamical Yang-Baxter equation and to gain more understanding of the construction of its solutions. The dynamical Yang-Baxter equation was initially considered by Gervais and Neveu in the study of Liouville theory [28]. Later, Felder [20; 21] rediscovered this equation in the study of the quantization of the Knizhnik-Zamolodchikov-Bernard equation and developed the theory of the elliptic quantum group and the elliptic R matrix. The equation has since been widely studied from many angles and has numerous connections to other fields; see for example [1; 6; 7; 15; 23; 24; 25; 26; 42], as well as the standard books [12; 13; 34].
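As a quick sanity check of case 1 above (and of the sign convention \(\sigma_{i}+\sigma_{i}^{-1}=x1\)), the following NumPy sketch verifies the braid-form YBE numerically in the ungraded setting, using the standard Temperley-Lieb representation on \((\mathbb{C}^{2})^{\otimes 3}\). The concrete matrices and parameter values are illustrative and are not taken from this paper.

```python
import numpy as np

# Temperley-Lieb matrix E = u u^T on C^2 (x) C^2: E^2 = (q + 1/q) E and
# E1 E2 E1 = E1 on three sites, so sigma = i(q*Id - E) satisfies the braid
# relation together with sigma + sigma^{-1} = i(q - 1/q) Id.
q = 1.7
u = np.array([0.0, q**0.5, -q**-0.5, 0.0])
E = np.outer(u, u)
sigma = 1j * (q * np.eye(4) - E)
sigma_inv = np.linalg.inv(sigma)

I2 = np.eye(2)
def at(pos, M):   # embed a two-site operator at sites (pos, pos+1)
    return np.kron(M, I2) if pos == 1 else np.kron(I2, M)

def R(lam):       # Baxterization: R(lambda) = e^lam sigma + e^{-lam} sigma^{-1}
    return np.exp(lam) * sigma + np.exp(-lam) * sigma_inv

lam, lam_p = 0.31, 0.87   # lambda, lambda'; third parameter is lambda' - lambda
lhs = at(1, R(lam)) @ at(2, R(lam_p)) @ at(1, R(lam_p - lam))
rhs = at(2, R(lam_p - lam)) @ at(1, R(lam_p)) @ at(2, R(lam))
print(np.abs(lhs - rhs).max())                                    # ~1e-15
print(np.abs(sigma + sigma_inv - 1j*(q - 1/q)*np.eye(4)).max())   # ~1e-16
```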
We consider a generalized Baxterization process for the dynamical Yang-Baxter equation (1.4), based on the language of groupoid-graded vector spaces developed in [22]. The main difference from the classical situation is the appearance of a groupoid structure. \[\begin{split}&\check{R}^{(23)}(z-w,ah^{(1)})\check{R}^{(12)}(z,a) \check{R}^{(23)}(w,ah^{(1)})\\ &=\check{R}^{(12)}(w,a)\check{R}^{(23)}(z,ah^{(1)})\check{R}^{(12 )}(z-w,a)\end{split} \tag{1.4}\] It is very natural to define the local dynamical Hecke, Temperley-Lieb and Birman-Murakami-Wenzl operators using dynamical notations. For example, the local Temperley-Lieb operators \(T(a)\) associated to a map \(\bar{\kappa}:\mathrm{Ob}(\mathrm{Groupoid})\to\mathbb{C}^{\times}\) are defined by the following equations on groupoid-graded vector spaces \(V\): \[T(a)T(a)=\bar{\kappa}(a)T(a) \tag{1.5a}\] \[T^{(12)}(a)T^{(23)}(ah^{(1)})T^{(12)}(a)=T^{(12)}(a)\] (1.5b) \[T^{(23)}(ah^{(1)})T^{(12)}(a)T^{(23)}(ah^{(1)})=T^{(23)}(ah^{(1)}) \tag{1.5c}\] We provide three kinds of examples of representations of the local dynamical Temperley-Lieb operators. 1. The representations related to the classification of Coxeter diagrams; see the work of Goodman-de la Harpe-Jones, chapter 1 of the book [29], and the restricted quantum group [22]. 2. The elliptic representation, related to the theory of representations of the elliptic quantum group [10, 16, 17, 20, 25]. In the \(\mathfrak{sl}_{2}\) case, with "\([a]\)" denoting a theta function and \(\bar{\kappa}(a)=\frac{[a+1]+[a-1]}{[a]}\), the operators are written \[\begin{split} T^{\text{ell}}_{A}(a)&=\frac{ \sqrt{[a-1][a+1]}}{[a]}E_{21}\otimes E_{12}+\frac{\sqrt{[a+1][a-1]}}{[a]}E_{12} \otimes E_{21}\\ &+\frac{[a+1]}{[a]}E_{11}\otimes E_{22}+\frac{[a-1]}{[a]}E_{22} \otimes E_{11}\end{split}\] (1.6) 3. The trigonometric representation, related to the trigonometric degenerations of the elliptic quantum group. In the \(\mathfrak{sl}_{2}\) case, with \(\bar{\kappa}=2\cos\lambda\) and \(\langle a\rangle\) a suitable sine function, it reads \[\begin{split} T^{\text{tri}}_{A}(a)&=\frac{\sqrt{ \langle a-1\rangle\langle a+1\rangle}}{\langle a\rangle}E_{21}\otimes E_{12}+ \frac{\sqrt{\langle a+1\rangle\langle a-1\rangle}}{\langle a\rangle}E_{12} \otimes E_{21}\\ &+\frac{\langle a+1\rangle}{\langle a\rangle}E_{11}\otimes E_{22 }+\frac{\langle a-1\rangle}{\langle a\rangle}E_{22}\otimes E_{11}\end{split}\] (1.7) Following the Jones examples, by inserting spectral parameters, we consider three cases of Baxterization for the dynamical Yang-Baxter equation. 1. Local dynamical Hecke case. Suppose that we have invertible operators \(\sigma(a)\), defined on the source fibers of a groupoid-graded vector space, that satisfy \[\sigma^{(12)}(a)\sigma^{(23)}(ah^{(1)})\sigma^{(12)}(a)=\sigma^{(23)}(ah^{(1)}) \sigma^{(12)}(a)\sigma^{(23)}(ah^{(1)})\] (1.8a) \[\sigma(a)+\sigma^{-1}(a)=f(a)\operatorname{id}\] (1.8b) **Theorem 1.1** (same as the Theorem 2.22).: _Suppose that \(f(a)=f(b)\) whenever there exists an arrow \(\alpha\) with \(s(\alpha)=a,t(\alpha)=b\). Then the operator defined by_ \[\check{R}(z,a)=e^{z}\sigma(a)+e^{-z}\sigma^{-1}(a)\] _satisfies the dynamical Yang-Baxter equation (1.4)._ 2. Local dynamical Temperley-Lieb case.
In this case, we assume that \[\check{R}(x,a)=\operatorname{id}+xT(a)\] (1.9) where the \(T(a)\) are local dynamical Temperley-Lieb operators associated to \(\bar{\kappa}\) on \(V\). We then obtain **Theorem 1.2** (also Theorem 2.23).: _Suppose that \(x=f(z),x^{\prime}=f(z^{\prime}),x^{\prime\prime}=f(z^{\prime\prime}),z^{\prime \prime}=z^{\prime}-z\) satisfy the following equations_ \[x^{{}^{\prime\prime}}=\frac{x^{\prime}-x}{1+\bar{\kappa}(ah^{(1)})x+xx^{\prime} },\quad x^{{}^{\prime\prime}}=\frac{x^{\prime}-x}{1+\bar{\kappa}(a)x+xx^{ \prime}}\] (1.10) _Then the operators \(\check{R}(x,a)=\operatorname{id}+xT(a)\) satisfy the dynamical Yang-Baxter equation (1.4)._ 3. Local dynamical Birman-Murakami-Wenzl case. **Theorem 1.3** (same as Theorem 2.24).: _Suppose that the operators \(U(a)\) are the local dynamical Birman-Murakami-Wenzl operators associated with \(\bar{q},\bar{\nu}\) on \(V\), and suppose that \(\bar{q}(a)=\bar{q}(b),\bar{\nu}(a)=\bar{\nu}(b)\) whenever there exists an arrow \(\alpha\in\pi\) with \(s(\alpha)=a,t(\alpha)=b\). Then we define \[\check{R}(u,v)[a]:=U(a)+\frac{\bar{q}(a)-\bar{q}^{-1}(a)}{v/u-1}+\frac{\bar{ q}(a)-\bar{q}^{-1}(a)}{1+\bar{\nu}^{-1}(a)\bar{q}(a)v/u}K(a),\] and it satisfies the following two-parameter dynamical Yang-Baxter equation \[\check{R}^{(12)}(u_{2},u_{3})[ah^{(1)}]\check{R}^{(23)}(u_{1},u_{3})[ a]\check{R}^{(12)}(u_{1},u_{2})[ah^{(1)}]\] \[=\check{R}^{(23)}(u_{1},u_{2})[a]\check{R}^{(12)}(u_{1},u_{3})[ah^{(1)}] \check{R}^{(23)}(u_{2},u_{3})[a]\] We provide several applications of the Baxterization procedure of the dynamical Yang-Baxter equation described above. 1. The first application concerns the trigonometric dynamical \(\check{R}(x)\) in the representation theory of elliptic quantum groups [16, 17, 20, 25] and restricted quantum groups [22]: we show that it can be seen as a Baxterization of a representation of the local dynamical Temperley-Lieb operators. 2. The second application is to derive some new dynamical \(\check{R}\) matrices for two kinds of face-type two-dimensional integrable lattice models: the critical ADE models (also called Pasquier models [38]) and the Temperley-Lieb interaction models introduced by Owczarek and Baxter [36]. The critical ADE models are a series of face-type lattice models introduced and studied mainly by Pasquier ([37, 38, 39]); these models are interesting as they are related to the unitary A-series of minimal models considered by Belavin-Polyakov-Zamolodchikov [4] and Friedan-Qiu-Shenker [27]. The Temperley-Lieb interaction models are also very interesting models, which unify (meaning that the partition functions are the same with carefully chosen parameters and boundary conditions) a series of interesting models, including the critical ADE models, the six-vertex model, the self-dual Potts model and the critical hard-hexagon model; see the nice lecture notes of Pearce [41]. Interesting questions about these models concern the quantum group picture behind them [9, 19] and the possibility of applying the algebraic Bethe ansatz [18]. With the dynamical \(\check{R}\) matrices derived here and the algebraic Bethe ansatz for face-type restricted models developed in [22], these questions can be answered. 3. The third application of Baxterization is to obtain a family of one-dimensional integrable systems: for each representation of the local dynamical Temperley-Lieb algebra that can be Baxterized, we can define a Hamiltonian that commutes with a family of transfer matrices. These are dynamical versions of Heisenberg spin chains.
For the restricted type A case, these were initially considered by Bazhanov-Reshetikhin [3], who also calculated the eigenvalues and eigenvectors. For the unrestricted type A case, see the work of Etingof-Kirillov [11], Felder-Varchenko [26] and Etingof-Varchenko [14], where the spin generalization of Ruijsenaars models and Macdonald theory was studied. There has also been a recent breakthrough in the study of long-range spin chains: Lamers and Klabbers [33] introduced two new integrable systems which unify the Inozemtsev and partially isotropic Haldane-Shastry chains. By taking the short-range limit of their new integrable systems, they obtained new dynamical spin chains which are very similar to the Hamiltonian we obtain in (2.69). One difference is that we use periodic boundary conditions while they have a twist; another is that in (2.69) we conjugate by \(M(0,\alpha)\), which is a translation operator, whereas equation (18) of [33] is conjugated by terms containing a nontrivial \(\check{R}\) matrix. ### Outline of the paper The note is organized as follows. At the beginning of Section 2, we first recall the basic definitions of \(\pi\)-graded vector spaces [22] and of operators on them. In subsection 2.2, we define the local dynamical Hecke, Temperley-Lieb and Birman-Murakami-Wenzl operators and discuss the finite-type representations of the local Temperley-Lieb operators. In subsection 2.4, we discuss the Baxterization of the dynamical Yang-Baxter equation. In subsection 2.5, we provide the unrestricted trigonometric and elliptic representations of the local Temperley-Lieb operators and reformulate the trigonometric dynamical \(\check{R}\) matrix. In subsection 2.6, we derive the restricted ADE models. Then, in subsection 2.7, we discuss the transfer matrix and the Hamiltonian of the spin chain. ### Acknowledgment The author would like to thank Anton Alekseev for encouragement and support, Giovanni Felder for mentioning the reference [40], and Rinat Kashaev, Rob Klabbers and Jules Lamers for interesting discussions. Research of the author is supported by grant number 208235 of the Swiss National Science Foundation (SNSF) and by the NCCR SwissMAP of the SNSF. ## 2. Main constructions A groupoid \(\pi\) is a small category in which every morphism is invertible. We denote its set of objects by \(\operatorname{Ob}(\pi)\); for any \(a,b\in\operatorname{Ob}(\pi)\), the set of morphisms from an object \(a\) to \(b\) is denoted by \(\pi(a,b)\), and we also call the morphisms arrows. The composition of arrows \(\gamma\in\pi(a,b)\) and \(\eta\in\pi(b,c)\) is denoted by \(\eta\circ\gamma\in\pi(a,c)\). The objects of \(\pi\) are identified with the identity arrows. By abuse of notation, the set of arrows of \(\pi\) is also denoted by \(\pi\); we then denote the source and target maps by \(s,t:\pi\to\operatorname{Ob}(\pi)\). The source fibers are \(s^{-1}(a)\) and the target fibers are \(t^{-1}(a)\), for \(a\in\operatorname{Ob}(\pi)\). **Example 2.1**.: Any unoriented graph \(\mathcal{G}\) can be seen as a groupoid \(\pi(\mathcal{G})\) by taking the vertices as objects; for each unoriented edge \(e\in\mathcal{G}\), we have two mutually inverse arrows \(\alpha_{e},\alpha_{e}^{-1}\in\pi(\mathcal{G})\). **Example 2.2** (Action groupoid).: Let \(A\) be a set with a right group action \(A\times G\to A\). The action groupoid \(A\rtimes G\) has set of objects \(A\), and for each \(a^{\prime}=ag\) there is an arrow \(a\xrightarrow{g}a^{\prime}\); thus an arrow is described by a pair \((a,g)\in A\times G\).
The source and target are \(s(a,g)=a,t(a,g)=ag\) and the composition is
\[(a^{\prime},g^{\prime})\circ(a,g)=(a,gg^{\prime}),\quad\text{with }a^{\prime}=ag.\]
The identity arrows are \((a,e),a\in A\), and the inverse of \((a,g)\) is \((ag,g^{-1})\).

With groupoids, we can define groupoid graded vector spaces with a certain finiteness condition.

**Definition 2.3**.: Let \(\pi\) be a groupoid. A \(\pi\)-graded vector space of finite type over a field \(k\) is a collection \((V_{\alpha})_{\alpha\in\pi}\) of finite dimensional vector spaces indexed by the arrows of \(\pi\), such that for each \(a\in\operatorname{Ob}(\pi)\) there are only finitely many arrows \(\alpha\) with source or target \(a\) and nonzero \(V_{\alpha}\).

For \(a\in\operatorname{Ob}(\pi)\), the \(a\) source fiber of a \(\pi\) graded vector space \(V\) is the direct sum of the components with source \(a\), that is \(\oplus_{\alpha\in s^{-1}(a)}V_{\alpha}\); similarly for the target fibers. The \(k\) vector space \(\operatorname{Hom}(V,W)\) of two \(\pi\) graded vector spaces consists of families \((f_{\alpha})_{\alpha\in\pi}\) of linear maps \(f_{\alpha}:V_{\alpha}\to W_{\alpha}\); the composition is also defined componentwise. The category of \(\pi\) graded vector spaces is a monoidal category, which we denote by \(\operatorname{Vect}_{k}(\pi)\), with the tensor product defined by
\[(V\otimes W)_{\gamma}=\oplus_{\beta\circ\alpha=\gamma}V_{\alpha}\otimes W_{\beta},\quad V,W\in\operatorname{Vect}_{k}(\pi).\]
In the following example, we introduce a family of endomorphisms induced by maps from \(\operatorname{Ob}(\pi)\) to the nonzero complex numbers.

**Example 2.5**.: Let \(\bar{q}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\) be a map from the objects of the groupoid to the nonzero complex numbers. For any \(V_{1},\dots,V_{N}\in\operatorname{Vect}_{k}(\pi)\), the map \(\bar{q}\) induces endomorphisms \(q^{(i)}\in\operatorname{End}(\otimes_{l=1}^{N}V_{l}),i=0,\dots,N\). For convenience, \(q^{(0)}\) is also simply denoted by \(q\). For any \(\alpha_{1},\dots,\alpha_{N}\) such that \(\alpha=\alpha_{N}\circ\dots\circ\alpha_{1}\), the \(\alpha\) component of \(q^{(i)}\) restricted to \(\otimes_{l=1}^{N}V_{\alpha_{l}}\) is defined by
\[q^{(0)}_{\alpha}|_{V_{\alpha_{1}}\otimes V_{\alpha_{2}}\dots V_{\alpha_{N}}}:=\bar{q}(s(\alpha_{1}))\operatorname{id},\tag{2.1a}\]
\[q^{(i)}_{\alpha}|_{V_{\alpha_{1}}\otimes V_{\alpha_{2}}\dots V_{\alpha_{N}}}:=\bar{q}(t(\alpha_{i}))\operatorname{id},\quad i\geq 1,\tag{2.1b}\]
where \(\operatorname{id}\) is the identity operator in \(\operatorname{End}_{k}(\otimes_{l=1}^{N}V_{\alpha_{l}})\). The \(q^{(i)}\) are invertible; their inverses are the endomorphisms \((q^{(i)})^{-1}\) induced by \(\bar{q}^{-1}\). On the \(a\) source fibers of \(\otimes_{l=1}^{N}V_{l}\), the operators \(q^{(i)}\) can also be denoted by
\[\bar{q}(a),\quad i=0;\qquad\bar{q}(ah^{(i)}),\quad i\geq 1.\tag{2.2}\]
Here the \(h^{(i)}\) are the "dynamical" notations: intuitively, \(ah^{(i)}\) means the target vertex of the \(i\)-th arrow of a chain of arrows that starts at \(a\).
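As a quick illustration of this notation (a sketch we add for concreteness; it is not part of the formal development), consider the action groupoid \(\mathbb{Z}\rtimes\mathbb{Z}\) of Example 2.2, where an arrow \((a,g)\) has source \(a\) and target \(a+g\):

```python
# Illustration of the dynamical notation a·h^{(i)} for the action groupoid
# Z ⋊ Z of Example 2.2: an arrow (a, g) has source a and target a + g.

def shifted_vertex(a, steps, i):
    """a·h^{(i)}: for a chain of arrows (a, g_1), (a+g_1, g_2), ... starting
    at a, return the target of the i-th arrow; i = 0 returns a itself."""
    return a + sum(steps[:i])

steps = [+1, -1, +1]                                      # a chain of three arrows
print([shifted_vertex(5, steps, i) for i in range(4)])    # [5, 6, 5, 6]
```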
For the \(k\) additive category \(\operatorname{Vect}_{k}(\pi)\), we can also define the dual groupoid graded vector space, and the resulting category is an abelian pivotal monoidal category; see Section 2 of [22].

### Yang-Baxter operator and other local operators

Let \(k=\mathbb{C}\). A Yang-Baxter operator on \(V\in\operatorname{Vect}_{k}(\pi)\) is a meromorphic function \(z\mapsto\check{R}(z)\in\operatorname{End}(V\otimes V)\) of the spectral parameter \(z\in\mathbb{C}\), with values in the endomorphisms of \(V\otimes V\), obeying the Yang-Baxter equation
\[\check{R}(z-w)^{(23)}\check{R}(z)^{(12)}\check{R}(w)^{(23)}=\check{R}(w)^{(12)}\check{R}(z)^{(23)}\check{R}(z-w)^{(12)}\tag{2.3}\]
together with a certain inversion (also called unitarity) relation. More generally, we can write the equation as
\[\check{R}(x)^{(23)}\check{R}(x^{\prime})^{(12)}\check{R}(x^{\prime\prime})^{(23)}=\check{R}(x^{\prime\prime})^{(12)}\check{R}(x^{\prime})^{(23)}\check{R}(x)^{(12)}\tag{2.4}\]
in \(\operatorname{End}(V\otimes V\otimes V)\) for all generic values of the spectral parameters \(x,x^{\prime},x^{\prime\prime}\). Here \(x,x^{\prime},x^{\prime\prime}\) are functions of \(z,z^{\prime},z^{\prime\prime}\) satisfying the condition \(z^{\prime\prime}=z^{\prime}-z\). The restriction of \(\check{R}(x)\) to \(V_{\alpha}\otimes V_{\beta}\) for composable arrows \(\alpha,\beta\) has components in each direct summand of the decomposition
\[\check{R}(x)|_{V_{\alpha}\otimes V_{\beta}}=\oplus_{\gamma,\delta}\mathcal{W}(x;\alpha,\beta,\gamma,\delta).\]
The sum is over \(\gamma,\delta\) such that \(\beta\circ\alpha=\delta\circ\gamma\), and \(\mathcal{W}\) is the component
\[\mathcal{W}(x;\alpha,\beta,\gamma,\delta)\in\operatorname{Hom}_{k}(V_{\alpha}\otimes V_{\beta},V_{\gamma}\otimes V_{\delta}).\]
Similarly, we can define the local Hecke operator, the local Temperley-Lieb operator and the local Birman-Murakami-Wenzl operator, which are without spectral parameter.

**Definition 2.6**.: For a map \(\bar{q}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), it induces a map \(q\in\operatorname{End}(V\otimes V)\) as in Example 2.5. The local Hecke operator \(S\in\operatorname{End}(V\otimes V)\) associated to \(q\) is defined by the following equations
\[(S-q)(S+q^{-1})=0\tag{2.5a}\]
\[S^{(12)}S^{(23)}S^{(12)}=S^{(23)}S^{(12)}S^{(23)}\tag{2.5b}\]

**Definition 2.7**.: For a map \(\bar{\kappa}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), it induces a map \(\kappa\in\operatorname{End}(V\otimes V)\).
The local Temperley-Lieb operator \(T\in\operatorname{End}(V\otimes V)\) associated to \(\kappa\) is defined by the equations
\[T^{2}=\kappa T\tag{2.6a}\]
\[T^{(12)}T^{(23)}T^{(12)}=T^{(12)}\tag{2.6b}\]
\[T^{(23)}T^{(12)}T^{(23)}=T^{(23)}\tag{2.6c}\]

**Definition 2.8**.: For two maps \(\bar{q},\bar{\nu}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), let \(V\in\operatorname{Vect}_{k}(\pi)\), let \(U\) be an invertible operator in \(\operatorname{End}(V\otimes V)\), and set
\[K:=\operatorname{id}-(q-q^{-1})^{-1}(U-U^{-1}).\tag{2.7}\]
\(U\) is called a local Birman-Murakami-Wenzl operator if it satisfies the following equations
\[U^{(12)}U^{(23)}U^{(12)}=U^{(23)}U^{(12)}U^{(23)}\tag{2.8a}\]
\[KU=UK=\nu K\tag{2.8b}\]
\[K^{(23)}(U^{\epsilon})^{(12)}K^{(23)}=(\nu^{-\epsilon})^{(1)}K^{(23)},\quad\epsilon=\pm 1\tag{2.8c}\]
\[K^{(12)}(U^{\epsilon})^{(23)}K^{(12)}=(\nu^{-\epsilon})K^{(12)},\quad\epsilon=\pm 1\tag{2.8d}\]

### Local dynamical operators

Let \(V\in\operatorname{Vect}_{k}(\pi)\). The dynamical Yang-Baxter operator \(\check{R}(z,a)\) is defined on the source fibers \(s^{-1}(a)\) of \(V\), \(a\in\operatorname{Ob}(\pi)\),
\[\check{R}(z,a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(V\otimes V)_{\alpha}\big{)},\]
which satisfies the dynamical Yang-Baxter equation
\[\check{R}^{(23)}(z-w,ah^{(1)})\check{R}^{(12)}(z,a)\check{R}^{(23)}(w,ah^{(1)})=\check{R}^{(12)}(w,a)\check{R}^{(23)}(z,ah^{(1)})\check{R}^{(12)}(z-w,a).\tag{2.9}\]
Here we use the "dynamical" notation with the placeholder \(h^{(i)}\):
\[\check{R}^{(23)}(ah^{(1)})(u\otimes v\otimes w)=u\otimes\check{R}(t(\alpha_{1}))(v\otimes w),\quad\text{if}\quad u\in V_{\alpha_{1}},s(\alpha_{1})=a.\]
More generally, we write the dynamical Yang-Baxter equation as
\[\begin{split}&\check{R}^{(23)}(x,ah^{(1)})\check{R}^{(12)}(x^{\prime},a)\check{R}^{(23)}(x^{\prime\prime},ah^{(1)})\\ &=\check{R}^{(12)}(x^{\prime\prime},a)\check{R}^{(23)}(x^{\prime},ah^{(1)})\check{R}^{(12)}(x,a),\end{split}\tag{2.10}\]
where \(x,x^{\prime},x^{\prime\prime}\) are functions of \(z,z^{\prime},z^{\prime\prime}\) satisfying the condition \(z^{\prime\prime}=z^{\prime}-z\). In the case of an action groupoid \(\pi=A\rtimes G\) and its subgroupoids, the operator can be written explicitly as
\[\check{R}(x,d)\in\oplus_{g\in G}\operatorname{End}_{k}((V\otimes V)_{(d,g)}),\tag{2.11a}\]
\[(V\otimes V)_{(d,g)}=\oplus_{h\in G}V_{(d,h)}\otimes V_{(dh,h^{-1}g)}.\tag{2.11b}\]
For composable arrows, we have
\[\check{R}(x,d)|_{V_{(d,g_{1})}\otimes V_{(a,g_{2})}}=\oplus_{(d,g_{3}),(c,g_{4})}\mathcal{W}\big{(}x;(d,g_{1}),(a,g_{2}),(d,g_{3}),(c,g_{4})\big{)},\tag{2.12}\]
where \(\mathcal{W}\big{(}x;(d,g_{1}),(a,g_{2}),(d,g_{3}),(c,g_{4})\big{)}\in\operatorname{Hom}_{k}(V_{(d,g_{1})}\otimes V_{(a,g_{2})},V_{(d,g_{3})}\otimes V_{(c,g_{4})})\); graphically, it is represented by a face weight diagram (2.13). In the same spirit, when we look at the source fibers, we can define the various local dynamical operators.
**Definition 2.9**.: For a map \(\bar{q}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), the local dynamical Hecke operator \(S(a)\) is defined on the source fibers \(s^{-1}(a)\) of \(V\), \(a\in\operatorname{Ob}(\pi)\),
\[S(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(V\otimes V)_{\alpha}\big{)},\tag{2.14}\]
and satisfies the relations
\[\big{(}S(a)-\bar{q}(a)\big{)}\big{(}S(a)+\bar{q}^{-1}(a)\big{)}=0,\tag{2.15a}\]
\[S^{(12)}(a)S^{(23)}(ah^{(1)})S^{(12)}(a)=S^{(23)}(ah^{(1)})S^{(12)}(a)S^{(23)}(ah^{(1)}).\tag{2.15b}\]

**Definition 2.10**.: For a map \(\bar{\kappa}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), an operator \(T(a)\) defined on the source fibers \(s^{-1}(a),a\in\operatorname{Ob}(\pi)\),
\[T(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(V\otimes V)_{\alpha}\big{)},\tag{2.16}\]
is called a local dynamical Temperley-Lieb operator if it satisfies the following dynamical Temperley-Lieb equations:
\[T(a)T(a)=\bar{\kappa}(a)T(a)\tag{2.17a}\]
\[T^{(12)}(a)T^{(23)}(ah^{(1)})T^{(12)}(a)=T^{(12)}(a)\tag{2.17b}\]
\[T^{(23)}(ah^{(1)})T^{(12)}(a)T^{(23)}(ah^{(1)})=T^{(23)}(ah^{(1)})\tag{2.17c}\]

**Lemma 2.11**.: _Suppose that we have a dynamical Hecke operator \(S(a)\) with parameter \(\bar{q}\). If \(\bar{q}(a)+\bar{q}^{-1}(a)\neq 0\) for every \(a\in\operatorname{Ob}(\pi)\), then we can define \(T(a):=\bar{q}(a)-S(a)\), which forms a local dynamical Temperley-Lieb operator with parameter \(\bar{\kappa}=\bar{q}+\bar{q}^{-1}\)._

For any finite connected unoriented graph \(\Gamma\) with \(n\) vertices and at most one edge between two different vertices, let \(Y\in\operatorname{Mat}_{n}(\{0,1\})\) be its adjacency matrix. By the Perron-Frobenius theorem (see Appendix A), it has a Perron-Frobenius vector \(\xi\) with eigenvalue \(\phi(Y)\), satisfying the eigenvalue equation
\[Y\xi=\phi(Y)\xi.\]
For the groupoid \(\pi(\Gamma)\) associated to the graph \(\Gamma\), we associate a one dimensional vector space to each arrow in \(\pi(\Gamma)\), and we then have a \(V\in\operatorname{Vect}_{k}(\pi(\Gamma))\).

**Proposition 2.12**.: _Suppose that \(\xi=(S_{i}),i\in\operatorname{Ob}(\pi(\Gamma))\). Then we can construct a representation of the local dynamical Temperley-Lieb operator \(T\) associated to the constant function \(\bar{\kappa}(a):=\phi(Y)\) by_
\[T(d)=\oplus_{a,c}T(d)[a,c],\qquad T(d)[a,c]=\frac{\sqrt{S_{a}S_{c}}}{S_{d}},\tag{2.18}\]
_where \(T(d)[a,c]\) denotes the component mapping the path \(d\to a\to d\) to the path \(d\to c\to d\), and \(T(d)\) vanishes on the other components._

Proof.: The proof is the same as that of Lemma 2.29, which we discuss later.

From the duality between the square lattice model and the loop model (see [36], Section 4), we have the graphical duality (2.19). From this duality and the example above, we obtain the loop model, or diagram algebra, definition of the above specific example related to a graph. Later, in subsection 2.3, we will see that the following example is a specific case of Proposition 2.20.

**Definition 2.13**.: Given any connected unoriented graph \((V,E)\) with at most a single edge between two different vertices, for \(N\in\mathbb{Z},N>0\) and \(\phi\in\mathbb{C}\), the diagram algebra \(\mathrm{dTL}(\mathrm{N},\phi)\) is defined as follows.
For any \(a\in V\), if there exist \(b,c\) connected to \(a\) by an edge, a generator \(e_{i}(a)[b,c]\) is defined, and the generators satisfy the following three equations
\[\sum_{c}e_{i}(a)[c,d]e_{i}(a)[b,c]=\phi\,e_{i}(a)[b,d]\tag{2.20a}\]
\[e_{i}(a)[c,d]e_{i+1}(c)[a,a]e_{i}(a)[b,c]=e_{i}(a)[b,d]\tag{2.20b}\]
\[e_{i+1}(b)[a,d]e_{i}(a)[b,b]e_{i+1}(b)[c,a]=e_{i+1}(b)[c,d]\tag{2.20c}\]
Graphically, the generators \(e_{i}(a)[b,c]\) are presented by the diagram (2.21), and the equations (2.20a), (2.20b) and (2.20c) are graphically described by (2.22), (2.23) and (2.24) respectively.

**Definition 2.14**.: For two maps \(\bar{q},\bar{\nu}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), let \(U(a)\) be an invertible operator defined on the source fibers \(s^{-1}(a)\) of \(V\), \(a\in\operatorname{Ob}(\pi)\); that is,
\[U(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(V\otimes V)_{\alpha}\big{)},\tag{2.25}\]
and denote
\[K(a)=\operatorname{id}-\frac{U(a)-U^{-1}(a)}{\bar{q}(a)-\bar{q}^{-1}(a)}.\tag{2.26}\]
\(U(a)\) is called the local dynamical Birman-Murakami-Wenzl operator associated to \(\bar{q},\bar{\nu}\) if it satisfies the following relations
\[U^{(12)}(a)U^{(23)}(ah^{(1)})U^{(12)}(a)=U^{(23)}(ah^{(1)})U^{(12)}(a)U^{(23)}(ah^{(1)})\tag{2.27a}\]
\[K(a)U(a)=U(a)K(a)=\bar{\nu}(a)K(a)\tag{2.27b}\]
\[K^{(23)}(ah^{(1)})(U^{\epsilon})^{(12)}(a)K^{(23)}(ah^{(1)})=\bar{\nu}^{-\epsilon}(ah^{(1)})K^{(23)}(ah^{(1)}),\quad\epsilon=\pm 1\tag{2.27c}\]
\[K^{(12)}(a)(U^{\epsilon})^{(23)}(ah^{(1)})K^{(12)}(a)=\bar{\nu}^{-\epsilon}(a)K^{(12)}(a),\quad\epsilon=\pm 1\tag{2.27d}\]

### Relation between the usual and dynamical operators

There are three types of relations between the non-dynamical and the dynamical operators. The _first type_ concerns the groupoid structure. Suppose that the groupoid \(\pi\) is trivial, with only one vertex and one identity arrow; then the category of \(\pi\) graded vector spaces becomes just the usual category of \(k\) vector spaces,
\[\operatorname{Vect}_{\mathrm{k}}(\pi)\simeq\operatorname{Vect}_{\mathrm{k}},\]
the local dynamical operators do not depend on the dynamical shift, and they become the usual operators. For example, the dynamical Yang-Baxter equation (2.9) becomes the usual quantum Yang-Baxter equation
\[\check{R}(z-w)^{(23)}\check{R}(z)^{(12)}\check{R}(w)^{(23)}=\check{R}(w)^{(12)}\check{R}(z)^{(23)}\check{R}(z-w)^{(12)},\]
and similarly the other local dynamical operators become the local non-dynamical operators.

The _second type_ of relation connects usual operators on groupoid graded representations with dynamical operators on the source fibers: we can simply restrict the operators to the source fibers to obtain the dynamical operators. For example, if \(\pi=A\rtimes G\) is an action groupoid with a Yang-Baxter operator \(\check{R}(x)\) defined on \(V\in\operatorname{Vect}_{k}(\pi)\), we can restrict \(\check{R}(x)\) to the graded component of the vector space with fixed \(a\in A\),
\[\check{R}(x,a):=\check{R}(x)|_{\oplus_{g\in G}\operatorname{End}_{k}((V\otimes V)_{(a,g)})};\]
the restriction of the Yang-Baxter equation to this graded component is the corresponding dynamical Yang-Baxter equation.

The _third type_ of relation is the globalization, which goes from dynamical to non-dynamical; for example, we can first define the following global dynamical operators.
**Definition 2.15**.: Let \(V_{i}\in\operatorname{Vect}_{k}(\pi)\), \(\bar{q}_{i}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), \(i=1,\dots,N\). Operators \(S_{i}(a)\) defined on the source fibers \(s^{-1}(a),a\in\operatorname{Ob}(\pi)\),
\[S_{i}(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(\otimes_{l=1}^{N}V_{l})_{\alpha}\big{)},\tag{2.28}\]
form dynamical Hecke operators associated to \(\bar{q}_{i}\) if they satisfy the following dynamical Hecke relations:
\[\big{(}S_{i}(a)-\bar{q}_{i}(ah^{(i-1)})\big{)}\big{(}S_{i}(a)+\bar{q}_{i}^{-1}(ah^{(i-1)})\big{)}=0,\quad 1\leq i\leq N-1\tag{2.29a}\]
\[S_{i}(a)S_{i+1}(a)S_{i}(a)=S_{i+1}(a)S_{i}(a)S_{i+1}(a),\quad 1\leq i\leq N-2\tag{2.29b}\]
\[S_{i}(a)S_{j}(a)=S_{j}(a)S_{i}(a),\quad|i-j|>1\tag{2.29c}\]

Many classical arguments naturally extend to the dynamical case; for example, we can also define the Murphy elements of type \(A\), where "type" actually refers both to the boundary type and to the Lie algebra type. Here is the definition:
\[J_{1}^{(A)}(a):=S_{1}^{2}(a);\tag{2.30}\]
\[J_{i}^{(A)}(a):=S_{i}(a)J_{i-1}^{(A)}(a)S_{i}(a),\quad 2\leq i\leq N-1.\tag{2.31}\]

**Proposition 2.16**.: _The Murphy elements satisfy the relations:_
\[[J_{i}^{(A)}(a),J_{j}^{(A)}(a)]=0\tag{2.32}\]
\[[S_{1}(a),J_{j}^{(A)}(a)]=0,\quad j\geq 1\tag{2.33}\]
\[[S_{i}(a),J_{j}^{(A)}(a)]=0,\quad i\geq 2,\;j\neq i-1,i\tag{2.34}\]
\[[S_{i}(a),J_{i-1}^{(A)}(a)J_{i}^{(A)}(a)]=0,\quad i\geq 2\tag{2.35}\]
\[[S_{i}(a),J_{i}^{(A)}(a)+J_{i-1}^{(A)}(a)]=0,\quad i\geq 2\tag{2.36}\]

**Remark 2.17**.: For different boundary conditions, there may be other generalizations of the Temperley-Lieb and Hecke algebras, as in the classical case; see for example [8].

**Definition 2.18**.: Let \(V_{i}\in\operatorname{Vect}_{k}(\pi)\), \(\bar{\kappa}_{i}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), \(i=1,\ldots,N\). Operators \(T_{i}(a)\) defined on the source fibers \(s^{-1}(a),a\in\operatorname{Ob}(\pi)\),
\[T_{i}(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(\otimes_{l=1}^{N}V_{l})_{\alpha}\big{)},\tag{2.37}\]
form a dynamical Temperley-Lieb operator algebra if they satisfy the following dynamical Temperley-Lieb relations:
\[T_{i}(a)T_{i}(a)=\bar{\kappa}_{i}(ah^{(i-1)})T_{i}(a),\quad i=1,\ldots,N-1\tag{2.38a}\]
\[T_{i}(a)T_{i+1}(a)T_{i}(a)=T_{i}(a),\quad 1\leq i\leq N-2\tag{2.38b}\]
\[T_{i+1}(a)T_{i}(a)T_{i+1}(a)=T_{i+1}(a),\quad 1\leq i\leq N-2\tag{2.38c}\]
\[T_{i}(a)T_{j}(a)=T_{j}(a)T_{i}(a),\quad|i-j|>1\tag{2.38d}\]

**Lemma 2.19**.: _Suppose that we have dynamical Hecke operators \(S_{i}(a)\) associated to \(\bar{q}_{i}\). If \(\bar{q}_{i}(ah^{(i-1)})+\bar{q}_{i}^{-1}(ah^{(i-1)})\neq 0\) for any \(a\in\operatorname{Ob}(\pi)\) and \(i=1,\ldots,N-1\), then we can define \(T_{i}(a):=\bar{q}_{i}(ah^{(i-1)})-S_{i}(a)\), which forms a dynamical Temperley-Lieb operator associated to \(\bar{\kappa}_{i}=\bar{q}_{i}+\bar{q}_{i}^{-1}\)._

**Proposition 2.20**.: _Let \(T(a)\) be a local dynamical operator defined on source fibers of \(V\) as in Definition 2.10, associated to \(\bar{\kappa}\). Then_
\[T_{i}(a):=T^{(i,i+1)}(ah^{(i-1)})\tag{2.39}\]
_is a representation of the dynamical Temperley-Lieb operators on the source fiber space of \(V^{\otimes N}\), associated with \(\bar{\kappa}_{i}=\bar{\kappa},i=1,\ldots,N-1\)._

_Suppose that \(\bar{\kappa}(a)=\bar{\kappa}(ah^{(i)})\) for all \(i\). Then, collecting all the fiber spaces, we get an operator \(T\) which forms a representation of the usual Temperley-Lieb algebra on the groupoid graded vector space._

Similarly, for
the Birman-Murakami-Wenzl case, we give the following definition of the global version.

**Definition 2.21**.: Let \(V_{i}\in\operatorname{Vect}_{k}(\pi)\), \(\bar{q}_{i},\bar{\nu}_{i}:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\), \(i=1,\ldots,N\). The invertible operators \(U_{i}(a),i=1,\ldots,N-1\) are defined on the source fibers \(s^{-1}(a),a\in\operatorname{Ob}(\pi)\),
\[U_{i}(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(\otimes_{l=1}^{N}V_{l})_{\alpha}\big{)},\tag{2.40}\]
and the \(K_{i}(a)\) are defined by
\[K_{i}(a)=\operatorname{id}-\big{(}\bar{q}_{i}(ah^{(i-1)})-\bar{q}_{i}^{-1}(ah^{(i-1)})\big{)}^{-1}\big{(}U_{i}(a)-U_{i}^{-1}(a)\big{)}.\tag{2.41}\]
The \(U_{i}(a)\) are called dynamical Birman-Murakami-Wenzl operators if they satisfy the relations
\[U_{i}(a)U_{i+1}(a)U_{i}(a)=U_{i+1}(a)U_{i}(a)U_{i+1}(a)\tag{2.42a}\]
\[K_{i}(a)U_{i}(a)=U_{i}(a)K_{i}(a)=\bar{\nu}_{i}(ah^{(i-1)})K_{i}(a)\tag{2.42b}\]
\[K_{i}(a)U_{i-1}^{\epsilon}(a)K_{i}(a)=\bar{\nu}_{i}^{-\epsilon}(ah^{(i-1)})K_{i}(a),\quad\epsilon=\pm 1\tag{2.42c}\]
\[K_{i}(a)U_{i+1}^{\epsilon}(a)K_{i}(a)=\bar{\nu}_{i}^{-\epsilon}(ah^{(i-1)})K_{i}(a),\quad\epsilon=\pm 1\tag{2.42d}\]

### Baxterization

In this section, we consider the Baxterization process of the local dynamical operators. In the first case, let \(V\in\operatorname{Vect}_{k}(\pi)\), and suppose that we have invertible operators \(\sigma(a)\in\oplus_{\alpha\in s^{-1}(a)}\operatorname{End}_{k}\big{(}(V\otimes V)_{\alpha}\big{)}\) that satisfy the relations
\[\sigma^{(12)}(a)\sigma^{(23)}(ah^{(1)})\sigma^{(12)}(a)=\sigma^{(23)}(ah^{(1)})\sigma^{(12)}(a)\sigma^{(23)}(ah^{(1)})\tag{2.43a}\]
\[\sigma(a)+\sigma^{-1}(a)=f(a)\operatorname{id}\tag{2.43b}\]
where \(f:\operatorname{Ob}(\pi)\to\mathbb{C}^{\times}\).

**Theorem 2.22**.: _Suppose that \(f(a)=f(b)\) whenever there exists an arrow \(\alpha\) with \(s(\alpha)=a,t(\alpha)=b\). Then the operator defined by_
\[\check{R}(z,a)=e^{z}\sigma(a)+e^{-z}\sigma^{-1}(a)\]
_satisfies the dynamical Yang-Baxter equation (2.9)._

Proof.: We use the relations
\[\sigma(a)+\sigma^{-1}(a)=f(a)\operatorname{id},\quad\sigma(ah^{(1)})+\sigma^{-1}(ah^{(1)})=f(ah^{(1)})\operatorname{id}=f(a)\operatorname{id};\]
expanding both sides of (2.9) in powers of \(e^{\pm z},e^{\pm w}\) and applying the braid relation (2.43a) together with these identities yields the equality.

In the second case, if we assume that
\[\check{R}(x,a)=\operatorname{id}+xT(a)\tag{2.44}\]
where \(T(a)\) is a local dynamical Temperley-Lieb operator associated to \(\bar{\kappa}\) on \(V\) as defined in Definition 2.10, then we will get

**Theorem 2.23**.: _Suppose that \(x=f(z),x^{\prime}=f(z^{\prime}),x^{\prime\prime}=f(z^{\prime\prime}),z^{\prime\prime}=z^{\prime}-z\) satisfy the following equations_
\[x^{\prime\prime}=\frac{x^{\prime}-x}{1+\bar{\kappa}(ah^{(1)})x+xx^{\prime}},\quad x^{\prime\prime}=\frac{x^{\prime}-x}{1+\bar{\kappa}(a)x+xx^{\prime}};\tag{2.45}\]
_then the operators \(\check{R}(x,a)=\operatorname{id}+xT(a)\) satisfy the dynamical Yang-Baxter equation (2.10)._

Proof.: Inserting the assumption (2.44) into the dynamical Yang-Baxter equation and using the relations of the local dynamical Temperley-Lieb operator, we get the relation
\[(x^{\prime\prime}+x+\bar{\kappa}(ah^{(1)})xx^{\prime\prime}+xx^{\prime}x^{\prime\prime}-x^{\prime})T^{(23)}(ah^{(1)})=(x^{\prime\prime}+x+\bar{\kappa}(a)xx^{\prime\prime}+xx^{\prime}x^{\prime\prime}-x^{\prime})T^{(12)}(a),\tag{2.46}\]
so if the relations (2.45) hold, both coefficients vanish and the Yang-Baxter relation is satisfied.

In the third case, we consider the Birman-Murakami-Wenzl case.
**Theorem 2.24**.: _Suppose that the operators \(U(a)\) are the local dynamical Birman-Murakami-Wenzl operators associated with \(\bar{q},\bar{\nu}\) on \(V\), and suppose that \(\bar{q}(a)=\bar{q}(b),\bar{\nu}(a)=\bar{\nu}(b)\) whenever there exists an arrow \(\alpha\in\pi\) with \(s(\alpha)=a,t(\alpha)=b\). Then we define_
\[\check{R}(u,v)[a]:=U(a)+\frac{\bar{q}(a)-\bar{q}^{-1}(a)}{v/u-1}+\frac{\bar{q}(a)-\bar{q}^{-1}(a)}{1+\bar{\nu}^{-1}(a)\bar{q}(a)v/u}K(a),\tag{2.47}\]
_and it satisfies the following two-parameter dynamical Yang-Baxter equation_
\[\begin{split}&\check{R}^{(12)}(u_{2},u_{3})[ah^{(1)}]\check{R}^{(23)}(u_{1},u_{3})[a]\check{R}^{(12)}(u_{1},u_{2})[ah^{(1)}]\\ &=\check{R}^{(23)}(u_{1},u_{2})[a]\check{R}^{(12)}(u_{1},u_{3})[ah^{(1)}]\check{R}^{(23)}(u_{2},u_{3})[a].\end{split}\tag{2.48}\]

### Example: trigonometric degeneration of elliptic quantum group

In this section, we describe the trigonometric degeneration of the elliptic quantum group. For the elliptic dynamical \(\check{R}^{\rm ell}(x,a)\), we have two basic groupoid representations, the unrestricted and the restricted case. Fix a complex number \(\tau\) with \(\operatorname{Im}\tau>0\) and \(L\in\mathbb{Z}\) such that \(\frac{1}{L+1}\notin\mathbb{Z}+\tau\mathbb{Z}\). Let
\[\theta(z,\tau)=-\sum_{n\in\mathbb{Z}}e^{i\pi(n+\frac{1}{2})^{2}\tau+2\pi i(n+\frac{1}{2})(z+\frac{1}{2})}\tag{2.49}\]
be the odd Jacobi theta function, and let \([z]=\theta\big{(}z/(L+1),\tau\big{)}/\big{(}\theta^{\prime}(0,\tau)/(L+1)\big{)}\), normalized to have derivative \(1\) at \(z=0\). We now take \(L\) to be an integer bigger than \(2\). We have two cases to consider. The first is the unrestricted case: the action groupoid is \(\pi^{\rm unres}_{A}=(\mathbb{Z}+b)\rtimes\mathbb{Z}\), where \(\mathbb{Z}\) acts on \(\mathbb{Z}+b\) by translation and \(b\in\mathbb{R}\) is a shift to avoid the singularities. The corresponding groupoid graded vector space is defined to be
\[V^{\pi^{\rm unres}_{A}}_{(a,-1)}=\mathbb{C}e_{(a,-1)},\quad V^{\pi^{\rm unres}_{A}}_{(a,+1)}=\mathbb{C}e_{(a,+1)}.\tag{2.50}\]

Figure 1. unrestricted groupoid of type \(A\)

Then we have the following isomorphism of the fiber space for any \(a\in\pi^{\rm unres}_{A}\),
\[\oplus_{\alpha\in s^{-1}(a)}(V^{\pi^{\rm unres}_{A}}\otimes V^{\pi^{\rm unres}_{A}})_{\alpha}\cong\mathbb{C}^{4}\tag{2.51}\]
by identifying \(e_{(a,+1)}\otimes e_{(a+1,+1)}\) with \((1,0)\otimes(1,0)\), \(e_{(a,+1)}\otimes e_{(a+1,-1)}\) with \((1,0)\otimes(0,1)\), \(e_{(a,-1)}\otimes e_{(a-1,+1)}\) with \((0,1)\otimes(1,0)\), and \(e_{(a,-1)}\otimes e_{(a-1,-1)}\) with \((0,1)\otimes(0,1)\). With the identification (2.51), if we let \(E_{ij}\) be the \(2\times 2\) matrix unit such that \(E_{ij}e_{k}=\delta_{jk}e_{i}\) for all \(k\in\{1,2\}\), then Felder's elliptic dynamical \(\check{R}\) [20]
matrix with Andrews-Baxter-Forrester [2] parametrization is
\[\begin{split}\check{R}^{\rm ell}_{A}(z,a)=\sum_{i=1}^{2}E_{ii}\otimes E_{ii}+\frac{\sqrt{[a-1][a+1]}[z]}{[a][1-z]}E_{21}\otimes E_{12}+\frac{\sqrt{[a+1][a-1]}[z]}{[a][1-z]}E_{12}\otimes E_{21}\\ +\frac{[a+z][1]}{[a][1-z]}E_{11}\otimes E_{22}+\frac{[a-z][1]}{[a][1-z]}E_{22}\otimes E_{11}\end{split}\tag{2.52}\]

**Proposition 2.25**.: _On the source fiber vector space of \(V^{\pi^{\rm unres}_{A}}\), the operators \(T^{\rm ell}_{A}\) defined by_
\[\begin{split}T^{\rm ell}_{A}(a)=\frac{\sqrt{[a-1][a+1]}}{[a]}E_{21}\otimes E_{12}+\frac{\sqrt{[a+1][a-1]}}{[a]}E_{12}\otimes E_{21}\\ +\frac{[a+1]}{[a]}E_{11}\otimes E_{22}+\frac{[a-1]}{[a]}E_{22}\otimes E_{11}\end{split}\tag{2.53}\]
_form a local dynamical Temperley-Lieb operator associated with the map_
\[\bar{\kappa}(a)=\frac{[a+1]+[a-1]}{[a]}.\tag{2.54}\]

Proof.: The first equation (2.17a) is directly computed, and it produces the map \(\bar{\kappa}\). The equations (2.17b) and (2.17c) are directly checked by presenting the operators as in Lemma 2.29.

Taking the trigonometric limit \(\tau\to i\infty\), we have \(\theta(z,\tau)\sim 2e^{\frac{\pi i\tau}{4}}\sin(\pi z)\), so we can simply replace the function \([z]\) by \(\sin(\frac{\pi z}{L+1})\) to get the type \(A\) trigonometric parametrization of the \(R\) matrix. For convenience, we use the notation \(\langle z\rangle:=\sin(\frac{\pi z}{L+1})\).
\[\begin{split}\check{R}^{\rm tri}_{A}(z,a)=\sum_{i=1}^{2}E_{ii}\otimes E_{ii}+\frac{\sqrt{\langle a-1\rangle\langle a+1\rangle}\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}E_{21}\otimes E_{12}+\frac{\sqrt{\langle a+1\rangle\langle a-1\rangle}\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}E_{12}\otimes E_{21}\\ +\frac{\langle a+z\rangle\langle 1\rangle}{\langle a\rangle\langle 1-z\rangle}E_{11}\otimes E_{22}+\frac{\langle a-z\rangle\langle 1\rangle}{\langle a\rangle\langle 1-z\rangle}E_{22}\otimes E_{11}\end{split}\tag{2.55}\]
We then use the following calculation (and similarly for the term \(\langle a-z\rangle\langle 1\rangle\)):
\[\begin{split}\langle a+z\rangle\langle 1\rangle&=\sin\Big{(}\frac{(a+z)\pi}{L+1}\Big{)}\sin\Big{(}\frac{\pi}{L+1}\Big{)}\\ &=\sin\Big{(}\frac{a\pi}{L+1}\Big{)}\Big{(}\cos\Big{(}\frac{z\pi}{L+1}\Big{)}\sin\Big{(}\frac{\pi}{L+1}\Big{)}-\sin\Big{(}\frac{z\pi}{L+1}\Big{)}\cos\Big{(}\frac{\pi}{L+1}\Big{)}\Big{)}\\ &\quad+\sin\Big{(}\frac{z\pi}{L+1}\Big{)}\Big{(}\cos\Big{(}\frac{a\pi}{L+1}\Big{)}\sin\Big{(}\frac{\pi}{L+1}\Big{)}+\sin\Big{(}\frac{a\pi}{L+1}\Big{)}\cos\Big{(}\frac{\pi}{L+1}\Big{)}\Big{)}\\ &=\langle a\rangle\langle 1-z\rangle+\langle z\rangle\langle a+1\rangle\end{split}\]

**Remark 2.26**.: The above calculation is simply the addition formula for the sine function. We choose to show it explicitly because there are also addition formulas for elliptic functions, and the use of addition formulas for the Baxterization of elliptic functions will be studied elsewhere.
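The identity above can also be checked numerically. The following small script is our own sketch (the value of \(L\) and the random sample are arbitrary); it verifies \(\langle a+z\rangle\langle 1\rangle=\langle a\rangle\langle 1-z\rangle+\langle z\rangle\langle a+1\rangle\) at random arguments:

```python
import numpy as np

L = 10
ang = lambda x: np.sin(x * np.pi / (L + 1))   # <x> := sin(x*pi/(L+1))

rng = np.random.default_rng(0)
for _ in range(100):
    a, z = rng.uniform(-3.0, 3.0, size=2)
    assert np.isclose(ang(a + z) * ang(1),
                      ang(a) * ang(1 - z) + ang(z) * ang(a + 1))
print("addition formula verified")
```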
Using this calculation, the parametrization becomes
\[\begin{split}\check{R}^{\text{tri}}_{A}(z,a)&=\sum_{i=1}^{2}E_{ii}\otimes E_{ii}+\frac{\sqrt{\langle a-1\rangle\langle a+1\rangle}\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}E_{21}\otimes E_{12}+\frac{\sqrt{\langle a+1\rangle\langle a-1\rangle}\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}E_{12}\otimes E_{21}\\ &\quad+\Big{(}1+\frac{\langle a+1\rangle\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}\Big{)}E_{11}\otimes E_{22}+\Big{(}1+\frac{\langle a-1\rangle\langle z\rangle}{\langle a\rangle\langle 1-z\rangle}\Big{)}E_{22}\otimes E_{11}\\ &=\operatorname{id}+xT^{\text{tri}}_{A}(a).\end{split}\]
Here \(x=\frac{\langle z\rangle}{\langle 1-z\rangle}\) and \(T^{\text{tri}}_{A}(a)\) is defined as follows:
\[\begin{split}T^{\text{tri}}_{A}(a)&=\frac{\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\langle a\rangle}E_{21}\otimes E_{12}+\frac{\sqrt{\langle a+1\rangle\langle a-1\rangle}}{\langle a\rangle}E_{12}\otimes E_{21}\\ &\quad+\frac{\langle a+1\rangle}{\langle a\rangle}E_{11}\otimes E_{22}+\frac{\langle a-1\rangle}{\langle a\rangle}E_{22}\otimes E_{11}\end{split}\tag{2.56}\]

**Proposition 2.27**.: \(T^{\text{tri}}_{A}(a)\) _forms a local dynamical Temperley-Lieb operator on \(V^{\pi^{\rm unres}_{A}}\) associated with the constant function \(\bar{\kappa}=2\cos\lambda,\lambda=\pi/(L+1)\)._

**Lemma 2.28**.: _The parametrization_
\[x=\frac{\langle z\rangle}{\langle 1-z\rangle},\quad x^{\prime}=\frac{\langle z^{\prime}\rangle}{\langle 1-z^{\prime}\rangle},\quad x^{\prime\prime}=\frac{\langle z^{\prime\prime}\rangle}{\langle 1-z^{\prime\prime}\rangle}\tag{2.57}\]
_satisfies the assumption of Theorem 2.23 associated with the constant function \(\bar{\kappa}=2\cos\lambda,\lambda=\pi/(L+1)\)._

In the trigonometric case, we thus see that the trigonometric limit of the elliptic \(\check{R}\) matrix can be seen as a Baxterization of the local dynamical Temperley-Lieb operator associated with the constant map \(\bar{\kappa}=2\cos\lambda,\lambda=\pi/(L+1)\).

The second case is the restricted case, where we consider only the subgroupoid \(\pi^{\rm res}_{A}\), the full subgroupoid of \(\mathbb{Z}\rtimes\mathbb{Z}\) supported on the set \(P^{L+1}_{++}:=\{\lambda\,|\,\lambda\in\mathbb{Z},1\leq\lambda\leq L\}\); this means that an arrow \((a,\mu)\) belongs to \(\pi^{\rm res}_{A}\) if both \(a\) and \(a+\mu\) lie in \(P^{L+1}_{++}\). We define the groupoid graded vector space to be
\[V^{\pi^{\text{res}}_{A}}_{(a,-1)}=\mathbb{C}e_{(a,-1)},\text{ if }a,a-1\in P^{L+1}_{++},\tag{2.58a}\]
\[V^{\pi^{\text{res}}_{A}}_{(a,+1)}=\mathbb{C}e_{(a,+1)},\text{ if }a,a+1\in P^{L+1}_{++}.\tag{2.58b}\]

Figure 2. restricted groupoid of type \(A\)

In this case, we do not have the isomorphism (2.51) for all \(a\in\pi^{\rm res}_{A}\), because there are boundary points: the source fibers at boundary points differ from those at interior points. These source fibers can all be seen as subspaces of the unrestricted ones, and by Corollary 4.2 of [22], the elliptic \(\check{R}(z,a)\) behaves well on these subspaces. The restricted trigonometric case can also be seen as a Baxterization with respect to some local dynamical Temperley-Lieb operator; this coincides with the type \(A\) critical ADE lattice model, so we discuss it in the next subsection. For the higher rank case, we use the Jimbo-Miwa-Okado [31] parametrization of Felder's elliptic \(R\) matrix; the situation is similar.
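Proposition 2.27, Lemma 2.28 and Theorem 2.23 can be verified numerically for the unrestricted type \(A\) representation. The following self-contained sketch is our own (the choices \(L=10\), \(a=5\) and the spectral parameters are arbitrary interior values): it builds \(T^{\rm tri}_{A}(a)\), checks the local dynamical Temperley-Lieb relations (2.17a)-(2.17c), and checks the dynamical Yang-Baxter equation (2.10) for \(\check{R}(x,a)=\operatorname{id}+xT(a)\) with \(x=\langle z\rangle/\langle 1-z\rangle\):

```python
import numpy as np

L = 10
lam = np.pi / (L + 1)
ang = lambda x: np.sin(x * np.pi / (L + 1))        # <x> = sin(x*pi/(L+1))

def T(a):
    """T^tri_A(a) of (2.56) on V ⊗ V, basis (e+e+, e+e-, e-e+, e-e-)."""
    M = np.zeros((4, 4))
    M[1, 1] = ang(a + 1) / ang(a)
    M[1, 2] = M[2, 1] = np.sqrt(ang(a - 1) * ang(a + 1)) / ang(a)
    M[2, 2] = ang(a - 1) / ang(a)
    return M

def T12(a):
    """T^{(12)}(a) acting on the first two factors of V ⊗ V ⊗ V."""
    return np.kron(T(a), np.eye(2))

def T23(a):
    """T^{(23)}(a h^{(1)}): the dynamical parameter is shifted by the first
    factor, to a+1 on e+ (arrow a -> a+1) and a-1 on e- (arrow a -> a-1)."""
    M = np.zeros((8, 8))
    M[:4, :4] = T(a + 1)
    M[4:, 4:] = T(a - 1)
    return M

a, kappa = 5, 2 * np.cos(lam)      # interior vertex: <a-2>, ..., <a+2> > 0

# local dynamical Temperley-Lieb relations (2.17a)-(2.17c)
assert np.allclose(T(a) @ T(a), kappa * T(a))
assert np.allclose(T12(a) @ T23(a) @ T12(a), T12(a))
assert np.allclose(T23(a) @ T12(a) @ T23(a), T23(a))

# Baxterization (Lemma 2.28): x = <z>/<1-z> satisfies (2.45) with z'' = z'-z
f = lambda z: ang(z) / ang(1 - z)
z, zp = 0.23, 0.71
assert np.isclose(f(zp - z), (f(zp) - f(z)) / (1 + kappa * f(z) + f(z) * f(zp)))

# dynamical Yang-Baxter equation (2.10) for R(x, a) = id + x T(a)
R12 = lambda z, a: np.eye(8) + f(z) * T12(a)
R23 = lambda z, a: np.eye(8) + f(z) * T23(a)
lhs = R23(z, a) @ R12(zp, a) @ R23(zp - z, a)
rhs = R12(zp - z, a) @ R23(zp, a) @ R12(z, a)
assert np.allclose(lhs, rhs)
print("Temperley-Lieb and dynamical Yang-Baxter relations verified")
```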
### Example: critical ADE lattice model

Let \(\pi\) be the groupoid associated with the ADE Dynkin diagrams: the diagram is unoriented, and we double each edge by two oriented arrows in inverse directions, which turns it into a groupoid. Then we define a \(\pi\) graded vector space \(V\) by associating a one dimensional vector space to each oriented edge,
\[V_{(a,\xi)}=\mathbb{C}.\]
The Perron-Frobenius eigenvalue of these graphs is \(2\cos(\lambda)\), where \(\lambda=\pi/h\) and \(h\) is the Coxeter number of the ADE graph as in Table 1:

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Lie algebra & \(A_{L}\) & \(D_{L}\) & \(E_{6}\) & \(E_{7}\) & \(E_{8}\) \\ \hline Coxeter number & \(L+1\) & \(2L-2\) & 12 & 18 & 30 \\ \hline \end{tabular}
\end{table} Table 1. table of Coxeter numbers

We denote by \(A\) the adjacency matrix of the graph, and let \(S_{a}\) denote the components of the Perron-Frobenius eigenvector of the adjacency matrix \(A\), as in Table 2 of the appendix. Then we have

**Lemma 2.29**.: _The following parametrization of the operator \(T(a)\) satisfies the dynamical Temperley-Lieb relations with parameter \(2\cos\lambda\):_
\[T(d)=\oplus_{a,c}T(d)[a,c],\qquad T(d)[a,c]=\frac{\sqrt{S_{a}S_{c}}}{S_{d}},\tag{2.59}\]
_where \(T(d)[a,c]\) denotes the component mapping the path \(d\to a\to d\) to the path \(d\to c\to d\), and \(T(d)\) vanishes on the other components._

Proof.: It is easy to see that the dynamical Temperley-Lieb relations are equivalent to certain graphical relations; these graphical relations of blocks of Temperley-Lieb operators were already observed by Pasquier [38], and with the above parametrization they are easily verified.

**Lemma 2.30**.: _Let \(0<\kappa<2\) and write \(\kappa=2\cos\lambda\). For the parametrizations_
\[x=\frac{\sin z}{\sin(\lambda-z)},\quad x^{\prime}=\frac{\sin z^{\prime}}{\sin(\lambda-z^{\prime})},\quad x^{\prime\prime}=\frac{\sin z^{\prime\prime}}{\sin(\lambda-z^{\prime\prime})},\]
_if the spectral parameters satisfy the difference relation \(z^{\prime\prime}=z^{\prime}-z\), then the desired relations for \(x,x^{\prime},x^{\prime\prime}\) in (2.45) hold._

**Remark 2.31**.: There are also interesting Baxterizations for the case \(\kappa=2\), which corresponds to the affine simply laced Dynkin diagrams. These also give rise to dynamical Yang-Baxter equations defined on the affine Dynkin diagrams; see [41] for the lattice model description, the recipe to describe them in terms of the dynamical Yang-Baxter equation being the same.

With the dynamical Temperley-Lieb operator and the Baxterization Lemma 2.30, we get a dynamical Yang-Baxter operator. For the type \(A\) case, we can write the \(\check{R}\) matrix and the operator \(T\) more explicitly as matrices, similarly to the usual setting; but because our representation is the restricted case, we do not have the isomorphism (2.51) for all fiber spaces, so some entries may not appear due to the restriction.
We denote \(\langle a\rangle:=\sin(a\pi/(L+1))\). For \(3\leq a\leq L-3\), we have the following:
\[\check{R}_{A}^{\rm tri}(u,a)=\begin{pmatrix}1&0&0&0\\ 0&1+\frac{\sin(u)\langle a+1\rangle}{\sin(\lambda-u)\langle a\rangle}&\frac{\sin(u)\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\sin(\lambda-u)\langle a\rangle}&0\\ 0&\frac{\sin(u)\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\sin(\lambda-u)\langle a\rangle}&1+\frac{\sin(u)\langle a-1\rangle}{\sin(\lambda-u)\langle a\rangle}&0\\ 0&0&0&1\end{pmatrix}\tag{2.60}\]
And in this case \(T\) is the following:
\[T(a):=\begin{pmatrix}0&0&0&0\\ 0&\frac{\langle a+1\rangle}{\langle a\rangle}&\frac{\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\langle a\rangle}&0\\ 0&\frac{\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\langle a\rangle}&\frac{\langle a-1\rangle}{\langle a\rangle}&0\\ 0&0&0&0\end{pmatrix}\]
For other values of \(a\) near the boundary, certain entries of these matrices are not defined, because the fiber spaces do not have the isomorphism (2.51). For example, if \(a=2\),
\[\check{R}_{A}^{\text{tri}}(u,a)=\begin{pmatrix}1&0&0&0\\ 0&1+\frac{\sin(u)\langle a+1\rangle}{\sin(\lambda-u)\langle a\rangle}&\frac{\sin(u)\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\sin(\lambda-u)\langle a\rangle}&0\\ 0&\frac{\sin(u)\sqrt{\langle a-1\rangle\langle a+1\rangle}}{\sin(\lambda-u)\langle a\rangle}&1+\frac{\sin(u)\langle a-1\rangle}{\sin(\lambda-u)\langle a\rangle}&0\\ 0&0&0&*\end{pmatrix}\]
and if \(a=1\),
\[T(a):=\begin{pmatrix}0&0&0&0\\ 0&\frac{\langle a+1\rangle}{\langle a\rangle}&*&0\\ 0&*&*&0\\ 0&0&0&0\end{pmatrix}\]
For \(a=L,L-1\), the situation is similar.

### Transfer matrix and Hamiltonian of spin chain

We briefly recall the definition of the convolution algebra with coefficients in \(\pi\) graded algebras over a field, and of partial traces, in order to describe the transfer matrix in terms of \(\pi\) graded vector spaces.

**Definition 2.32**.: Let \(\pi\) be a groupoid. A \(\pi\) graded algebra \(R\) over \(k\) is a collection \((R_{\gamma})_{\gamma\in\pi}\) of \(k\)-vector spaces labeled by the arrows of \(\pi\), with bilinear products \(R_{\alpha}\times R_{\beta}\to R_{\beta\circ\alpha},(x,y)\to xy\) defined for composable arrows \(\alpha,\beta\), and units \(1_{a}\in R_{a}\) for \(a\in A\), such that (i) \((xy)z=x(yz)\) whenever defined and (ii) \(x1_{b}=x=1_{a}x\) for all \(x\in R_{\alpha}\) of degree \(\alpha\in\pi(a,b)\).

**Example 2.33**.: Let \(V\in\operatorname{Vect}_{k}(\pi)\) and let \(\underline{\operatorname{End}}V\) be the \(\pi\) graded vector space with \((\underline{\operatorname{End}}V)_{\alpha}=\oplus_{\gamma\in\pi(a,a)}\operatorname{Hom}_{k}(V_{\alpha\gamma\alpha^{-1}},V_{\gamma})\), where \(a=s(\alpha)\). Then \(\underline{\operatorname{End}}V\), with the product given by the composition of linear maps
\[\operatorname{Hom}_{k}(V_{\alpha\gamma\alpha^{-1}},V_{\gamma})\otimes\operatorname{Hom}_{k}(V_{\beta\alpha\gamma\alpha^{-1}\beta^{-1}},V_{\alpha\gamma\alpha^{-1}})\rightarrow\operatorname{Hom}_{k}(V_{\beta\alpha\gamma(\beta\alpha)^{-1}},V_{\gamma})\]
and unit \(1_{a}=\oplus_{\gamma\in\pi(a,a)}\operatorname{id}_{V_{\gamma}}\), is a \(\pi\) graded algebra.

**Definition 2.34**.: Let \(R\) be a \(\pi\) graded algebra. The convolution algebra \(\Gamma(\pi,R)\) with coefficients in \(R\) is the \(k\) algebra of maps \(f:\pi\rightarrow\sqcup_{\alpha\in\pi}R_{\alpha}\) such that

1. \(f(\alpha)\in R_{\alpha}\) for all arrows \(\alpha\in\pi\),
2. for every \(a\in A\), there are finitely many \(\alpha\in s^{-1}(a)\cup t^{-1}(a)\) such that \(f(\alpha)\neq 0\).
The product is the convolution product
\[f*g(\gamma)=\sum_{\beta\circ\alpha=\gamma}f(\alpha)g(\beta).\tag{2.61}\]
The partial trace over \(V\) is the map
\[\operatorname{tr}_{V}:\operatorname{Hom}_{\operatorname{Vect}_{k}(\pi)}(V\otimes W,W\otimes V)\to\Gamma(\pi,\underline{\operatorname{End}}W)\tag{2.62}\]
defined as follows. For \(f\in\operatorname{Hom}(V\otimes W,W\otimes V)\) and \(\alpha\in\pi(a,b),\gamma\in\pi(a,a)\), let \(f(\alpha,\gamma)\) be the component of the map between the two paths in Figure 3,
\[f(\alpha,\gamma):V_{\alpha}\otimes W_{\alpha\gamma\alpha^{-1}}\to W_{\gamma}\otimes V_{\alpha}.\tag{2.63}\]

Figure 3. component of \(f\)

Define
\[\operatorname{tr}_{V_{\alpha}}f(\alpha,\gamma)=\sum_{i}(\operatorname{id}\otimes e_{i}^{*})f(\alpha,\gamma)(e_{i}\otimes\operatorname{id})\in\operatorname{Hom}(W_{\alpha\gamma\alpha^{-1}},W_{\gamma})\tag{2.64}\]
for any basis \(e_{i}\) of \(V_{\alpha}\) and dual basis \(e_{i}^{*}\) of the dual vector space \((V_{\alpha})^{*}\).

**Definition 2.35**.: The partial trace \(\operatorname{tr}_{V}f\in\Gamma(\pi,\underline{\operatorname{End}}W)\) of \(f\in\operatorname{Hom}_{\operatorname{Vect}_{k}(\pi)}(V\otimes W,W\otimes V)\) over \(V\) is the section
\[\operatorname{tr}_{V}f:\alpha\to\oplus_{\gamma\in\pi(a,a)}\operatorname{tr}_{V_{\alpha}}f(\alpha,\gamma)\in(\underline{\operatorname{End}}W)_{\alpha}.\tag{2.65}\]
For the vector spaces \(V_{i},V_{0}\in\operatorname{Vect}_{k}(\pi)\) with the dynamical Yang-Baxter operator \(\check{R}_{V_{0},V_{i}}(x,a)\), the component \(\alpha\) of the face type transfer matrix can be drawn as in Figure 4, transferring from the lower horizontal line to the upper horizontal line. It is an element in \(\Gamma(\pi,\underline{\operatorname{End}}(V_{1}\otimes\cdots\otimes V_{N}))\), and its component at \(\alpha\), \(s(\alpha)=a\), is written as
\[M(x,\alpha)=\operatorname{tr}_{V_{0}}\big{(}\prod_{i=0}^{N-1}\check{R}_{V_{0},V_{i+1}}^{(i+1,i+2)}(x,ah^{(i)})\big{)}(\alpha).\]

Figure 4. component \(\alpha\) of row transfer matrix of face type

**Proposition 2.36**.: _By Corollary 3.9 of [22], we have a family of commuting transfer matrices,_
\[M(x)M(y)=M(y)M(x).\tag{2.66}\]
Now let \(V=V_{i}\in\operatorname{Vect}_{k}(\pi),i=0,\ldots,N\), and suppose that we have the Baxterization with respect to the local Temperley-Lieb operators as in Theorem 2.23, with the ansatz \(\check{R}_{V,V}(x,a)=\operatorname{id}+xT(a)\). The component \(\alpha\) of the transfer matrix can be written as
\[M(x,\alpha)=\operatorname{tr}_{V}\prod_{i=0}^{N-1}\big{(}\operatorname{id}+xT^{(i+1,i+2)}(ah^{(i)})\big{)}(\alpha).\tag{2.67}\]
We can then define the Hamiltonian \(H(x,\alpha)\in\Gamma(\pi,\underline{\operatorname{End}}(V^{\otimes N}))\) as the "log derivative" at \(0\) of the transfer matrix,
\[H(x,\alpha):=M^{-1}(0,\alpha)M^{\prime}(x,\alpha)|_{x=0}=M^{-1}(0,\alpha)\operatorname{tr}_{V}\big{(}\sum_{i=0}^{N-1}T^{(i+1,i+2)}(ah^{(i)})\big{)}.\tag{2.69}\]
From the commutativity of the transfer matrices, we have the following commutation relation.

**Proposition 2.37**.: 
\[H(x)M(y)=M(y)H(x)\tag{2.70}\]
We have \(M(0,\alpha)=\operatorname{tr}_{V}(\operatorname{id})\). We assume that
\[\dim V_{\alpha}=1\tag{2.71}\]
for simplicity; this is also enough for all our examples.
Then, more explicitly, \(M(0,\alpha)\in\underline{\operatorname{End}}(V^{\otimes N})\) is
\[\begin{array}{l}M(0,\alpha)|_{V_{\alpha_{1}}\otimes\cdots\otimes V_{\alpha_{N}}}:V_{\alpha_{1}}\otimes\cdots\otimes V_{\alpha_{N}}\to V_{\alpha_{N}}\otimes V_{\alpha_{1}}\otimes\cdots\otimes V_{\alpha_{N-1}}\\ \qquad\qquad\qquad v_{1}\otimes\cdots\otimes v_{N}\mapsto v_{N}\otimes v_{1}\otimes v_{2}\otimes\cdots\otimes v_{N-1}\end{array}\]
Graphically, this corresponds to a cyclic shift of the tensor factors. More explicitly, we have the different forms
\[H(x,\alpha)=M^{-1}(0,\alpha)\big{(}\sum_{i=1}^{N-2}M(0,\alpha)T^{(i+1,i+2)}(ah^{(i)})+M(0,\alpha)T^{(N,1)}(a)\big{)}\tag{2.72a}\]
\[=\sum_{i=1}^{N-2}T^{(i+1,i+2)}(ah^{(i)})+T^{(N,1)}(a)\tag{2.72b}\]
\[=\sum_{i=1}^{N-2}T^{(i+1,i+2)}(ah^{(i)})+M^{-1}(0,\alpha)T^{(1,2)}(a)M(0,\alpha)\tag{2.72c}\]

**Example 2.38**.: For each representation of the local dynamical Temperley-Lieb operators that can be Baxterized and also satisfies the assumption (2.71), we can construct Hamiltonians as expressed in (2.72).

1. Plugging the representation (2.56) into (2.72) gives the case of unrestricted type \(A\).
2. Plugging the representation (2.59) into (2.72) gives the case of restricted ADE type.

**Remark 2.39**.: The local dynamical Temperley-Lieb structure underlying these one dimensional integrable systems is probably new. As physical systems expressed in the square lattice language, these systems have been widely studied in physics, starting from the work of [3], which considered the restricted type \(A\) case and calculated the eigenvectors and eigenvalues.

## Appendix A Perron-Frobenius theorem

In this appendix, we recall the classical Perron-Frobenius theorem in the form of Theorem 3.2.1 of the book [10].

**Theorem A.1** (Frobenius-Perron).: _Let \(B\) be a square matrix with non-negative real entries._

1. \(B\) _has a non-negative real eigenvalue. The largest non-negative real eigenvalue_ \(\lambda(B)\) _of_ \(B\) _dominates the absolute values of all other eigenvalues_ \(\mu\) _of_ \(B\): \(|\mu|\leq\lambda(B)\) _(in other words, the spectral radius of_ \(B\) _is an eigenvalue). Moreover, there is an eigenvector of_ \(B\) _with non-negative entries and eigenvalue_ \(\lambda(B)\)_._
2. _If_ \(B\) _has strictly positive entries, then_ \(\lambda(B)\) _is a simple positive eigenvalue, and the corresponding eigenvector can be normalized to have strictly positive entries. Moreover,_ \(|\mu|<\lambda(B)\) _for any other eigenvalue_ \(\mu\) _of_ \(B\)_._
3. _If a matrix_ \(B\) _with non-negative entries has an eigenvector_ \(v\) _with strictly positive entries, then the corresponding eigenvalue is_ \(\lambda(B)\)_._

Proof.: See the proof of Theorem 3.2.1 in [10].

As an example of the above theorem, we have Table 2 of the Perron-Frobenius eigenvectors of the classical ADE Dynkin diagrams; see for example [41].
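As a complement, the Perron-Frobenius data of these graphs can be computed directly from the adjacency matrices. The following sketch is our own numerical check (not taken from [10] or [41]); for the \(A_{L}\) chain the eigenvector is \(\xi_{a}=\sin(a\pi/(L+1))\) with eigenvalue \(2\cos(\pi/(L+1))\), matching the \(\langle a\rangle\) appearing in the type \(A\) formulas above, and for \(E_{6}\) the eigenvalue is \(2\cos(\pi/12)\), consistent with the Coxeter number \(h=12\) of Table 1:

```python
import numpy as np

def pf_data(edges, n):
    """Perron-Frobenius eigenvalue and positive eigenvector of the
    adjacency matrix of an unoriented graph on n vertices."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    w, v = np.linalg.eigh(A)
    return w[-1], np.abs(v[:, -1])

# A_L chain: eigenvalue 2 cos(pi/(L+1)), eigenvector xi_a = sin(a*pi/(L+1))
L = 10
phi, xi = pf_data([(i, i + 1) for i in range(L - 1)], L)
assert np.isclose(phi, 2 * np.cos(np.pi / (L + 1)))
expected = np.sin(np.arange(1, L + 1) * np.pi / (L + 1))
assert np.allclose(xi / xi[0], expected / expected[0])

# E6 Dynkin diagram: a chain of five vertices with a sixth attached to the
# middle one; Coxeter number h = 12, so the eigenvalue is 2 cos(pi/12)
phi6, _ = pf_data([(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)], 6)
assert np.isclose(phi6, 2 * np.cos(np.pi / 12))
print(phi, phi6)
```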
2308.06375
UAMM: Price-oracle based Automated Market Maker
Automated market makers (AMMs) are pricing mechanisms utilized by decentralized exchanges (DEX). Traditional AMM approaches are constrained by pricing solely based on their own liquidity pool, without consideration of external markets or risk management for liquidity providers. In this paper, we propose a new approach known as UBET AMM (UAMM), which calculates prices by considering external market prices and the impermanent loss of the liquidity pool. Despite relying on external market prices, our method maintains the desired properties of a constant product curve when computing slippages. The key element of UAMM is determining the appropriate slippage amount based on the desired target balance, which encourages the liquidity pool to minimize impermanent loss. We demonstrate that our approach eliminates arbitrage opportunities when external market prices are efficient.
Daniel Jiwoong Im, Alexander Kondratskiy, Vincent Harvey, Hsuan-Wei Fu
2023-08-11T20:17:22Z
http://arxiv.org/abs/2308.06375v2
# UAMM: UBET Automated Market Maker

###### Abstract

Automated market makers (AMMs) are pricing mechanisms utilized by decentralized exchanges (DEX). Traditional AMM approaches are constrained by pricing solely based on their own liquidity pool, without consideration of external markets or risk management for liquidity providers. In this paper, we propose a new approach known as UBET AMM (UAMM), which calculates prices by considering external market prices and the impermanent loss of the liquidity pool. Despite relying on external market prices, our method maintains the desired properties of a constant product curve when computing slippages. The key element of UAMM is determining the appropriate slippage amount based on the desired target balance, which encourages the liquidity pool to minimize impermanent loss. We demonstrate that our approach eliminates arbitrage opportunities when external market prices are efficient.

## 1 Introduction

A Decentralized Exchange (DEX) is a marketplace for trading digital currencies that operates on a blockchain network in a decentralized manner. The popularity of DEXs has grown significantly due to their decentralized nature, accessibility, and ability to provide liquidity provision. A DEX consists of pools of funds, where the funds are gathered by liquidity providers (LPs) and utilized to provide liquidity to traders. A core component of a DEX is an automated market maker (AMM) that algorithmically determines the price of digital currencies without requiring centralized market makers. Users trade directly against pools of funds at prices given by the AMM instead of relying on a centralized orderbook. For each trade between a pair of quote and base currencies, the liquidity pool balances get updated: the quantity of the quote currency decreases and that of the base currency increases, adjusting the price of each digital currency. LPs earn passive income from the fees generated by these trades.

AMMs use deterministic mathematical formulas to establish the relationship between the quantities of digital currencies in liquidity pools and their corresponding prices. The two most commonly used AMMs are constant function market makers, which preserve the relation between the pair of currencies through a constant function of the liquidity pool balances [3, 4]. The most well-known ones are Uniswap [2] and Curve [5]. Their formulation ensures that the price reflects the supply and demand of the digital currencies, adjusting proportionally to maintain the constant function of the pool as more traders participate in the market.

Unfortunately, existing AMMs that determine prices based on liquidity pool balances, including the constant function market makers, suffer from a major drawback. First and foremost, these AMMs create arbitrage opportunities for traders, resulting in significant losses for LPs. Given the zero-sum nature of such opportunities, whenever someone capitalizes on arbitrage profits, someone else will inevitably incur losses, and unfortunately, it is typically the LPs who bear the brunt, as they take the opposite side of the trades. The two primary reasons for these arbitrage opportunities are mispricing and price changes in the currencies. First, AMMs determine prices solely based on internal market information, i.e., the liquidity pool balance, without considering external market information.
Consequently, the price on the external market may differ from that of the internal market, exposing LPs to trades at a less favourable price than the fair market price. Moreover, AMMs exhibit slow and delayed adaptation to price changes: it requires someone to seize an arbitrage opportunity resulting from a price change for the AMM to adjust its prices accordingly. This becomes especially problematic when the market does not revert to a stable price or is characterized by high volatility.

In this paper, we propose a new algorithm called _UBET AMM_ (UAMM) that addresses the aforementioned issues by eliminating arbitrage opportunities. UAMM calculates prices by considering both external and internal market prices. Despite relying on external market prices, our method maintains the desired properties of a constant product curve when computing slippage. The key element of UAMM is determining the appropriate slippage amount based on the desired target balance, which encourages the liquidity pool to minimize impermanent loss. Therefore, we decompose the price into an estimated fair price and the slippage, and estimate each component separately. We demonstrate that our approach eliminates arbitrage opportunities when external market prices are efficient.

In summary, UAMM effectively manages risks for LPs while enabling them to earn passive yields. Unlike Uniswap V3 [1], LPs using UAMM need not worry about managing their impermanent loss. We define three transactions: add, remove, and swap, and thoroughly discuss the properties of these UAMM transactions that induce the desired behaviours for an AMM. Moreover, certain properties allow UAMM to eliminate arbitrage opportunities under the assumption that the market price aligns with the fair price.

## 2 Preliminary

We list the notations and variable names for the paper, following the general notation of [4].

### Token Notations

* Let \(\mathbb{T}\) be the set of _fungible atomic tokens_, consisting of native or application-specific tokens like ERC20. Fungible atomic tokens are tradeable and can be broken down into smaller units.
* Let \(\tau_{0}\in\mathbb{T}\) be the base currency.
* Let \(\tau_{1},\tau_{2},\cdots\tau_{K}\in\mathbb{T}\) be the tokens of interest for an application. We group them as an unordered tuple of \(K\) distinct atomic tokens \((\tau_{1},\tau_{2},\cdots\tau_{K})\in\bigcup^{K}\mathbb{T}\), where distinctness means that \(\tau_{i}\neq\tau_{j}\) for \(i\neq j\). For simplicity, we call them _K minted tokens_.
* Let \(d\tau_{i}\) be an amount of a token of type \(\tau_{i}\). E.g., if the base currency is \(\tau_{0}=\mathbf{USDC}\), then \(d\tau_{0}\) is a number of units of USDC.

### States

The UAMM state varies over time through interaction with users' transactions. The state consists of the liquidity pool balances, the fair and spontaneous prices of tokens, the total value and supply of the pool, and the total received collateral.

* \(R\tau_{1},R\tau_{2},\cdots,R\tau_{K}\) denote the liquidity pool balances.
* \(f\tau_{1},f\tau_{2},\cdots,f\tau_{K}\) are the fair prices of the \(K\) minted tokens w.r.t. the base currency \(\tau_{0}\), given by the oracle.
* \(p\tau_{1},p\tau_{2},\cdots,p\tau_{K}\) are the spontaneous token prices w.r.t. the base currency \(\tau_{0}\), given by the UAMM.
* \(TS\) denotes the total supply of liquidity pool share tokens \(s_{lp}\).
* \(TB\) denotes the total investment balance in terms of the base currency \(\tau_{0}\) (see Definition 1).
* \(TV\) denotes the total value of the liquidity pool (see Definition 2).
We denote the state of the UAMM by \(\Gamma\). Most of the time, we drop the subscript t when no time dependency is required; however, we sometimes explicitly add the subscript t to a variable in order to denote time. For example, \(TB_{\text{t}}\) and \(f\tau_{1,\text{t}}\) refer to the total investment balance and the price of token \(\tau_{1}\) at time t, respectively.

### Transactions

The UAMM transitions from one state to another through a user transaction \(A\), \(\Gamma\xrightarrow{A}\Gamma^{\prime}\). Here is a list of transactions:

* \(\mathbf{Add}(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\to s_{lp}\): Returns shares \(s_{lp}\) for the minted units of tokens and adds \(d\tau_{1},d\tau_{2},\cdots d\tau_{K}\) tokens to the liquidity pools \(R\tau_{1},R\tau_{2},\cdots,R\tau_{K}\).
* \(\mathbf{Remove}(s_{lp})\rightarrow(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\): Returns \((d\tau_{1},d\tau_{2},\cdots d\tau_{K})\) minted tokens and burns the LP shares \(s_{lp}\).
* \(\mathbf{Swap}(d\tau_{i},\tau_{j})\to d\tau_{j}\): Returns \(d\tau_{j}\) for \(d\tau_{i}\).

LPs can add and remove tokens to and from the liquidity pool using \(\mathbf{Add}(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\) and \(\mathbf{Remove}(s_{lp})\) (see Section 3). Traders can swap tokens using \(\mathbf{Swap}(d\tau_{i},\tau_{j})\) (see Section 4). We define metrics that track the states of LPs and liquidity pools.

**Definition 1** (Total Investment Balance, a.k.a. Target Balance).: Let \(A_{1},A_{2},\cdots,A_{\mathrm{T}}\) be the sequence of actions. The total investment balance at time \(\mathrm{T}\) is
\[TB_{\mathrm{T}}:=\sum_{\mathrm{t}=1}^{\mathrm{T}}\left[\left(\mathbb{I}[A_{\mathrm{t}}=\mathbf{Add}]-\mathbb{I}[A_{\mathrm{t}}=\mathbf{Remove}]\right)\sum_{i=1}^{K}d\tau_{i,\mathrm{t}}\cdot f\tau_{i,\mathrm{t}}\right]\]
where \(f\tau_{i,\mathrm{t}}\) is the fair price and \(R\tau_{i,\mathrm{t}}\) is the pool balance of \(\tau_{i}\) at time \(\mathrm{t}\).

The total investment balance measures the amount of value that has been added to the UAMM in terms of the base currency. Because LPs can provide liquidity in the \(K\) minted tokens, it converts each minted token into the base currency and then aggregates the net amount. LPs' total investment amount is independent of \(\mathbf{Swap}(d\tau_{i},\tau_{j})\) transactions. While LPs' total investment amount does not change unless funds are added to or removed from the liquidity pool, the total liquidity pool value can fluctuate with changes in fair prices or liquidity pool balances.

**Definition 2** (Liquidity Pool Value).: The total value of the liquidity pool is
\[TV:=\sum_{i=1}^{K}f\tau_{i}\cdot R\tau_{i}\]
where \(f\tau_{i}\) is the fair price and \(R\tau_{i}\) is the pool balance of \(\tau_{i}\).

If the fair prices are probabilities1, then the total value is the expected value (EV) of the liquidity pool.

Footnote 1: the fair prices sum up to 1 and each fair price lies in \([0,1]\)

These definitions enable us to talk about the liquidity provider's impermanent gain and loss. The impermanent gain and loss reflect the relative performance of assets within a pool compared to holding them individually over a specific time frame.

**Definition 3** (Impermanent Gain & Loss).: At time \(\mathrm{T}\), the impermanent gain & loss is
\[IGnL_{\mathrm{T}}(LP):=TV_{\mathrm{T}}(LP)-TB_{\mathrm{T}}(LP)\]
where \(TV_{\mathrm{T}}(LP)\) and \(TB_{\mathrm{T}}(LP)\) are the liquidity pool value and total investment balance of an individual liquidity provider (LP).
We call it a gain if \(IGnL_{\mathrm{T}}(LP)>0\) and a loss if \(IGnL_{\mathrm{T}}(LP)<0\). Impermanent gains and losses refer to situations where you deposit assets into a liquidity pool and would experience a gain or loss if you withdrew them at a later time, compared to holding those assets individually throughout the same period. These gains and losses become permanent only when you redeem your LP shares.

We list common properties of AMMs that do not depend on the design of the UAMM.

**Property 1** (Basic AMM Properties).: Given a state \(\Gamma\) and a transaction \(A\),

1. Initial condition: at time \(0\), \(R\tau_{i,0}=0\) for all \(\tau_{i}\), and \(A_{0}=\mathbf{Add}\).
2. Non-depletion: for \(\texttt{t}>0\), \(R\tau_{i,\texttt{t}}>0\) for all \(\tau_{i}\in\mathbb{T}\) and \(TV_{\texttt{t}}>0\).
3. Determinism: if \(\Gamma\rightarrow\Gamma^{\prime}\) and \(\Gamma\rightarrow\Gamma^{\prime\prime}\), then \(\Gamma^{\prime}=\Gamma^{\prime\prime}\).
4. Preservation of token supply: if \(\Gamma\rightarrow\Gamma^{\prime}\), then the total supply of tokens at \(\Gamma\) and \(\Gamma^{\prime}\) remains the same for all \(\tau_{i}\).

The non-depletion condition enforces that the AMM never runs out of liquidity and that its value remains greater than zero. The initial condition exists for the sake of the non-depletion condition: someone has to add the initial funds. Deterministic transactions are a desired property for users and blockchains: they ensure that users are not taking chances, and that nodes can reconstruct the network and verify the validity of a transaction from a sequence of transactions. Together, these properties set up a framework for AMMs with a desirable dynamical system and allow us to do basic sanity checks.

We would like to remind you that the liquidity pool balances do not have to be preserved. Relaxing this property makes UAMM superior to other AMMs in terms of reducing impermanent loss. We argue that this is a direct consequence of our pricing mechanism and that it is more fair to LPs in some sense (see the details in Section 4).

## 3 Adding & Removing Tokens

Before we dive into adding and removing funds, let us form a relationship between \((d\tau_{1},d\tau_{2},\cdots,d\tau_{K})\) and \(d\tau_{0}\). The total value of the \(K\) minted tokens is \(d\tau_{0}\), which is \((d\tau_{1},d\tau_{2},\cdots,d\tau_{K})\) weighted by the fair prices,
\[d\tau_{0}=\sum_{i=1}^{K}d\tau_{i}f\tau_{i}.\tag{1}\]
Conversely, \(d\tau_{0}\) can be transformed into \((d\tau_{1},d\tau_{2},\cdots,d\tau_{K})\). There are two ways to do this, as illustrated in the sketch after this list:

1. A simple method is to divide \(d\tau_{0}\) equally for each atomic token \(\tau_{i}\), based on an aggregated unit of fair prices,
\[d\tau_{i}=\frac{d\tau_{0}}{\sum_{k=1}^{K}f\tau_{k}}\tag{2}\]
for all \(\tau_{i}\). Now, \((d\tau_{1},d\tau_{2},\cdots,d\tau_{K})\) and \(d\tau_{0}\) are reversible: \(\sum_{i=1}^{K}d\tau_{i}f\tau_{i}=\sum_{i=1}^{K}\left(\frac{d\tau_{0}}{\sum_{k=1}^{K}f\tau_{k}}\right)f\tau_{i}=d\tau_{0}\sum_{i=1}^{K}\frac{f\tau_{i}}{\sum_{k=1}^{K}f\tau_{k}}=d\tau_{0}\).
2. Another way is to distribute with respect to the ratio of the pool balances to the total liquidity value,
\[d\tau_{i}=d\tau_{0}\frac{R\tau_{i}}{TV}\tag{3}\]
for all \(\tau_{i}\). Again, we verify that they are reversible:
\[\sum_{i=1}^{K}d\tau_{i}\cdot f\tau_{i}=\sum_{i=1}^{K}d\tau_{0}\cdot\frac{R\tau_{i}}{TV}\cdot f\tau_{i}=\frac{d\tau_{0}}{TV}\sum_{i=1}^{K}R\tau_{i}\cdot f\tau_{i}=d\tau_{0}.\]
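A minimal numerical sketch of the two conversion methods (the fair prices and pool balances below are made up purely for illustration) confirms that both maps invert equation (1):

```python
import numpy as np

K = 3
f = np.array([0.5, 0.3, 0.2])            # fair prices f_tau_i (illustrative)
R = np.array([100.0, 120.0, 80.0])       # pool balances R_tau_i (illustrative)
TV = float(f @ R)                        # liquidity pool value (Definition 2)

def to_base(d):
    """(d_tau_1, ..., d_tau_K) -> d_tau_0, equation (1)."""
    return float(f @ d)

def from_base_equal(d0):
    """Method 1, equation (2): equal split per aggregated unit of fair price."""
    return np.full(K, d0 / f.sum())

def from_base_pro_rata(d0):
    """Method 2, equation (3): split proportionally to the pool balances."""
    return d0 * R / TV

d0 = 10.0
for split in (from_base_equal, from_base_pro_rata):
    assert np.isclose(to_base(split(d0)), d0)   # the conversions are reversible
```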
When a user adds liquidity to the pool, they receive LP tokens that represent their ownership of the liquidity pool. Let \(s_{lp}\) be the number of LP tokens received when funds are added either in the base currency \(\tau_{0}\) or in the \(K\) minted tokens. \(\mathbf{Add}(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\to s_{lp}\) returns a newly minted amount of LP shares, \[s_{lp}=d\tau_{0}\cdot\frac{TS}{TV}.\] We update the liquidity pool balance \(R\tau_{i}=R\tau_{i}+d\tau_{i}\) for all \(\tau_{i}\), \(TV\), and \(TB\). LPs can withdraw their funds from the liquidity pool at any time. We calculate how many of the \(K\) minted tokens to take out and convert back to the base currency based on the number of shares \(s_{lp}\). \(\mathbf{Remove}(s_{lp})\rightarrow(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\) takes \(s_{lp}\) as input and returns \(K\) minted tokens, where \[d\tau_{i}=R\tau_{i}\cdot\frac{s_{lp}}{TS},\qquad\text{ for all }\tau_{i}.\] After performing the \(\mathbf{Add}\) or \(\mathbf{Remove}\) transaction, the UAMM states are automatically updated. The state \(\Gamma\) consists of the variables in Section 2.1. **Property 2** (Additivity).: The \(\mathbf{Add}\) and \(\mathbf{Remove}\) transactions are additive while the fair prices remain the same. That is, the result and the states are the same whether a user performs two successive transactions of the same type, \(A0\) and \(A1\), or a single combined transaction \(A2\): 1. \(\mathbf{Add}(d\tau_{1},d\tau_{2},\cdots d\tau_{K})+\mathbf{Add}(d\tau_{1}{}^{\prime},d\tau_{2}{}^{\prime},\cdots d\tau_{K}{}^{\prime})\Longleftrightarrow\mathbf{Add}(d\tau_{1}+d\tau_{1}{}^{\prime},d\tau_{2}+d\tau_{2}{}^{\prime},\cdots,d\tau_{K}+d\tau_{K}{}^{\prime})\) 2. \(\mathbf{Remove}(s_{lp})+\mathbf{Remove}(s_{lp}^{\prime})\Longleftrightarrow\mathbf{Remove}(s_{lp}+s_{lp}^{\prime})\) 3. \(\Gamma\xrightarrow{A0}\Gamma_{0}\xrightarrow{A1}\Gamma_{1}\Longleftrightarrow\Gamma\xrightarrow{A2}\Gamma_{1}\) where \(A0=A1=A2\in\{\mathbf{Add},\mathbf{Remove}\}\) **Property 3** (Reversibility).: The \(\mathbf{Add}\) and \(\mathbf{Remove}\) transactions are reversible while the fair prices remain the same. The state derived from an \(\mathbf{Add}\) operation is reversible by \(\mathbf{Remove}\), and vice versa: 1. \(\mathbf{Add}(\mathbf{Remove}(s_{lp}))\to s_{lp}\) 2. \(\mathbf{Remove}(\mathbf{Add}(d\tau_{1},d\tau_{2},\cdots d\tau_{K}))\rightarrow(d\tau_{1},d\tau_{2},\cdots d\tau_{K})\) 3. If \(\Gamma\xrightarrow{A}\Gamma^{\prime}\), there exists \(A^{-1}\) such that \(\Gamma^{\prime}\xrightarrow{A^{-1}}\Gamma\) where \(A,A^{-1}\in\{\mathbf{Add},\mathbf{Remove}\}\). The derivations of additivity and reversibility are shown in Appendix A.1. The liquidity pool balances are additive and reversible regardless of the fair prices. However, the liquidity value (TV) and total invested balance (TB) differ when fair prices change. Both additivity and reversibility ensure that LPs can gain no advantage or disadvantage by splitting add and remove operations within the same fair prices. ## 4 Swap Transaction The core idea of UAMM is fundamentally different from that of other AMMs. AMMs like Uniswap compute the price of \(\tau_{i}\) for \(\tau_{j}\) based on the liquidity pool balances. Because this considers only internal market data, the Uniswap price can be far off from the prices on other exchanges. The internal market is especially inefficient when its volume is small relative to the external market. This creates arbitrage opportunities for traders and potential losses for LPs.
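To see the problem concretely, here is a toy sketch of constant-product pricing (the well-known Uniswap-style rule \(x\cdot y=k\)) with a two-token pool; the numbers are illustrative only. When the external fair price moves, the internal quote lags, and an arbitrageur profits at the LPs' expense:

```python
def cp_swap(d_in, r_in, r_out):
    # Constant-product rule r_in * r_out = k: tokens out for d_in tokens in.
    k = r_in * r_out
    return r_out - k / (r_in + d_in)

ry, rx = 100.0, 100.0      # internal spot price of x is ry/rx = 1.0 (in y)
fair_price_x = 1.25        # external market now values x at 1.25 y

# x is undervalued internally, so an arbitrageur sells y into the pool:
x_out = cp_swap(20.0, ry, rx)                 # pays 20 y, receives ~16.67 x
profit_in_y = x_out * fair_price_x - 20.0     # ~0.83 y, extracted from LPs
print(round(x_out, 2), round(profit_in_y, 2))
```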
In contrast, we respect efficient market theory and estimate fair prices given the external and internal market data. We use the UAMM algorithm to calculate the slippage while referencing the fair price as a spontaneous price. In this section, we describe when and how to add slippage such that impermanent loss for LPs is prevented. ### Notations A user can swap \(d\tau_{i}\) tokens for some units of \(\tau_{j}\) through UAMM. Because swapping between \(\tau_{i}\) and \(\tau_{j}\) applies for any \(0\leq i,j\leq K\), without loss of generality, we write \(\tau_{\text{In}}\) and \(\tau_{\text{Out}}\) for the input and output token indices. * Let \(R\tau_{\text{In}}\) and \(R\tau_{\text{Out}}\) be the liquidity pool balances of the input and output tokens. * Let \(f\tau_{\text{In}}\) and \(f\tau_{\text{Out}}\) be the probabilities of the input and output events. * Let \(d\tau_{\text{In}}\) be the input token amount before adding slippage. * Let \(\Delta\tau_{\text{Out}}\) be the output token amount before adding slippage. * Let \(d\tau_{\text{Out}}=g(\Delta\tau_{\text{Out}})\) be the output token amount with slippage. ### Swapping input token for output token Given the fair prices \(f\tau_{\text{In}}\) and \(f\tau_{\text{Out}}\), the fair exchange without any slippage would be \[d\tau_{\text{Out}}=\rho\cdot d\tau_{\text{In}} \tag{4}\] where \(\rho=\frac{f\tau_{\text{In}}}{f\tau_{\text{Out}}}\) is the fair swapping rate. Unfortunately, there is a danger of the liquidity pool balances drying up if the demand for \(\tau_{\text{Out}}\) heavily outweighs the demand for \(\tau_{\text{In}}\). For this reason, we add slippage to the swapping rate such that it respects the liquidity pool balance and the user's trade size. Now, we describe the intuition behind the derivation of the UAMM swapping function. The amount of output token \(d\tau_{\text{Out}}\) depends on the input amount \(d\tau_{\text{In}}\), the liquidity pool situation \(R\tau_{\text{In}}\) and \(R\tau_{\text{Out}}\), and the target balance \(TB\), where \(TB\) is the desired balance for all liquidity pool balances. The price calculation has the following form, \[d\tau_{\text{Out}}=\underbrace{\mathbf{Swap}\Big{(}\underbrace{\rho\cdot d\tau_{\text{In}}}_{\Delta\tau_{\text{Out}}};\Gamma\Big{)}}_{\text{Output Slippage}}\] where \(\Gamma\) is the state parameters and \(\rho=\frac{f\tau_{\text{In}}}{f\tau_{\text{Out}}}\) is the price ratio. Intuitively, we want to add slippage on the output pool \(R\tau_{\text{Out}}\) if we are below the desired target balance \(TB\). This means that we compute the slippage for each pool, starting with the input liquidity pool \(R\tau_{\text{In}}\) and then the output liquidity pool \(R\tau_{\text{Out}}\). The steps to compute \(d\tau_{\text{Out}}\) are as follows: 1. compute the swapping amount based on the price ratio, \(\Delta\tau_{\text{Out}}=\rho\cdot d\tau_{\text{In}}\). 2. compute the slippage on the output pool \(R\tau_{\text{Out}}\), \(d\tau_{\text{Out}}=\mathbf{swap}(\Delta\tau_{\text{Out}};\Gamma)\). The liquidity pool is in a good state when its balance is above the target balance; slippage is applied only when a pool balance \(R\tau_{i}\) is below the target balance \(TB\).
Thus, the slippage function \(g(d\tau_{\text{Out}};\Gamma)\) is a piece-wise function such that \[\mathbf{swap}(\Delta\tau_{\text{Out}};\Gamma)=\begin{cases}\alpha\cdot\Delta\tau_{\text{Out}}+(\rho-\alpha)(R\tau_{\text{Out}}-TB)&\text{ if }R\tau_{\text{Out}}-\Delta\tau_{\text{Out}}\leq TB\leq R\tau_{\text{Out}}\\ \Delta\tau_{\text{Out}}&\text{ else if }TB\leq R\tau_{\text{Out}}\\ R\tau_{\text{Out}}-\frac{TB^{2}}{X\tau_{\text{Out}}+\Delta\tau_{\text{Out}}}&\text{ otherwise}\end{cases}\] where \(\alpha=\frac{R\tau_{\text{Out}}}{X\tau_{\text{Out}}+\Delta\tau_{\text{Out}}}\) and \(X\tau_{\text{Out}}\) is defined through \(TB^{2}=X\tau_{\text{Out}}\cdot R\tau_{\text{Out}}\), with \(TB\) the target balance of the pool. The first case in \(\mathbf{swap}\) ensures that the function is continuous with respect to \(d\tau_{\text{In}}\) and \(\Delta\tau_{\text{Out}}\). The above slippage function is derived from the constant product curve with constant \(TB^{2}\) when the output liquidity pool \(R\tau_{\text{Out}}\) is below the target balance \(TB\). If \(TB>R\tau_{\text{Out}}\), our constant product curve for the output slippage function \(g(\Delta\tau_{\text{Out}};\Gamma)\) is \[TB^{2}=\left(R\tau_{\text{Out}}-d\tau_{\text{Out}}\right)(X\tau_{\text{Out}}+\rho d\tau_{\text{In}})\] Technically, one can also add slippage to the input in a similar manner when the input liquidity pool \(R\tau_{\text{In}}>TB\). The hope is that this discourages users from executing the trade and pulls \(R\tau_{\text{In}}\) towards \(TB\). We show the derivation of the input slippage function in Appendix A.2. Formally, the slippage rate incorporated into the swapping function has the following expression: \[d\tau_{\text{Out}}=\rho\cdot USX(d\tau_{\text{In}},\Gamma)\cdot d\tau_{\text{In}} \tag{5}\] The only difference between the fair swap function in Equation 4 and the UBET swap function in Equation 5 is the UBET slippage rate \(USX\), defined below. **Definition 4** (UBET slippage rate).: The UBET Automated Market Maker rate function is \[USX(d\tau_{\text{In}},\Gamma)=\begin{cases}1&\text{ if }TB<R\tau_{\text{Out}}\\ \xi&\text{ if }R\tau_{\text{Out}}<TB\leq R\tau_{\text{Out}}+d\tau_{\text{In}}\\ \frac{R\tau_{\text{Out}}}{X\tau_{\text{Out}}+\rho d\tau_{\text{In}}}&\text{ if }TB\geq R\tau_{\text{Out}}\end{cases} \tag{6}\] where \(\xi(d\tau_{\text{In}})=\left(R\tau_{\text{Out}}-TB\right)\rho+\left(d\tau_{\text{In}}-\left(R\tau_{\text{Out}}-TB\right)\right)\frac{R\tau_{\text{Out}}}{X\tau_{\text{Out}}+\rho d\tau_{\text{In}}}\). Using the UBET slippage rate, we can ensure the following properties of the swap transaction. The output of the swapping function \(d\tau_{\text{Out}}\) is bounded by \(R\tau_{\text{Out}}\), which assures the non-depletion property of the AMM. **Property 4** (Output-boundedness).: UAMM always has enough output tokens \(R\tau_{\text{Out}}\) for a user to swap \(\tau_{\text{In}}\) for \(\tau_{\text{Out}}\), \[0\leq d\tau_{\text{Out}}=\rho\cdot USX(d\tau_{\text{In}},\Gamma)\cdot d\tau_{\text{In}}<R\tau_{\text{Out}}\] for all \(d\tau_{\text{In}}\geq 0\) and \(R\tau_{\text{In}},R\tau_{\text{Out}}>0\). The monotonicity of the swap function ensures that the gain or loss of the user after performing the swap is also monotonic. **Property 5** (Monotonicity).: The UAMM slippage rate is monotonic: 1.
\(USX(d\tau_{\text{In}},\Gamma)\leq USX(d\tau_{\text{In}}^{\prime},\Gamma)\) if \(d\tau_{\text{In}}\leq d\tau_{\text{In}}^{\prime}\) and \(\Gamma\) is fixed. 2. \(USX(d\tau_{\text{In}},\Gamma)\leq USX(d\tau_{\text{In}}^{\prime},\Gamma^{\prime})\) if \(d\tau_{\text{In}}\leq d\tau_{\text{In}}^{\prime}\), \(R\tau_{\text{In}}\leq R\tau_{\text{In}}^{\prime}\), \(R\tau_{\text{Out}}\leq R\tau_{\text{Out}}^{\prime}\) and the rest are fixed. The UAMM swap function does not add slippage with respect to the input liquidity pool but only the output liquidity pool. This allows the slippage rate to be monotonic. It is possible to include an input slippage function (see Appendix A.2.1). However, once this input slippage function is added, we no longer get a monotonic function as the input liquidity pool increases, \(R\tau_{In}<R\tau_{In}^{\prime}\). This is because the input slippage function pulls the liquidity pool balances back so that they do not drift too far from the target balance \(TB\). We advise that an input slippage \(\Delta\tau_{\text{In}}\) is not necessary and that it can simply be removed (i.e., \(\Delta\tau_{\text{In}}=d\tau_{\text{In}}\)): input slippage can be overly strict, in the sense that it charges too much slippage and prevents impermanent gains. **Property 6** (Homogeneity).: The UAMM slippage rate \(USX\) is homogeneous: for \(a>0\), \[USX(a\cdot d\tau_{\text{In}},\Gamma^{\prime})=USX(d\tau_{\text{In}},\Gamma).\] where \(\Gamma=(R\tau_{1},R\tau_{2},\cdots,R\tau_{K},TB)\) and \(\Gamma^{\prime}=(a\cdot R\tau_{1},a\cdot R\tau_{2},\cdots,a\cdot R\tau_{K},a\cdot TB)\). Typically, the homogeneity property leads to an equal spontaneous price for \(\Gamma\) and \(\Gamma^{\prime}\) in AMMs like Uniswap [2], which maintain the ratio of the liquidity pool balances when minted tokens are added or removed; this is because their spontaneous price depends directly on the ratio of the pool. However, our spontaneous price is determined by the fair price, not by the liquidity pool balances, and only the slippage rate depends on the liquidity pool balances. Therefore, the homogeneity of \(USX\) is irrelevant to the homogeneity of the spontaneous price after performing add or remove functions, unless the deposit and withdrawal amounts maintain the pool balance ratios. **Property 7** (Additivity).: The Swap transaction is additive while the fair prices remain the same. 1. The UAMM swap function is additive: \[\mathbf{swap}(d\tau_{\mathrm{In}};\Gamma)+\mathbf{swap}(d\tau_{\mathrm{In}}^{\prime};\Gamma^{\prime})=\mathbf{swap}(d\tau_{\mathrm{In}}+d\tau_{\mathrm{In}}^{\prime};\Gamma)\] where \(\Gamma\xrightarrow{\mathbf{swap}(d\tau_{\mathrm{In}})}\Gamma^{\prime}\xrightarrow{\mathbf{swap}(d\tau_{\mathrm{In}}^{\prime})}\Gamma^{\prime\prime}\) are two successive output slippage transactions. 2. Let \(\alpha=USX(d\tau_{\mathrm{In}},\tau_{\mathrm{Out}};\Gamma)\) and let \(\beta=USX(d\tau_{\mathrm{In}}^{\prime},\tau_{\mathrm{Out}};\Gamma^{\prime})\). Then, \[USX(d\tau_{\mathrm{In}}+d\tau_{\mathrm{In}}^{\prime},\tau_{\mathrm{Out}})=\frac{\alpha\cdot d\tau_{\mathrm{In}}+\beta\cdot d\tau_{\mathrm{In}}^{\prime}}{d\tau_{\mathrm{In}}+d\tau_{\mathrm{In}}^{\prime}}\] 3. The states are the same whether a user performs two successive swap transactions or a single combined swap transaction: \(\Gamma\xrightarrow{\mathbf{swap}(d\tau_{\mathrm{In}})}\Gamma_{0}\xrightarrow{\mathbf{swap}(d\tau_{\mathrm{In}}^{\prime})}\Gamma_{1}\Longleftrightarrow\Gamma\xrightarrow{\mathbf{swap}(d\tau_{\mathrm{In}}+d\tau_{\mathrm{In}}^{\prime})}\Gamma_{1}\). The proof is shown in Appendix 6. This property applies to multiple sequences of swap transactions.
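Below is a minimal Python sketch of the swap rule, keeping the no-slippage branch and the below-target constant-product branch (the continuity/crossing branch is omitted for brevity) and assuming \(X\tau_{\text{Out}}=TB^{2}/R\tau_{\text{Out}}\); it is our illustration, not the UBET implementation. It also checks output-boundedness (Property 4) and monotonicity (Property 5) numerically:

```python
def swap_out(d_in, rho, r_out, tb):
    # Step 1: fair output amount Delta_tau_Out = rho * d_in.
    delta_out = rho * d_in
    if tb <= r_out - delta_out:
        return delta_out                # pool stays above target: no slippage
    # Below-target branch of the constant-product curve TB^2 = X_out * R_out.
    x_out = tb ** 2 / r_out             # assumption on X_tau_Out
    return r_out - tb ** 2 / (x_out + delta_out)

outs = [swap_out(d, rho=1.0, r_out=80.0, tb=100.0) for d in (1.0, 10.0, 100.0, 1e6)]
assert all(o < 80.0 for o in outs)                  # output never exceeds R_out
assert all(a <= b for a, b in zip(outs, outs[1:]))  # output is monotone in d_in
```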
The amount of slippage and a user's gain or loss will be the same whether the user makes a single transaction or multiple consecutive ones. As a side note, having an input slippage function as shown in Appendix A.2.1 does not guarantee additivity. Indeed, a single swap transaction with \(d\tau_{\mathrm{In}}+d\tau_{\mathrm{In}}^{\prime}\) leads to a difference of \(d\tau_{\mathrm{In}}\cdot d\tau_{\mathrm{In}}^{\prime}\) compared to two successive transactions. In general, the UAMM \(swap\) transaction is irreversible. However, we can still assure that the \(swap\circ swap\) transaction ends up with no more than the originally invested amount. **Property 8** (Weak-Reversibility).: \(\mathbf{swap}\circ\mathbf{swap}(d\tau_{\mathrm{In}};\Gamma)\leq d\tau_{\mathrm{In}}\) This is a desirable property; otherwise, a user could take arbitrage out of the liquidity funds. The user should not be able to get more than their original amount from reverse transactions. Moreover, UAMM is reversible when the liquidity pool balances are in a good situation where \(R\tau_{\mathrm{Out}}-\Delta\tau_{\mathrm{Out}}>TB\) and \(R\tau_{\mathrm{In}}>TB\). This situation indicates that LPs have already made a good profit from previous transactions. ## 5 Spontaneous Price The spontaneous price is our market price with no slippage. One can consider the spontaneous price as a fair price with an applied spread. Because the price depends on whether \(R\tau_{\mathrm{In}}\) and \(R\tau_{\mathrm{Out}}\) are below the target balance or not, the spontaneous price is also a piece-wise function, \[\frac{p\tau_{\mathrm{In}}}{p\tau_{\mathrm{Out}}}=\left(\left.\frac{\mathrm{d}d\tau_{\mathrm{In}}}{\mathrm{d}d\tau_{\mathrm{Out}}}\right|_{d\tau_{\mathrm{Out}}=0}\right)^{-1} \tag{7}\] where \[\left.\frac{\mathrm{d}d\tau_{\mathrm{In}}}{\mathrm{d}d\tau_{\mathrm{Out}}}\right|_{d\tau_{\mathrm{Out}}=0}=\begin{cases}\frac{1}{\rho}&\text{if }TB\leq R\tau_{\mathrm{In}}\text{ and }TB\leq R\tau_{\mathrm{Out}}\\ \frac{X\tau_{\mathrm{Out}}}{\rho\cdot R\tau_{\mathrm{Out}}}&\text{if }TB>R\tau_{\mathrm{Out}}\text{ and }TB\leq R\tau_{\mathrm{In}}\end{cases}\] The subscript \(|_{d\tau_{\text{Out}}=0}\) indicates what happens to the price for an infinitesimally small trade. The derivation is shown in the Appendix. It is also easy to see that the UAMM slippage rate together with \(\rho\) converges to the spontaneous price, **Property 9** (Convergence).: \(\frac{p\tau_{\text{In}}}{p\tau_{\text{Out}}}=\lim_{d\tau_{\text{In}}\to 0}\rho\cdot USX(d\tau_{\text{In}},\Gamma)\). **Property 10**.: The UAMM swap rate is always less than or equal to the spontaneous price rate, and the spontaneous price rate is less than or equal to the fair swap rate, \[\rho\cdot USX(d\tau_{\text{In}},\Gamma)\leq\frac{p\tau_{\text{In}}}{p\tau_{\text{Out}}}\leq\frac{f\tau_{\text{In}}}{f\tau_{\text{Out}}}=\rho\] for all \(d\tau_{\text{In}}>0\). ## 6 Conclusion In conclusion, the existing AMMs used in DEXs suffer from a significant drawback: arbitrage opportunities. This occurs because such AMMs rely on internal market information, disregarding external market prices, which leads to potential losses for LPs. Additionally, AMMs adapt slowly to price changes, relying on traders to exploit arbitrage opportunities for price adjustments. To address these issues, the UAMM algorithm is proposed. UAMM calculates prices by considering both external and internal market prices, effectively eliminating arbitrage opportunities.
By minimizing impermanent loss and maintaining the desired properties of a constant product curve, UAMM provides LPs with improved risk management and passive yield generation. With UAMM, LPs can trade with increased security and stability, as they no longer need to worry about managing impermanent loss. The algorithm defines three transactions (add, remove, and swap), whose properties induce the desired behaviours for the AMM. Furthermore, UAMM exhibits zero arbitrage opportunities when the market price aligns with the fair price assumption, further enhancing LPs' confidence and profitability. The UAMM algorithm presents a promising solution to the limitations of existing AMMs in decentralized exchanges. By considering external market prices and implementing measures to minimize impermanent loss, UAMM enhances the efficiency and reliability of DEXs while providing LPs with a more secure and profitable trading experience.
2308.16166
Conformal pointwise slant Riemannian maps from or to Kähler manifolds
In this article, we study conformal pointwise-slant Riemannian maps (CPSRM) from or to Kähler manifolds to or from Riemannian manifolds. To check the existence of such maps, we provide some non-trivial examples. We derive some important results for these maps. We discuss the integrability and totally geodesicness of the distributions. Further, we investigate the conditions for homotheticity and harmonicity of these maps. Finally, we study some inequalities for these maps.
A. Zaidi, G. Shanker, J. Yadav
2023-08-30T17:40:26Z
http://arxiv.org/abs/2308.16166v1
# Conformal pointwise slant Riemannian maps from or to Kahler manifolds ###### Abstract In this article, we study conformal pointwise-slant Riemannian maps (_CPSRM_) from or to Kahler manifolds to or from Riemannian manifolds. To check the existence of such maps, we provide some non-trivial examples. We derive some important results for these maps. We discuss the integrability and totally geodesicness of the distributions. Further, we investigate the conditions for homotheticity and harmonicity of these maps. Finally, we study some inequalities for these maps. **Mathematics Subject Classification:** Primary 53C15; Secondary 53B35, 53C43, 54C05. **Keywords and Phrases:** Complex manifolds, Hermitian manifolds, Kahler manifolds, Riemannian maps, pointwise slant Riemannian maps, Conformal maps. ## 1 Introduction In differential geometry, smooth maps play an important role in studying the geometric properties of a manifold by comparing it with another manifold. Riemannian maps are among the most important maps in Riemannian geometry; they generalize isometric immersions, Riemannian submersions, and isometries. The notion of a Riemannian map was first introduced by Fischer [5] in 1992. According to him, if \(F:(M^{m},g_{M})\rightarrow(N^{n},g_{N})\) is a smooth map between smooth finite-dimensional Riemannian manifolds \((M,g_{M})\) and \((N,g_{N})\) such that \(0<rankF<min\{m,n\}\), and \(F_{*p}:T_{p}M\to T_{F(p)}N\) denotes the differential map at \(p\in M\), where \(F(p)\in N\), then \(T_{p}M\) and \(T_{F(p)}N\) split orthogonally with respect to \(g_{M}(p)\) and \(g_{N}(F(p))\), respectively, as \[T_{p}M=kerF_{*p}\oplus(kerF_{*p})^{\perp}=\mathcal{V}_{p}\oplus\mathcal{H}_{p},\] \[T_{F(p)}N=rangeF_{*p}\oplus(rangeF_{*p})^{\perp},\] where \(\mathcal{V}_{p}=kerF_{*p}\) and \(\mathcal{H}_{p}=(kerF_{*p})^{\perp}\) are the vertical and horizontal parts of \(T_{p}M\), respectively. The map \(F\) is called a Riemannian map at \(p\in M\) if the horizontal restriction \[(F_{*p})^{h}=F_{*p}\ |\ _{\mathcal{H}_{p}}:\mathcal{H}_{p}\to rangeF_{*p}\] is a linear isometry between \(((kerF_{*p})^{\perp},g_{M}\ |_{(kerF_{*p})^{\perp}})\) and \((rangeF_{*p},g_{N}(y)|_{(rangeF_{*p})})\), where \(y=F(p)\). In other words, \((F_{*p})^{h}\) satisfies the equation \[g_{N}(F_{*}X,F_{*}Y)\ =\ g_{M}(X,Y), \tag{1.1}\] for all vector fields \(X,Y\) tangent to \(\Gamma(kerF_{*p})^{\perp}\). It can be seen that isometric immersions and Riemannian submersions are particular cases of Riemannian maps with \(kerF_{*}=\{0\}\) and \((rangeF_{*})^{\perp}=\{0\}\), respectively. In 2010, Sahin [10] introduced Riemannian maps between almost Hermitian manifolds and Riemannian manifolds. In the recent past, many authors have broadly studied various types of Riemannian maps [1, 7, 12, 15]. Moreover, a smooth map \(F:(M^{m},g_{M})\rightarrow(N^{n},g_{N})\) between Riemannian manifolds \(M\) and \(N\) is called a conformal Riemannian map at a point \(p\in M\) if there exists a positive function \(\lambda(p)\) such that [12] \[g_{N}(F_{*}X,F_{*}Y)=\lambda^{2}(p)g_{M}(X,Y) \tag{1.2}\] for \(X,Y\in\Gamma((kerF_{*p})^{\perp})\). The function \(\lambda(p)\) is called the dilation and \(\lambda^{2}(p)\) the square dilation of \(F\) at \(p\). \(F\) is said to be a conformal Riemannian map if \(F\) is conformal Riemannian at each point \(p\in M\). It can be seen that for \(\lambda=1\), every conformal Riemannian map is a Riemannian map.
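As a simple illustration of (1.2), which we add here: for a constant \(c>0\), the scaling map \(F:(\mathbb{R}^{n},g_{eucl})\rightarrow(\mathbb{R}^{n},g_{eucl})\), \(F(x)=c\,x\), has \(F_{*}X=c\,X\), hence \[g_{N}(F_{*}X,F_{*}Y)=c^{2}\,g_{M}(X,Y),\] so \(F\) is a conformal Riemannian map with constant dilation \(\lambda\equiv c\); for \(c=1\) it reduces to a Riemannian map (indeed an isometry).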
Further, a conformal Riemannian map \(F\) is said to be horizontally homothetic if the gradient of its dilation \(\lambda\) is vertical, i.e., \(\mathcal{H}(grad\lambda)=0\) at each point. Conformal Riemannian maps have many applications in various fields of science. Therefore, it is very tempting for researchers to investigate different types of conformal Riemannian maps on various structures in complex as well as contact geometry [2, 3, 4, 13]. Recently, Zaidi et al. [16] have studied conformal anti-invariant Riemannian maps from or to Sasakian manifolds. In this paper, we investigate conformal pointwise-slant Riemannian maps from or to Kahler manifolds. The paper is divided into four sections. In section 2, we recall all the basic definitions and terminologies which are needed throughout the paper. In section 3, we study conformal pointwise-slant Riemannian maps from Kahler manifolds to Riemannian manifolds. To show the existence of such maps, we construct an example. We investigate the integrability of distributions and derive the conditions for horizontal and vertical distributions to be totally geodesic. We establish some results on the homotheticity of the map \(F\), and we also check the harmonicity of these maps. In section 4, we investigate conformal pointwise-slant Riemannian maps from Riemannian manifolds to Kahler manifolds and construct an example. We study the integrability of distributions and derive the conditions for horizontal and vertical distributions to be totally geodesic. We derive the conditions for the homotheticity and harmonicity of these maps, and finally we establish some inequalities for these maps. ## 2 Preliminaries Let \(M\) be an even-dimensional differentiable manifold. Then \(M\) is said to be an almost complex manifold if there exists a linear map \(J:TM\to TM\) satisfying \(J^{2}=-I\), and \(J\) is called an almost complex structure of \(M\). The tensor field \(\mathcal{N}\) of type (1,2), defined by \[\mathcal{N}_{J}(X,Y)=[JX,JY]-[X,Y]-J[X,JY]-J[JX,Y] \tag{2.1}\] for any \(X,Y\in\varGamma(TM)\), is called the Nijenhuis tensor field of \(J\). If \(\mathcal{N}\) vanishes on an almost complex manifold \(M\), then \(J\) defines a complex structure on \(M\) and \(M\) is called a complex manifold. Almost complex manifolds are necessarily orientable. A Riemannian metric \(g_{M}\) on an almost complex manifold \((M,J)\) satisfying \[g_{M}(JX,JY)=g_{M}(X,Y) \tag{2.2}\] for all \(X,Y\in\varGamma(TM)\) is called an almost Hermitian metric, and the manifold \(M\) with the Hermitian metric \(g_{M}\) is called an almost Hermitian manifold. If \((\nabla_{X}J)Y=0\) for all \(X,Y\in\varGamma(TM)\), then \(M\) is called a Kahler manifold [9]. Moreover, if \(M\) is a Kahler manifold, then the Riemannian curvature tensor of a complex space form \(K(v)\) of constant holomorphic sectional curvature \(v\) satisfies [9] \[\begin{split} R_{M}(Y_{1},Y_{2},Y_{3},Y_{4})&=\frac{v}{4}\{g_{M}(Y_{1},Y_{4})g_{M}(Y_{2},Y_{3})-g_{M}(Y_{1},Y_{3})g_{M}(Y_{2},Y_{4})\\ &+g_{M}(Y_{1},JY_{3})g_{M}(JY_{2},Y_{4})-g_{M}(Y_{2},JY_{3})g_{M}(JY_{1},Y_{4})\\ &+2g_{M}(Y_{1},JY_{2})g_{M}(JY_{3},Y_{4})\}\end{split} \tag{2.3}\] for vector fields \(Y_{1},Y_{2},Y_{3},Y_{4}\in\varGamma(TK)\).
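As a quick consistency check of (2.3), which we add here: taking \(Y_{1}=Y_{4}=Y\) and \(Y_{2}=Y_{3}=JY\) for a unit vector field \(Y\) (so that \(g_{M}(Y,Y)=1\), \(g_{M}(Y,JY)=0\), and \(g_{M}(JY,JY)=1\) by (2.2)), the five terms of (2.3) evaluate to \(1,0,1,0,2\), giving \[R_{M}(Y,JY,JY,Y)=\frac{v}{4}(1+0+1+0+2)=v,\] so the holomorphic plane \(span\{Y,JY\}\) has sectional curvature \(v\), as expected of a complex space form.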
Further, let \(F:(M^{m},g_{M})\rightarrow(N^{n},g_{N})\) be a smooth map between smooth finite-dimensional Riemannian manifolds; then the differential map \(F_{*}\) of \(F\) can be viewed as a section of the bundle \(Hom(TM,F^{-1}TN)\to M\), where \(F^{-1}TN\) is the pullback bundle whose fibre at \(p\in M\) is \((F^{-1}TN)_{p}=T_{F(p)}N\). If the bundle \(Hom(TM,F^{-1}TN)\) has a connection \(\nabla\) induced from the Levi-Civita connection \(\nabla^{M}\) and the pullback connection \(\nabla^{N}_{F}\), then the second fundamental form of \(F\) is given by [12] \[(\nabla F_{*})(X,Y)=\nabla^{N}_{X}F_{*}Y-F_{*}(\nabla^{M}_{X}Y) \tag{2.4}\] for all \(X,Y\ \in\varGamma(TM)\) and \(\nabla^{N}_{X}F_{*}Y\circ F=\nabla^{N}_{F_{*}X}F_{*}Y\). Let \(F\) be a Riemannian map from a Riemannian manifold \(M\) to a Riemannian manifold \(N\). Then we define \(\mathcal{T}\) and \(\mathcal{A}\) as \[\begin{split}\mathcal{A}_{D}E&=\mathcal{H}\nabla^{M}_{\mathcal{H}D}\mathcal{V}E+\mathcal{V}\nabla^{M}_{\mathcal{H}D}\mathcal{H}E,\\ \mathcal{T}_{D}E&=\mathcal{H}\nabla^{M}_{\mathcal{V}D}\mathcal{V}E+\mathcal{V}\nabla^{M}_{\mathcal{V}D}\mathcal{H}E\end{split} \tag{2.5}\] for vector fields \(D,E\) on \(M\), where \(\nabla^{M}\) is the Levi-Civita connection of \(g_{M}\). It is also easy to verify that \(\mathcal{T}\) is vertical, \(\mathcal{T}_{D}=\mathcal{T}_{\mathcal{V}D}\), and \(\mathcal{A}\) is horizontal, \(\mathcal{A}_{D}=\mathcal{A}_{\mathcal{H}D}\). On the other hand, from (2.5) we have [12] \[\nabla_{V}W=\mathcal{T}_{V}W+\hat{\nabla}_{V}W, \tag{2.6}\] \[\nabla_{V}X=\mathcal{H}\nabla_{V}X+\mathcal{T}_{V}X, \tag{2.7}\] \[\nabla_{X}V=\mathcal{A}_{X}V+\mathcal{V}\nabla_{X}V, \tag{2.8}\] \[\nabla_{X}Y=\mathcal{A}_{X}Y+\mathcal{H}\nabla_{X}Y \tag{2.9}\] for \(X,Y\in\Gamma((KerF_{*})^{\perp})\) and \(V,W\in\Gamma(KerF_{*})\), where \(\hat{\nabla}_{V}W=\mathcal{V}\nabla_{V}W\). Also, for any vector field \(X\) on \(M\) and any section \(V\) of \((rangeF_{*})^{\perp}\), we denote by \(\nabla_{X}^{F\perp}V\) the orthogonal projection of \(\nabla_{X}^{N}V\) on \((rangeF_{*})^{\perp}\), where \(\nabla^{F\perp}\) is a linear connection on \((rangeF_{*})^{\perp}\) such that \(\nabla^{F\perp}g_{N}=0\). Further, for a Riemannian map, we have [12] \[\nabla_{F_{*}X}^{N}V=-S_{V}F_{*}X+\nabla_{X}^{F\perp}V, \tag{2.10}\] where \(S_{V}F_{*}X\) is the tangential component of \(\nabla_{F_{*}X}^{N}V\); at \(p\in M\), \(\nabla_{F_{*}X}^{N}V(p)\in T_{F(p)}N\), \(S_{V}F_{*}X(p)\in F_{*p}(T_{p}M)\) and \(\nabla_{X}^{F\perp}V(p)\in(F_{*p}(T_{p}M))^{\perp}\). It is easy to check that \(S_{V}F_{*}X\) is bilinear in \(V\) and \(F_{*}X\), and that \(S_{V}F_{*}X\) at \(p\) depends only on \(V_{p}\) and \(F_{*p}X_{p}\). By direct computations, we obtain \[g_{N}(S_{V}F_{*}X,F_{*}Y)=g_{N}(V,(\nabla F_{*})(X,Y)) \tag{2.11}\] for \(X,Y\in\Gamma((kerF_{*})^{\perp})\) and \(V\in\Gamma((rangeF_{*})^{\perp})\). Moreover, let \(F:(M^{m},g_{M})\rightarrow(N^{n},g_{N})\) be a conformal submersion [8].
Then, we have: \[g(R(U,V)W,S)=g(R^{KerF_{*}}(U,V)W,S)+g(T_{U}W,T_{V}S)-g(T_{V}W,T_{U}S), \tag{2.12}\] \[\begin{split} g(R(X,Y)Z,B)&=\frac{1}{\lambda^{2}}g(R^{(KerF_{*}^{\perp})}(X,Y)Z,B)+\frac{1}{4}\{g(\mathcal{V}[X,Z],\mathcal{V}[Y,B])-g(\mathcal{V}[Y,Z],\mathcal{V}[X,B])\\ &+2g(\mathcal{V}[X,Y],\mathcal{V}[Z,B])\}+\frac{\lambda^{2}}{2}\{g(X,Z)g(\nabla_{Y}grad(\frac{1}{\lambda^{2}}),B)\\ &-g(Y,Z)g(\nabla_{X}grad(\frac{1}{\lambda^{2}}),B)+g(Y,B)g(\nabla_{X}grad(\frac{1}{\lambda^{2}}),Z)\\ &-g(X,B)g(\nabla_{Y}grad\frac{1}{\lambda^{2}},Z)\}+\frac{\lambda^{4}}{4}\{(g(X,B)g(Y,Z)\\ &-g(Y,B)g(X,Z))||grad(\frac{1}{\lambda^{2}})||^{2}\\ &+g(X(\frac{1}{\lambda^{2}})Y-Y(\frac{1}{\lambda^{2}})X,B(\frac{1}{\lambda^{2}})Z-Z(\frac{1}{\lambda^{2}})B)\},\end{split} \tag{2.13}\] where \(X,Y,Z,B\in\Gamma(KerF_{*})^{\perp}\) and \(U,V,W,S\in\Gamma(KerF_{*})\). Now, let \(F\) be a conformal Riemannian map; then for any \(X,Y\in\Gamma((kerF_{*})^{\perp})\), the second fundamental form \((\nabla F_{*})(X,Y)\) of \(F\) is given by [11] \[(\nabla F_{*})(X,Y)^{rangeF_{*}}=X(ln\lambda)F_{*}Y+Y(ln\lambda)F_{*}X-g_{M}(X,Y)F_{*}(gradln\lambda). \tag{2.14}\] Further, if the \((rangeF_{*})^{\perp}\)-component of \((\nabla F_{*})(X,Y)\) is denoted by \((\nabla F_{*})^{\perp}(X,Y)\), then we can write [7] \[(\nabla F_{*})(X,Y)=(\nabla F_{*})(X,Y)^{rangeF_{*}}+(\nabla F_{*})^{\perp}(X,Y). \tag{2.15}\] ## 3 Conformal pointwise slant Riemannian maps _(CPSRM)_ from Kahler manifolds to Riemannian manifolds In this section, we introduce the notion of conformal pointwise slant Riemannian maps from Kahler manifolds to Riemannian manifolds, construct an example and discuss the geometry of such maps. **Definition 3.1**.: _Let \(F\) be a conformal Riemannian map from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\). If for every point \(k\in K\), the Wirtinger angle \(\theta(X)\) between \(JX\) and the space \((kerF_{*})_{k}\) is independent of the choice of \(X\), where \(X\in\Gamma(kerF_{*})\) is a nonzero vector, then \(F\) is said to be a conformal pointwise slant Riemannian map.
In this case, the angle \(\theta\) is regarded as a function on \(K\), known as the slant function of the conformal pointwise slant Riemannian map (CPSRM)._ Suppose \(F\) is a _CPSRM_ from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\) with slant function \(\theta\); then for any \(V\in\Gamma(kerF_{*})\), we have \[JV=\phi V+\omega V, \tag{3.1}\] where \(\phi V\in\Gamma(kerF_{*})\) and \(\omega V\in\Gamma((kerF_{*})^{\perp})\). Also, for any \(X\in\Gamma((kerF_{*})^{\perp})\), we have \[JX=\mathcal{B}X+\mathcal{C}X, \tag{3.2}\] where \(\mathcal{B}X\in\Gamma(kerF_{*})\) and \(\mathcal{C}X\in\Gamma((kerF_{*})^{\perp})\). Taking \(\mu\) to be the orthogonal complementary distribution to \(\omega(\Gamma(kerF_{*}))\) in \(\Gamma((kerF_{*})^{\perp})\), we can write \[\Gamma((kerF_{*})^{\perp})=\omega(\Gamma(kerF_{*}))\oplus\mu.\] Further, for a pointwise slant Riemannian map, we have [6] \[(\nabla_{V}\omega)W=\mathcal{C}\mathcal{T}_{V}W-\mathcal{T}_{V}\phi W, \tag{3.3}\] \[(\nabla_{V}\phi)W=\mathcal{B}\mathcal{T}_{V}W-\mathcal{T}_{V}\omega W, \tag{3.4}\] where \(\nabla\) is the Levi-Civita connection on \(K\) and \[(\nabla_{V}\omega)W=\mathcal{H}\nabla_{V}\omega W-\omega\hat{\nabla}_{V}W, \tag{3.5}\] \[(\nabla_{V}\phi)W=\hat{\nabla}_{V}\phi W-\phi\hat{\nabla}_{V}W \tag{3.6}\] for \(V,W\in\Gamma(kerF_{*})\). We say that \(\omega\) is parallel with respect to the Levi-Civita connection \(\nabla\) on \(kerF_{*}\) if its covariant derivative with respect to \(\nabla\) vanishes, i.e., \((\nabla_{V}\omega)W=0\) for \(V,W\in\Gamma(kerF_{*})\). **Example 3.1**.: _Consider a Riemannian manifold \((K=\mathbb{R}^{4},g_{K})\) and a pair of almost complex structures \(\{J_{1},J_{2}\}\) on \(K\) satisfying \(J_{1}J_{2}=-J_{2}J_{1}\), where_ \[J_{1}(u_{1},u_{2},u_{3},u_{4})=(u_{3},u_{4},-u_{1},-u_{2})\] \[J_{2}(u_{1},u_{2},u_{3},u_{4})=(u_{2},-u_{1},-u_{4},u_{3}).\] _Let \(t:\mathbb{R}^{4}\rightarrow\mathbb{R}\) be a real-valued function; hence we can define a complex structure_ \[J_{t}=(cost)J_{1}+(sint)J_{2}\] _on \(K\), so that \((K,g_{K},J_{t})\) is an almost complex manifold. Again, consider a map \(F:(K=\mathbb{R}^{4},g_{K})\rightarrow(L=\mathbb{R}^{4},g_{L})\) from a Kahler manifold \(K\) to a Riemannian manifold \(L\), defined by_ \[F(x_{1},x_{2},x_{3},x_{4})=(e^{x_{1}}cosx_{3},0,e^{x_{1}}sinx_{3},0).\] _By simple computation, we have_ \[kerF_{*}=span\big{\{}U=\frac{\partial}{\partial x_{2}},V=\frac{\partial}{\partial x_{4}}\big{\}},\] \[(kerF_{*})^{\perp}=span\big{\{}X=e^{x_{1}}cosx_{3}\frac{\partial}{\partial x_{1}}-e^{x_{1}}sinx_{3}\frac{\partial}{\partial x_{3}},Y=e^{x_{1}}sinx_{3}\frac{\partial}{\partial x_{1}}+e^{x_{1}}cosx_{3}\frac{\partial}{\partial x_{3}}\big{\}}\] _and_ \[rangeF_{*}=span\big{\{}F_{*}X=e^{2x_{1}}\frac{\partial}{\partial y_{1}},F_{*}Y=e^{2x_{1}}\frac{\partial}{\partial y_{3}}\big{\}};\] _hence \(F\) is a CPSRM from a Kahler manifold \(K\) to a Riemannian manifold \(L\) with \(\lambda=e^{x_{1}}\) and slant function \(\theta=t\)._ **Lemma 3.1**.: _Let \(F\) be a Riemannian map from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\) with slant function \(\theta\). Then \(F\) is a conformal pointwise slant Riemannian map if and only if there exists a constant \(\beta\in[-1,0]\) such that_ \[\phi^{2}V=\beta V\] _for \(V\in\Gamma(kerF_{*})\). If \(F\) is a conformal slant Riemannian map, then \(\beta=-cos^{2}\theta\)._ The proof of the above lemma is exactly the same as that for conformal slant Riemannian maps (see [14]).
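As a quick check of Lemma 3.1 on Example 3.1 (a computation we add here): for \(U=\frac{\partial}{\partial x_{2}}\in\Gamma(kerF_{*})\) one finds \(J_{t}U=sint\frac{\partial}{\partial x_{1}}-cost\frac{\partial}{\partial x_{4}}\), so \(\phi U=-cost\frac{\partial}{\partial x_{4}}\); similarly \(\phi\big{(}\frac{\partial}{\partial x_{4}}\big{)}=cost\frac{\partial}{\partial x_{2}}\). Hence \[\phi^{2}U=-cost\,\phi\Big{(}\frac{\partial}{\partial x_{4}}\Big{)}=-cos^{2}t\,\frac{\partial}{\partial x_{2}}=\beta U,\qquad\beta=-cos^{2}\theta,\quad\theta=t,\] in agreement with the lemma.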
Now, from (3.1) and Lemma 3.1, we have the following result. **Lemma 3.2**.: _Let \(F\) be a conformal pointwise slant Riemannian map from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\) with slant function \(\theta\). Then, we have_ \[g_{K}(\phi V,\phi W)=cos^{2}\theta g_{K}(V,W), \tag{3.7}\] \[g_{K}(\omega V,\omega W)=sin^{2}\theta g_{K}(V,W) \tag{3.8}\] _for any \(V,W\in\Gamma(kerF_{*})\)._ Also, from (3.1) and (3.2) we have the following result. **Lemma 3.3**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\). Then, for any \(X,Y\in\Gamma((kerF_{*})^{\perp})\) and \(V\in\Gamma(kerF_{*})\), we have_ 1. \(g_{K}(X,\mathcal{C}Y)=-g_{K}(\mathcal{C}X,Y),\) 2. \(g_{K}(\mathcal{C}X,\mathcal{C}Y)=-g_{K}(X,\mathcal{C}^{2}Y),\) 3. \(g_{K}(X,\mathcal{C}^{2}Y)=g_{K}(\mathcal{C}^{2}X,Y),\) 4. \(g_{K}(X,\omega\phi V)=-g_{K}(\mathcal{C}X,\omega V).\) **Theorem 3.2**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\). If \(\omega\) is parallel with respect to \(\overset{K}{\nabla}\) on \(kerF_{*}\), then we have_ \[\mathcal{T}_{\phi V}\phi V=-cos^{2}\theta\mathcal{T}_{V}V,\] _where \(V\in\Gamma(kerF_{*})\) and \(\theta\) is the slant function of the CPSRM._ Proof.: Let \(V,W\in\Gamma(kerF_{*})\) and suppose \(\omega\) is parallel with respect to \(\overset{K}{\nabla}\) on \(kerF_{*}\); from (3.3), we have \[\mathcal{CT}_{V}W=\mathcal{T}_{V}\phi W;\] interchanging \(V\) and \(W\) in the above equation and subtracting the result from it, we get \[\mathcal{T}_{V}\phi W=\mathcal{T}_{W}\phi V. \tag{3.9}\] Putting \(W=\phi V\) and using Lemma 3.1 in (3.9), we have \[\mathcal{T}_{\phi V}\phi V=-\mathcal{T}_{V}cos^{2}\theta V. \tag{3.10}\] From (2.6), we can write \[\begin{split}\mathcal{T}_{V}cos^{2}\theta V&=cos^{2}\theta\mathcal{H}\nabla_{V}V-\mathcal{H}(sin2\theta V(\theta)V)\\ &=cos^{2}\theta\mathcal{T}_{V}V.\end{split} \tag{3.11}\] Hence, from (3.10) and (3.11), we get the required result. **Theorem 3.3**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\) with slant function \(\theta\). Then, any two of the following assertions imply the third one_ 1. \((kerF_{*})^{\perp}\) _is integrable,_ 2. _for any_ \(X,Y\in\Gamma((kerF_{*})^{\perp})\) _and_ \(V\in\Gamma(kerF_{*}),\) \[\begin{split} g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega\phi V),F_{*}Y\big{)}-g_{L}\big{(}\overset{L}{\nabla}_{Y}^{F}F_{*}(\omega\phi V),F_{*}X\big{)}&=g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega V),F_{*}(\mathcal{C}Y)\big{)}\\ &-g_{L}\big{(}\overset{L}{\nabla}_{Y}^{F}F_{*}(\omega V),F_{*}(\mathcal{C}X)\big{)},\end{split}\] 3. \(F\) _is a horizontally homothetic map._ Proof.: Let \(X,Y\in\Gamma((kerF_{*})^{\perp})\) and \(V\in\Gamma(kerF_{*})\); then we have \[g_{K}([X,Y],V)=g_{K}(\overset{K}{\nabla}_{X}Y-\overset{K}{\nabla}_{Y}X,V).
\tag{3.12}\] Since \(K\) is a Kahler manifold, from (3.1) and (3.12), we have \[\begin{split} g_{K}([X,Y],V)&=g_{K}(\overset{K}{\nabla}_{X}\phi^{2}V+\overset{K}{\nabla}_{X}\omega\phi V,Y)-g_{K}(\overset{K}{\nabla}_{X}\omega V,JY)\\ &-g_{K}(\overset{K}{\nabla}_{Y}\phi^{2}V+\overset{K}{\nabla}_{Y}\omega\phi V,X)+g_{K}(\overset{K}{\nabla}_{Y}\omega V,JX);\end{split}\] using the conformality of the map and Lemma 3.1, the above equation can be written as \[\begin{split} sin^{2}\theta g_{K}([X,Y],V)&=g_{K}(sin2\theta X(\theta)V,Y)+\frac{1}{\lambda^{2}}\big{(}g_{L}(F_{*}(\overset{K}{\nabla}_{X}\omega\phi V),F_{*}Y)\\ &-g_{L}(F_{*}(\overset{K}{\nabla}_{Y}\omega\phi V),F_{*}X)-g_{L}(F_{*}(\overset{K}{\nabla}_{X}\omega V),F_{*}(\mathcal{C}Y))\\ &+g_{L}(F_{*}(\overset{K}{\nabla}_{Y}\omega V),F_{*}(\mathcal{C}X))\big{)}.\end{split} \tag{3.13}\] Since \(F\) is a conformal Riemannian map, using (2.4), (2.14) and (2.15), we get \[\begin{split} sin^{2}\theta g_{K}([X,Y],V)&=\frac{1}{\lambda^{2}}\Big{(}g_{L}\big{(}-(\nabla F_{*})(X,\omega\phi V)-X(ln\lambda)F_{*}(\omega\phi V)-\omega\phi V(ln\lambda)F_{*}X\\ &+g_{K}(X,\omega\phi V)F_{*}(grad(ln\lambda))+\nabla_{X}^{L}F_{*}(\omega\phi V),F_{*}Y\big{)}\\ &-g_{L}\big{(}-(\nabla F_{*})(Y,\omega\phi V)-Y(ln\lambda)F_{*}(\omega\phi V)-\omega\phi V(ln\lambda)F_{*}Y\\ &+g_{K}(Y,\omega\phi V)F_{*}(grad(ln\lambda))+\nabla_{Y}^{L}F_{*}(\omega\phi V),F_{*}X\big{)}\\ &-g_{L}\big{(}-(\nabla F_{*})(X,\omega V)-X(ln\lambda)F_{*}(\omega V)-\omega V(ln\lambda)F_{*}X\\ &+g_{K}(X,\omega V)F_{*}(grad(ln\lambda))+\nabla_{X}^{L}F_{*}(\omega V),F_{*}(\mathcal{C}Y)\big{)}\\ &+g_{L}\big{(}-(\nabla F_{*})(Y,\omega V)-Y(ln\lambda)F_{*}(\omega V)-\omega V(ln\lambda)F_{*}Y\\ &+g_{K}(Y,\omega V)F_{*}(grad(ln\lambda))+\nabla_{Y}^{L}F_{*}(\omega V),F_{*}(\mathcal{C}X))\Big{)}.\end{split}\] After simplifying the above equation and using Lemma 3.3, we get \[\begin{split} sin^{2}\theta g_{K}([X,Y],V)&=3X(ln\lambda)g_{K}(\mathcal{C}Y,\omega V)-3Y(ln\lambda)g_{K}(\mathcal{C}X,\omega V)\\ &+2(\omega V)(ln\lambda)g_{K}(X,\mathcal{C}Y)+\mathcal{C}X(ln\lambda)g_{K}(Y,\omega V)-\mathcal{C}Y(ln\lambda)g_{K}(X,\omega V)\\ &+\frac{1}{\lambda^{2}}\Big{(}g_{L}\big{(}\nabla_{X}^{L}F_{*}(\omega\phi V),F_{*}Y\big{)}-g_{L}\big{(}\nabla_{Y}^{L}F_{*}(\omega\phi V),F_{*}X\big{)}\\ &-g_{L}\big{(}\nabla_{X}^{L}F_{*}(\omega V),F_{*}(\mathcal{C}Y)\big{)}+g_{L}\big{(}\nabla_{Y}^{L}F_{*}(\omega V),F_{*}(\mathcal{C}X)\big{)}\Big{)}.\end{split} \tag{3.14}\] Now, assuming assertions \((i)\) and \((ii)\) hold, and taking \(X=Y\) in (3.14), we have \[g_{K}(\omega V,\mathcal{H}grad(ln\lambda))g_{K}(X,\mathcal{C}X)=0, \tag{3.15}\] which is possible only if \(\mathcal{H}grad(ln\lambda)=0\); this implies \((iii)\). Similarly, one can easily show that assertions \((ii)\) and \((iii)\) imply \((i)\), and that assertions \((i)\) and \((iii)\) imply \((ii)\). Hence the theorem. **Theorem 3.4**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\).
Then the vertical distribution \(kerF_{*}\) defines a totally geodesic foliation on \(K\) if and only if_ \[\lambda^{2}g_{K}(\mathcal{T}_{V}\mathcal{B}X,\omega W)=g_{L}\big{(}(\nabla F_{*})(V,\omega\phi W),F_{*}X\big{)}-g_{L}\big{(}(\nabla F_{*})(V,\omega W),F_{*}(\mathcal{C}X)\big{)}, \tag{3.16}\] _where \(V,W\in\Gamma(kerF_{*})\) and \(X\in\Gamma((kerF_{*})^{\perp})\)._ Proof.: Let \(V,W\in\Gamma(kerF_{*})\) and \(X\in\Gamma((kerF_{*})^{\perp})\). Since \(K\) is a Kahler manifold, from (3.1), we have \[g_{K}(\overset{K}{\nabla}_{V}W,X)=g_{K}(\overset{K}{\nabla}_{V}(\phi W+\omega W),JX); \tag{3.17}\] using (3.1), (3.2) and Lemma 3.1 in (3.17), we get \[g_{K}(\overset{K}{\nabla}_{V}W,X)=-g_{K}(\overset{K}{\nabla}_{V}cos^{2}\theta W,X)-g_{K}(\overset{K}{\nabla}_{V}\omega\phi W,X)+g_{K}(\overset{K}{\nabla}_{V}\omega W,JX),\] \[sin^{2}\theta g_{K}(\overset{K}{\nabla}_{V}W,X)=-sin2\theta g_{K}(V(\theta)W,X)-g_{K}(\overset{K}{\nabla}_{V}\omega\phi W,X)+g_{K}(\overset{K}{\nabla}_{V}\omega W,\mathcal{B}X+\mathcal{C}X);\] further, using the conformality condition and (2.6) in the above equation, we have \[\begin{split} sin^{2}\theta g_{K}(\overset{K}{\nabla}_{V}W,X)&=-g_{K}(\mathcal{T}_{V}\mathcal{B}X,\omega W)+\frac{1}{\lambda^{2}}\Big{(}g_{L}\big{(}F_{*}(\overset{K}{\nabla}_{V}\omega W),F_{*}(\mathcal{C}X)\big{)}\\ &-g_{L}\big{(}F_{*}(\overset{K}{\nabla}_{V}\omega\phi W),F_{*}X\big{)}\Big{)}.\end{split} \tag{3.18}\] If \(kerF_{*}\) defines a totally geodesic foliation on \(K\), then from (2.4) and (3.18), we have (3.16). This completes the proof. **Corollary 3.1**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\). If \(kerF_{*}\) defines a totally geodesic foliation on \(K\), then_ \[g_{K}(\mathcal{T}_{V}\mathcal{B}X,\omega W)=2V(ln\lambda)g_{K}(\omega\phi W,X).\] Proof.: Let \(V,W\in\Gamma(kerF_{*})\) and \(X\in\Gamma((kerF_{*})^{\perp})\); then from Theorem 3.4, we have \[\lambda^{2}g_{K}(\mathcal{T}_{V}\mathcal{B}X,\omega W)=g_{L}\big{(}(\nabla F_{*})(V,\omega\phi W)^{rangeF_{*}},F_{*}X\big{)}-g_{L}\big{(}(\nabla F_{*})(V,\omega W)^{rangeF_{*}},F_{*}(\mathcal{C}X)\big{)};\] using (2.14) in the above equation, we get \[\lambda^{2}g_{K}(\mathcal{T}_{V}\mathcal{B}X,\omega W)=V(ln\lambda)\Big{(}g_{L}\big{(}F_{*}(\omega\phi W),F_{*}X\big{)}-g_{L}\big{(}F_{*}(\omega W),F_{*}(\mathcal{C}X)\big{)}\Big{)}. \tag{3.19}\] Applying Lemma 3.3 to the above equation, we get the required result. **Theorem 3.5**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\).
Then, any two of the following assertions imply the third one_ * \((kerF_{*})^{\perp}\) _defines a totally geodesic foliation on_ \(K\)_._ * \(\lambda\) _is constant on_ \((kerF_{*})^{\perp}\)_._ * \[\lambda^{2}g_{K}(\mathcal{A}_{X}\mathcal{B}Y,\omega V)=g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega V),F_{*}(\mathcal{C}Y)\big{)}-g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega\phi V),F_{*}Y\big{)},\] (3.20) _where_ \(X,Y\in\Gamma((kerF_{*})^{\perp})\) _and_ \(V\in\Gamma(kerF_{*})\)_._ Proof.: Let \(X,Y\in\Gamma((kerF_{*})^{\perp})\) and \(V\in\Gamma(kerF_{*})\). Since \(K\) is a Kahler manifold, from (3.1), we have \[g_{K}(\overset{K}{\nabla}_{X}Y,V)=-g_{K}(\overset{K}{\nabla}_{X}\phi V+\overset{K}{\nabla}_{X}\omega V,JY); \tag{3.21}\] from (3.2), (3.21) and Lemma 3.1, we get \[sin^{2}\theta g_{K}(\overset{K}{\nabla}_{X}Y,V)=g_{K}(sin2\theta X(\theta)V,Y)+g_{K}(\overset{K}{\nabla}_{X}\omega\phi V,Y)-g_{K}(\overset{K}{\nabla}_{X}\omega V,\mathcal{B}Y)-g_{K}(\overset{K}{\nabla}_{X}\omega V,\mathcal{C}Y). \tag{3.22}\] Since \(F\) is a conformal Riemannian map, from (2.8) and (3.22), we have \[sin^{2}\theta g_{K}(\overset{K}{\nabla}_{X}Y,V)=g_{K}(\mathcal{A}_{X}\mathcal{B}Y,\omega V)+\frac{1}{\lambda^{2}}\Big{(}g_{L}\big{(}F_{*}(\overset{K}{\nabla}_{X}\omega\phi V),F_{*}Y\big{)}-g_{L}\big{(}F_{*}(\overset{K}{\nabla}_{X}\omega V),F_{*}(\mathcal{C}Y)\big{)}\Big{)};\] using (2.4), (2.14) and (2.15), we have \[\begin{split} sin^{2}\theta g_{K}(\overset{K}{\nabla}_{X}Y,V)&=g_{K}(\mathcal{A}_{X}\mathcal{B}Y,\omega V)+\frac{1}{\lambda^{2}}\Big{(}g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega\phi V)-X(ln\lambda)F_{*}(\omega\phi V)-(\omega\phi V)(ln\lambda)F_{*}X\\ &+g_{K}(X,\omega\phi V)F_{*}(grad(ln\lambda)),F_{*}Y\Big{)}-g_{L}\big{(}\overset{L}{\nabla}_{X}^{F}F_{*}(\omega V)-X(ln\lambda)F_{*}(\omega V)\\ &-\omega V(ln\lambda)F_{*}X+g_{K}(X,\omega V)F_{*}(grad(ln\lambda)),F_{*}(\mathcal{C}Y)\big{)}\Big{)}.\end{split} \tag{3.23}\] Assuming assertions \((i)\) and \((ii)\) are true, from (3.23), we have \((iii)\). Similarly, if assertions \((ii)\) and \((iii)\) are true, then from (3.23), we get \((i)\). Further, if assertions \((i)\) and \((iii)\) are true, taking \(X=Y\) and using Lemma 3.3 in (3.23), we have \[(\omega\phi V)(ln\lambda)g_{K}(X,X)+X(ln\lambda)g_{K}(\omega\phi V,X)+\mathcal{C}X(ln\lambda)g_{K}(X,\omega V)=0,\] which implies that \[\begin{split} g_{K}(\omega\phi V,grad(ln\lambda))&=0,\\ g_{K}(X,grad(ln\lambda))&=0,\\ g_{K}(\mathcal{C}X,grad(ln\lambda))&=0.\end{split}\] This is possible if and only if \(\lambda\) is constant on \((kerF_{*})^{\perp}\). Hence the theorem. **Theorem 3.6**.: _Let \(F\) be a CPSRM from a Kahler manifold \((K,J,g_{K})\) to a Riemannian manifold \((L,g_{L})\) with \(\theta\) as a slant function. Then \(F\) is harmonic if and only if \(\omega\) is parallel and \(\lambda\) is constant on \((kerF_{*})^{\perp}\)._ Proof.: Consider a canonical orthogonal frame \(e_{1},sec\theta\phi e_{1},e_{2},sec\theta\phi e_{2},...e_{r},sec\theta\phi e_{r},csc\theta\omega e_{1},...,\)\(csc\theta\omega e_{2r},\tilde{e}_{1},...,\tilde{e}_{s}\) such that \(\{e_{1},sec\theta\phi e_{1},e_{2},sec\theta\phi e_{2},...e_{r},sec\theta\phi e_{r}\}\) is an orthonormal basis of \(kerF_{*}\) and \(\{\tilde{e}_{1},...,\tilde{e}_{s}\}\) is one of \(\mu\).
Then the map \(F\) is harmonic if and only if \[\begin{split} trace|_{kerF_{*}}\Big{\{}&\sum_{i=1}^{r}\big{(}(\nabla F_{*})(e_{i},e_{i})+sec^{2}\theta(\nabla F_{*})(\phi e_{i},\phi e_{i})\big{)}\Big{\}}\\ &+trace|_{(kerF_{*})^{\perp}}\Big{\{}csc^{2}\theta\sum_{i=1}^{2r}(\nabla F_{*})(\omega e_{i},\omega e_{i})+\sum_{j=1}^{s}(\nabla F_{*})(\tilde{e}_{j},\tilde{e}_{j})\Big{\}}=0.\end{split} \tag{3.24}\] Since \(K\) is a Kahler manifold and \(F\) is a _CPSRM_, from (2.6) and (2.4), we have \[\sum_{i=1}^{r}\big{(}(\nabla F_{*})(e_{i},e_{i})+sec^{2}\theta(\nabla F_{*})(\phi e_{i},\phi e_{i})\big{)}=-\sum_{i=1}^{r}F_{*}(\mathcal{T}_{e_{i}}e_{i}+sec^{2}\theta\mathcal{T}_{\phi e_{i}}\phi e_{i}). \tag{3.25}\] Further, from (2.14), (2.15) and Lemma 3.2, we get \[\begin{split} csc^{2}\theta\sum_{i=1}^{2r}(\nabla F_{*})(\omega e_{i},\omega e_{i})+\sum_{j=1}^{s}(\nabla F_{*})(\tilde{e}_{j},\tilde{e}_{j})&=csc^{2}\theta\sum_{i=1}^{2r}\big{(}(\nabla F_{*})^{\perp}(\omega e_{i},\omega e_{i})\\ &+2g_{K}(\omega e_{i},grad(ln\lambda))F_{*}(\omega e_{i})\big{)}\\ &+\sum_{j=1}^{s}\big{(}(\nabla F_{*})^{\perp}(\tilde{e}_{j},\tilde{e}_{j})\\ &+2g_{K}(\tilde{e}_{j},grad(ln\lambda))F_{*}(\tilde{e}_{j})\big{)}\\ &-(2r+s)F_{*}(gradln\lambda);\end{split} \tag{3.26}\] after simplifying (3.26), we get \[\begin{split} csc^{2}\theta\sum_{i=1}^{2r}(\nabla F_{*})(\omega e_{i},\omega e_{i})+\sum_{j=1}^{s}(\nabla F_{*})(\tilde{e}_{j},\tilde{e}_{j})&=csc^{2}\theta\sum_{i=1}^{2r}\big{(}(\nabla F_{*})^{\perp}(\omega e_{i},\omega e_{i})\\ &+\sum_{j=1}^{s}\big{(}(\nabla F_{*})^{\perp}(\tilde{e}_{j},\tilde{e}_{j})\\ &+(4-2r-s)F_{*}(gradln\lambda).\end{split} \tag{3.27}\] Since \(F\) is harmonic, from (3.24), (3.25), (3.27) and Theorem 3.2, we obtain the required result. **Theorem 3.7**.: _Let \(F\) be a CPSRM from a complex space form \((K(v),g_{K})\) to a Riemannian manifold \((L,g_{L})\) with \((rangeF_{*})^{\perp}=\{0\}\). Then_ \[Ric^{(kerF_{*})}(U)\leq\frac{v}{4}(2r-1+3cos^{2}\theta)g_{K}(U,U)+2rg(T_{U}U,H), \tag{3.28}\] _where \(U\in\Gamma(kerF_{*})\), \(H\) is the mean curvature vector field, \(v\) is the constant holomorphic sectional curvature and \(dim(kerF_{*})=2r\). The equality holds if and only if the fibers are totally geodesic._ Proof.: Let \(F:(K,g_{K},J)\rightarrow(L,g_{L})\) be a _CPSRM_ with \((rangeF_{*})^{\perp}=\{0\}\). For every point \(p\in K\), let \(E_{1},...,E_{2r},csc\theta\omega e_{1},...,csc\theta\omega e_{2r},\tilde{e}_{1},...,\tilde{e}_{s}\) be an orthonormal basis of \(T_{p}K(v)\) such that \(kerF_{*}=span\{E_{1},...,E_{2r}\}\), and \((kerF_{*})^{\perp}=span\{csc\theta\omega e_{1},...,csc\theta\omega e_{2r},\tilde{e}_{1},...,\tilde{e}_{s}\}\); then for any \(U,V,W,S\in\Gamma(KerF_{*})\), using (2.12), we have \[g(R^{KerF_{*}}(U,V)W,S)=g(R_{K}(U,V)W,S)-g(T_{U}W,T_{V}S)+g(T_{V}W,T_{U}S). \tag{3.29}\] Further, from (2.3) and (3.29), we get \[\begin{split} g(R^{KerF_{*}}(U,V)W,S)&=\frac{v}{4}\Big{\{}g_{K}(U,S)g_{K}(V,W)-g_{K}(U,W)g_{K}(V,S)\\ &+g_{K}(U,JW)g_{K}(JV,S)-g_{K}(V,JW)g_{K}(JU,S)\\ &+2g_{K}(U,JV)g_{K}(JW,S)\Big{\}}-g(T_{U}W,T_{V}S)+g(T_{V}W,T_{U}S).\end{split} \tag{3.30}\] Putting \(U=S\) and \(V=W=E_{i}\), \(i=1,...,2r\), in the above equation, for any vertical vector \(U\) we have \[Ric^{(KerF_{*})}(U)=\frac{v}{4}(2r-1+3cos^{2}\theta)g_{K}(U,U)+2rg(T_{U}U,H)-\sum_{i=1}^{2r}g(T_{U}E_{i},T_{E_{i}}U). \tag{3.31}\] Since \(\mathcal{T}\) is symmetric on vertical vector fields, \(g(T_{U}E_{i},T_{E_{i}}U)=\|T_{U}E_{i}\|^{2}\geq 0\), so (3.31) yields the inequality (3.28), with equality if and only if \(T_{U}E_{i}=0\) for all \(i\), i.e., the fibers are totally geodesic.
**Theorem 3.8**.: _Let \(F\) be a CPSRM from a complex space form \((K(v),g_{K})\) to a Riemannian manifold \((L,g_{L})\) with \((rangeF_{*})^{\perp}=\{0\}\). Then_ \[\begin{split}\frac{1}{\lambda^{2}}Ric^{(KerF_{*}^{\perp})}(X)&\leq\frac{v}{4}\Big{\{}(2r+s+2)||X||^{2}+3g_{K}(\omega\mathcal{B}X,X)\Big{\}}-\frac{1}{4}||3\mathcal{V}[X,X_{n}]||^{2}\\ &-\frac{\lambda^{2}}{2}\{2g_{K}(X,X_{n})g_{K}(\nabla_{X}grad\frac{1}{\lambda^{2}},X_{n})-(2r+s)g_{K}(\nabla_{X}grad\frac{1}{\lambda^{2}},X)\\ &-||X||^{2}g_{K}(\nabla_{X_{n}}grad\frac{1}{\lambda^{2}},X_{n})\}-\frac{\lambda^{4}}{4}\Big{\{}((2r+s)||X||^{2}\\ &-(g_{K}(X,X_{n}))^{2})||grad\frac{1}{\lambda^{2}}||^{2}\Big{\}},\end{split} \tag{3.32}\] _where \(X\in\Gamma(KerF_{*})^{\perp}\), \(\{X_{n}=X_{i}+X_{j}\}_{i=1,...,2r,j=1,...,s}\) is an orthonormal basis for \((kerF_{*})^{\perp}\), \(v\) is the constant holomorphic sectional curvature and \(dim(kerF_{*})^{\perp}=2r+s\). The equality holds if and only if \(F\) is a conformal homothetic map._ Proof.: Let \(F:(K,g_{K},J)\rightarrow(L,g_{L})\) be a _CPSRM_ with \((rangeF_{*})^{\perp}=\{0\}\) and let \(e_{1},sec\theta\phi e_{1},e_{2},\)\(sec\theta\phi e_{2},...e_{r},sec\theta\phi e_{r},csc\theta\omega e_{1},...,csc\theta\omega e_{2r},\tilde{e}_{1},...,\tilde{e}_{s}\) be a canonical orthogonal frame such that \(\{e_{1},sec\theta\phi e_{1},e_{2},sec\theta\phi e_{2},...e_{r},sec\theta\phi e_{r}\}\) is an orthonormal basis of \(kerF_{*}\) and \(\{\tilde{e}_{1},...,\tilde{e}_{s}\}\) is one of \(\mu\). Let \(X,Y,Z,B\in\Gamma(KerF_{*})^{\perp}\) and let \(v\) be the constant holomorphic sectional curvature; then from (2.13) and (2.3), we have \[\frac{1}{\lambda^{2}}g(R^{(KerF_{*}^{\perp})}(X,Y)Z,B) =\frac{v}{4}\Big{\{}g_{K}(X,B)g_{K}(Y,Z)-g_{K}(X,Z)g_{K}(Y,B)\] \[+g_{K}(X,JZ)g_{K}(JY,B)-g_{K}(Y,JZ)g_{K}(JX,B)\] \[+2g_{K}(X,JY)g_{K}(JZ,B)\Big{\}}-\frac{1}{4}\Big{\{}g(\mathcal{V}[X,Z],\mathcal{V}[Y,B])\] \[-g(\mathcal{V}[Y,Z],\mathcal{V}[X,B])+2g(\mathcal{V}[X,Y],\mathcal{V}[Z,B])\Big{\}}\] \[-\frac{\lambda^{2}}{2}\Big{\{}g(X,Z)g(\nabla_{Y}grad(\frac{1}{\lambda^{2}}),B)-g(Y,Z)g(\nabla_{X}grad(\frac{1}{\lambda^{2}}),B)\] \[+g(Y,B)g(\nabla_{X}grad(\frac{1}{\lambda^{2}}),Z)-g(X,B)g(\nabla_{Y}grad\frac{1}{\lambda^{2}},Z)\Big{\}}\] \[-\frac{\lambda^{4}}{4}\Big{\{}\big{(}g(X,B)g(Y,Z)-g(Y,B)g(X,Z)\big{)}||grad(\frac{1}{\lambda^{2}})||^{2}\] \[+g\Big{(}X(\frac{1}{\lambda^{2}})Y-Y(\frac{1}{\lambda^{2}})X,B(\frac{1}{\lambda^{2}})Z-Z(\frac{1}{\lambda^{2}})B\Big{)}\Big{\}}.
\tag{3.33}\] By using \(X=B\) and \(Y=Z=csc\theta\omega e_{i}+\tilde{e}_{j}=X_{i}+X_{j}=X_{n}\), \(n=i+j\), \(i=1,...,2r;\ j=1,...,s\), in the above equation, we have \[\frac{1}{\lambda^{2}}Ric^{(KerF_{*}^{\perp})}(X) =\frac{v}{4}\Big{\{}(2r+s+2)g_{K}(X,X)+3g_{K}(\omega\mathcal{B}X,X)\Big{\}}-\frac{1}{4}||3\mathcal{V}[X,X_{n}]||^{2}\] \[-\frac{\lambda^{2}}{2}\Big{\{}2g_{K}(X,X_{n})g_{K}(\nabla_{X}grad\frac{1}{\lambda^{2}},X_{n})-g_{K}(X_{n},X_{n})\] \[g_{K}\big{(}\nabla_{X}grad(\frac{1}{\lambda^{2}}),X\big{)}-g_{K}(X,X)g_{K}(\nabla_{X_{n}}grad\frac{1}{\lambda^{2}},X_{n})\Big{\}}\] \[-\frac{\lambda^{4}}{4}\Big{\{}\big{(}g_{K}(X,X)g_{K}(X_{n},X_{n})-(g_{K}(X,X_{n}))^{2}\big{)}||grad\frac{1}{\lambda^{2}}||^{2}\] \[+||\big{(}X(\frac{1}{\lambda^{2}})X_{n}-X_{n}(\frac{1}{\lambda^{2}})X||^{2}\big{)}\Big{\}},\] \[\frac{1}{\lambda^{2}}Ric^{(KerF_{*}^{\perp})}(X) =\frac{v}{4}\Big{\{}(2r+s+2)||X||^{2}+3g_{K}(\omega\mathcal{B}X,X)\Big{\}}-\frac{1}{4}||3\mathcal{V}[X,X_{n}]||^{2}\] \[-\frac{\lambda^{2}}{2}\{2g_{K}(X,X_{n})g_{K}(\nabla_{X}grad\frac{1}{\lambda^{2}},X_{n})-(2r+s)g_{K}(\nabla_{X}grad\frac{1}{\lambda^{2}},X)\] \[-||X||^{2}g_{K}(\nabla_{X_{n}}grad\frac{1}{\lambda^{2}},X_{n})\}-\frac{\lambda^{4}}{4}\Big{\{}\big{(}(2r+s)||X||^{2}\] \[-(g_{K}(X,X_{n}))^{2}\big{)}||grad\frac{1}{\lambda^{2}}||^{2}+||\big{(}X(\frac{1}{\lambda^{2}})X_{n}-X_{n}(\frac{1}{\lambda^{2}})X\big{)}||^{2}\Big{\}}; \tag{3.34}\] hence from (3.34), we get the result. ## 4 Conformal pointwise slant Riemannian maps _(CPSRM)_ from Riemannian manifolds to Kahler manifolds In this section, we introduce the notion of conformal pointwise slant Riemannian maps from Riemannian manifolds to Kahler manifolds with an example and discuss the geometry of such maps. Definition 4.1.: _Let \(G\) be a conformal Riemannian map from a Riemannian manifold \((L,g_{L})\) to a Kahler manifold \((K,\varphi,g_{K})\). If for every point \(k\in K\), the Wirtinger angle \(\Theta(Z)\) between \(\varphi G_{*}(Z)\) and the space \(rangeG_{*}\) is independent of the choice of \(G_{*}Z\), where \(G_{*}Z\in\Gamma(rangeG_{*})\) is a nonzero vector, then \(G\) is said to be a conformal pointwise slant Riemannian map. In this case, the angle \(\Theta\) is regarded as a function on \(K\), known as the slant function of the conformal pointwise slant Riemannian map (CPSRM)._ Let \(G\) be a _CPSRM_ from a Riemannian manifold \((L,g_{L})\) to a Kahler manifold \((K,\varphi,g_{K})\) with slant function \(\Theta\); then for \(G_{*}Z\in\Gamma(rangeG_{*})\), we have \[\varphi G_{*}Z=\rho G_{*}Z+\varpi G_{*}Z, \tag{4.1}\] where \(\rho G_{*}Z\in\Gamma(rangeG_{*})\) and \(\varpi G_{*}Z\in\Gamma((rangeG_{*})^{\perp})\). Also, for any \(P\in\Gamma((rangeG_{*})^{\perp})\), we have \[\varphi P=\mathcal{D}P+\mathcal{E}P, \tag{4.2}\] where \(\mathcal{D}P\in\Gamma(rangeG_{*})\) and \(\mathcal{E}P\in\Gamma((rangeG_{*})^{\perp})\). Taking \(\eta\) to be the orthogonal complementary distribution to \(\varpi(\Gamma(rangeG_{*}))\) in \(\Gamma((rangeG_{*})^{\perp})\), we can write \[\Gamma((rangeG_{*})^{\perp})=\varpi(\Gamma(rangeG_{*}))\oplus\eta.\] **Example 4.1**.: _Consider a Riemannian manifold \((K=\mathbb{R}^{4},g_{K})\) and a pair of almost complex structures \(\{J_{1},J_{2}\}\) on \(K\) satisfying \(J_{1}J_{2}=-J_{2}J_{1}\), where_ \[J_{1}(u_{1},u_{2},u_{3},u_{4})=(u_{3},u_{4},-u_{1},-u_{2})\] \[J_{2}(u_{1},u_{2},u_{3},u_{4})=(u_{2},-u_{1},-u_{4},u_{3}).\]
_Let \(t:\mathbb{R}^{4}\rightarrow\mathbb{R}\) be a real-valued function; hence we can define a complex structure_ \[J_{t}=(cost)J_{1}+(sint)J_{2}\] _on \(K\), so that \((K,g_{K},J_{t})\) is an almost complex manifold. Again, consider a map \(G:(L=\mathbb{R}^{6},g_{L})\rightarrow(K=\mathbb{R}^{4},g_{K})\) from a Riemannian manifold \(L\) to a Kahler manifold \(K\), defined by_ \[G(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})=\pi^{a}(x_{3}sinha-x_{2}cosha,0,x_{5}cosha-x_{4}sinha,\sqrt{2}cosb),\] _where \(a,b\) are constants. By simple computation, we have_ \[(kerG_{*})=span\big{\{}\frac{\partial}{\partial x_{1}},\pi^{a}sinha\frac{\partial}{\partial x_{2}}+\pi^{a}cosha\frac{\partial}{\partial x_{3}},\pi^{a}cosha\frac{\partial}{\partial x_{4}}+\pi^{a}sinha\frac{\partial}{\partial x_{5}},\frac{\partial}{\partial x_{6}}\big{\}},\] \[(kerG_{*})^{\perp}=span\big{\{}X=-\pi^{a}cosha\frac{\partial}{\partial x_{2}}+\pi^{a}sinha\frac{\partial}{\partial x_{3}},Y=-\pi^{a}sinha\frac{\partial}{\partial x_{4}}+\pi^{a}cosha\frac{\partial}{\partial x_{5}}\big{\}}\] _and_ \[rangeG_{*}=span\big{\{}G_{*}X=\pi^{2a}(cosh^{2}a+sinh^{2}a,0,0,0),G_{*}Y=\pi^{2a}(0,0,sinh^{2}a+cosh^{2}a,0)\big{\}};\] _hence \(G\) is a CPSRM from a Riemannian manifold \(L\) to a Kahler manifold \(K\) with rank \(G=2\), \(\lambda=\pi^{a}(cosh^{2}a+sinh^{2}a)^{1/2}\) and slant function \(\Theta=t\)._ **Lemma 4.1**.: _Let \(G\) be a conformal Riemannian map from a Riemannian manifold \((L,g_{L})\) to a Kahler manifold \((K,\varphi,g_{K})\). Then, \(G\) is a CPSRM if and only if there exists a constant \(\delta\in[-1,0]\) such that_ \[\rho^{2}G_{*}(Z)=\delta G_{*}(Z)\] _for \(Z\in\Gamma(kerG_{*})^{\perp}\). If \(G\) is a conformal pointwise slant Riemannian map, then \(\delta=-cos^{2}\Theta\)._ By using (4.1) and Lemma 4.1, we have the following lemma. **Lemma 4.2**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kahler manifold \((K,\varphi,g_{K})\) with slant function \(\Theta\). Then, we have_ \[g_{K}(\rho G_{*}(Y),\rho G_{*}(Z))=\lambda^{2}cos^{2}\Theta g_{L}(Y,Z) \tag{4.3}\] \[g_{K}(\varpi G_{*}(Y),\varpi G_{*}(Z))=\lambda^{2}sin^{2}\Theta g_{L}(Y,Z) \tag{4.4}\] _for any \(Y,Z\in\Gamma((kerG_{*})^{\perp})\)._ **Theorem 4.2**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kahler manifold \((K,\varphi,g_{K})\). Then any two of the following assertions imply the third one_ * \(rangeG_{*}\) _is integrable._ * \(g_{K}(\nabla_{Z}^{G\perp}\varpi\rho G_{*}Y-\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z,P)=g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z-\nabla_{Z}^{G\perp}\varpi G_{*}Y,\mathcal{E}P),\) * \(g_{K}((\nabla G_{*})^{\perp}(Y,{}^{*}\!G_{*}\mathcal{D}P),\varpi G_{*}Z)=g_{K}((\nabla G_{*})^{\perp}(Z,{}^{*}\!G_{*}\mathcal{D}P),\varpi G_{*}Y),\) _where \(Y,Z\in\Gamma((kerG_{*})^{\perp})\), \(P\in\Gamma((rangeG_{*})^{\perp})\) and \({}^{*}G_{*}\) is the adjoint map of \(G_{*}\)._ Proof.: Let \(Y,Z\in\Gamma((kerG_{*})^{\perp})\), \(P\in\Gamma((rangeG_{*})^{\perp})\), and let \(K\) be a Kahler manifold; then we have \[g_{K}([G_{*}Y,G_{*}Z],P)=g_{K}(\nabla_{Y}^{K}\varphi G_{*}Z,\varphi P)-g_{K}(\nabla_{Z}^{K}\varphi G_{*}Y,\varphi P).
**Theorem 4.2**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\). Then any two of the following assertions imply the third one:_ * \(rangeG_{*}\) _is integrable;_ * \(g_{K}(\nabla_{Z}^{G\perp}\varpi\rho G_{*}Y-\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z,P)=g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z-\nabla_{Z}^{G\perp}\varpi G_{*}Y,\mathcal{E}P);\) * \(g_{K}((\nabla G_{*})(Y,{}^{*}G_{*}\mathcal{D}P)^{\perp},\varpi G_{*}Z)=g_{K}((\nabla G_{*})(Z,{}^{*}G_{*}\mathcal{D}P)^{\perp},\varpi G_{*}Y),\) _where \(Y,Z\in\Gamma((kerG_{*})^{\perp})\), \(P\in\Gamma((rangeG_{*})^{\perp})\) and \({}^{*}G_{*}\) is the adjoint map of \(G_{*}\)._ Proof.: Let \(Y,Z\in\Gamma((kerG_{*})^{\perp})\), \(P\in\Gamma((rangeG_{*})^{\perp})\), and let \(K\) be a Kähler manifold; then we have \[g_{K}([G_{*}Y,G_{*}Z],P)=g_{K}(\nabla_{Y}^{G}\varphi G_{*}Z,\varphi P)-g_{K}(\nabla_{Z}^{G}\varphi G_{*}Y,\varphi P). \tag{4.5}\] Using (4.1) in (4.5), we get \[\begin{split}g_{K}([G_{*}Y,G_{*}Z],P)&=g_{K}(\nabla_{Y}^{G}\rho G_{*}Z,\varphi P)+g_{K}(\nabla_{Y}^{G}\varpi G_{*}Z,\varphi P)\\ &-g_{K}(\nabla_{Z}^{G}\rho G_{*}Y,\varphi P)-g_{K}(\nabla_{Z}^{G}\varpi G_{*}Y,\varphi P);\end{split} \tag{4.6}\] again, using (4.1) and Lemma 4.1 in (4.6), we have \[\begin{split}\sin^{2}\Theta\,g_{K}([G_{*}Y,G_{*}Z],P)&=-g_{K}(\sin 2\Theta\,Y(\Theta)G_{*}Z,P)+g_{K}(\sin 2\Theta\,Z(\Theta)G_{*}Y,P)\\ &-g_{K}(\nabla_{Y}^{G}\varpi\rho G_{*}Z,P)+g_{K}(\nabla_{Z}^{G}\varpi\rho G_{*}Y,P)\\ &+g_{K}(\nabla_{Y}^{G}\varpi G_{*}Z,\mathcal{D}P)-g_{K}(\nabla_{Z}^{G}\varpi G_{*}Y,\mathcal{D}P)\\ &+g_{K}(\nabla_{Y}^{G}\varpi G_{*}Z,\mathcal{E}P)-g_{K}(\nabla_{Z}^{G}\varpi G_{*}Y,\mathcal{E}P);\end{split}\] applying (2.10) in the above equation and simplifying, we get \[\begin{split}\sin^{2}\Theta\,g_{K}([G_{*}Y,G_{*}Z],P)&=g_{K}(-\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z+\nabla_{Z}^{G\perp}\varpi\rho G_{*}Y,P)\\ &+g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z-\nabla_{Z}^{G\perp}\varpi G_{*}Y,\mathcal{E}P)\\ &-g_{K}(\varpi G_{*}Z,\nabla_{Y}^{K}\mathcal{D}P)+g_{K}(\varpi G_{*}Y,\nabla_{Z}^{K}\mathcal{D}P).\end{split} \tag{4.7}\] Let \({}^{*}G_{*}\) be the adjoint map of \(G_{*}\); using (2.4), (2.14) and (2.15), we get \[\begin{split}\sin^{2}\Theta\,g_{K}([G_{*}Y,G_{*}Z],P)&=g_{K}(-\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z+\nabla_{Z}^{G\perp}\varpi\rho G_{*}Y,P)\\ &+g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z-\nabla_{Z}^{G\perp}\varpi G_{*}Y,\mathcal{E}P)\\ &-g_{K}(\varpi G_{*}Z,G_{*}(\nabla_{Y}^{L}{}^{*}G_{*}\mathcal{D}P)+Y(\ln\lambda)\mathcal{D}P+({}^{*}G_{*}\mathcal{D}P)(\ln\lambda)G_{*}Y\\ &-g_{L}(Y,{}^{*}G_{*}\mathcal{D}P)G_{*}(grad(\ln\lambda))+(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}\mathcal{D}P))\\ &+g_{K}(\varpi G_{*}Y,G_{*}(\nabla_{Z}^{L}{}^{*}G_{*}\mathcal{D}P)+Z(\ln\lambda)\mathcal{D}P+({}^{*}G_{*}\mathcal{D}P)(\ln\lambda)G_{*}Z\\ &-g_{L}(Z,{}^{*}G_{*}\mathcal{D}P)G_{*}(grad(\ln\lambda))+(\nabla G_{*})^{\perp}(Z,{}^{*}G_{*}\mathcal{D}P))\\ &=g_{K}(-\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z+\nabla_{Z}^{G\perp}\varpi\rho G_{*}Y,P)\\ &+g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z-\nabla_{Z}^{G\perp}\varpi G_{*}Y,\mathcal{E}P)\\ &-g_{K}((\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}\mathcal{D}P),\varpi G_{*}Z)\\ &+g_{K}((\nabla G_{*})^{\perp}(Z,{}^{*}G_{*}\mathcal{D}P),\varpi G_{*}Y).\end{split} \tag{4.8}\] Assume that assertions \((i)\) and \((ii)\) are true; applying them in (4.8), we get \((iii)\). If assertions \((ii)\) and \((iii)\) are true, then from (4.8) we have \((i)\). Further, if assertions \((i)\) and \((iii)\) are true, then using them in (4.8), we get \((ii)\). **Theorem 4.3**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\). Then any two of the following assertions imply the third one:_ 1. \((rangeG_{*})^{\perp}\) _is integrable;_ 2. \((rangeG_{*})^{\perp}\) _defines a totally geodesic foliation;_ 3.
\(g_{K}(\nabla_{P}^{K}\varpi\rho G_{*}Y,Q)-g_{K}(\nabla_{P}^{K}\varpi G_{*}Y,\mathcal{E}Q)=g_{K}(\nabla_{Q}^{K}\varpi\rho G_{*}Y,P)-g_{K}(\nabla_{Q}^{K}\varpi G_{*}Y,\mathcal{E}P),\) _where \(P,Q\in\Gamma((rangeG_{*})^{\perp})\) and \(Y\in\Gamma((kerG_{*})^{\perp})\)._ Proof.: Let \(P,Q\in\Gamma((rangeG_{*})^{\perp})\); then for any \(Y\in\Gamma((kerG_{*})^{\perp})\), we have \[g_{K}([P,Q],G_{*}Y)=-g_{K}(\nabla_{P}^{K}G_{*}Y,Q)+g_{K}(\nabla_{Q}^{K}G_{*}Y,P);\] since \(K\) is a Kähler manifold, from the above equation we have \[g_{K}([P,Q],G_{*}Y)=-g_{K}(\nabla_{P}^{K}\varphi G_{*}Y,\varphi Q)+g_{K}(\nabla_{Q}^{K}\varphi G_{*}Y,\varphi P); \tag{4.9}\] from (4.1), (4.9) and Lemma 4.1, we have \[\begin{split}g_{K}([P,Q],G_{*}Y)&=-g_{K}(\nabla_{P}^{K}(\cos^{2}\Theta)G_{*}Y,Q)+g_{K}(\nabla_{Q}^{K}(\cos^{2}\Theta)G_{*}Y,P)+g_{K}(\nabla_{P}^{K}\varpi\rho G_{*}Y,Q)\\ &-g_{K}(\nabla_{Q}^{K}\varpi\rho G_{*}Y,P)-g_{K}(\nabla_{P}^{K}\varpi G_{*}Y,\varphi Q)+g_{K}(\nabla_{Q}^{K}\varpi G_{*}Y,\varphi P);\end{split}\] using (4.2) in the above equation, we get \[\begin{split}\sin^{2}\Theta\,g_{K}([P,Q],G_{*}Y)&=g_{K}(\sin 2\Theta\,P(\Theta)G_{*}Y,Q)-g_{K}(\sin 2\Theta\,Q(\Theta)G_{*}Y,P)\\ &+g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)-g_{K}(\overset{K}{\nabla}_{Q}\varpi\rho G_{*}Y,P)\\ &-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{D}Q+\mathcal{E}Q)+g_{K}(\overset{K}{\nabla}_{Q}\varpi G_{*}Y,\mathcal{D}P+\mathcal{E}P).\end{split} \tag{4.10}\] If assertions \((i)\) and \((ii)\) are true, from (4.10) we have \((iii)\). Similarly, assuming assertions \((ii)\) and \((iii)\), from (4.10) we get \((i)\); further, if assertions \((i)\) and \((iii)\) are true, from (4.10) we get \((ii)\). Hence, we get the theorem. **Theorem 4.4**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\) and \(\Theta\) be a proper slant function on \(K\). Then \(rangeG_{*}\) defines a totally geodesic foliation on \(K\) if and only if_ \[g_{K}\big{(}(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}\mathcal{D}P),\varpi G_{*}Z\big{)}=g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z,\mathcal{E}P)-g_{K}(\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z,P), \tag{4.11}\] _where \(Y,Z\in\Gamma((kerG_{*})^{\perp}),P\in\Gamma((rangeG_{*})^{\perp})\) and \({}^{*}G_{*}\) is the adjoint map of \(G_{*}\)._ Proof.: Let \(Y,Z\in\Gamma((kerG_{*})^{\perp})\) and \(K\) be a Kähler manifold; then for any \(P\in\Gamma((rangeG_{*})^{\perp})\), we have \[g_{K}(\overset{K}{\nabla}_{Y}^{G}G_{*}Z,P)=g_{K}(\overset{K}{\nabla}_{Y}^{G}\varphi G_{*}Z,\varphi P);\] applying (4.1) in the above equation, we get \[g_{K}(\overset{K}{\nabla}_{Y}^{G}G_{*}Z,P)=g_{K}(\overset{K}{\nabla}_{Y}^{G}\rho G_{*}Z+\overset{K}{\nabla}_{Y}^{G}\varpi G_{*}Z,\varphi P); \tag{4.12}\] again, using (4.1) in (4.12), we have \[g_{K}(\overset{K}{\nabla}_{Y}^{G}G_{*}Z,P)=-g_{K}(\overset{K}{\nabla}_{Y}^{G}\rho^{2}G_{*}Z+\overset{K}{\nabla}_{Y}^{G}\varpi\rho G_{*}Z,P)+g_{K}(\overset{K}{\nabla}_{Y}^{G}\varpi G_{*}Z,\varphi P). \tag{4.13}\]
Using Lemma 4.1 and (2.10) in (4.13), we can write \[\begin{split}\sin^{2}\Theta\,g_{K}(\overset{K}{\nabla}_{Y}^{G}G_{*}Z,P)&=-2\sin 2\Theta\,Y(\Theta)g_{K}(G_{*}Z,P)-g_{K}(\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z,P)\\ &-g_{K}(S_{\varpi G_{*}Z}G_{*}Y,\mathcal{D}P)+g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z,\mathcal{E}P);\end{split}\] using (2.11) in the above equation and further applying (2.15), we get \[\begin{split}\sin^{2}\Theta\,g_{K}(\overset{K}{\nabla}_{Y}^{G}G_{*}Z,P)&=-g_{K}(\nabla_{Y}^{G\perp}\varpi\rho G_{*}Z,P)+g_{K}(\nabla_{Y}^{G\perp}\varpi G_{*}Z,\mathcal{E}P)\\ &-g_{K}\big{(}(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}\mathcal{D}P),\varpi G_{*}Z\big{)}.\end{split} \tag{4.14}\] If \(rangeG_{*}\) defines a totally geodesic foliation on \(K\), from (4.14) we get the required result. **Theorem 4.5**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\) and \(\Theta\) be a proper slant function on \(K\). Then, any two of the following assertions imply the third one:_ 1. \((rangeG_{*})^{\perp}\) _defines a totally geodesic foliation on_ \(K\)_;_ 2. \(G\) _is a horizontally homothetic map;_ 3. \[\begin{split}g_{K}\big{(}(\nabla G_{*})^{\perp}({}^{*}G_{*}\mathcal{D}Q,{}^{*}G_{*}\mathcal{D}P),\mathcal{E}\varpi G_{*}Y\big{)}&=\sin^{2}\Theta\big{(}\lambda^{2}g_{L}(\overset{L}{\nabla}_{{}^{*}G_{*}\mathcal{D}Q}{}^{*}G_{*}\mathcal{D}P,Y)\\ &-g_{K}(S_{\mathcal{E}P}\mathcal{D}Q,G_{*}Y)\big{)}+g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{E}Q)\\ &-g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)-g_{K}([P,\mathcal{D}Q],\varpi G_{*}Y)\end{split}\] (4.15) _where \(P,Q\in\Gamma((rangeG_{*})^{\perp}),Y\in\Gamma((kerG_{*})^{\perp})\) and \({}^{*}G_{*}\) is the adjoint map of \(G_{*}\)._ Proof.: Let \(P,Q\in\Gamma((rangeG_{*})^{\perp}),Y\in\Gamma((kerG_{*})^{\perp})\) and \(K\) be a Kähler manifold; from (4.1) we can write \[g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)=-g_{K}(\overset{K}{\nabla}_{P}\rho G_{*}Y,\varphi Q)-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\varphi Q); \tag{4.16}\] using (4.1) and (4.2) in (4.16), we get \[g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)=g_{K}(\overset{K}{\nabla}_{P}\rho^{2}G_{*}Y,Q)+g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)+g_{K}(\overset{K}{\nabla}_{P}\mathcal{D}Q,\varpi G_{*}Y)-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{E}Q);\] applying Lemma 4.1 in the above equation and simplifying, we get \[\begin{split}g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)&=\cos^{2}\Theta\,g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)+g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)+g_{K}(\overset{K}{\nabla}_{\mathcal{D}Q}\mathcal{E}P,\varphi\varpi G_{*}Y)\\ &+g_{K}(\overset{K}{\nabla}_{\mathcal{D}Q}\mathcal{D}P,\varphi\varpi G_{*}Y)+g_{K}([P,\mathcal{D}Q],\varpi G_{*}Y)-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{E}Q).\end{split} \tag{4.17}\] Let \({}^{*}G_{*}\) be the adjoint map of \(G_{*}\); using (2.4), (2.10), (2.14) and (2.15) in (4.17), we get \[\begin{split}\sin^{2}\Theta\,g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)&=g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)-g_{K}(S_{\mathcal{E}P}\mathcal{D}Q,\varphi\varpi G_{*}Y)+g_{K}(\nabla_{\mathcal{D}Q}^{\perp}\mathcal{E}P,\varphi\varpi G_{*}Y)\\ &+g_{K}\big{(}G_{*}(\overset{L}{\nabla}_{{}^{*}G_{*}\mathcal{D}Q}{}^{*}G_{*}\mathcal{D}P)+{}^{*}G_{*}\mathcal{D}Q(\ln\lambda)\mathcal{D}P+{}^{*}G_{*}\mathcal{D}P(\ln\lambda)\mathcal{D}Q\\ &-g_{L}({}^{*}G_{*}\mathcal{D}Q,{}^{*}G_{*}\mathcal{D}P)G_{*}(grad(\ln\lambda))+(\nabla G_{*})^{\perp}({}^{*}G_{*}\mathcal{D}Q,{}^{*}G_{*}\mathcal{D}P),\varphi\varpi G_{*}Y\big{)}\\ &+g_{K}([P,\mathcal{D}Q],\varpi G_{*}Y)-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{E}Q);\end{split}\]
applying (4.2) and Lemma 4.2 in the above equation and simplifying, we get \[\begin{split}\sin^{2}\Theta\,g_{K}(\overset{K}{\nabla}_{P}Q,G_{*}Y)&=g_{K}(\overset{K}{\nabla}_{P}\varpi\rho G_{*}Y,Q)+g_{K}\big{(}(\nabla G_{*})^{\perp}({}^{*}G_{*}\mathcal{D}Q,{}^{*}G_{*}\mathcal{D}P),\mathcal{E}\varpi G_{*}Y\big{)}\\ &+g_{K}([P,\mathcal{D}Q],\varpi G_{*}Y)-g_{K}(\overset{K}{\nabla}_{P}\varpi G_{*}Y,\mathcal{E}Q)+\sin^{2}\Theta\Big{(}g_{K}(S_{\mathcal{E}P}\mathcal{D}Q,G_{*}Y)\\ &-\lambda^{2}g_{L}(\mathcal{H}\overset{L}{\nabla}_{{}^{*}G_{*}\mathcal{D}Q}{}^{*}G_{*}\mathcal{D}P,Y)-{}^{*}G_{*}\mathcal{D}Q(\ln\lambda)g_{K}(\mathcal{D}P,G_{*}Y)\\ &-{}^{*}G_{*}\mathcal{D}P(\ln\lambda)g_{K}(\mathcal{D}Q,G_{*}Y)+Y(\ln\lambda)g_{K}(\mathcal{D}P,\mathcal{D}Q)\Big{)}.\end{split} \tag{4.18}\] Suppose assertions \((i)\) and \((ii)\) are true; then from (4.18) we have \((iii)\). Similarly, if \((ii)\) and \((iii)\) are true, from (4.18) we get assertion \((i)\). Again, if \((i)\) and \((iii)\) are true, from (4.18) we have \[\sin^{2}\Theta\big{(}-{}^{*}G_{*}\mathcal{D}Q(\ln\lambda)g_{K}(\mathcal{D}P,G_{*}Y)-{}^{*}G_{*}\mathcal{D}P(\ln\lambda)g_{K}(\mathcal{D}Q,G_{*}Y)+Y(\ln\lambda)g_{K}(\mathcal{D}P,\mathcal{D}Q)\big{)}=0;\] since \(\Theta\neq 0\), putting \(P=Q\) we have \[2g_{K}\big{(}\mathcal{D}P,G_{*}(grad(\ln\lambda))\big{)}g_{K}(\mathcal{D}P,G_{*}Y)=g_{K}\big{(}G_{*}Y,G_{*}(grad(\ln\lambda))\big{)}g_{K}(\mathcal{D}P,\mathcal{D}P),\] which is possible only if \(\lambda\) is constant on \((kerG_{*})^{\perp}\); hence assertion \((ii)\) holds. **Theorem 4.6**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\).
Then \(G\) is harmonic if the following conditions are satisfied:_ \[(i)\quad trace\big{\{}S_{\varpi\rho G_{*}(\cdot)}G_{*}(\cdot)+S_{\mathcal{E}\varpi G_{*}(\cdot)}G_{*}(\cdot)+G_{*}\big{(}\nabla^{L}_{(\cdot)}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}(\cdot))\big{)}-(\nabla G_{*})\big{(}\cdot,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}(\cdot))\big{)}^{rangeG_{*}}-\sin 2\Theta(\cdot)(\Theta)G_{*}(\cdot)-\sin^{2}\Theta\,G_{*}(\nabla^{L}_{(\cdot)}(\cdot))\big{\}}=0, \tag{4.19}\] \[(ii)\quad trace\big{\{}\nabla^{G\perp}_{(\cdot)}\varpi\rho G_{*}(\cdot)+(\nabla G_{*})^{\perp}\big{(}\cdot,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}(\cdot))\big{)}+\nabla^{G\perp}_{(\cdot)}\mathcal{E}\varpi G_{*}(\cdot)\big{\}}=0, \tag{4.20}\] \((iii)\) _the fibers are minimal._ Proof.: Let \(Y\in\Gamma((kerG_{*})^{\perp})\) and \(K\) be a Kähler manifold; from (2.4) we have \[(\nabla G_{*})(Y,Y)=-\varphi\nabla^{G}_{Y}\varphi G_{*}Y-G_{*}(\nabla^{L}_{Y}Y); \tag{4.21}\] further, from (4.1), (4.2) and (4.21), we get \[(\nabla G_{*})(Y,Y)=-\nabla^{G}_{Y}\rho^{2}G_{*}Y-\nabla^{G}_{Y}\varpi\rho G_{*}Y-\nabla^{G}_{Y}\mathcal{D}\varpi G_{*}Y-\nabla^{G}_{Y}\mathcal{E}\varpi G_{*}Y-G_{*}(\nabla^{L}_{Y}Y);\] using Lemma 4.1 and (2.11) in the above equation, we have \[\begin{split}(\nabla G_{*})(Y,Y)&=\cos^{2}\Theta\,\nabla^{K}_{Y}G_{*}Y-\sin 2\Theta\,Y(\Theta)G_{*}Y+S_{\varpi\rho G_{*}Y}G_{*}Y-\nabla^{G\perp}_{Y}\varpi\rho G_{*}Y\\ &-\nabla^{G}_{Y}\mathcal{D}\varpi G_{*}Y+S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y-\nabla^{G\perp}_{Y}\mathcal{E}\varpi G_{*}Y-G_{*}(\nabla^{L}_{Y}Y).\end{split} \tag{4.22}\] Let \({}^{*}G_{*}\) be the adjoint map of \(G_{*}\); using (2.4) in (4.22) and separating the components of \(rangeG_{*}\) and \((rangeG_{*})^{\perp}\), we get \[\begin{split}\sin^{2}\Theta(\nabla G_{*})(Y,Y)^{rangeG_{*}}&=S_{\varpi\rho G_{*}Y}G_{*}Y+S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y+G_{*}\big{(}\nabla^{L}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}\\ &-(\nabla G_{*})\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}^{rangeG_{*}}-\sin 2\Theta\,Y(\Theta)G_{*}Y\\ &-\sin^{2}\Theta\,G_{*}(\nabla^{L}_{Y}Y),\end{split} \tag{4.23}\] and \[\sin^{2}\Theta(\nabla G_{*})^{\perp}(Y,Y)=-\nabla^{G\perp}_{Y}\varpi\rho G_{*}Y-(\nabla G_{*})^{\perp}\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}-\nabla^{G\perp}_{Y}\mathcal{E}\varpi G_{*}Y. \tag{4.24}\] Again, let \(W\in\Gamma(kerG_{*})\); from (2.4), we have \[(\nabla G_{*})(W,W)=-G_{*}(\nabla_{W}W)=-G_{*}(\mathcal{T}_{W}W). \tag{4.25}\] Thus, from (4.23), (4.24) and (4.25), we get the required results. **Theorem 4.7**.: _Let \(G\) be a CPSRM from a Riemannian manifold \((L,g_{L})\) to a Kähler manifold \((K,\varphi,g_{K})\) and \(\Theta\) be the slant function on \(K\).
Then we have_ \[\begin{split}\sin^{4}\Theta||(\nabla G_{*})(Y,Y)^{rangeG_{*}}||^{2}&\geq\sin^{4}\Theta||G_{*}(\overset{L}{\nabla}_{Y}Y)||^{2}+||S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y||^{2}+||S_{\varpi\rho G_{*}Y}G_{*}Y||^{2}\\ &+||G_{*}\big{(}\overset{L}{\nabla}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}||^{2}+||(\nabla G_{*})\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}^{rangeG_{*}}||^{2}\\ &+2\Big{\{}\sin^{2}\Theta\Big{(}\lambda^{2}\sin 2\Theta\,Y(\Theta)g_{L}(\nabla_{Y}Y,Y)-g_{L}\big{(}\overset{L}{\nabla}_{Y}Y,\overset{L}{\nabla}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}\\ &-g_{K}\big{(}S_{\varpi\rho G_{*}Y}G_{*}Y+S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y-(\nabla G_{*})(Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y))^{rangeG_{*}},G_{*}(\overset{L}{\nabla}_{Y}Y)\big{)}\Big{)}\\ &-\sin 2\Theta\,Y(\Theta)\Big{(}\lambda^{2}g_{L}(\overset{L}{\nabla}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y),Y)-g_{K}\big{(}S_{\varpi\rho G_{*}Y}G_{*}Y\\ &+S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y-(\nabla G_{*})\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}^{rangeG_{*}},G_{*}Y\big{)}\Big{)}\\ &+g_{K}\big{(}S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y+G_{*}(\overset{L}{\nabla}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y))-(\nabla G_{*})\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}^{rangeG_{*}},S_{\varpi\rho G_{*}Y}G_{*}Y\big{)}\\ &+g_{K}\big{(}S_{\mathcal{E}\varpi G_{*}Y}G_{*}Y-(\nabla G_{*})\big{(}Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)\big{)}^{rangeG_{*}},G_{*}(\overset{L}{\nabla}_{Y}{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y))\big{)}\Big{\}}, \end{split} \tag{4.26}\] _where equality holds when \(\Theta\) is constant; also_ \[\begin{split}\sin^{4}\Theta||(\nabla G_{*})^{\perp}(Y,Y)||^{2}&=||\nabla_{Y}^{G\perp}\varpi\rho G_{*}Y||^{2}+||\nabla_{Y}^{G\perp}\mathcal{E}\varpi G_{*}Y||^{2}+||(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y))||^{2}\\ &+2\Big{\{}g_{K}\big{(}\nabla_{Y}^{G\perp}\mathcal{E}\varpi G_{*}Y+(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y)),\nabla_{Y}^{G\perp}\varpi\rho G_{*}Y\big{)}\\ &+g_{K}\big{(}\nabla_{Y}^{G\perp}\mathcal{E}\varpi G_{*}Y,(\nabla G_{*})^{\perp}(Y,{}^{*}G_{*}(\mathcal{D}\varpi G_{*}Y))\big{)}\Big{\}}.\end{split} \tag{4.27}\] Proof.: Taking the inner product of each of (4.23) and (4.24) with itself and rearranging terms, we get the required results. ## 5 Acknowledgments The first author is thankful to the UGC for providing financial assistance in the form of a MANF scholarship (UGC-Ref. No. 1844/(CSIR-UGC NET JUNE 2019)). The second author is thankful to the DST, Govt. of India, for providing financial support in the form of a DST-FST Level-I grant (sanction number SR/FST/MS-I/2021/104(C)).
2301.02418
Dynamical tides in Jupiter and the role of interior structure
Context. The Juno spacecraft has obtained highly accurate tidal Love numbers, which provide important constraints on the tidal response and interior structure of Jupiter. Aims. In order to exploit these observations, it is necessary to develop an approach for accurately calculating the tidal response of Jupiter for a given interior model and to investigate the role of interior structure. Methods. We directly solve the linearized tidal equations of a compressible, self-gravitating, rotating and viscous fluid body using a pseudo-spectral method. The Coriolis force is fully taken into account but the centrifugal effect is neglected. We can simultaneously obtain the real and imaginary parts of the tidal Love numbers for a given planetary interior model. Results. We calculate the tidal responses for three simple interior models of Jupiter which may contain a compact rigid core or an extended dilute core. All of the models we consider can explain the fractional correction $\Delta k_{22}\approx -4\%$ due to dynamical tides, but all have difficulty reconciling the observed $\Delta k_{42}\approx -11\%$ for the high-degree tidal Love number. We show that the Coriolis force significantly modifies gravity modes in an extended dilute core at the tidal frequency relevant to the Galilean satellites. We demonstrate that a thin stable layer in the outer region, if it exists, would also influence the tidal responses of Jupiter.
Yufeng Lin
2023-01-06T08:33:51Z
http://arxiv.org/abs/2301.02418v2
# Dynamical tides in Jupiter and the role of interior structure ###### Abstract Context:The Juno spacecraft has obtained highly accurate tidal Love numbers, which provide important constraints on the tidal response and interior structure of Jupiter. Aims:In order to exploit these observations, it is necessary to develop an approach to accurately calculate the tidal response of Jupiter for a given interior model and to investigate the role of the interior structure. Methods:We directly solve the linearized tidal equations of a compressible, self-gravitating, rotating, and viscous fluid body using a pseudo-spectral method. The Coriolis force is fully taken into account, but the centrifugal effect is neglected. We are able to simultaneously obtain the real and imaginary parts of the tidal Love numbers for a given planetary interior model. Results:We calculated the tidal responses for three simplified interior models of Jupiter which may contain a compact rigid core or an extended dilute core. All of the models we consider can explain the fractional correction \(\Delta k_{22}\approx-4\%\) due to dynamical tides, but they all have difficulties reconciling the observed \(\Delta k_{42}\approx-11\%\) for the high-degree tidal Love number. We show that the Coriolis force significantly modifies gravity modes in an extended dilute core at the tidal frequency relevant to the Galilean satellites. We demonstrate that the existence of a thin stable layer in the outer region would also influence the tidal responses of Jupiter. ## 1 Introduction Tidal interactions between Jupiter and the Galilean satellites play an important role in the orbital evolution of the system and the internal dynamics of the moons (Lainey et al., 2009). The highly active volcanic eruptions on Io are believed to be due to strong tides raised by Jupiter (Peale et al., 1979). Meanwhile, tides are also raised in Jupiter by its moons, tides that are probably dominated by Io (Gavrilov and Zharkov, 1977). The tidal response of a gaseous body such as Jupiter is conventionally treated as a hydrostatic deformation, which acquires a small phase lag with respect to the tidal forcing due to dissipative processes. This is known as the equilibrium tide. However, the equilibrium tide alone does not suffice to account for the observed strong tidal dissipation in Jupiter (Lainey et al., 2009) or the gravitational perturbations recently measured by the Juno spacecraft (Durante et al., 2020). In fact, the equilibrium tide does not satisfy the momentum equation of tidal flows, and thus corrections have to be made to fully account for the tidal response of Jupiter. The corrections to the equilibrium tide are collectively referred to as the dynamical tide, which usually involves wave-like motions in the planet and depends on the tidal frequency as well as the interior structure (Ogilvie, 2014). The dynamical tide may provide extra channels of tidal dissipation and produce additional gravitational perturbations on top of the hydrostatic deformation. The Juno spacecraft has obtained highly accurate tidal Love numbers, \(k_{lm}\) (Durante et al., 2020), which quantitatively characterize the tidal response of Jupiter to a tidal forcing component represented in spherical harmonics of degree \(l\) and order \(m\).
The tidal Love numbers observed by Juno exhibit non-negligible discrepancies with respect to the theoretically calculated hydrostatic values (Wahl et al., 2020), suggesting that the dynamical tide has to be considered to explain the observed tidal response. Specifically, Juno observations found \(\Delta k_{22}\approx-4\%\) for the dominant tidal component \(l=2\) and \(m=2\) and \(\Delta k_{42}\approx-11\%\) for the high-degree tidal component \(l=4\) and \(m=2\), where \(\Delta k_{lm}=(k_{lm}-k_{lm}^{(hs)})/k_{lm}^{(hs)}\) represents the fractional correction to the hydrostatic value \(k_{lm}^{(hs)}\) (Wahl et al., 2020; Idini and Stevenson, 2021, 2022). As the dynamical tides are sensitive to the tidal frequency and the interior structure, the detected gravitational signatures of dynamical tides may provide important constraints on Jupiter's interior (Idini and Stevenson, 2021; Lai, 2021; Idini and Stevenson, 2022; Dewberry and Lai, 2022). Recent studies (Idini and Stevenson, 2021; Lai, 2021) have revealed that the discrepancy in \(k_{22}\) can be mainly attributed to the Coriolis effect on the fundamental modes (\(f\)-modes). More recently, Idini and Stevenson (2022) proposed that resonant locking with a gravity mode in an extended dilute core can explain \(\Delta k_{42}\approx-11\%\). This provides an independent constraint on the existence of a dilute core in Jupiter, which has also been suggested by the Juno measurements of the gravitational moments of Jupiter (Wahl et al., 2017; Militzer et al., 2022). However, the tidal constraint on the existence of a dilute core is uncertain. The calculation of the tidal response in Idini and Stevenson (2022) inadequately treated the rotational (Coriolis) effect, which plays an important role in Jupiter's tidal responses because the tidal frequencies of the Galilean satellites are comparable to the spin frequency of Jupiter. Including the Coriolis force introduces inertial waves in the neutrally buoyant regions (Ogilvie and Lin, 2004; Wu, 2005) as well as mixed gravity waves and inertial waves (gravito-inertial waves) in the stably stratified region (Dintrans et al., 1999; Xu and Lai, 2017). The mechanism proposed by Idini and Stevenson (2022) also struggles to reconcile both the real part (relevant to the gravitational perturbation) and the imaginary part (relevant to the tidal dissipation) of the tidal Love numbers. For this study, we developed a method to directly calculate the tidal response of a fully compressible, self-gravitating, rotating, and viscous fluid body. The Coriolis force is fully taken into account, but the centrifugal force is neglected, which allows us to numerically solve the problem in spherical geometry using a pseudo-spectral method based on spherical harmonic expansions (Ogilvie & Lin, 2004; Lin & Ogilvie, 2017). As we directly solve the tidally forced problem with explicit viscosity, we can simultaneously obtain the real and imaginary parts of the tidal Love number for a given planetary interior model. Our approach is different from that of recent studies on the dynamical tides of Jupiter (Lai, 2021; Idini & Stevenson, 2022; Dewberry & Lai, 2022), which obtained the eigenmodes of the inviscid fluid body first and then calculated the tidal Love number (only the real part) by projecting the tidal force onto each eigenmode. We consider three nominal interior models of Jupiter to investigate the dependence of the tidal response on the tidal frequency and the interior structure.
We focus on the effect of a compact rigid core, an extended dilute core, and a thin stably stratified layer in the outer region on tidal responses. All of the simplified models can explain the observed \(\Delta k_{22}\approx-4\%\), as previous studies have shown. However, these simplified models cannot account for the observed \(\Delta k_{42}\approx-11\%\). Resonances with gravito-inertial modes in an extended dilute core near the tidal frequency of Io can produce non-negligible dynamical corrections to \(k_{42}\), but they are insufficient to explain the Juno observation based on our simplified model. ## 2 Tidal model We consider linear tidal responses of a rotating gaseous planet to a tidal potential component of \(\Psi_{l}^{m}=\mathcal{A}(r/R)^{l}Y_{l}^{m}(\theta,\phi)\mathrm{e}^{-i\omega t}\), where \(\mathcal{A}\) is the tidal amplitude, \(R\) is the radius of the planet, \(Y_{l}^{m}(\theta,\phi)\) represents spherical harmonics, and \(\omega\) is the tidal frequency. The resulting tides of the planet produce an external gravitational potential perturbation \(\Phi^{\prime}=\mathcal{B}(R/r)^{l+1}Y_{l}^{m}(\theta,\phi)\mathrm{e}^{-i\omega t}\) (and generally other spherical harmonic components). The ratio \(K_{l}^{m}(\omega)=\mathcal{B}/\mathcal{A}\) defines the tidal Love number, which depends on the tidal frequency. The tidal Love number \(K_{l}^{m}\) is a complex number because there exists a phase lag between the forcing and the gravitational perturbations due to dissipative processes (Ogilvie, 2014). While the real part \(k_{lm}=\mathrm{Re}[K_{l}^{m}]\) measures the gravitational perturbations in phase with the tidal forcing, the imaginary part \(\mathrm{Im}[K_{l}^{m}]\) quantifies the out-of-phase tidal response and is related to the dissipation rate. The ratio between the real and imaginary parts is related to the tidal quality factor \[Q=\mathrm{sgn}(\omega)\frac{k_{lm}}{\mathrm{Im}[K_{l}^{m}]}, \tag{1}\] where \(\mathrm{sgn}(\omega)=\pm 1\) is the sign function. Because the phase lag is generally very small, that is, \(Q\gg 1\), the magnitude of the imaginary part is typically much smaller than the real part. For this study, we develop an approach to directly and simultaneously calculate the real and imaginary parts of the tidal Love number for a given planetary model.
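As a concrete illustration of Eq. (1), the following minimal sketch (the function name and the numerical values are ours, chosen for illustration only) splits a complex Love number into its real part and the associated tidal quality factor:

```python
import numpy as np

def love_to_q(K_lm: complex, omega: float):
    """Split a complex Love number K_l^m into (k_lm, Q), following Eq. (1)."""
    k_lm = K_lm.real
    # Q = sgn(omega) * k_lm / Im[K_l^m]; since the phase lag is small, |Im| << |Re|.
    Q = np.sign(omega) * k_lm / K_lm.imag
    return k_lm, Q

# Illustrative numbers only: a small negative imaginary part at a retrograde
# (negative) tidal frequency gives Q >> 1, consistent with the text.
print(love_to_q(0.52 - 1e-5j, omega=-1.5))  # -> (0.52, 52000.0)
```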
### Linearized equations For a compressible, self-gravitating, and rotating fluid body which may contain a rigid core of radius \(R_{i}\), linear perturbations to a tidal potential \(\Psi\propto\mathrm{e}^{-i\omega t}\) in the rotating frame are described by the following equations (e.g., Ogilvie & Lin, 2004): \[-i\omega\mathbf{u}^{\prime}=-2\mathbf{\Omega}\mathbf{\times}\mathbf{u}^{\prime}-\frac{1}{\rho_{0}}\mathbf{\nabla}P^{\prime}+\frac{\rho^{\prime}}{\rho_{0}^{2}}\mathbf{\nabla}P_{0}-\mathbf{\nabla}\Phi^{\prime}-\mathbf{\nabla}\Psi+\mathbf{f}_{\nu}, \tag{2}\] \[-i\omega\rho^{\prime}+\mathbf{\nabla}\mathbf{\cdot}(\rho_{0}\mathbf{u}^{\prime})=0, \tag{3}\] \[-i\omega\left(\frac{P^{\prime}}{\Gamma P_{0}}-\frac{\rho^{\prime}}{\rho_{0}}\right)+\mathbf{u}^{\prime}\cdot\left(\frac{1}{\Gamma}\nabla\ln P_{0}-\nabla\ln\rho_{0}\right)=0, \tag{4}\] \[\nabla^{2}\Phi^{\prime}=4\pi G\rho^{\prime}, \tag{5}\] where \(\mathbf{u}\) is the velocity, \(\mathbf{\Omega}\) the rotation rate, \(\rho\) the density, \(P\) the pressure, \(\Gamma\) the adiabatic index, and \(G\) the gravitational constant. In the above equations, the subscript \({}_{0}\) denotes physical quantities in the hydrostatic state (without tidal potential) and the primed notations represent Eulerian perturbations induced by the tidal forcing. In the momentum equation (2), we have explicitly included a viscous force \(\mathbf{f}_{\nu}\) defined as \[\mathbf{f}_{\nu}=\frac{1}{\rho_{0}}\mathbf{\nabla}\cdot(2\mu\mathbf{S}), \tag{6}\] where \(\mu\) is the dynamic shear viscosity (we neglected the bulk viscosity) and \(\mathbf{S}\) is the strain-rate tensor: \[\mathbf{S}=\frac{1}{2}\left[\mathbf{\nabla}\mathbf{u}^{\prime}+(\mathbf{\nabla}\mathbf{u}^{\prime})^{T}\right]-\frac{1}{3}(\mathbf{\nabla}\cdot\mathbf{u}^{\prime})\mathbf{I}. \tag{7}\] We have included the viscous force in the momentum equation, but we neglected the viscous heating in the energy equation; that is, the density and pressure perturbations were treated as adiabatic. For this study, we have fully taken the Coriolis force into account, but neglected the centrifugal distortion for numerical convenience. The centrifugal effect can be measured by \(\epsilon=\Omega/\omega_{dyn}\), that is, the ratio between the spin frequency \(\Omega\) and the dynamical frequency \(\omega_{dyn}=(GM/R^{3})^{1/2}\). This ratio is not particularly small for Jupiter (\(\epsilon=0.288\)). Indeed, the centrifugal distortion of Jupiter has non-negligible contributions to the total Love numbers \(k_{4m}\), especially the high-degree Love number \(k_{42}\), because the tidal response at \(l=m=2\) can produce a gravitational perturbation at \(l=4\) and \(m=2\) in an oblate figure (Idini & Stevenson, 2022). For the hydrostatic \(k_{42}^{(hs)}\) of Jupiter due to Io, 93% of the total value is actually contributed by the centrifugal coupling with \(k_{22}\) and only the remaining 7% is produced by the tidal forcing at \(l=4\) and \(m=2\) (Wahl et al., 2020; Idini & Stevenson, 2022). In this paper we do not aim to directly fit the \(k_{lm}\) observed by Juno, but rather focus on the fractional corrections \(\Delta k_{lm}\) produced by the dynamical tides. In terms of the fractional correction \(\Delta k_{lm}\), the centrifugal contribution to \(\Delta k_{22}\) can be neglected at leading order (Lai, 2021). However, the centrifugal contribution to \(\Delta k_{42}\) cannot be neglected even at leading order because \(k_{42}^{(hs)}\) is mostly contributed by the centrifugal coupling with \(k_{22}\). This complicates the comparison between the calculated \(\Delta k_{42}\) in a spherical figure and the observation. Nevertheless, the calculated \(\Delta k_{42}\) in a spherical figure can be multiplied by the factor 0.07 to account for Jupiter's centrifugal coupling effect for qualitative comparisons with the observation (Idini & Stevenson, 2022). Such a comparison assumes that the tidally excited internal modes are not significantly modified by the centrifugal deformation. By neglecting the centrifugal deformation, the unperturbed basic state is spherically symmetric, that is to say, it depends on the radius \(r\) only. Given the density \(\rho_{0}(r)\) and pressure \(P_{0}(r)\) profiles of the unperturbed state, the radial gravitational acceleration (inward) \(g(r)\) and the Brunt-Väisälä frequency \(N(r)\) are then determined by \[g=\frac{d\Phi_{0}}{dr}=-\frac{1}{\rho_{0}}\frac{dP_{0}}{dr}, \tag{8}\] \[N^{2}=g\left(\frac{1}{\Gamma}\frac{d\ln P_{0}}{dr}-\frac{d\ln\rho_{0}}{dr}\right). \tag{9}\]
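A useful special case of Eq. (9): in an isentropic region, where \(P_{0}\propto\rho_{0}^{\Gamma}\), one has \(\frac{1}{\Gamma}\frac{d\ln P_{0}}{dr}=\frac{d\ln\rho_{0}}{dr}\) and hence \[N^{2}=g\left(\frac{1}{\Gamma}\frac{d\ln P_{0}}{dr}-\frac{d\ln\rho_{0}}{dr}\right)=0,\] so such a region is neutrally buoyant and supports only inertial waves; conversely, the stably stratified layers introduced below correspond to \(N^{2}>0\), that is, to a departure from this proportionality.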
### Numerical method In order to obtain the complex Love numbers, we numerically solved Eqs. (2-5) using a pseudo-spectral method for the prescribed basic states, subject to the relevant boundary conditions. The numerical scheme is based on the method used in previous studies (Ogilvie & Lin, 2004; Lin & Ogilvie, 2017), but we extended the method to solve the full set of linearized equations (2-5) without making a low-frequency approximation (Ogilvie, 2013). By introducing \(h^{\prime}=P^{\prime}/\rho_{0}\) and eliminating the density perturbation \(\rho^{\prime}\), Eqs. (2-5) can be reduced to the following equations: \[\begin{split}-i\omega\rho_{0}\mathbf{u}^{\prime}&=-2\rho_{0}\mathbf{\Omega}\boldsymbol{\times}\mathbf{u}^{\prime}-\boldsymbol{\nabla}(\rho_{0}h^{\prime})+\boldsymbol{g}\nabla^{2}\Phi^{\prime}/(4\pi G)\\ &-\rho_{0}\boldsymbol{\nabla}\Phi^{\prime}-\rho_{0}\boldsymbol{\nabla}\Psi+\boldsymbol{\nabla}\cdot(2\mu\boldsymbol{S}),\end{split} \tag{10}\] \[-i\omega h^{\prime}=-c_{s}^{2}(N^{2}u_{r}^{\prime}/g+\boldsymbol{\nabla}\cdot(\rho_{0}\mathbf{u}^{\prime})/\rho_{0}), \tag{11}\] \[-i\omega\nabla^{2}\Phi^{\prime}=-4\pi G\boldsymbol{\nabla}\cdot(\rho_{0}\mathbf{u}^{\prime}), \tag{12}\] where \(u_{r}^{\prime}\) is the radial velocity perturbation and \(c_{s}^{2}=\Gamma P_{0}/\rho_{0}\) is the square of the adiabatic sound speed. We imposed boundary conditions including the regularity of the gravitational perturbations, zero radial velocity on the rigid inner boundary, and vanishing Lagrangian pressure perturbation at the surface, that is, \(\delta P=P^{\prime}+\frac{u_{r}^{\prime}}{-i\omega}\frac{dP_{0}}{dr}=0\). In terms of \(h^{\prime}\) and \(u_{r}^{\prime}\), the last boundary condition can be written as (Dewberry et al., 2021) \[(-i\omega h^{\prime}-gu_{r}^{\prime})|_{r=R}=0. \tag{13}\] As the viscous force is included, additional boundary conditions are required to complete the boundary value problem. We used the so-called stress-free conditions; in other words, the tangential stresses vanish at both boundaries. For a given tidal potential \(\Psi_{l}^{m}=\mathcal{A}(r/R)^{l}Y_{l}^{m}(\theta,\phi)\mathrm{e}^{-i\omega t}\), the tidal perturbations (including both equilibrium and dynamical tides) \(\mathbf{u}^{\prime}\), \(h^{\prime}\) and \(\Phi^{\prime}\) can be expanded as \[\mathbf{u}^{\prime}=\sum_{n=m}^{L}u_{n}^{m}(r)\boldsymbol{R}_{n}^{m}+\sum_{n=m}^{L}v_{n}^{m}(r)\boldsymbol{S}_{n}^{m}+\sum_{n=m}^{L}w_{n}^{m}(r)\boldsymbol{T}_{n}^{m}, \tag{14}\] \[h^{\prime}=\sum_{n=m}^{L}h_{n}^{m}(r)Y_{n}^{m}(\theta,\phi), \tag{15}\] \[\Phi^{\prime}=\sum_{n=m}^{L}\Phi_{n}^{m}(r)Y_{n}^{m}(\theta,\phi), \tag{16}\] where \(\boldsymbol{R}_{n}^{m}\), \(\boldsymbol{S}_{n}^{m}\), \(\boldsymbol{T}_{n}^{m}\) are vector spherical harmonics \[\boldsymbol{R}_{n}^{m}=Y_{n}^{m}(\theta,\phi)\hat{\boldsymbol{r}},\quad\boldsymbol{S}_{n}^{m}=r\boldsymbol{\nabla}Y_{n}^{m}(\theta,\phi),\quad\boldsymbol{T}_{n}^{m}=r\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{R}_{n}^{m}. \tag{17}\] As the basic state is axisymmetric, the perturbations involve spherical harmonics with the same order \(m\) as the tidal potential \(\Psi_{l}^{m}\), but the Coriolis force couples all spherical harmonics with degree \(n\geq m\). For numerical calculations, we had to truncate at a certain degree \(L\). Substituting the expansions of Eqs. (14-16) into Eqs. (10-12) and projecting onto spherical harmonics, we ended up with a set of ordinary differential equations (ODEs) involving \(u_{n}^{m}(r)\), \(v_{n}^{m}(r)\), \(w_{n}^{m}(r)\), \(h_{n}^{m}(r)\), and \(\Phi_{n}^{m}(r)\).
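A minimal sketch of the radial discretization described in the next paragraph, namely Chebyshev collocation on Gauss-Lobatto nodes (the classic differentiation-matrix construction; the helper name `cheb` is ours):

```python
import numpy as np

def cheb(N: int):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1].

    For a function f sampled on the N+1 nodes x_j = cos(pi*j/N),
    D @ f approximates df/dx with spectral accuracy.
    """
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal from row sums
    return D, x

# Sanity check: the differentiation is exact for polynomials of degree <= N.
D, x = cheb(8)
assert np.allclose(D @ x**3, 3 * x**2)
```

Mapping \([-1,1]\) to \([R_{i},R]\) and inserting the mapped matrix for \(d/dr\) turns each radial ODE into a dense block, while the Coriolis coupling between degrees \(n\) produces the block-tridiagonal structure of the full linear system mentioned below.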
For the radial dependence, we used Chebyshev collocation on \(N_{r}\) Gauss-Lobatto nodes (Rieutord et al., 2001). The boundary conditions were applied by replacing the ODEs with the corresponding boundary conditions on the boundary nodes. The regularity of the gravitational perturbations requires \[r\frac{d\Phi_{n}^{m}}{dr}+(n+1)\Phi_{n}^{m}=0\quad\text{at $r=R$}, \tag{18}\] \[r\frac{d\Phi_{n}^{m}}{dr}-n\Phi_{n}^{m}=0\quad\text{at $r=R_{i}$}. \tag{19}\] The vanishing Lagrangian pressure perturbation at the surface and zero radial velocity at the rigid inner boundary give \[-i\omega h_{n}^{m}=gu_{n}^{m}\quad\text{at $r=R$}, \tag{20}\] \[u_{n}^{m}=0\quad\text{at $r=R_{i}$}. \tag{21}\] The stress-free boundary condition is given as (Ogilvie, 2009) \[u_{n}^{m}+r\frac{dv_{n}^{m}}{dr}-v_{n}^{m}=0,\quad r\frac{dw_{n}^{m}}{dr}-w_{n}^{m}=0 \tag{22}\] at both boundaries. Using the numerical discretization described above, the boundary value problem becomes a linear system involving a large complex block-tridiagonal matrix. The solution of the linear system was obtained using a standard direct solver. We used typical truncations of \(L=200\) and \(N_{r}=100\) for this study. Once the solution of the linear system is obtained numerically, the complex tidal Love number is readily given by \[K_{l}^{m}=\Phi_{l}^{m}(r=R) \tag{23}\] for the tidal potential component \(\Psi_{l}^{m}=\mathcal{A}(r/R)^{l}Y_{l}^{m}(\theta,\phi)\mathrm{e}^{-i\omega t}\) (we simply set \(\mathcal{A}=1\) for the linear tidal response). We note that the solution includes both the equilibrium and dynamical tides. For the real part of the Love numbers, of particular interest is the fractional correction of dynamical tides \[\Delta k_{lm}=(k_{lm}-k_{lm}^{(hs)})/k_{lm}^{(hs)}, \tag{24}\] where \(k_{lm}^{(hs)}\) is the hydrostatic value, calculated by setting \(\omega=0\). As our calculations neglect the centrifugal effect, which significantly influences the high-degree Love number \(k_{42}\), the calculated value of \(\Delta k_{42}\) should be multiplied by the factor 0.07 when compared with the observation, as we discussed in Sec. 2.1. We can also calculate the tidal dissipation rate \(D_{\nu}\) from the velocity perturbations \[D_{\nu}=\int_{V}2\mu S^{2}dV, \tag{25}\] where the integral is taken over the fluid domain. The dissipation rate is related to the imaginary part of the tidal Love number (Ogilvie, 2014) \[D_{\nu}=\frac{(2l+1)R\mathcal{A}^{2}}{8\pi G}\omega\,\mathrm{Im}[K_{l}^{m}], \tag{26}\] which can be used as an independent validation of the numerical code. The above relation is satisfied to a high degree of accuracy for all of the numerical calculations presented in this paper.
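The hydrostatic limit in Eq. (24) also admits a classical closed-form benchmark: for a non-rotating polytrope of index 1, \(k_{2}^{(hs)}=15/\pi^{2}-1\approx 0.52\) (see, e.g., Idini & Stevenson, 2021), against which the \(\omega\to 0\) solution can be checked. A one-line evaluation:

```python
import numpy as np

# Classical hydrostatic k2 of an index-1 polytrope: k2 = 15/pi**2 - 1.
print(15.0 / np.pi**2 - 1.0)  # -> 0.5198...
```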
### Interior models In order to solve Eqs. (10-12), we need to prescribe basic state profiles \(\rho_{0}(r)\), \(g(r)\), and \(N^{2}(r)\) to model Jupiter's interior. Our understanding of Jupiter's interior has been significantly improved by Juno observations (Stevenson, 2020), yet some degree of uncertainty remains. In this study, we do not aim to build a realistic model of Jupiter's interior, but focus on the fractional contributions of dynamical tides to the tidal Love numbers for different possible scenarios of Jupiter's interior. We consider three nominal interior models (Fig. 1) based on a polytrope of index 1, which is a good leading-order approximation for Jupiter (Stevenson, 2020). For all of the models used in this study, the unperturbed density and gravity follow a hydrostatic polytrope of index 1, \[\rho_{0}=\frac{\pi M}{4R^{3}}\frac{\sin kr}{kr}, \tag{27}\] \[g=\frac{GM}{\pi r^{2}}\left[\sin(kr)-kr\cos(kr)\right], \tag{28}\] where \(k=\pi/R\). The first model consists of a small rigid core of radius \(0.25R\) and an isentropic fluid envelope, that is, \(\Gamma=2\) and \(N^{2}=0\) in the fluid region (Fig. 1(a)). The second model assumes an extended dilute core of radius \(0.7R\) and an isentropic envelope (Fig. 1(b)). The dilute core is treated as a stably stratified fluid layer with the Brunt-Väisälä frequency given by \[\frac{N^{2}}{\omega_{dyn}^{2}}=\tilde{N}^{2}\sin\left(\frac{\pi r}{R_{c}}\right), \tag{29}\] where \(\tilde{N}^{2}=0.25\) and \(R_{c}=0.7R\) for this model. As we fixed the density and pressure profiles to those of a polytrope, the stratification was effectively realized by adjusting the adiabatic index (\(\Gamma>2\)) in the dilute core (Lai, 2021). This model is similar to the one used in Idini & Stevenson (2022b), but they adjusted the density profile to model the stable stratification in the dilute core while fixing the adiabatic index \(\Gamma=2\). The third model is based on the model in Fig. 1(a), but we further added a stably stratified layer between \(0.8R\) and \(0.9R\) (Fig. 1(c)), possibly resulting from H-He immiscibility (Debras & Chabrier, 2019; Stevenson et al., 2022). The Brunt-Väisälä frequency in the top stable layer is prescribed (with \(r\) in units of \(R\)) as \[\frac{N^{2}}{\omega_{dyn}^{2}}=\tilde{N}^{2}\frac{1}{[1+\mathrm{e}^{-100(r-0.8)}][1+\mathrm{e}^{100(r-0.9)}]}. \tag{30}\] The degree of stratification of this layer remains uncertain, but it is estimated that typical values of \(N^{2}/\omega_{dyn}^{2}\) would be roughly between 0.1 and 0.8 for Jupiter (Christensen et al., 2020; Gastine & Wicht, 2021). Here we set a moderate value \(\tilde{N}^{2}=0.5\). We note that an interior model with the coexistence of a dilute core and a top stable layer is also possible (Debras & Chabrier, 2019). As this kind of model involves two different stably stratified layers, it would be difficult to isolate the role of the top stable layer on tides, so we considered only the combination of a compact rigid core and a top stable layer for simplicity. In all of these models, we set the total mass \(M\), the radius \(R\), and the spin rate \(\Omega\) such that the ratio \(\epsilon=\Omega/\sqrt{GM/R^{3}}=0.288\), corresponding to the value for Jupiter. Our calculations also require the fluid viscosity, which is difficult to estimate for giant planets. We simply assumed that the dynamic viscosity \(\mu\) is proportional to the background density \(\rho_{0}\), so that the kinematic viscosity \(\nu=\mu/\rho_{0}\) is constant. The viscosity can be measured by the dimensionless number \(Ek=\nu/(\Omega R^{2})\), known as the Ekman number. We set \(Ek=10^{-6}\) for most of the calculations (unless otherwise specified), roughly corresponding to the effective viscosity based on mixing-length theory (Guillot et al., 2004). As we have mentioned, we do not aim to construct a realistic interior model for Jupiter in this study. These simplified models were designed to investigate the effects of a compact rigid core, an extended dilute core, and a top stable layer on the tidal responses of Jupiter.
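For concreteness, a sketch of the basic-state profiles in Eqs. (27)-(30) in nondimensional units (\(R=M=G=1\); function names are ours, and \(N^{2}\) is taken as zero outside the layers where Eqs. (29)-(30) apply):

```python
import numpy as np

R = M = G = 1.0
k = np.pi / R

def rho0(r):
    """Index-1 polytrope density, Eq. (27)."""
    return (np.pi * M / (4 * R**3)) * np.sin(k * r) / (k * r)

def gravity(r):
    """Gravity of the polytrope, Eq. (28); reduces to G*M/R**2 at r = R."""
    return G * M / (np.pi * r**2) * (np.sin(k * r) - k * r * np.cos(k * r))

def N2_dilute(r, N2_amp=0.25, Rc=0.7 * R):
    """Dilute-core stratification, Eq. (29), in units of omega_dyn**2."""
    return np.where(r < Rc, N2_amp * np.sin(np.pi * r / Rc), 0.0)

def N2_layer(r, N2_amp=0.5):
    """Thin outer stable layer between 0.8R and 0.9R, Eq. (30)."""
    return N2_amp / ((1 + np.exp(-100 * (r - 0.8))) * (1 + np.exp(100 * (r - 0.9))))

print(gravity(np.array([R])))  # -> [1.0], i.e., G*M/R**2 at the surface
```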
Nevertheless, the fractional corrections \(\Delta k_{lm}\) and the tidal quality factor \(Q\) for these simplified models can be used to make some qualitative comparisons with the observations (Lai, 2021; Idini & Stevenson, 2021, 2022). ## 3 Results In this paper, we focus on the dominant tidal component \(\Psi_{2}^{2}\) and a high-degree tesseral component \(\Psi_{4}^{2}\), for which non-negligible dynamical corrections have been detected, as we discussed in Sec. 1. Our calculations are limited to the frequency range of \(-2\leq\omega/\Omega\leq-1\), which is relevant to the tidal frequencies of the Galilean moons. The negative tidal frequency means that the tidal forcing is retrograde in the frame corotating with the planet, based on our convention. For the real part of the Love numbers, we show the fractional correction \(\Delta k_{lm}\). In order to make comparisons with the Juno observations, the calculated \(\Delta k_{42}\) was multiplied by 0.07 to compensate for the centrifugal effect, which is neglected in our calculations. Because of the negative tidal frequency, the imaginary part of the Love numbers is also negative in our calculations and is related to the tidal quality factor by \(k_{lm}/Q_{l}=-\mathrm{Im}[K_{l}^{m}]\) according to Eq. (1).
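For orientation, the tidal frequencies marked by the dashed lines in the figures follow from \(\omega=m(\Omega_{orb}-\Omega)\) with \(m=2\); a quick computation from the moons' orbital periods (standard values) reproduces them:

```python
P_spin = 9.925  # Jupiter's rotation period [hours]
P_orb = {"Io": 42.46, "Europa": 85.23, "Ganymede": 171.71, "Callisto": 400.54}

m = 2  # azimuthal order of the dominant tidal component
for moon, P in P_orb.items():
    # omega/Omega = m*(Omega_orb/Omega - 1) = m*(P_spin/P_orb - 1)
    print(f"{moon:9s} omega/Omega = {m * (P_spin / P - 1):+.2f}")
# Io -1.53, Europa -1.77, Ganymede -1.88, Callisto -1.95: all retrograde and
# inside the range -2 <= omega/Omega <= -1 considered in this section.
```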
### Full polytrope model Before presenting results for the interior models in Fig. 1, we first show the tidal response of a full isentropic polytrope, that is to say, one that is neutrally buoyant in the whole fluid sphere. This model serves as a reference for the other models and has been used to investigate the dynamical tides of Jupiter in recent analytical studies (Idini & Stevenson, 2021; Lai, 2021). Fig. 2 shows both the real and imaginary parts of the Love numbers as a function of the tidal frequency for the full polytrope model. We can see that \(\Delta k_{22}\) is negative in the frequency range we considered and varies smoothly with the tidal frequency, except for a burst around \(\omega/\Omega=-1.08\), which corresponds to a resonance with an inertial mode. Away from resonances, our numerical results are consistent with recent theoretical calculations and produce \(\Delta k_{22}\approx-4\%\) at the tidal frequency of Io (Lai, 2021; Idini & Stevenson, 2021). These studies also revealed that the dynamical correction \(\Delta k_{22}\) can be attributed to the Coriolis effect on the \(f\)-modes. Apart from the \(f\)-modes, the rotating sphere of isentropic fluid also supports smooth inertial modes restored by the Coriolis force in the frequency range of \(0<|\omega/\Omega|<2\) (Greenspan, 1968; Lockitch & Friedman, 1999). The burst of \(\Delta k_{22}\) at \(\omega/\Omega=-1.08\) is indeed due to the resonant excitation of the inertial mode shown in Fig. 3(a), but we note that the resonance occurs only in a very narrow frequency range. However, this inertial mode has more significant contributions to \(\Delta k_{42}\). The angular structure of an inertial mode cannot be described by a single spherical harmonic in general (Lockitch & Friedman, 1999), but the density perturbations (and thus the gravitational perturbations) are dominated by the spherical harmonic \(Y_{4}^{2}(\theta,\phi)\) for the resonant inertial mode at \(\omega/\Omega=-1.0836\), as we can see from Fig. 3(a). This suggests a strong coupling between the tidal potential component \(\Psi_{4}^{2}\) and the inertial mode in Fig. 3(a), that is to say, a large tidal overlap as described in Wu (2005b), leading to significant dynamical corrections to \(k_{42}\). The dynamical correction can reach \(\Delta k_{42}\approx-10\%\) (after the centrifugal correction) near the resonance at \(\omega/\Omega=-1.0836\). However, the tidal frequencies of the Galilean satellites are too far away from this resonance. The curve of \(\Delta k_{42}\) also shows a spike around \(\omega/\Omega=-1.51\), corresponding to a narrow resonance with a high-degree inertial mode (\(\rho^{\prime}\) is dominated by \(Y_{6}^{2}(\theta,\phi)\), as shown in Fig. 3(b)). Interestingly, the tidal frequency of Io is close to this resonance, but the dynamical correction caused by this resonant mode is insufficient to account for the observed \(\Delta k_{42}\approx-11\%\). The frequencies of the inertial modes in Fig. 3 are slightly shifted compared to the results of Lockitch & Friedman (1999) for a polytrope of index 1 (see their table 6 and note the different conventions for the sign of frequencies) because they assumed \(\epsilon\to 0\), whereas we set \(\epsilon=0.288\). The imaginary parts of the Love numbers in Fig. 2 show that resonances with inertial modes significantly enhance the tidal dissipation. The enhanced dissipation due to resonant inertial modes in a neutrally buoyant sphere has been demonstrated by Wu (2005b), though using different density profiles. When the tidal frequency is away from resonances, the dissipation rate for the full isentropic polytrope is too small to account for the observed tidal quality factor \(Q\) (Lainey et al., 2009).

### Compact rigid core model We now consider tidal responses for the interior model with a compact rigid core. Basically, the inner region (\(r\leq 0.25R\)) of a whole fluid polytrope becomes solid for this model. Fig. 4 shows the frequency dependence of the Love numbers for the compact rigid core model. We can see that the real parts are largely similar to those of a full polytrope, but the imaginary parts are rather different, showing enhanced tidal dissipation once the rigid core is introduced.

Figure 1: Three nominal models of Jupiter's interior used in this study. The top panel shows the schematic models and the bottom panel shows the density (normalized by the density at the center) and the Brunt-Väisälä frequency (normalized by the dynamical frequency) as a function of the radius. The blue shadow in the bottom panel indicates solid regions. (a) A compact rigid core model; (b) an extended dilute core model; and (c) a compact rigid core and an outer stable layer model.

The rigid core model also supports inertial waves in the fluid envelope, but these waves have some peculiar behaviors due to the singularity in a spherical shell (Stewartson & Rickard, 1969). Smooth inertial modes generally do not exist in a spherical shell, even with uniform density (Rieutord et al., 2001), and localized wave beams spawned from the critical latitudes propagate in the bulk along the characteristics of the inertial wave equations (e.g., Ogilvie, 2009). However, Lin & Ogilvie (2021) recently revealed, using a uniform density model, that resonant tidal responses in a spherical shell correspond to eigenmodes with large-scale flows hidden beneath localized wave beams.

Figure 2: Complex Love number as a function of the tidal frequency for a full isentropic polytrope of index 1. The top panel shows the fractional correction \(\Delta k_{lm}\) of the real part of the Love numbers. The fractional correction \(\Delta k_{42}\) (orange curve in the top panel) was multiplied by 0.07. The bottom panel shows the minus imaginary part \(-\mathrm{Im}[K_{l}^{m}]\), which is equivalent to \(k_{lm}/Q_{l}\). Vertical dashed lines indicate tidal frequencies of the four Galilean moons of Jupiter (from right to left: Io, Europa, Ganymede, and Callisto). The horizontal dashed line in the bottom panel represents the astrometric observation of the frequency-independent \(k_{2}/Q_{2}\) from Lainey et al. (2009).

Figure 3: Density perturbations (left half) and radial velocity perturbations (right half) in the meridional plane to the tidal component \(\Psi_{4}^{2}\) at two resonant frequencies in Fig. 2. Amplitudes were normalized by the maximum absolute values.
Furthermore, it was shown that the hidden large-scale structures basically resemble inertial modes in a full sphere. This is in line with our results for the nonuniform density model in this study. The real parts, \(k_{22}\) and \(k_{42}\), are relevant only to large-scale density perturbations, which are similar to inertial modes in a full sphere, as one can see from Fig. 5. Therefore, the curves of \(\Delta k_{lm}\) for the rigid core model resemble those of a full polytrope, though we note slight shifts of the resonant frequencies due to the presence of a rigid core. As for the full polytrope, the compact rigid core model can produce \(\Delta k_{22}=-4\%\) as observed, but it cannot produce a sufficient dynamical correction in the high-degree Love number \(k_{42}\) near the tidal frequency of Io to account for the observed \(\Delta k_{42}=-11\%\).

Figure 4: As for Fig. 2, but for the interior model with a compact rigid core. The fractional correction \(\Delta k_{42}\) (orange curve in the top panel) was multiplied by 0.07.

Figure 5: Density perturbations (left half) and gravitational perturbations (right half) in the meridional plane to the tidal component \(\Psi_{4}^{2}\) for the interior model with a compact rigid core at (a) \(\omega/\Omega=-1.5092\) (resonance) and (b) \(\omega/\Omega=-1.53\) (nonresonance) with \(Ek=10^{-7}\). Amplitudes were normalized by the maximum absolute values.

On the other hand, the imaginary parts are largely modified by the presence of a small rigid core. We can see that the tidal dissipation is significantly enhanced by the localized wave beams spawned from the critical latitudes, both in and out of resonance. The velocity perturbations in Fig. 5(b) indeed exhibit localized waves propagating in the bulk, which can generate significant viscous dissipation but do not produce significant density and gravitational perturbations. In Fig. 4, we also see that several peaks in the tidal dissipation (bottom panel) do not lead to obvious fluctuations in \(\Delta k_{lm}\) (top panel), corresponding to resonances with higher-degree modes that contribute little to the low-degree (i.e., \(l=2\) and \(l=4\)) gravitational perturbations. In summary, for the compact rigid core model the tidal dissipation is significantly enhanced with respect to the full polytrope case. This is in line with the early work of Ogilvie and Lin (2004), who showed the enhanced tidal dissipation due to inertial waves in the convective envelopes of rotating stars and planets. The averaged dissipation in the tidal frequency range of the Galilean moons gives rise to a tidal quality factor comparable to that observed by Lainey et al. (2009). However, the fractional correction to the real part of the Love number, \(\Delta k_{42}\), is insufficient to explain the observation.
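The critical latitudes invoked above can be made concrete. Plane inertial waves obey the dispersion relation \(\omega=\pm 2\Omega\cos\theta_{k}\), where \(\theta_{k}\) is the angle between the wavevector and the rotation axis (e.g., Greenspan, 1968); such waves therefore exist only for \(|\omega|<2\Omega\), and their energy propagates along rays of fixed inclination. The rays graze the inner core at the critical latitude \[\lambda_{c}=\arcsin\left(\frac{|\omega|}{2\Omega}\right),\] which for Io's tidal frequency, \(|\omega/\Omega|\approx 1.53\), gives \(\lambda_{c}\approx 50^{\circ}\).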
### Dilute core model An extended dilute core, rather than a compact core, has been suggested for Jupiter recently based on the Juno gravitational measurements (Wahl et al., 2017; Militzer et al., 2022). In this subsection, we consider tidal responses for the interior model with an extended dilute core, as shown in Fig. 1(b). The dilute core is treated as a stably stratified layer which supports gravity waves restored by buoyancy. If the Coriolis force is fully taken into account, dynamical tides in the dilute core region would be in the form of gravito-inertial waves (Dintrans et al., 1999; Xu and Lai, 2017). Idini and Stevenson (2022) recently calculated the tidal response of Jupiter with an extended dilute core, but they did not fully consider the Coriolis effect, which turns out to be important, as we subsequently show. Fig. 6 shows the frequency dependence of the Love numbers for the dilute core model. For the tidal component \(\Psi_{2}^{2}\) (blue curves), the dynamical correction \(\Delta k_{22}\) is generally similar to that of the full polytrope, except for the absence of obvious spikes in the dilute core model. However, the imaginary part exhibits several peaks and troughs, suggesting possible resonances with high-degree mixed modes that enhance the tidal dissipation but do not significantly contribute to the \(l=2\) gravitational perturbations. The overall tidal dissipation is also enhanced with respect to the full polytrope due to the excitation of gravito-inertial waves in the dilute core and inertial waves in the convective envelope. The frequency-averaged tidal dissipation tends to be compatible with the observed tidal quality factor, as we can see from Fig. 6. For the tidal component \(\Psi_{4}^{2}\), Fig. 6 also shows results without including the Coriolis force (green curves) for comparison. We note that the fractional correction \(\Delta k_{42}\) is always positive when the Coriolis force is neglected, probably because the pure gravity modes enhance the in-phase gravitational perturbations and thus produce positive dynamical corrections. Nevertheless, we observed distinct resonant responses at certain tidal frequencies in both the real and imaginary parts of the Love number for the non-Coriolis case. For instance, the resonance at \(\omega/\Omega=-1.5193\), which is close to the tidal frequency of Io, corresponds to the first gravity mode of \(l=4\) and \(m=2\), as shown in Fig. 7(a). Indeed, Idini and Stevenson (2022) proposed resonant locking between this gravity mode\({}^{1}\) (referred to as \({}_{4}^{2}g_{1}\)) and the Jupiter-Io orbital evolution to explain the observed \(\Delta k_{42}\) for Jupiter. In Idini and Stevenson (2022), the Coriolis force was neglected for the calculation of gravity modes, but approximate rotational corrections were made to obtain the Love number. However, fully taking the Coriolis force into account significantly alters the tidal responses, as we can see from Fig. 6. The dynamical correction \(\Delta k_{42}\) exhibits several large fluctuations, especially in the frequency range of \(-1.5<\omega/\Omega<-1\). This is due to the mixing of gravity modes and inertial modes in the dilute core, leading to more chances for resonances. The most significant dynamical corrections are produced near the tidal frequency \(\omega/\Omega=-1.2\), which is close to the frequency of the purely inertial mode shown in Fig. 3(a). Of course, the inertial mode is mixed with gravity modes in the dilute core for this model.
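The widened resonance spectrum can be rationalized with a local plane-wave argument. In the idealized case of stratification aligned with the rotation axis, gravito-inertial waves obey \[\omega^{2}=N^{2}\sin^{2}\theta_{k}+4\Omega^{2}\cos^{2}\theta_{k},\] where \(\theta_{k}\) is the angle between the wavevector and the rotation axis, so admissible frequencies fill the whole band between \(\min(N,2\Omega)\) and \(\max(N,2\Omega)\) rather than the pure-gravity range \(|\omega|\leq N\). Although the spherical geometry here is not separable in this simple way, the estimate illustrates why tidal frequencies with \(|\omega|<2\Omega\) can always find nearby mixed modes in the dilute core once the Coriolis force is retained.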
The resonance around \(\omega/\Omega=-1.2\) can produce more than \(-10\%\) dynamical corrections in \(k_{42}\) (after the centrifugal correction), but it is too far away from the tidal frequency of Io. The resonance close to the tidal frequency of Io (also close to the frequency of the pure gravity mode \({}_{4}^{2}g_{1}\)) occurs at \(\omega/\Omega=-1.4448\) when the Coriolis force is considered. Fig. 7(b) shows the spatial structure of this resonant response. The Coriolis effect not only leads to a non-negligible shift in the mode frequency, but also largely modifies the mode structure. The perturbations are in the form of gravito-inertial waves in the dilute core and become pure inertial waves in the neutrally buoyant envelope. Non-negligible dynamical corrections are induced by this resonance at \(\omega/\Omega=-1.4448\), but the corrections are insufficient (after the centrifugal correction) to account for the observed \(\Delta k_{42}=-11\%\). As the resonance is very narrow, we used 200 equally spaced frequency points in the tidal frequency interval of \([-1.45,-1.43]\). The peak amplitude of \(\Delta k_{42}\) in this frequency interval is comparable to the amplitude in the calculations using only 20 frequency points, suggesting that the frequency sampling points are sufficient to capture the resonant peak.

Footnote 1: They used slightly different background density \(\rho_{0}(r)\) and Brunt-Väisälä frequency \(N(r)\) profiles, so the mode frequency is slightly shifted.

Comparing the orange and green curves in the bottom panel of Fig. 6, we can see that the tidal dissipation is increased by about two orders of magnitude when the Coriolis force is included. This suggests that the excitation of pure gravity waves is a less efficient tidal dissipation mechanism (unless resonances take place) based on our linear calculations, though the nonlinear interaction or wave breaking of gravity waves may lead to efficient tidal dissipation (e.g., Barker, 2011; Weinberg et al., 2012). ### Outer stable layer model We finally consider the effect of an outer stable layer, which may exist in Jupiter as a result of H-He immiscibility (Debras and Chabrier, 2019). Fig. 8 shows the Love numbers as a function of the tidal frequency for the interior model (c) in Fig. 1, which includes a compact rigid core and a top stable layer between \(0.8R\) and \(0.9R\). For the tidal responses to \(\Psi_{2}^{2}\), the dynamical correction \(\Delta k_{22}\) is similar to the case without the stable layer, but the presence of the thin stable layer eliminates the spike due to the resonant inertial mode at the tidal frequency around \(\omega/\Omega=-1.08\). The overall tidal dissipation due to \(\Psi_{2}^{2}\) is comparable to the counterpart without the top stable layer (blue curve in the bottom panel of Fig. 4), but the fluctuation amplitudes (the differences between peaks and troughs) are smaller. For the tidal responses to \(\Psi_{4}^{2}\), we also show results for \(Ek=10^{-7}\) (green curves) in Fig. 8 to illustrate the effect of fluid viscosity. One can see that the viscosity has little influence on the real part of the Love number. The tidal dissipation depends weakly on viscosity at peaks and troughs, but the overall dissipation tends to be insensitive to viscosity. Indeed, Ogilvie (2013) has shown that the frequency-averaged dissipation is independent of viscosity.
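The quantity in question is the frequency-integrated measure introduced by Ogilvie (2013), \[\int_{-\infty}^{+\infty}\mathrm{Im}[K_{l}^{m}(\omega)]\,\frac{d\omega}{\omega},\] which depends on the interior structure but not on the (small) viscosity; lowering \(Ek\) therefore mainly reshapes individual peaks and troughs without changing the overall level of dissipation.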
Figure 6: As for Fig. 2, but for the interior model with an extended dilute core. Green lines represent results without including the Coriolis force. The fractional correction \(\Delta k_{42}\) (orange and green curves in the top panel) was multiplied by 0.07.

Figure 7: Density perturbations (left half) and radial velocity perturbations (right half) in the meridional plane to the tidal component \(\Psi_{4}^{2}\) for the interior model with an extended dilute core. (a) Without including the Coriolis force at \(\omega/\Omega=-1.5193\) (resonance); (b) including the Coriolis force at \(\omega/\Omega=-1.4448\) (resonance). Amplitudes were normalized by the maximum absolute values.

We can see large variations in \(\Delta k_{42}\) at the tidal frequency around \(\omega/\Omega=-1.165\), which corresponds to a resonant mode as shown in Fig. 9. This mode is complicated because it involves three different layers for the interior model considered here. The fluid body is primarily neutrally buoyant and supports inertial waves. However, the fluid domain is separated by the thin stable layer, which suppresses radial fluid motions and creates a "barrier" for the communication between inertial waves in the inner and outer regions (see the radial velocity and vorticity perturbations in Fig. 9). In addition, the thin stable layer supports rotationally modified gravity waves. The density perturbations are mainly confined to the stable layer and the outer envelope (\(r>0.8R\)). Despite the complicated velocity and density perturbations, the gravitational perturbations are dominated by the \(l=4\) component with a relatively simple radial dependence. In this regard, this complicated mode is related to the \(l=4\) inertial mode without the stable layer, leading to large dynamical corrections around the tidal frequency \(\omega/\Omega\approx-1.1\), as in Fig. 4. However, the dynamical correction \(\Delta k_{42}\approx-1.3\%\) is negligible after the centrifugal correction at the tidal frequency of Io.

Figure 8: As for Fig. 2, but for the interior model with a small rigid core and a top stably stratified layer. Green lines represent results at the Ekman number \(Ek=10^{-7}\). The fractional correction \(\Delta k_{42}\) (orange and green curves in the top panel) was multiplied by \(0.07\).

Figure 9: Perturbations in the meridional plane to the tidal component \(\Psi_{4}^{2}\) at \(\omega/\Omega=-1.1650\) for the interior model (c) in Fig. 1. (a) Density (left half) and radial velocity (right half) perturbations; (b) gravitational (left half) and vorticity (right half) perturbations. Amplitudes were normalized by the maximum absolute values. The dashed lines denote \(r=0.8R\).

## 4 Conclusions

We have developed a numerical method for calculating the tidal responses of a compressible, self-gravitating, rotating, and viscous fluid body. We have fully taken the Coriolis force into account, but neglected the centrifugal distortion, which allowed us to solve the problem in spherical geometry. We used the pseudo-spectral method based on spherical harmonics in the angular directions and Chebyshev collocation in the radial direction. In contrast to recent studies on Jupiter's dynamical tides (Lai 2021; Idini & Stevenson 2022b; Dewberry & Lai 2022), we directly solved the tidally forced problem and explicitly added the fluid viscosity, which allowed us to simultaneously obtain the real and imaginary parts of the tidal Love numbers for a given planetary interior model.
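As an illustration of the radial discretization named above, the following sketch builds the standard Chebyshev collocation differentiation matrix (a port of the well-known `cheb` routine from Trefethen's Spectral Methods in MATLAB). It is a generic ingredient of such pseudo-spectral solvers, not the actual code used in this study.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the differentiation matrix."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

# sanity check: spectral differentiation of sin(x) reproduces cos(x) to ~1e-13
D, x = cheb(24)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))
```

Spectral accuracy in the radial direction is what makes it feasible to resolve the narrow resonant responses discussed above with a modest number of collocation points.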
In this study, we have considered three simplified interior models (Fig. 1) of Jupiter based on a polytrope of index 1. We have focused on the tidal components \(\Psi_{2}^{2}\) and \(\Psi_{4}^{2}\) in the frequency range of \(-2\leq\omega/\Omega\leq-1\), which is relevant to the tidal frequencies of the Galilean moons. Our numerical results show that the dynamical correction \(\Delta k_{22}\) is generally insensitive to the interior models. All of the models we considered can give rise to the observed \(\Delta k_{22}\approx-4\%\) at the tidal frequency of Io, which is also in line with previous studies (Idini & Stevenson 2021; Lai 2021). The tidal dissipation is significantly enhanced by the presence of a compact rigid core or an extended dilute core with respect to the full polytrope, leading to a tidal quality factor \(Q\) comparable to that observed by Lainey et al. (2009).

For the tidal responses to the \(\Psi_{4}^{2}\) component, none of the models we considered can readily give rise to \(\Delta k_{42}\approx-11\%\) near the tidal frequency of Io. For the interior model with a compact rigid core, significant dynamical corrections are generated at the tidal frequency around \(\omega/\Omega\approx-1.1\) due to the resonance with an inertial mode whose gravitational perturbations are dominated by the spherical harmonics of \(l=4\) and \(m=2\). However, this resonance is too far away from the tidal frequencies of the Galilean moons. For the interior model with an extended dilute core, we demonstrate that the gravity modes in the dilute core can be significantly modified by the Coriolis force, leading to mixed gravito-inertial modes. Resonances with gravito-inertial modes in the dilute core can produce non-negligible dynamical corrections, but they are insufficient to explain the observed \(\Delta k_{42}\approx-11\%\) near the tidal frequency of Io based on our simplified model. We also briefly investigated the effect of a top stable layer on Jupiter's tides. The thin stable layer acts as a "barrier" and tends to confine the density and velocity perturbations mainly to the outer envelope. However, our numerical results show that the top stable layer has little influence on the real part of the tidal Love numbers.

As we have mentioned, we do not aim to construct a realistic interior model of Jupiter in this study. These simplified models were designed to characterize the tidal responses of some possible scenarios of Jupiter's interior. Because the dynamical tides depend strongly on the tidal frequency, the satellite-dependent tidal Love numbers would provide more constraints on the interior of Jupiter (Idini & Stevenson 2022b). In addition, seismology is the most effective approach to determine the interior structure of planets, though the detection of Jupiter's oscillations remains a major challenge (Gaulme et al. 2011). Nevertheless, the numerical scheme we developed in this study can also be used for theoretical calculations of the oscillation modes of giant planets.

There are some caveats, which should be considered in future work. First, we did not consider the centrifugal deformation, in order to solve the problem in spherical geometry. The centrifugal effect plays a significant role in the tidal Love numbers of Jupiter, especially for the high-degree tidal components.
Although we have made centrifugal corrections when the numerical results were qualitatively compared with the observations, both the Coriolis and centrifugal effects should be self-consistently taken into account for quantitative comparisons with high-precision observations in the future. Second, giant planets exhibit differential rotation, which also influences the oscillation modes and thus the tidal responses (Dewberry et al. 2021). Finally, Jupiter has the strongest magnetic field among the planets in the Solar System and mainly consists of electrically conducting fluid (metallic hydrogen), so magnetic effects (Lin & Ogilvie 2018; Wei 2022) should also play a part in the tides of Jupiter.

###### Acknowledgements.

The author would like to thank an anonymous referee for constructive comments and Dali Kong for fruitful discussions. This study was supported by the B-type Strategic Priority Program of the CAS (XDB41000000), the National Natural Science Foundation of China (grant no. 42174215) and the research project on Civil Aerospace Technologies of CNSA (D020308). Numerical calculations were performed on the Taiyi cluster supported by the Center for Computational Science and Engineering of Southern University of Science and Technology.
2303.14403
Centers and invariant straight lines of planar real polynomial vector fields and its configurations
In the paper, we first give the least upper bound formula on the number of centers of planar real polynomial Hamiltonian vector fields. This formula reveals that the greater the number of invariant straight lines of the vector field, the smaller the number of its centers. Then we obtain some rules on the configurations of centers of planar real polynomial Hamiltonian Kolmogorov vector fields when the number of centers is exactly the least upper bound. As an application of these results, we give an affirmative answer to a conjecture on the topological classification of configurations for the cubic Hamiltonian Kolmogorov vector fields with four centers. Moreover, we discuss the relationship between the number of centers of planar real polynomial vector fields and the existence of limit cycles, and prove that cubic real polynomial Kolmogorov vector fields have no limit cycles if the number of their centers reaches the maximum. More precisely, it is shown that the cubic real polynomial Kolmogorov vector field must have an elementary first integral in $\mathbb{R}^2\setminus\{xy=0\}$ if it has four centers, and the number of configurations of its centers is one more than that of the cubic polynomial Hamiltonian Kolmogorov vector fields.
Hongjin He, Changjian Liu, Dongmei Xiao
2023-03-25T08:48:24Z
http://arxiv.org/abs/2303.14403v1
# Centers and invariant straight lines of planar real polynomial vector fields and its configurations

###### Abstract.

In the paper, we first give the least upper bound formula on the number of centers of planar real polynomial Hamiltonian vector fields. This formula reveals that the greater the number of invariant straight lines of the vector field, the smaller the number of its centers. Then we obtain some rules on the configurations of centers of planar real polynomial Hamiltonian Kolmogorov vector fields when the number of centers is exactly the least upper bound. As an application of these results, we give an affirmative answer to a conjecture on the topological classification of configurations for the cubic Hamiltonian Kolmogorov vector fields with four centers. Moreover, we discuss the relationship between the number of centers of planar real polynomial vector fields and the existence of limit cycles, and prove that cubic real polynomial Kolmogorov vector fields have no limit cycles if the number of their centers reaches the maximum. More precisely, it is shown that the cubic real polynomial Kolmogorov vector field must have an elementary first integral in \(\mathbb{R}^{2}\setminus\{xy=0\}\) if it has four centers, and the number of configurations of its centers is one more than that of the cubic polynomial Hamiltonian Kolmogorov vector fields.

Key words and phrases: The number of centers; the least upper bound; configuration; invariant straight lines; real polynomial vector fields

2020 Mathematics Subject Classification: 34C07, 34C25, 37C27, 14H70

(\({}^{\dagger}\)) Corresponding author

## 1. Introduction

Hilbert's 16th problem has two parts, see [29]. The first part mainly asks for the relative positions of the closed and separate branches (_ovals_, briefly) of real algebraic curves of the \(n\)-th order in the real projective plane \(\mathbb{RP}^{2}\) when their number is the maximum \((n-1)(n-2)/2+1\) determined by Harnack. The second part asks for the maximum number and relative positions of limit cycles of planar real polynomial differential equations \[\frac{dx}{dt}=f(x,y),\ \ \frac{dy}{dt}=g(x,y),\ (x,y)\in\mathbb{R}^{2}, \tag{1.1}\] where \(f(x,y)\) and \(g(x,y)\) are polynomials with real coefficients of degree \(n\) and \(m\) in the two real variables \(x\) and \(y\), respectively. A limit cycle of system (1.1) is an isolated closed orbit (an oval-like curve) in \(\mathbb{R}^{2}\). It is well-known that the second part of the sixteenth problem remains unsolved even for planar quadratic differential equations, see [31, 32, 37, 44, 45, 46] and references therein.
A natural question related to both parts is to determine the maximum number of centers of a planar polynomial Hamiltonian vector field \(X\) and the possible configurations of these centers. The first goal of this paper is to answer this question and find the least upper bound formula for the number \(c(X)\) of centers of \(X\), which depends on the degree of \(X\) and the number of its infinite critical points. This improves and generalizes the main theorem in [9]. In particular, the existence of an invariant straight line implies that a pair of infinite critical points of \(X\) exists. Accordingly, this least upper bound formula establishes a connection between the number of centers of a planar polynomial Hamiltonian vector field and the number of its invariant straight lines. This will help us to obtain configurations of the centers of \(X\).

In [39], the authors studied configurations of centers for cubic polynomial Hamiltonian vector fields with two intersecting invariant straight lines, and conjectured that there are only two types of configurations of centers if such a cubic vector field has four centers. In this paper, we consider general polynomial Hamiltonian vector fields with only two intersecting invariant straight lines, that is, polynomials \(H(x,y)\) with real coefficients of degree \(n+1\) which have only two different linear factors. Without loss of generality, we assume that the polynomials can be written as \(H(x,y)=xyF(x,y)\), where \(F(x,y)\) are real polynomials of degree \(n-1\) with \(n-1\geq 2\). Hence, the corresponding Hamiltonian vector fields are \[X_{hk}=\left(-x(F(x,y)+y\frac{\partial F}{\partial y}),y(F(x,y)+x\frac{\partial F}{\partial x})\right),\] which were called _polynomial Hamiltonian Kolmogorov vector fields_ (or _HK-vector fields_ for short) in [39]. The second aim of the paper is to study the possible configurations of centers of \(X_{hk}\) when the number of its centers is exactly the least upper bound. We obtain some rules on configurations of centers of \(X_{hk}\) by index theory and a perturbation technique. In particular, when \(n=3\), using these rules we can prove that the conjecture in [39] is true, that is, the cubic HK-vector fields with four centers have only two different types of configurations of centers. Moreover, we also completely describe the different possible global phase portraits of this vector field in the Poincare disk.
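The factorized form of \(X_{hk}\) follows directly from \(H=xyF\); a short sympy check, with \(F\) left as an arbitrary smooth function, is given below.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')(x, y)     # arbitrary F(x, y); in the paper, a polynomial of degree n-1
H = x*y*F                      # Hamiltonian with the two invariant lines x = 0 and y = 0

f = -sp.diff(H, y)             # first component of the Hamiltonian field, -dH/dy
g = sp.diff(H, x)              # second component, dH/dx

# compare against the claimed factorized form of X_hk; both differences simplify to 0
print(sp.simplify(f - (-x*(F + y*sp.diff(F, y)))))
print(sp.simplify(g - ( y*(F + x*sp.diff(F, x)))))
```

The factorization makes the invariance of the two coordinate axes explicit: the first component vanishes on \(x=0\) and the second on \(y=0\).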
The last purpose of this paper is to discuss whether planar real polynomial system (1.1) has no limit cycles if the number of its centers is the maximum. This problem has been solved for quadratic polynomial vector fields, see [36, 43, 47, 50, 51] and references therein. However, the problem remains unsolved for cubic polynomial vector fields. And the least upper bound on the number of centers is still open for the vector fields associated with system (1.1), \[Y=f(x,y)\frac{\partial}{\partial x}+g(x,y)\frac{\partial}{\partial y}=(f(x,y),g(x,y)),\] where \(f(x,y)\) and \(g(x,y)\) are polynomials with real coefficients of degree \(n\) and \(m\), respectively, if \(\max\{m,n\}\geq 4\), see [22]. Note that a critical point \(p\) of \(Y\) is a real solution \((x_{0},y_{0})\) of \(f(x,y)=0\) and \(g(x,y)=0\), which is also called a _finite critical point of \(Y\)_. We would like to connect the existence of a first integral in \(\mathbb{R}^{2}\setminus\mathcal{B}\) with the maximum number of centers for system (1.1), where the set \(\mathcal{B}\) consists of those orbits whose limit sets contain only critical points of system (1.1). Of course, system (1.1) may have a first integral defined in \(\mathbb{R}^{2}\setminus\mathcal{B}\) even though the number of its centers is not the maximum, for example, when system (1.1) has a global center, see [27, 28].

In this paper, we study the problem for planar cubic polynomial vector fields with two intersecting invariant straight lines. Without loss of generality, cubic polynomial vector fields with two intersecting invariant straight lines can be written as \[Y_{k}=(xP(x,y),yQ(x,y)),\] where \(P(x,y)\) and \(Q(x,y)\) are quadratic real polynomials. We call \(Y_{k}\) a _cubic polynomial Kolmogorov vector field_. It is clear that \(Y_{k}\) has at most four centers. Combining different techniques from algebraic curves, topology and differential equations, we prove that \(Y_{k}\) has no limit cycles if the number of its centers reaches the maximum. More precisely, it is shown that the cubic real polynomial Kolmogorov vector field must have an elementary first integral in \(\mathbb{R}^{2}\setminus\{(x,y):\ xy=0\}\) if it has four centers, and that there are only three different kinds of configurations of centers of \(Y_{k}\) in some equivalent sense. This reveals that non-Hamiltonian cubic polynomial Kolmogorov vector fields with four centers can have one more configuration of centers than Hamiltonian ones.

In this study, we mainly use properties of algebraic curves in the complex projective plane \(\mathbb{CP}^{2}\), the Poincare compactification of planar polynomial vector fields and index theory, and we develop some perturbation techniques and qualitative analytical methods. This paper is organized as follows. For the readers' convenience, in Section 2 we introduce some notations and necessary preliminaries from algebraic curves and vector fields, and provide some known results from the literature. In Section 3, we obtain the least upper bound formula for the number \(c(X)\) of centers of planar polynomial Hamiltonian vector fields \(X\). In Section 4, we investigate the possible configurations of centers of HK-vector fields \(X_{hk}\) when the number of centers is exactly the least upper bound. As an application, in the last section we study the dynamics of the cubic Kolmogorov vector fields \(X_{hk}\) and \(Y_{k}\) when they have four centers, respectively.
The conjecture proposed in [39] is proved, and the difference in the configurations of the four centers between cubic polynomial Hamiltonian and non-Hamiltonian Kolmogorov vector fields is discovered.

## 2. Preliminaries

In this section, we first introduce some notations and concepts from algebraic curves and vector fields, and then review some known results (for details please see the literature [5, 9, 10, 11, 19] and references therein). All of them will be used later. Let us denote the set of all real planar polynomial vector fields \((f(x,y),g(x,y))\) by \(\mathcal{Y}_{n,m}\); without loss of generality, we always assume \(n\geq m\) in this article. Then \[\mathcal{Y}_{n,m}=\left\{Y:Y=(f(x,y),g(x,y)),\ \deg(f)=n,\deg(g)=m,n\geq m,\ (x,y)\in\mathbb{R}^{2}\right\}.\] Notice that \(f(x,y)\) and \(g(x,y)\) can be expanded into sums of homogeneous polynomials as follows: \[f(x,y)=\sum_{i=0}^{n}f_{i}(x,y),\;\;g(x,y)=\sum_{j=0}^{m}g_{j}(x,y), \tag{2.1}\] where \(f_{i}(x,y)\) and \(g_{j}(x,y)\) are the \(i\)th order and \(j\)th order homogeneous parts of \(f(x,y)\) and \(g(x,y)\), respectively, \(i=0,1,\cdots,n\) and \(j=0,1,\cdots,m\).

Using the Poincare compactification of \(Y\), we can calculate the infinite critical points of \(Y\), and obtain that \((x_{0},y_{0})\) is _an infinite critical point of \(Y\)_ if and only if it is _a nonzero real solution_ of the following \(n\)th order homogeneous polynomial equation: \[-yf_{n}(x,y)+xg_{n}(x,y)=0\text{ if }n=m, \tag{2.2}\] or \[-yf_{n}(x,y)=0\text{ if }n>m. \tag{2.3}\] Clearly, the infinite critical points of \(Y\) appear in pairs of diametrically opposite points, see [24].

From the algebraic viewpoint, we usually discuss the common zeros of polynomials \(f(x,y)\) and \(g(x,y)\) in the complex plane \(\mathbb{C}^{2}\) and the complex projective plane \(\mathbb{CP}^{2}\). Any point in \(\mathbb{CP}^{2}\) can be represented by its projective coordinates in three local charts \(W_{i}\), \(i=1,2,3\), \[\mathbb{CP}^{2}=\cup_{i=1}^{3}W_{i},\;W_{i}=\{[x:y:z]:=[x_{1}:x_{2}:x_{3}]\in\mathbb{CP}^{2},\;x_{i}\neq 0\}.\] Thus, we identify each \((x,y)\in\mathbb{C}^{2}\) with \([x:y:1]\in\mathbb{CP}^{2}\), and points at infinity of \(\mathbb{C}^{2}\) on the straight line are of the form \([x:y:0]\in\mathbb{CP}^{2}\) with \(x^{2}+y^{2}\neq 0\). We often use coordinates \((x,y,z)\) instead of homogeneous coordinates \([x:y:z]\) if there is no confusion. Therefore, a common zero \(p=(x_{*},y_{*},0)\) of the polynomials \(f(x,y)\) and \(g(x,y)\) is at infinity of \(\mathbb{C}^{2}\) if and only if \[f_{n}(x_{*},y_{*})=0=g_{m}(x_{*},y_{*}). \tag{2.4}\] Obviously, \((x_{*},y_{*})\) is a nonzero solution of (2.2) or (2.3) if \((x_{*},y_{*})\in\mathbb{R}^{2}\). Hence, a common zero \(p=(x_{*},y_{*},0)\) of the polynomials \(f(x,y)\) and \(g(x,y)\) at infinity of \(\mathbb{C}^{2}\) is an infinite critical point of the vector field \(Y\) if \((x_{*},y_{*})\in\mathbb{R}^{2}\). But an infinite critical point of the vector field \(Y\) need not be a common zero of \(f(x,y)\) and \(g(x,y)\) at infinity of \(\mathbb{C}^{2}\), by (2.2) or (2.3). Note that two real polynomials \(f(x,y)\) and \(g(x,y)\) have no common components in \(\mathbb{R}[x,y]\) if and only if \(f(x,y)\) and \(g(x,y)\) have no common components in \(\mathbb{C}[x,y]\). Hereafter we consider the vector field \(Y\in\mathcal{Y}_{n,m}\) over the complex number domain \(\mathbb{C}\), that is, in the complex plane \(\mathbb{C}^{2}\), for convenience; the original real vector field \(Y\in\mathcal{Y}_{n,m}\) can then be regarded as this complex vector field restricted to \(\mathbb{R}^{2}\).
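As a concrete illustration of equation (2.2), the following sympy sketch computes the infinite critical points of a sample cubic vector field of Kolmogorov type; the field itself is an arbitrary choice made only for this example.

```python
import sympy as sp

x, y = sp.symbols('x y')

# an illustrative cubic field of Kolmogorov type Y = (x*P, y*Q), chosen for the example
f = x*(1 - x**2 - y**2)
g = y*(1 - 2*x**2 - y**2)
n = 3

def top_part(h):
    """The n-th order homogeneous part of the polynomial h(x, y)."""
    p = sp.Poly(sp.expand(h), x, y)
    return sum(c*x**i*y**j for (i, j), c in p.terms() if i + j == n)

# equation (2.2): infinite critical points are the nonzero real zeros of -y*f_n + x*g_n
print(sp.factor(-y*top_part(f) + x*top_part(g)))   # -x**3*y
```

The factor \(-x^{3}y\) shows that the directions \(x=0\) and \(y=0\) give the infinite critical points of this field, i.e., two diametrically opposite pairs on the equator of the Poincare sphere.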
Let \(f(x,y)\in\mathbb{C}[x,y]\) be a polynomial of degree \(n\) in the two variables \(x\) and \(y\) with complex coefficients. The _affine plane curve_ \(f(x,y)\) is the zero set of this polynomial, \[V(f):=\{(x,y)\in\mathbb{C}^{2}:\;f(x,y)=0\}.\] Let \(p=(x_{0},y_{0})\) be a point in \(V(f)\). A natural number \(k\) is called _the multiplicity of the curve \(f(x,y)\) at \(p\)_, denoted by \(k=m_{p}(f)\), if \[f(x+x_{0},y+y_{0})=f_{k}(x,y)+f_{k+1}(x,y)+\cdots+f_{n}(x,y),\ f_{k}(x,y)\not\equiv 0,\] where \(f_{i}(x,y)\) is the \(i\)th order homogeneous polynomial in the two variables \(x\) and \(y\), \(k\leq i\leq n\). Clearly \(k\geq 1\). Then there exist natural numbers \(r_{i}\in\mathbb{N}\) and complex numbers \(a_{i},b_{i}\in\mathbb{C}\) such that \[f_{k}(x,y)=\prod_{\sum r_{i}=k}L_{i}^{r_{i}},\ L_{i}=a_{i}x+b_{i}y,\ L_{i}\neq L_{j}\ \text{for }i\neq j.\] Each line \(L_{i}\) in \(\mathbb{C}^{2}\) is called _a tangent line to \(f(x,y)\) at \(p\)_ and \(r_{i}\) is called _the multiplicity of this tangent \(L_{i}\)_.

Let \(\mathcal{O}_{p}(\mathbb{C}^{2})\) be the ring of rational functions defined at a point \(p\in\mathbb{C}^{2}\), and let \(<f,g>\mathcal{O}_{p}(\mathbb{C}^{2})\) be the ideal generated by two affine plane curves \(f(x,y)\) and \(g(x,y)\) in \(\mathcal{O}_{p}(\mathbb{C}^{2})\). Then _the intersection number of \(f(x,y)\) and \(g(x,y)\)_ at the point \(p\), denoted by \(I(p,f\cap g)\), is defined by \[I(p,f\cap g)=dim_{\mathbb{C}}\frac{\mathcal{O}_{p}(\mathbb{C}^{2})}{<f,g>\mathcal{O}_{p}(\mathbb{C}^{2})}.\] The intersection number is the unique number satisfying certain properties (see [19] for details), and these properties also tell us how to calculate the intersection number of \(f(x,y)\) and \(g(x,y)\) at the point \(p\).

Consider the homogenization \(f^{*}\) of an \(n\)-th order polynomial \(f\in\mathbb{C}[x,y]\), \[f^{*}=z^{n}f(\frac{x}{z},\frac{y}{z})=\sum_{i=0}^{n}z^{n-i}f_{i}(x,y).\] Then \(f\) is regarded as the restriction of the projective plane algebraic curve \(f^{*}\) to the chart \[W_{3}=\{(x,y,1):\ (x,y,z)\in W_{3},z=1\},\] since \(f^{*}(x,y,1)=f(x,y)\). Therefore, for a point \(p=[x_{0}:y_{0}:1]\in\mathbb{CP}^{2}\) and two projective plane algebraic curves \(f^{*}\) and \(g^{*}\), one can define _the intersection number of \(f^{*}\) and \(g^{*}\) at \(p\)_, \(I([x_{0}:y_{0}:1],f^{*}\cap g^{*})\), by \(I((x_{0},y_{0}),f\cap g)\). It is not hard to prove that the intersection number \(I([x_{0}:y_{0}:1],f^{*}\cap g^{*})\) does not depend on the choice of charts.

A well-known result about the intersection number of two projective plane algebraic curves is Bezout's theorem, whose proof can be found in [5, 19].

**Bezout Theorem** _Let \(f\) and \(g\) be projective plane algebraic curves of degree \(n\) and \(m\), respectively. If \(f\) and \(g\) do not have common components, and \(A\) is the set of all common zeros of \(f\) and \(g\) in \(\mathbb{CP}^{2}\), then_ \[\sum_{p\in\mathbb{CP}^{2}}I(p,f\cap g)=\sum_{p\in A}I(p,f\cap g)=nm.\]

We now consider vector fields \(Y\in\mathcal{Y}_{n,m}\) whose components \(f(x,y)\) and \(g(x,y)\) _have no common zeros at infinity_ in \(\mathbb{CP}^{2}\), which is equivalent to the condition that \(f_{n}(x,y)\) and \(g_{m}(x,y)\) have no common components in \(\mathbb{C}^{2}\).
Let us denote the subset consisting of these vector fields by \(\Psi_{n,m}\), that is, \[\Psi_{n,m}=\left\{Y:Y\in\mathcal{Y}_{n,m},\ f_{n}(x,y)\ \text{and}\ g_{m}(x,y)\ \text{have no common components}\right\}.\] It can be checked that \(\mathcal{Y}_{n,m}\backslash\Psi_{n,m}\) is contained in an algebraic hypersurface of \(\mathcal{Y}_{n,m}\), that is, \(\Psi_{n,m}\) _is generic in \(\mathcal{Y}_{n,m}\)_, see [12] for details. Further, we consider vector fields \(Y\in\mathcal{Y}_{n,m}\) such that \(f(x,y)\) and \(g(x,y)\) _have exactly \(nm\) different common zeros_ in \(\mathbb{C}^{2}\), and denote the set consisting of these vector fields by \(G_{n,m}\), \[G_{n,m}=\{Y:\;Y\in\mathcal{Y}_{n,m}\text{ and }\sharp\{(x,y)\in\mathbb{C}^{2}:f(x,y)=g(x,y)=0\}=nm\},\] where \(\sharp\{\cdot\}\) denotes the number of elements of the set \(\{\cdot\}\). Then, by Bezout's theorem, \(G_{n,m}\subset\Psi_{n,m}\). And these common zeros \(p=[x_{0}:y_{0}:1]\) of \(f(x,y)\) and \(g(x,y)\) in \(\mathbb{CP}^{2}\) are all _finite critical points_ of the vector fields \(Y\in G_{n,m}\) if \((x_{0},y_{0})\in\mathbb{R}^{2}\). Thus, they are isolated and elementary, where a critical point \(p\) of a vector field \(Y=(f(x,y),g(x,y))\) is called _elementary_ if the Jacobian matrix of \((f,g)\) with respect to \((x,y)\) at \((x_{0},y_{0})\) has no zero eigenvalues. It is easily proved that \(\mathcal{Y}_{n,m}\backslash G_{n,m}\) is also contained in an algebraic hypersurface of \(\mathcal{Y}_{n,m}\), which implies that \(G_{n,m}\) _is generic in \(\mathcal{Y}_{n,m}\)_ too.

Note that a vector field \(Y\) in \(\mathbb{R}^{2}\) induces two vector fields \(\tilde{Y}_{\pm}\) in the northern hemisphere \(\mathbb{S}^{2}_{+}\) and the southern hemisphere \(\mathbb{S}^{2}_{-}\), respectively, via central projection, by considering the plane \(\mathbb{R}^{2}\) as the tangent space at the north pole of the unit sphere \(\mathbb{S}^{2}\) in \(\mathbb{R}^{3}\), called the _Poincare sphere_. The induced vector field \(\tilde{Y}\) in each hemisphere is analytically conjugate to \(Y\) in \(\mathbb{R}^{2}\), and the equator \(\mathbb{S}^{1}\) of \(\mathbb{S}^{2}\) is in bijective correspondence with the points at infinity of \(\mathbb{R}^{2}\). The global dynamics of \(Y\) in the whole \(\mathbb{R}^{2}\), including its dynamical behavior near infinity, is analytically conjugate to that of \(\tilde{Y}\) in \(\mathbb{S}^{2}_{+}\cup\mathbb{S}^{1}\), which is called the _Poincare disc_. By rescaling the independent time variable, we can extend the induced vector field \(\tilde{Y}\) in \(\mathbb{S}^{2}\setminus\mathbb{S}^{1}\) to an analytical vector field \(P(Y)\) defined on the whole \(\mathbb{S}^{2}\). This is the _Poincare compactification_ of \(Y\) in \(\mathbb{R}^{2}\), where \(P(Y)\) is analytically equivalent to \(Y\), and the analytical expression of \(P(Y)\) can be computed in the six local charts \(U_{i}\) of the differentiable manifold \(\mathbb{S}^{2}\), see [18] for details. Then the Poincare-Hopf theorem tells us that the sum of the indices at the critical points of \(P(Y)\) is equal to the Euler-Poincare characteristic of the compact manifold \(\mathbb{S}^{2}\) if \(P(Y)\) has only isolated critical points. Note that all critical points of \(P(Y)\) on \(\mathbb{S}^{2}\) are isolated if all critical points of \(\tilde{Y}\) in \(\mathbb{S}^{2}_{+}\cup\mathbb{S}^{1}\) are isolated. And the indices of the corresponding critical points of \(P(Y)\) and \(Y\) are the same.
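Ahead of the formal definition recalled below, the index of an isolated critical point can be computed numerically as the topological degree of \(Y/\|Y\|\) along a small circle; a minimal sketch, with hand-picked linear model fields, is:

```python
import numpy as np

def index_at(field, p=(0.0, 0.0), eps=1e-3, n=20001):
    """Topological degree of Y/|Y| along a small circle around the critical point p."""
    t = np.linspace(0.0, 2.0*np.pi, n)
    u, v = field(p[0] + eps*np.cos(t), p[1] + eps*np.sin(t))
    ang = np.unwrap(np.arctan2(v, u))          # continuous angle of the field vector
    return round((ang[-1] - ang[0]) / (2.0*np.pi))

print(index_at(lambda X, Y: (-Y, X)))   # linear center: index +1
print(index_at(lambda X, Y: (X, -Y)))   # linear saddle: index -1
```

The total rotation of the field vector over one loop equals \(2\pi\) times the index, which is why centers report \(+1\) and saddles \(-1\) here.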
We use the same notations as in [9] to denote the sum of indices of all isolated finite critical points (resp. all isolated infinite critical points) of \(Y\) by \(\sum_{f}i\) (resp. \(\sum_{inf}i\)). Similarly, we can define the sums of the absolute values of the indices of all isolated finite critical points (resp. all isolated infinite critical points) by \(\sum_{f}|i|\) and \(\sum_{inf}|i|\). Hence, if \(Y\) has finitely many critical points (including finite critical points and infinite critical points), then by the Poincare-Hopf theorem we have \[2\sum_{f}i+\sum_{inf}i=2. \tag{2.5}\]

Now let us recall the index of a critical point \(p\) of a vector field \(Y\) in \(\mathbb{R}^{2}\). Assume that \(\gamma\) is an oriented simple closed curve which does not pass through critical points of \(Y\), and that there is a unique critical point \(p\) of \(Y\) in the interior surrounded by \(\gamma\). Then the topological degree of the map \(h:\ \gamma\to S^{1}\) (the unit circle), given by \(h(M)=\frac{Y(M)}{\|Y(M)\|}\) for \(\forall M\in\gamma\), is called _the index of the critical point \(p\) of \(Y\)_, denoted by \(i_{Y}(p)\). The index \(i_{Y}(p)\) is an integer, which can be calculated by the Poincare method as follows: given a direction vector \(v\) in \(\mathbb{R}^{2}\), we check whether there exist only finitely many points \(M_{i}\in\gamma\), \(i=1,\cdots,k\), such that the direction of the vector field \(Y\) at the point \(M_{i}\), denoted by \(Y(M_{i})\), is parallel to \(v\). Let \(q_{+}\) (resp. \(q_{-}\)) be the number of points \(M_{i}\) at which the vector \(Y(M)\) passes through the given direction \(v\) in the counterclockwise (resp. clockwise) sense when a point \(M\) on \(\gamma\) moves along \(\gamma\) in the counterclockwise sense. Then the index \(i_{Y}(p)\) of \(p\) is \[i_{Y}(p)=\frac{q_{+}-q_{-}}{2},\] see [11] for details.

There have been some useful estimates of the index \(i_{Y}(p)\). Let us revisit some of them in [9, 11] which are used in this study.

**Lemma 2.1**.: _(Lemma 1.1 in [11]) Let \(p\) be an isolated critical point of a vector field \(Y=(f(x,y),g(x,y))\). Then_ \[|i_{Y}(p)|\leq\min\{m_{p}f,m_{p}g\},\] _where \(m_{p}f\) and \(m_{p}g\) are the multiplicities of the algebraic curves \(f(x,y)\) and \(g(x,y)\) at \(p\), respectively._

Note that the intersection number \(I(p,f\cap g)\) has the property that \(I(p,f\cap g)\geq m_{p}(f)m_{p}(g)\), and the equality holds if and only if \(f(x,y)\) and \(g(x,y)\) have no common tangent lines at \(p\). By Lemma 2.1 we have

**Lemma 2.2**.: _(Lemma 1.3 in [9]) Let \(p\) be an isolated critical point of a vector field \(Y=(f(x,y),g(x,y))\). Then_ \[(i_{Y}(p))^{2}\leq I(p,f\cap g).\]

Note that \(I(p,f\cap g)\geq 1\) if \(p\in f\cap g\). By Bezout's theorem, we can obtain an important estimate for the sum of the absolute values of the indices of all isolated finite critical points.

**Proposition 2.3**.: _(Lemma 1.4 in [9]) Assume that a vector field \(Y\in\mathcal{Y}_{n,m}\). If all finite critical points of \(Y\) are isolated, then_ \[\sum_{f}|i|\leq nm.\]

The next result is another important estimate of the indices of all finite critical points; see the appendix of [11] for the proof.

**Proposition 2.4**.: _Assume that a vector field \(Y\in\mathcal{Y}_{n,m}\).
If all finite critical points and all infinite critical points of \(Y\) are isolated, then_ \[|\sum_{f}i|\leq\min\{n,m\}=m.\]

Lastly, we recall the relationship between the local dynamics of a Hamiltonian vector field \(X\) at an isolated finite critical point and its index, together with an estimate of the maximum number of centers of polynomial vector fields \(Y\in\mathcal{Y}_{n,n}\), as follows.

**Lemma 2.5**.: _(Proposition 2.1 in [9]) Let \(p\) be an isolated finite critical point of a Hamiltonian vector field \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\mathcal{Y}_{n,m}\). Then the index \(i_{X}(p)\leq 1\) of \(X\) at \(p\) characterizes the topological behaviour of the orbits near \(p\), i.e._

1. \(i_{X}(p)=1\) _if and only if the critical point_ \(p\) _is a center._
2. \(i_{X}(p)=1-h\leq 0\) _if and only if the neighbourhood of the critical point_ \(p\) _is composed of exactly_ \(2h\) _hyperbolic sectors, where_ \(h\) _is a positive integer, and a hyperbolic sector is a saddle sector._

**Lemma 2.6**.: _(Theorem A in [10]) Assume that \(C_{m}\) is the maximum number of centers of polynomial vector fields \(Y\in\mathcal{Y}_{n,n}\), where \(n>1\). Then_ \[[\frac{n^{2}+1}{2}]\leq C_{m}\leq\frac{n(n+1)}{2}-1,\] _where \([\cdot]\) denotes the integer part of the number._

## 3. The number of centers of Hamiltonian polynomial vector fields \(X\)

In this section we consider Hamiltonian vector fields \(X\) with polynomial Hamiltonian functions \(H(x,y)\) of degree \(n+1\), where the polynomial \(H_{n+1}(x,y)\) is the \((n+1)\)-th order homogeneous part of \(H(x,y)\). Assume that \(X\in\mathcal{Y}_{n,m}\), that is, \[\deg\left(-\partial H(x,y)/\partial y\right)=n,\ \deg\left(\partial H(x,y)/\partial x\right)=m,\ n\geq m.\] Then \(H_{n+1}(x,y)=ay^{n+1}\) with \(a\neq 0\) if \(n>m\). From (2.2) or (2.3), we know that the linear factors of \(H_{n+1}(x,y)\) determine all infinite critical points of \(X\). Hence, \(X\) has finitely many infinite critical points. Assume that \(H_{n+1}(x,y)\) has \(r\) different real linear factors. Then \(X\) has \(r\) pairs of infinite critical points, where \(r\) is a nonnegative integer. Our main result in this section gives the least upper bound on the number \(c(X)\) of centers of polynomial Hamiltonian vector fields \(X\) with exactly \(2r\) infinite critical points, as follows; it improves and generalizes Theorem 3.1 in [9].

**Theorem 3.1**.: _Let \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\mathcal{Y}_{n,m}\) be a polynomial Hamiltonian vector field, and let \(c(X)\) be the number of centers of \(X\). If the vector field \(X\) has exactly \(2r\) infinite critical points, then_ \[c(X)\leq C_{n,m}=\left\{\begin{array}{rl}&[\frac{n^{2}+1-r}{2}],\quad n=m,\\ &[\frac{nm+1}{2}],\quad n>m,\end{array}\right.\] _where \(r\) is a nonnegative integer. Moreover, this bound \(C_{n,m}\) can be realized._

### The upper bound for \(X\) with common components

Before proving Theorem 3.1, we first prove an auxiliary result. Note that \(H(x,y)\) is an arbitrary polynomial function of degree \(n+1\), so the polynomials \(\partial H/\partial x\) and \(-\partial H/\partial y\) may have common components. The following proposition provides an estimate of \(c(X)\) when \(X\) has non-isolated critical points.
**Proposition 3.2**.: _If the polynomials \(\partial H/\partial x\) and \(-\partial H/\partial y\) have common components, then the number \(c(X)\) of centers of the polynomial Hamiltonian vector field \(X\) satisfies \(c(X)\leq C_{n,m}-1\)._

Proof.: Suppose that the polynomials \(\partial H/\partial x\) and \(-\partial H/\partial y\) have a common factor \(\bar{H}(x,y)\) which is a polynomial of degree \(s\) (\(1\leq s\leq m\)). Then the corresponding Hamiltonian system of \(X\) can be written as \[\begin{split}\frac{dx}{dt}=&-\frac{\partial H}{\partial y}=-\bar{H}(x,y)\bar{f}(x,y),\\ \frac{dy}{dt}=&\frac{\partial H}{\partial x}=\bar{H}(x,y)\bar{g}(x,y),\end{split} \tag{3.1}\] where \(\bar{f}(x,y)\) and \(\bar{g}(x,y)\) are polynomials of degree \(n-s\) and \(m-s\), respectively, and the polynomials \(\bar{f}(x,y)\) and \(\bar{g}(x,y)\) have no common components in \(\mathbb{R}^{2}\). Consider the set \[B=\{(x,y):\ \bar{H}(x,y)=0,(x,y)\in\mathbb{R}^{2}\}\subset\mathbb{R}^{2}.\] Then there are only two possibilities for \(B\): \(B=\emptyset\) or \(B\neq\emptyset\). We now study the number of centers of system (3.1) in the two cases.

Case (i): \(B=\emptyset\). Then either \(\bar{H}>0\) or \(\bar{H}<0\) in \(\mathbb{R}^{2}\). Hence, system (3.1) is orbitally equivalent to the following polynomial system \[\begin{split}\frac{dx}{dt}=&-\bar{f}(x,y)=-\frac{1}{\bar{H}(x,y)}\frac{\partial H}{\partial y},\\ \frac{dy}{dt}=&\bar{g}(x,y)=\frac{1}{\bar{H}(x,y)}\frac{\partial H}{\partial x}\end{split} \tag{3.2}\] by a time scaling. Hence, system (3.1) and system (3.2) have the same number of centers, that is, \(c(X)=c(\bar{X})\), where \(\bar{X}=(-\bar{f}(x,y),\bar{g}(x,y))\).

Case (ii): \(B\neq\emptyset\). Then system (3.1) is orbitally equivalent to system (3.2) in each connected component of \(\mathbb{R}^{2}\setminus B\) by a time scaling. Hence, system (3.1) and system (3.2) have the same number of centers in \(\mathbb{R}^{2}\setminus B\). Now we consider the critical points of system (3.1) in the set \(B\). Suppose \(p_{0}\in B\) is a center of system (3.1); we claim that \(p_{0}\) is a center of system (3.2) too. Indeed, if \(p_{0}\) is a center of system (3.1), then \(p_{0}\) must be an isolated zero of \(\bar{H}(x,y)=0\) in \(\mathbb{R}^{2}\). Thus, there exists a small neighbourhood \(U(p_{0})\) of \(p_{0}\) such that either \(\bar{H}(x,y)>0\) or \(\bar{H}(x,y)<0\) in \(U(p_{0})\setminus\{p_{0}\}\). Hence, system (3.1) is orbitally equivalent to system (3.2) in \(U(p_{0})\setminus\{p_{0}\}\) by a time scaling, which implies that every orbit of system (3.2) in \(U(p_{0})\setminus\{p_{0}\}\) is a closed orbit. By the definition of a center, \(p_{0}\) must be a center of system (3.2). It follows that all centers of system (3.1) are centers of system (3.2). So \[c(X)\leq c(\bar{X}).\]

We now estimate the upper bound of \(c(\bar{X})\). Since the polynomials \(\bar{f}(x,y)\) and \(\bar{g}(x,y)\) do not have common components in \(\mathbb{R}^{2}\), all finite critical points of system (3.2) are isolated.
And the infinite critical points of system (3.2) correspond to the real linear factors of the polynomial \[x\bar{g}_{n-s}-y\bar{f}_{n-s}=\frac{1}{\bar{H}(x,y)}\left(x\frac{\partial H_{n+1}}{\partial x}+y\frac{\partial H_{n+1}}{\partial y}\right)=(n+1)\frac{H_{n+1}}{\bar{H}(x,y)},\;\text{as}\;n=m,\] or \[-y\bar{f}_{n-s}=\frac{1}{\bar{H}(x,y)}\left(y\frac{\partial H_{n+1}}{\partial y}\right)=(n+1)\frac{H_{n+1}}{\bar{H}(x,y)},\;\text{as}\;n>m,\] where \(\bar{f}_{n-s}\) and \(\bar{g}_{n-s}\) are the highest homogeneous parts of the polynomials \(\bar{f}(x,y)\) and \(\bar{g}(x,y)\), respectively. Thus, the infinite critical points of system (3.2) are isolated. By Lemma 2.5 and Proposition 2.4, we have \[c(\bar{X})+\sum_{i_{\bar{X}}(p)\leq 0}i_{\bar{X}}(p)=\sum_{f}i\leq m-s.\] On the other hand, by Proposition 2.3, \[c(\bar{X})-\sum_{i_{\bar{X}}(p)\leq 0}i_{\bar{X}}(p)=\sum_{f}|i|\leq(n-s)(m-s).\] Hence, we have \[c(\bar{X})\leq\frac{1}{2}(m-s+(n-s)(m-s))\leq\frac{1}{2}n(m-1).\] If \(n>m\geq 1\), then \[c(\bar{X})\leq\frac{1}{2}n(m-1)\leq\frac{1}{2}nm-1\leq[\frac{nm+1}{2}]-1.\] If \(n=m\), by Lemma 2.6, we can obtain the better estimate \[c(\bar{X})\leq\frac{1}{2}(n-s+(n-s)(n-s))-1\leq\frac{1}{2}n(n-1)-1.\] Note that \(\frac{1}{2}n(n-1)\leq[\frac{n^{2}+1-r}{2}]\) for all \(0\leq r\leq n+1\). Hence, \[c(X)\leq c(\bar{X})\leq C_{n,m}-1\] for all \(n\geq m\geq 1\).

### The least upper bound for \(X\) with no common components

By Proposition 3.2, we need to prove Theorem 3.1 only in the case where \(\frac{\partial H}{\partial x}\) and \(\frac{\partial H}{\partial y}\) do not have common components. We divide the proof of Theorem 3.1 into two parts. The first part proves that the number \(c(X)\) of centers has the upper bound \(C_{n,m}\). In the second part we prove that the upper bound \(C_{n,m}\) is sharp, that is, there exists a Hamiltonian vector field \(X\in\mathcal{Y}_{n,m}\) which has \(C_{n,m}\) centers.

Proof of the first part of Theorem 3.1.: Since the degrees of the polynomials \(\frac{\partial H}{\partial y}\) and \(\frac{\partial H}{\partial x}\) are \(n\) and \(m\), respectively, we distinguish the two cases \(n=m\) and \(n>m\) to prove that the number \(c(X)\) of centers of polynomial Hamiltonian vector fields \(X\) has the upper bound \(C_{n,m}\).

**Case 1.** When \(n=m\), we consider the Poincare compactification \(P(X)\) of \(X\). We study the index of the infinite critical points of \(X\) in the two local charts \(U_{1}\) and \(U_{2}\), \[U_{1}=\{(x,y,z)\in\mathbb{S}^{2}:\ x>0\},\quad U_{2}=\{(x,y,z)\in\mathbb{S}^{2}:\ y>0\},\] of the Poincare sphere \(\mathbb{S}^{2}\). Since the arguments for studying the index are similar in the two local charts, without loss of generality we assume that all infinite critical points of \(X\) lie on the local chart \(U_{1}\). Thus, the \(r\) pairs of infinite critical points of \(X\) are \(p_{i}=(1,\pm y_{i},0),\ i=1,2,\dots,r\).
We claim that the index \(i_{P(X)}(p_{i})\) at the infinite critical point \(p_{i}\) of the vector field \(P(X)\) satisfies \[i_{P(X)}(p_{i})\geq 1-I_{i}, \tag{3.3}\] where \(I_{i}\) is the intersection number of the polynomials \(-\frac{\partial H}{\partial y}\) and \(\frac{\partial H}{\partial x}\) at \(p_{i}\) in \(\mathbb{CP}^{2}\), that is, \[I_{i}=I(p_{i},f^{*}\cap g^{*})=I\left([1:y_{i}:0],\left(-\frac{\partial H}{\partial y}\right)^{*}\cap\left(\frac{\partial H}{\partial x}\right)^{*}\right).\]

In fact, on the local chart \(U_{1}\), the corresponding differential system of \(P(X)\) has the form \[\begin{split}&\frac{dy}{dt}=-yf^{*}(1,y,z)+g^{*}(1,y,z)=\sum_{i=0}^{n}(n+1-i)z^{i}H_{n+1-i}(1,y),\\ &\frac{dz}{dt}=-zf^{*}(1,y,z)=z\sum_{i=0}^{n}z^{i}\frac{\partial H_{n+1-i}}{\partial y}(1,y)\end{split} \tag{3.4}\] by the Poincare transformation \(x\mapsto\frac{1}{z},y\mapsto\frac{y}{z}\). System (3.4) has the critical points \(p_{i}=(\pm y_{i},0),\ i=1,\cdots,r\), in \(U_{1}\). To estimate the index of \(P(X)\) at \(p_{i}=(y_{i},0)\) by the Poincare method, we choose a circle \(S_{\varepsilon}=\{(y,z):(y-y_{i})^{2}+z^{2}=\varepsilon^{2},0<\varepsilon\ll 1\}\) in \(U_{1}\) such that \(S_{\varepsilon}\) does not pass through any critical points of (3.4). Given a direction vector \(v=(1,0)\), it can be checked that the points on \(S_{\varepsilon}\) at which the direction vector of (3.4) is parallel to \(v\) must satisfy \(zf^{*}(1,y,z)=0\). We then consider a point \(M\) on \(S_{\varepsilon}\) moving counterclockwise and calculate the number \(q_{+}\) (resp. \(q_{-}\)) of points of \(S_{\varepsilon}\cap\{(y,z):zf^{*}(1,y,z)=0\}\) at which the direction vector of (3.4) at \(M\) passes through the given direction \(v\) in the counterclockwise (resp. clockwise) sense, that is, the number of times the direction vector \(P(X)(M)\) passes through the given direction \(v\) along \(S_{\varepsilon}\) in the counterclockwise (resp. clockwise) sense, as follows.

Firstly, we calculate the contribution of the points \(S_{\varepsilon}\cap\{z=0\}=\{(y_{i}-\varepsilon,0),(y_{i}+\varepsilon,0)\}\) to the index at \(p_{i}\). Since \(p_{i}\) is an infinite critical point of (3.4), it follows that \(H_{n+1}(1,y_{i})=0\). Hence, \(H_{n+1}(1,y)\) has the expression \[H_{n+1}(1,y)=a_{l}(y-y_{i})^{l}+a_{l+1}(y-y_{i})^{l+1}+\cdots+a_{n+1}(y-y_{i})^{n+1},\] where \(l\geq 1\) and \(a_{l}\neq 0\). The direction vector of \(P(X)\) at points near \((y_{i}-\varepsilon,0)\) and \((y_{i}+\varepsilon,0)\) has the following approximation: \[P(X)=((n+1)H_{n+1}(1,y)+O(z),\ z(\frac{\partial H_{n+1}}{\partial y}(1,y)+O(z)))\approx((n+1)a_{l}(y-y_{i})^{l},\ la_{l}(y-y_{i})^{l-1}z).\] Hence, this direction vector depends on the sign of \(a_{l}\) and the parity of \(l\). After analyzing \(a_{l}\) and \(l\), we can sketch the four cases of direction vectors \(P(X)\) at points near \((y_{i}-\varepsilon,0)\) and \((y_{i}+\varepsilon,0)\) in the counterclockwise sense for all \(i=1,2,\ldots,r\); see Figure 3.1.

Next, we consider the contribution of the points \(S_{\varepsilon}\cap\{f^{*}(1,y,z)=0\}\) to the index at \(p_{i}\). Let \(q_{+}^{*}\) (resp. \(q_{-}^{*}\)) be the number of points on the curve \(f^{*}(1,y,z)=0\) at which the direction vector \(P(X)(M)\) passes through the direction \(v\) along \(S_{\varepsilon}\) in the counterclockwise (resp. clockwise) sense.
So we have \[q_{+}=2+q_{+}^{*},\quad q_{-}=q_{-}^{*}.\] Following Poincare's definition of the index, the index at the point \(p_{i}\) is \[i_{P(X)}(p_{i})=\frac{q_{+}-q_{-}}{2}=1+\frac{q_{+}^{*}-q_{-}^{*}}{2}.\] Note that the curve \(f^{*}(1,y,z)=0\) and \(S_{\varepsilon}\) have at most \(2m_{p_{i}}f^{*}\) common points when \(0<\varepsilon\ll 1\) (see [3] for details). So \[q_{+}^{*}+q_{-}^{*}\leq 2m_{p_{i}}f^{*},\] where \(m_{p_{i}}f^{*}\) is the multiplicity of the curve \(f^{*}(1,y,z)\) at \(p_{i}\). Therefore, \[|i_{P(X)}(p_{i})-1|\leq\frac{q_{+}^{*}+q_{-}^{*}}{2}\leq m_{p_{i}}f^{*}. \tag{3.5}\] Note that \(x\frac{\partial H_{n+1}}{\partial x}(x,y)+y\frac{\partial H_{n+1}}{\partial y}(x,y)=(n+1)H_{n+1}(x,y)\). It follows that \[\frac{\partial H_{n+1}}{\partial x}(1,y_{i})=\left\{\begin{array}{rl}&-a_{1}y_{i},\quad l=1,\\ &0,\quad l\geq 2.\end{array}\right. \tag{3.6}\] This implies that \(m_{p_{i}}g^{*}\geq 1\) if \(l\geq 2\). Further, by the properties of the intersection number, we have \[m_{p_{i}}f^{*}\leq m_{p_{i}}f^{*}m_{p_{i}}g^{*}\leq I_{i}.\] By (3.5), we obtain \[i_{P(X)}(p_{i})\geq 1-m_{p_{i}}f^{*}\geq 1-I_{i}.\] This shows that (3.3) holds for \(l\geq 2\). On the other hand, if \(l=1\), the Jacobian matrix of (3.4) at \(p_{i}\) is \[\begin{bmatrix}(n+1)a_{1}&*\\ 0&a_{1}\end{bmatrix}.\] Because \(a_{1}\neq 0\), \(p_{i}\) is an elementary node of (3.4). Thus, \(i_{P(X)}(p_{i})=1\), and the inequality (3.3) holds for \(l=1\). Summarizing the above analysis, we conclude that the claim (3.3) is true. Then we have \[\sum_{inf}i=2\sum_{i=1}^{r}i_{P(X)}(p_{i})\geq 2(r-\sum_{i=1}^{r}I_{i}). \tag{3.7}\] By Lemma 2.5, the Poincare-Hopf theorem and (3.7), we have \[c(X)+\sum_{i_{X}(p)\leq 0}i_{X}(p)=\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq 1-r+\sum_{i=1}^{r}I_{i}. \tag{3.8}\] On the other hand, by Lemma 2.5, we have \[c(X)-\sum_{i_{X}(p)\leq 0}i_{X}(p)=\sum_{f}|i|. \tag{3.9}\] Adding the two inequalities (3.8) and (3.9), it follows that \[c(X)\leq\frac{1-r+\sum_{f}|i|+\sum_{i=1}^{r}I_{i}}{2}.\] Note that \[\sum_{f}|i|+\sum_{i=1}^{r}I_{i}\leq\sum_{p\in\frac{\partial H}{\partial y}\cap\frac{\partial H}{\partial x}\subset\mathbb{R}^{2}}I\left(p,\frac{\partial H}{\partial y}\cap\frac{\partial H}{\partial x}\right)+\sum_{i=1}^{r}I_{i}\leq\sum_{p\in\mathbb{CP}^{2}}I\left(p,\frac{\partial H}{\partial y}\cap\frac{\partial H}{\partial x}\right)=n^{2}\] by Bezout's theorem and Lemma 2.2. Therefore, we have \(c(X)\leq\frac{n^{2}+1-r}{2}\). Let \[C_{n,m}=C_{n,n}=\left[\frac{n^{2}+1-r}{2}\right]\text{ if }n=m.\] Since \(c(X)\) is a nonnegative integer, \(c(X)\leq C_{n,m}\) in the case \(n=m\). Hence, we finish the proof of \(c(X)\leq C_{n,m}=\left[\frac{n^{2}+1-r}{2}\right]\) in the case \(n=m\).

**Case 2.** When \(n>m\), \(H(x,y)\) has the form \[H(x,y)=a_{n+1}y^{n+1}+a_{n}y^{n}+\cdots+a_{m+2}y^{m+2}+H_{m+1}(x,y)+\cdots+H_{1}(x,y),\] where \(a_{n+1}\neq 0\). Hence, \(H_{n+1}(x,y)=a_{n+1}y^{n+1}\), which has the unique linear factor \(y\). By the Poincare compactification \(P(X)\) of \(X\), we have the following differential system in the local chart \(U_{1}\): \[\begin{split}\frac{dy}{dt}&=-yf^{*}(1,y,z)+z^{n-m}g^{*}(1,y,z)\\ &=(n+1)a_{n+1}y^{n+1}+\sum_{i=1}^{n}(n+1-i)z^{i}H_{n+1-i}(1,y),\\ \frac{dz}{dt}&=-zf^{*}(1,y,z)=z\sum_{i=0}^{n}z^{i}\frac{\partial H_{n+1-i}}{\partial y}(1,y).\end{split} \tag{3.10}\] System (3.10) has a unique critical point \(p_{0}=(0,0)\), which corresponds to one pair of infinite critical points of \(X\).
We now discuss the estimate of the index of system (3.10) at \(p_{0}\) in two cases: \(g^{*}(1,0,0)=0\) and \(g^{*}(1,0,0)\neq 0\).

Case (2.i): if \(g^{*}(1,0,0)=0\), that is, \(\frac{\partial H_{m+1}}{\partial x}(1,0)=0\), then \(X\notin\Psi_{n,m}\) by the definition of the set \(\Psi_{n,m}\). Using arguments similar to those for the estimate of the index at \(p_{i}\) in Case 1 (\(n=m\)) with \(l\geq 2\) in (3.6), we can obtain the inequality (3.5) in the form \[|i_{P(X)}(p_{0})-1|\leq m_{p_{0}}f^{*}.\] Note that \(g^{*}(1,0,0)=0\) implies that \(m_{p_{0}}g^{*}\geq 1\). Thus we have \[i_{P(X)}(p_{0})\geq 1-m_{p_{0}}f^{*}\geq 1-m_{p_{0}}f^{*}m_{p_{0}}g^{*}\geq 1-I(p_{0},f^{*}\cap g^{*}).\] By the Poincare-Hopf theorem, we have \[\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq I(p_{0},f^{*}\cap g^{*}). \tag{3.11}\] By Bezout's theorem and Lemma 2.2, we have \[\sum_{f}|i|+I(p_{0},f^{*}\cap g^{*})\leq\sum_{p\in f^{*}\cap g^{*}\subset\mathbb{R}^{2}}I(p,f^{*}\cap g^{*})+I(p_{0},f^{*}\cap g^{*})\leq nm. \tag{3.12}\] Adding the two inequalities (3.11) and (3.12), we know that \[2c(X)=\sum_{f}i+\sum_{f}|i|\leq nm\] by Lemma 2.5. Let \[C_{n,m}=[\frac{nm+1}{2}]\text{ if }n>m.\] Hence, \(c(X)\leq\frac{nm}{2}\leq[\frac{nm+1}{2}]=C_{n,m}\) if \(n>m\).

Case (2.ii): if \(g^{*}(1,0,0)\neq 0\), that is, \(\frac{\partial H_{m+1}}{\partial x}(1,0)\neq 0\), then \(X\in\Psi_{n,m}\). Without loss of generality, assume that \(g^{*}(1,0,0)=\frac{\partial H_{m+1}}{\partial x}(1,0)>0\). Let us estimate the index of \(P(X)\) at \(p_{0}\). Taking a circle \(S_{\varepsilon}=\{(y,z):y^{2}+z^{2}=\varepsilon^{2},0<\varepsilon\ll 1\}\) around \(p_{0}\) and a direction vector \(v=(1,0)\) in the local chart \(U_{1}\), we consider the points at which the field vector of (3.10) is parallel to \(v\). These points lie on the curve \(zf^{*}(1,y,z)=0\), that is, \(z=0\) or \(f^{*}(1,y,z)=0\). Since \(g^{*}(1,0,0)=\frac{\partial H_{m+1}}{\partial x}(1,0)>0\), we have \(g^{*}(1,y,z)>0\) inside \(S_{\varepsilon}\) when \(\varepsilon\) is small enough. Then \(dy/dt=z^{n-m}g^{*}(1,y,z)\) at the intersection points of the real algebraic curve \(f^{*}(1,y,z)=0\) and \(S_{\varepsilon}\), which implies that \(dy/dt\) has the same sign as \(z^{n-m}\). Moreover, near the intersection points of \(z=0\) and \(S_{\varepsilon}\), the direction vector of \(P(X)\) on \(S_{\varepsilon}\) can be approximated as \[P(X)\approx((n+1)a_{n+1}y^{n+1},(n+1)a_{n+1}y^{n}z).\] By analyzing the sign of \(a_{n+1}\) and the parity of \(n\) and \(m\), we easily obtain the eight cases of direction vectors \(P(X)\) at the intersection points of \(zf^{*}(1,y,z)=0\) and \(S_{\varepsilon}\) in the counterclockwise sense; see Figure 3.2. Then we have \[i_{P(X)}(p_{0})=\begin{cases}&0,\quad\text{both $n$ and $m$ are odd and $a_{n+1}>0$},\\ &+2,\quad\text{both $n$ and $m$ are odd and $a_{n+1}<0$},\\ &+1,\quad\text{others}.\end{cases} \tag{3.13}\] Thus, \(\sum_{inf}i=2i_{P(X)}(p_{0})\geq 0\). Hence, we have \[c(X)+\sum_{i_{X}(p)\leq 0}i_{X}(p)=\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq 1\] by the Poincare-Hopf theorem. And from Bezout's theorem and Lemma 2.2, it follows that \[c(X)-\sum_{i_{X}(p)\leq 0}i_{X}(p)=\sum_{f}|i|\leq nm.\] Adding the above two inequalities, we obtain \(c(X)\leq\frac{nm+1}{2}\). Note that \(c(X)\) is an integer, so \(c(X)\leq[\frac{nm+1}{2}]=C_{n,m}\) if \(n>m\). Summing up cases (2.i) and (2.ii), we finish the proof of \(c(X)\leq C_{n,m}=[\frac{nm+1}{2}]\) in the case \(n>m\). Hence, the proof of the first part of Theorem 3.1 is complete.
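Formula (3.13) can be checked numerically with a winding-number computation. The sketch below evaluates the index of the chart field (3.10) for the illustrative choice \(H=-y^{4}+x^{2}/2\), so that \(n=3\) and \(m=1\) are both odd and \(a_{4}=-1<0\); the Hamiltonian is an assumption made only for this test.

```python
import numpy as np

def index_at(field, eps=0.5, n=20001):
    """Winding number of the field along a circle of radius eps around the origin."""
    t = np.linspace(0.0, 2.0*np.pi, n)
    u, v = field(eps*np.cos(t), eps*np.sin(t))
    ang = np.unwrap(np.arctan2(v, u))
    return round((ang[-1] - ang[0]) / (2.0*np.pi))

# chart system (3.10) for H = -y**4 + x**2/2:
#   dy/dt = -4*y**4 + z**2,   dz/dt = -4*y**3*z
print(index_at(lambda Y, Z: (-4.0*Y**4 + Z**2, -4.0*Y**3*Z)))   # 2, matching (3.13)
```

The answer is consistent with Poincare-Hopf bookkeeping: the unique finite critical point of this field is a degenerate saddle of index \(-1\), so \(\sum_{inf}i=2-2\sum_{f}i=4\), that is, \(i_{P(X)}(p_{0})=2\).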
To prove that the upper bound \(C_{n,m}\) in Theorem 3.1 is sharp, we need to construct a Hamiltonian vector field \(X\in\mathcal{Y}_{n,m}\) such that \(X\) has \(C_{n,m}\) centers. Let us first prove some properties of Hamiltonian vector fields \(X\) in \(\Psi_{n,n}\), that is, such that \(\partial H_{n+1}(x,y)/\partial y\) and \(\partial H_{n+1}(x,y)/\partial x\) have no common components.

Figure 3.2. The direction vectors \(P(X)(M)\) on \(S_{\varepsilon}\) near the intersection points of \(zf^{*}(1,y,z)=0\) and \(S_{\varepsilon}\).

**Lemma 3.3**.: _A Hamiltonian vector field \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\) is in \(\Psi_{n,n}\) if and only if the multiplicity of any complex polynomial factor of \(H_{n+1}(x,y)\) is equal to 1. Furthermore, all infinite critical points of \(X\) are elementary nodes if \(X\in\Psi_{n,n}\)._

Proof.: We prove necessity by contradiction. Assume that \(H_{n+1}(x,y)\) has a polynomial factor with multiplicity \(r\geq 2\). Without loss of generality, we assume that there exists \(a\in\mathbb{C}\) such that \((y+ax)^{r}|H_{n+1}(x,y)\). This implies that \(-\frac{\partial H_{n+1}(x,y)}{\partial y}\) and \(\frac{\partial H_{n+1}(x,y)}{\partial x}\) have the common factor \(y+ax\), which contradicts the fact that \(X\in\Psi_{n,n}\). Hence, the multiplicity of any complex polynomial factor of \(H_{n+1}(x,y)\) is equal to 1 if \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\Psi_{n,n}\).

On the other hand, if \(X\notin\Psi_{n,n}\), then there exists \((x_{0},y_{0})\neq(0,0)\in\mathbb{C}^{2}\) such that \(\frac{\partial H_{n+1}}{\partial x}(x_{0},y_{0})=\frac{\partial H_{n+1}}{\partial y}(x_{0},y_{0})=0\) by the definition of \(\Psi_{n,n}\). Note that \[(n+1)H_{n+1}(x_{0},y_{0})=x_{0}\frac{\partial H_{n+1}}{\partial x}(x_{0},y_{0})+y_{0}\frac{\partial H_{n+1}}{\partial y}(x_{0},y_{0})=0.\] Without loss of generality, assume that \(x_{0}=1\). It follows that \(H_{n+1}(x,y)\) has the linear factor \(y-y_{0}x\). If the multiplicity of the linear factor \(y-y_{0}x\) were equal to 1, then \(y=y_{0}\) would be a simple root of the equation \(H_{n+1}(1,y)=0\), contradicting the fact that \(\frac{\partial H_{n+1}}{\partial y}(1,y_{0})=0\). Thus, the proof of sufficiency is finished.

We now consider all infinite critical points of \(X\). Since the arguments are similar in the two local charts \(U_{1}\) and \(U_{2}\), without loss of generality, let all infinite critical points be on the local chart \(U_{1}\), where \(P(X)\) has the form (3.4). We have \(H_{n+1}(1,y_{i})=0\) if \((y_{i},0)\) is an infinite critical point of \(P(X)\), which is equivalent to \(H_{n+1}(x,y)\) having the real linear factor \(y-y_{i}x\). Therefore, the Jacobian matrix of \(P(X)\) at \((y_{i},0)\) is given by \[\begin{bmatrix}(n+1)\frac{\partial H_{n+1}}{\partial y}(1,y_{i})&*\\ 0&\frac{\partial H_{n+1}}{\partial y}(1,y_{i})\end{bmatrix},\] which has two different real nonzero eigenvalues \(\frac{\partial H_{n+1}}{\partial y}(1,y_{i})\) and \((n+1)\frac{\partial H_{n+1}}{\partial y}(1,y_{i})\), because the multiplicity of any complex factor of \(H_{n+1}(x,y)\) is equal to 1. It follows that all the infinite critical points of \(X\) are elementary nodes.
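The node structure in the proof of Lemma 3.3 can be illustrated for a concrete choice of \(H_{n+1}\). In the sketch below, \(p(y)=H_{n+1}(1,y)\) is taken to be \(y(y-1)(y+1)\), an assumption made only for the example (so \(n=2\)):

```python
import sympy as sp

y = sp.symbols('y')
p = y*(y - 1)*(y + 1)            # p(y) = H_{n+1}(1, y) with simple real roots (n = 2)

for yi in sp.solve(p, y):        # infinite critical points (y_i, 0) in the chart U_1
    lam = p.diff(y).subs(y, yi)  # p'(y_i) is nonzero because the roots are simple
    print(yi, 3*lam, lam)        # eigenvalues (n+1)*p'(y_i) and p'(y_i): same sign, a node
```

Since the two eigenvalues are nonzero real numbers of the same sign at every root, each infinite critical point is indeed an elementary node, with index \(+1\).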
**Lemma 3.4**.: _Suppose that the Hamiltonian vector field \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\Psi_{n,m}\) satisfies one of the following two conditions:_

* (a) \(m=n\) _and_ \(X\) _has exactly_ \(2r\) _infinite critical points;_
* (b) \(n>m\) _and_ \(nm\) _is even._

_Then \(X\) has at most \(nm-C_{n,m}\) saddles, and the following three statements are equivalent:_

* (i) \(X\) _has_ \(nm\) _finite critical points,_
* (ii) \(X\) _has_ \(nm-C_{n,m}\) _saddles,_
* (iii) \(X\) _has_ \(C_{n,m}\) _centers,_

_where \(C_{n,m}=\begin{cases}&[\frac{n^{2}+1-r}{2}],\quad\text{as $n=m$},\\ &[\frac{nm+1}{2}],\quad\text{as $n>m$},\end{cases}\) is defined in Theorem 3.1, and a saddle is an isolated critical point whose neighbourhood consists of exactly four hyperbolic sectors._

Proof.: First of all, we prove that the Hamiltonian vector field \(X\) has at most \(nm-C_{n,m}\) saddles under condition (a) or (b).

If condition (a) holds, that is, \(n=m\) and \(X\) has exactly \(2r\) infinite critical points, then from Lemma 3.3 we know that the \(2r\) infinite critical points of \(X\) are elementary nodes. Thus, \(\sum_{inf}i=2r\). By the Poincare-Hopf theorem, we have \[\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)=\frac{1}{2}(2-2r)=1-r. \tag{3.14}\]

If condition (b) holds, that is, \(n>m\) and \(nm\) is even, then \(X\) has a unique pair of infinite critical points; without loss of generality, denote them by \(p_{0}=(\pm 1,0,0)\). From the equality (3.13), we have \(i_{P(X)}(p_{0})=1\). Hence, \(\sum_{inf}i=2i_{P(X)}(p_{0})=2\). By the Poincare-Hopf theorem, we have \[\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)=0. \tag{3.15}\]

Let us now denote the number of saddles of \(X\) by \(s(X)\). By Lemma 2.5, combining (3.14) and (3.15), we have \[s(X)-c(X)\leq-\sum_{f}i=\left\{\begin{array}{cl}&r-1,\quad\text{as $n=m$,}\\ &0,\quad\text{as $n>m$ and $nm$ is even.}\end{array}\right. \tag{3.16}\] On the other hand, by Bezout's theorem, we have \[s(X)+c(X)\leq nm. \tag{3.17}\] Adding the inequalities (3.16) and (3.17), we obtain that \[s(X)\leq\left\{\begin{array}{cl}&\frac{n^{2}+r-1}{2}=n^{2}-\frac{n^{2}+1-r}{2},\quad\text{as $n=m$,}\\ &\frac{nm}{2}=nm-\frac{nm}{2},\quad\text{as $n>m$ and $nm$ is even.}\end{array}\right.\] Note that the \(r\) pairs of infinite critical points of \(X\) correspond to the different real linear factors of \(H_{n+1}(x,y)\). Because the multiplicity of any real factor of \(H_{n+1}(x,y)\) is equal to \(1\) by Lemma 3.3, \(n+1-r\) is even, that is, \(n\not\equiv r\) (mod \(2\)). Thus \(n^{2}+1-r\) is even, which implies that \(\frac{n^{2}+1-r}{2}=[\frac{n^{2}+1-r}{2}]=C_{n,n}\). This shows that \(s(X)\leq n^{2}-C_{n,n}\) when \(n=m\). It is clear that \(\frac{nm}{2}=[\frac{nm+1}{2}]\) when \(nm\) is even, which shows that \(s(X)\leq nm-C_{n,m}\) when \(n>m\) and \(nm\) is even.

Next we prove that the statements \((i),(ii)\) and \((iii)\) are equivalent.

(i) \(\Rightarrow\) (ii): Since the Hamiltonian vector field \(X\) has \(nm\) finite critical points in \(\mathbb{R}^{2}\), each critical point is either an elementary center or an elementary saddle by the property of the intersection number. Then \[c(X)+s(X)=nm\ \Rightarrow\ \ c(X)=nm-s(X).\] If \(n=m\), we have \[(n^{2}-s(X))-s(X)=\sum_{f}i=1-r.\] Hence, \(s(X)=\frac{n^{2}+r-1}{2}=n^{2}-\frac{n^{2}+1-r}{2}\). Note that \(\frac{n^{2}+1-r}{2}=[\frac{n^{2}+1-r}{2}]=C_{n,n}\) when \(X\) belongs to \(\Psi_{n,n}\). This shows that \(s(X)=n^{2}-C_{n,n}\).
If \(n>m\) and \(nm\) is even, we have

\[(nm-s(X))-s(X)=\sum_{f}i=0.\]

Hence, \(s(X)=\frac{nm}{2}=nm-\frac{nm}{2}\). Note that \(\frac{nm}{2}=[\frac{nm+1}{2}]=C_{n,m}\) when \(nm\) is even. It follows that \(s(X)=nm-C_{n,m}\).

(ii) \(\Rightarrow\) (iii): By the Poincare-Hopf Theorem and Lemma 2.5, we have

\[c(X)-(nm-C_{n,m})\geq\sum_{f}i=\begin{cases}&1-r,\quad\text{as $n=m$},\\ &0,\quad\text{as $n>m$ and $nm$ is even},\end{cases}\]

since \(X\) has \(nm-C_{n,m}\) saddles. It follows that

\[c(X)\geq nm+\sum_{f}i-C_{n,m}=\begin{cases}&\frac{n^{2}+1-r}{2},\quad\text{as $n=m$},\\ &\frac{nm}{2},\quad\text{as $n>m$ and $nm$ is even},\end{cases}\]

i.e. \(c(X)\geq C_{n,m}\). On the other hand, \(c(X)\leq C_{n,m}\) by the first part of Theorem 3.1. Therefore, \(c(X)=C_{n,m}\).

(iii) \(\Rightarrow\) (i): Assume that \(p_{1},p_{2},\ldots,p_{l}\) are the finite critical points of \(X\) which are not centers. Then

\[i_{X}(p_{i})\leq 0,\quad\forall i=1,2,\ldots,l\]

by Lemma 2.5. Note that \(\sum_{inf}i=2r\), and when \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\Psi_{n,m}\), we recall that

\[C_{n,m}=\begin{cases}&[\frac{n^{2}+1-r}{2}]=\frac{n^{2}+1-r}{2},\quad\text{as $n =m$},\\ &[\frac{nm+1}{2}]=\frac{nm}{2},\quad\text{as $n>m$ and $nm$ is even}.\end{cases}\]

By the Poincare-Hopf Theorem, we have

\[\sum_{i=1}^{l}i_{X}(p_{i})=\sum_{f}i-C_{n,m}=\begin{cases}&1-r-\frac{n^{2}+1- r}{2}=-\frac{n^{2}+r-1}{2},\quad\text{as $n=m$},\\ &-\frac{nm}{2},\quad\text{as $n>m$ and $nm$ is even}.\end{cases} \tag{3.18}\]

Let \(I_{i}=I(p_{i},(-\frac{\partial H}{\partial y})\cap\frac{\partial H}{\partial x})\). By Bezout's Theorem,

\[C_{n,m}+\sum_{i=1}^{l}I_{i}\leq\sum_{p\in(-\frac{\partial H}{\partial y})\cap \frac{\partial H}{\partial x}\subseteq\mathbb{R}^{2}}I(p,(-\frac{\partial H} {\partial y})\cap\frac{\partial H}{\partial x})\leq\sum_{p\in\mathbb{C} \mathbb{P}^{2}}I(p,(-\frac{\partial H}{\partial y})\cap\frac{\partial H}{ \partial x})=nm.\]

By using Lemma 2.2, we have

\[-\sum_{i=1}^{l}i_{X}(p_{i}) =\sum_{i=1}^{l}|i_{X}(p_{i})|\leq\sum_{i=1}^{l}\sqrt{I_{i}}\leq \sum_{i=1}^{l}I_{i}\]
\[\leq nm-C_{n,m}=\begin{cases}&\frac{n^{2}+r-1}{2},\quad\text{as $n=m$},\\ &\frac{nm}{2},\quad\text{as $n>m$ and $nm$ is even},\end{cases}\]

which, together with (3.18), implies that

\[\sum_{i=1}^{l}|i_{X}(p_{i})|=\sum_{i=1}^{l}\sqrt{I_{i}}=\sum_{i=1}^{l}I_{i}= nm-C_{n,m}.\]

Therefore,

\[I_{i}=1,\ i_{X}(p_{i})=-1,\ l=nm-C_{n,m},\ i=1,\cdots,l,\]

because each \(I_{i}\) is a positive integer and each \(i_{X}(p_{i})\) is a nonpositive integer. Hence, the number of finite critical points of \(X\) is

\[C_{n,m}+l=C_{n,m}+(nm-C_{n,m})=nm.\]

This lemma is proved.

**Remark 3.1** Lemma 3.4 tells us that \(X\in G_{n,n}\) if \(X\) has \(C_{n,n}\) centers. This improves Proposition 3.5 in [9]. However, if both \(n\) and \(m\) are odd with \(n>m\), it can be proved that the implications \((iii)\Rightarrow(ii)\) and \((iii)\Rightarrow(i)\) are still true by following the proof of Lemma 3.4, but \((i)\nRightarrow(ii)\) and \((i)\nRightarrow(iii)\); see the following system

\[\begin{split}&\dot{x}=y(y-1)\ldots(y-(n-1)),\\ &\dot{y}=x(x-1)\ldots(x-(m-1)),\end{split} \tag{3.19}\]

where both \(n\) and \(m\) are odd with \(n>m\). It is easily checked that system (3.19) has \(nm\) finite critical points, among which there are exactly \(\frac{nm-1}{2}\) centers and \(\frac{nm+1}{2}\) saddles. Thus, \((i)\nRightarrow(ii)\) and \((i)\nRightarrow(iii)\). Moreover,

\[\frac{nm+1}{2}>nm-[\frac{nm+1}{2}]=nm-C_{n,m},\]

which implies that the number of saddles of system (3.19) is greater than \(nm-C_{n,m}\). Hence the conclusion of Lemma 3.4, that \(X\) has at most \(nm-C_{n,m}\) saddles, does not hold if \(n>m\) and both \(n\) and \(m\) are odd.
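The counts for system (3.19) are easy to confirm in the smallest case. The following SymPy sketch (our own illustration with \(n=3\), \(m=1\)) classifies the \(nm=3\) finite critical points by the sign of the Jacobian determinant, finding one center and two saddles, in agreement with the remark above.

```python
# Check of Remark 3.1 for system (3.19) with n = 3, m = 1:
#   x' = y(y-1)(y-2),  y' = x.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = y*(y - 1)*(y - 2)    # x'
g = x                    # y'
J = sp.Matrix([[f.diff(x), f.diff(y)],
               [g.diff(x), g.diff(y)]])

centers = saddles = 0
for p in sp.solve([f, g], [x, y], dict=True):
    # For a Hamiltonian system, det J > 0 gives a center, det J < 0 a saddle.
    if J.subs(p).det() > 0:
        centers += 1
    else:
        saddles += 1
print(centers, saddles)   # 1 2, i.e. (nm-1)/2 centers and (nm+1)/2 saddles
```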
We are now in a position to prove the second conclusion of Theorem 3.1: the upper bound \(C_{n,m}\) is sharp.

Proof of the second part of Theorem 3.1.: We shall construct a polynomial Hamiltonian vector field \(X\in\mathcal{Y}_{n,n}\) which has \(C_{n,m}\) centers. When \(n>m\), an example has been given in [9]. So we only consider the case \(n=m\), and distinguish two cases depending on the relation between \(n\) and \(r\).

**Case I.**\(n\not\equiv r\) (mod 2). In this case, both \(\frac{n+1-r}{2}\) and \(\frac{n^{2}+1-r}{2}\) are integers. Let

\[H(x,y)=\prod_{i=1}^{\frac{n+r+1}{2}}\bar{H}_{i} \tag{3.20}\]

where

\[\bar{H}_{i}(x,y)=\begin{cases}4^{i}x^{2}+\frac{y^{2}}{4^{i}}-1,\quad\text{as } 1\leq i\leq\frac{n+1-r}{2},\\ 2^{i}x+\frac{y}{2^{i}}-1,\quad\text{as }\frac{n+1-r}{2}+1\leq i\leq\frac{n+1+r}{2}. \end{cases}\]

Then

\[H_{n+1}(x,y)=\prod_{i=1}^{\frac{n+1-r}{2}}\left(4^{i}x^{2}+\frac{y^{2}}{4^{i}} \right)\prod_{i=\frac{n+1-r}{2}+1}^{\frac{n+r+1}{2}}\left(2^{i}x+\frac{y}{2^{i }}\right).\]

Thus, the Hamiltonian vector field \(X\) with Hamiltonian function \(H(x,y)\) in (3.20) is in \(\Psi_{n,n}\) by Lemma 3.3. It is easily checked that the following conclusions are true.

* (A) \(\sharp\{(x,y)\in\mathbb{R}^{2}:\bar{H}_{i}=\bar{H}_{j}=0,i\neq j\}=deg(\bar{H}_{i })\cdot deg(\bar{H}_{j})\);
* (B) \(\{(x,y)\in\mathbb{R}^{2}:\bar{H}_{i}=\bar{H}_{j}=\bar{H}_{k}=0,i\neq j,j\neq k, i\neq k\}=\emptyset\);
* (C) \(H_{n+1}(x,y)\) has exactly \(r\) different real linear factors \(2^{i}x+\frac{y}{2^{i}}\), \(i=\frac{n+1-r}{2}+1,\cdots,\frac{n+1+r}{2}\).

To show that \(X\) with Hamiltonian function (3.20) has \(\frac{n^{2}+1-r}{2}\) centers, it is enough, by Lemma 3.4, to prove that \(X\) has \(\frac{n^{2}+r-1}{2}\) saddles. Let us study the critical points of the algebraic curve \(H(x,y)=0\) in \(\mathbb{R}^{2}\). It is clear that \(H(x,y)\) in (3.20) has \(r\) linear factors \(2^{i}x+\frac{y}{2^{i}}-1\) with \(\frac{n+1-r}{2}+1\leq i\leq\frac{n+1+r}{2}\), and \(\frac{n+1-r}{2}\) quadratic polynomial factors \(4^{i}x^{2}+\frac{y^{2}}{4^{i}}-1\) with \(1\leq i\leq\frac{n+1-r}{2}\). Therefore, the common points of two lines, of two ellipses, or of one ellipse and one line, that is, the points \((x_{0},y_{0})\in\{\bar{H}_{k}=0\}\cap\{\bar{H}_{l}=0\}\) with \(k\neq l\), are critical points of \(X\). By straightforward calculation, we obtain that

* there are \(\frac{r(r-1)}{2}\) common points of two lines;
* there are \(\frac{1}{2}(n+1-r)(n-1-r)\) common points of two ellipses;
* there are \(r(n+1-r)\) common points of one ellipse and one line.

Hence, the total number of common points is

\[\frac{r(r-1)}{2}+\frac{1}{2}(n+1-r)(n-1-r)+r(n+1-r)=\frac{n^{2}+r-1}{2}.\]

It is easily checked that these common points are saddles of \(X\) with Hamiltonian function (3.20), since the determinant of the Jacobian matrix of \(X\) at any common point \((x_{0},y_{0})\in\{\bar{H}_{k}=0\}\cap\{\bar{H}_{l}=0\}\) is

\[-\left(\prod_{i=1,i\neq k,l}^{\frac{n+r+1}{2}}\bar{H}_{i}(x_{0},y_{0})\right)^ {2}\cdot\left(\left|\begin{matrix}\frac{\partial\bar{H}_{k}}{\partial x}&\frac{ \partial\bar{H}_{k}}{\partial y}\\ \frac{\partial\bar{H}_{l}}{\partial x}&\frac{\partial\bar{H}_{l}}{\partial y} \end{matrix}\right|(x_{0},y_{0})\right)^{2}<0\]

by (A) and (B).
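The count of common points above can be confirmed numerically for a small instance. The following SymPy sketch (our own choice \(n=3\), \(r=2\), i.e. one ellipse \(\bar{H}_{1}\) and two lines \(\bar{H}_{2},\bar{H}_{3}\)) finds exactly \(\frac{n^{2}+r-1}{2}=5\) pairwise real intersection points:

```python
# Counting the pairwise intersection points of the curves \bar H_i = 0
# in the construction (3.20) for n = 3, r = 2.
import sympy as sp
from itertools import combinations

x, y = sp.symbols('x y', real=True)
curves = [4*x**2 + y**2/4 - 1,   # \bar H_1 (ellipse, i = 1)
          4*x + y/4 - 1,         # \bar H_2 (line,    i = 2)
          8*x + y/8 - 1]         # \bar H_3 (line,    i = 3)

points = set()
for F, G in combinations(curves, 2):
    for s in sp.solve([F, G], [x, y], dict=True):
        if s[x].is_real and s[y].is_real:
            points.add((s[x], s[y]))

# 5 = (n^2 + r - 1)/2 saddles, hence (n^2 + 1 - r)/2 = 4 centers.
print(len(points))
```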
Therefore, \(s(X)\geq\frac{n^{2}+r-1}{2}\). By Lemma 3.4 we know that \(s(X)\leq\frac{n^{2}+r-1}{2}\). Hence, \(s(X)=\frac{n^{2}+r-1}{2}\). This implies that \(X\) has \(\frac{n^{2}+1-r}{2}\) centers when \(n\not\equiv r\) (mod 2).

**Case II.**\(n\equiv r\) (mod 2). In this case, \(n+1-r\) is odd, so the Hamiltonian vector field \(X\in\mathcal{Y}_{n,n}\setminus\Psi_{n,n}\), which is non-generic. Let us construct a Hamiltonian function

\[H(x,y)=\prod_{i=1}^{\frac{n+r}{2}}\hat{H}_{i} \tag{3.21}\]

where

\[\hat{H}_{i}=\begin{cases}x-\frac{1}{2}y^{2}+2,\quad i=1,\\ \\ 4^{i}x^{2}+\frac{y^{2}}{4^{i}}-1,\quad 2\leq i\leq\frac{n-r}{2}+1,\\ \\ 2^{i}x+\frac{y}{2^{i}}-1,\quad\frac{n-r}{2}+2\leq i\leq\frac{n+r}{2}.\end{cases}\]

Then

\[H_{n+1}(x,y)=-\frac{1}{2}y^{2}\prod_{i=2}^{\frac{n-r}{2}+1}\left(4^{i}x^{2}+\frac {y^{2}}{4^{i}}\right)\prod_{i=\frac{n-r}{2}+2}^{\frac{n+r}{2}}\left(2^{i}x+\frac {y}{2^{i}}\right).\]

It can be checked that conclusions (A) and (B) in Case I still hold, and that \(H_{n+1}(x,y)\) has exactly \(r\) different real linear factors \(y\) and \(2^{i}x+\frac{y}{2^{i}}\), where \(\frac{n-r}{2}+2\leq i\leq\frac{n+r}{2}\). Using arguments similar to those in Case I, we can prove that the Hamiltonian vector field \(X\) with Hamiltonian function (3.21) has \(\frac{n^{2}+r-2}{2}\) finite critical points which are all elementary saddles.

To obtain the number \(c(X)\) of centers of \(X\) with Hamiltonian function (3.21), we claim that \(\sum_{inf}i=2r\) for \(X\). If this is true, then by Lemma 2.5 and (2.5) we have

\[c(X)-\frac{n^{2}+r-2}{2}\geq\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)=1-r.\]

It follows that \(c(X)\geq\frac{n^{2}-r}{2}=[\frac{n^{2}+1-r}{2}]\) when \(n\equiv r\ (\text{mod}\ 2)\). From the proof of the first part of Theorem 3.1, we then have \(c(X)=[\frac{n^{2}+1-r}{2}]\), that is, \(X\) with Hamiltonian function (3.21) has \([\frac{n^{2}+1-r}{2}]\) centers, and the proof is completed.

Now we prove the claim \(\sum_{inf}i=2r\) by calculating the index of every infinite critical point of \(X\). Note that all infinite critical points of \(X\) come from the lines \(2^{i}x+\frac{y}{2^{i}}=0\) with \(\frac{n-r}{2}+2\leq i\leq\frac{n+r}{2}\) and from the repeated factor \(y^{2}\) (coming from the parabola \(\hat{H}_{1}=0\)) in the expression of \(H_{n+1}(x,y)\). Using arguments similar to those in the proof of Lemma 3.3, we can see that the \(r-1\) pairs of infinite critical points corresponding to the lines \(2^{i}x+\frac{y}{2^{i}}=0\) with \(\frac{n-r}{2}+2\leq i\leq\frac{n+r}{2}\) are all elementary nodes, and each of their indices is \(+1\). The only issue left is to calculate the index at the pair of infinite critical points corresponding to \(y^{2}=0\). The method is the same as that in the proof of the first part of Theorem 3.1. Recall that the Poincare compactification \(P(X)\) of \(X\) has the form (3.4) on the local chart \(U_{1}\). Taking the circle \(S_{\varepsilon}=\{(y,z):y^{2}+z^{2}=\varepsilon^{2},0<\varepsilon\ll 1\}\) and the direction vector \(v=(1,0)\) on the chart \(U_{1}\), we consider the points on \(S_{\varepsilon}\) at which the field vector \(P(X)\) is parallel to \(v\), i.e. \((-\varepsilon,0),(\varepsilon,0)\) and the intersection points of \(S_{\varepsilon}\) and the curve \(f^{*}(1,y,z)=0\). Denote \(\prod_{i=2}^{\frac{n+r}{2}}\hat{H}_{i}(x,y)\) by \(\bar{H}(x,y)\). The vector field \(P(X)\) near the points \((-\varepsilon,0)\) and \((\varepsilon,0)\) can be approximated as

\[P(X)\approx(-\frac{(n+1)}{2}\bar{H}^{*}(1,0,0)y^{2},-\bar{H}^{*}(1,0,0)yz),\]

where \(\bar{H}^{*}(x,y,z)\) is the homogenization of the polynomial \(\bar{H}(x,y)\). 
It is easily calculated that

\[\bar{H}^{*}(1,0,0)=\prod_{i=2}^{\frac{n-r}{2}+1}4^{i}\cdot\prod_{i=\frac{n-r}{ 2}+2}^{\frac{n+r}{2}}2^{i}>0.\]

Further, we have

\[(\frac{\partial\bar{H}}{\partial x})^{*}(1,0,0)>0,\quad(\frac{\partial\bar{H}}{ \partial y})^{*}(1,0,0)>0.\]

We now determine the contribution of the points \((-\varepsilon,0)\) and \((\varepsilon,0)\) to the index at the infinite critical point \((0,0)\). Consider the curve \(f^{*}(1,y,z)=0\); it can be written as

\[f^{*}(1,y,z)=y\bar{H}^{*}(1,y,z)-(z-\frac{1}{2}y^{2}+2z^{2})(\frac{\partial \bar{H}}{\partial y})^{*}(1,y,z)=0.\]

Then we have

\[\nabla f^{*}(1,y,z)\big|_{(y,z)=(0,0)}=(\frac{\partial f^{*}}{\partial y}(1,0,0),\frac{\partial f ^{*}}{\partial z}(1,0,0))=(\bar{H}^{*}(1,0,0),-(\frac{\partial\bar{H}}{ \partial y})^{*}(1,0,0)),\]

which means \(m_{(0,0)}f^{*}=1\). Hence, by the Implicit Function Theorem, \(f^{*}(1,y,z)=0\) can be regarded as a one-dimensional manifold locally near the point \((0,0)\). Note that the tangent line of \(f^{*}(1,y,z)=0\) at \((0,0)\) is \(\bar{H}^{*}(1,0,0)y-(\frac{\partial\bar{H}}{\partial y})^{*}(1,0,0)z=0\). So when \(\varepsilon\) is small enough, the curve \(f^{*}(1,y,z)=0\) has two intersection points \((y_{1},z_{1}),(y_{2},z_{2})\) with \(S_{\varepsilon}\) such that \(y_{1}>0,z_{1}>0\) and \(y_{2}<0,z_{2}<0\). Near these intersection points, let us compute the \(y\)-component of the vector field \(P(X)\), using \(f^{*}(1,y,z)=0\) in the third equality:

\[\dot{y} =g^{*}(1,y,z)\]
\[=z\bar{H}^{*}(1,y,z)+(z-\frac{1}{2}y^{2}+2z^{2})(\frac{\partial \bar{H}}{\partial x})^{*}(1,y,z)\]
\[=z\bar{H}^{*}(1,y,z)+\frac{y\bar{H}^{*}(1,y,z)}{(\frac{\partial \bar{H}}{\partial y})^{*}(1,y,z)}(\frac{\partial\bar{H}}{\partial x})^{*}(1,y,z)\]
\[=\frac{\bar{H}^{*}(1,y,z)}{(\frac{\partial\bar{H}}{\partial y})^ {*}(1,y,z)}\left(z(\frac{\partial\bar{H}}{\partial y})^{*}(1,y,z)+y(\frac{ \partial\bar{H}}{\partial x})^{*}(1,y,z)\right).\]

Since \(\bar{H}^{*}(1,0,0)>0\), \((\frac{\partial\bar{H}}{\partial x})^{*}(1,0,0)>0\) and \((\frac{\partial\bar{H}}{\partial y})^{*}(1,0,0)>0\), we have

\[\dot{y}|_{(y,z)=(y_{1},z_{1})}>0,\quad\dot{y}|_{(y,z)=(y_{2},z_{2})}<0.\]

Figure 3.3. The direction vector of \(P(X)\) near the intersection points of \(y=0\) and \(S_{\varepsilon}\).

So the direction vector of \(P(X)\) near these intersection points can be sketched as in Figure 3.3. Hence, the index at the infinite critical point \((0,0)\) in the chart \(U_{1}\) is \(+1\) too. Summarizing the above analysis, we have proved that all indices at the \(r\) pairs of infinite critical points are \(+1\). Thus \(\sum_{inf}i=2r\) and the claim is proved. This finishes the proof of Theorem 3.1.

**Remark 3.2**: Theorem 3.1 of [9] is the special case \(r=0\), \(m=n\) of our Theorem 3.1. From the construction, in the proof of our Theorem 3.1, of polynomial Hamiltonian vector fields \(X\) having \(C_{n,m}\) centers, we can see that the existence of invariant straight lines reduces the number of centers: if the \(r\) pairs of infinite critical points of \(X\) come from \(r\) invariant straight lines, then the maximum number of centers reduces to \([\frac{n^{2}+1-r}{2}]\). This implies that the number of real linear factors of the polynomial Hamiltonian function \(H(x,y)\) affects the number of ovals of the level sets \(H(x,y)=h\) in \(\mathbb{R}^{2}\), and topological classifications of the infinite critical points of \(X\) play a key role in understanding the geometry of the real algebraic curve \(H(x,y)\) in \(\mathbb{R}^{2}\). 
If the number of centers of \(X\) attains the least upper bound \(C_{n,m}\), then all infinite critical points of \(X\) are elementary nodes or singularities with index \(+1\), respectively. If \(X\) has a unique finite critical point, and all infinite critical points of \(X\) have exactly two hyperbolic sectors, then all the level sets \(H(x,y)=h\) are ovals in \(\mathbb{R}^{2}\) (cf. [27]).

It is well known that there are three types of centers for vector fields: elementary centers, nilpotent centers and degenerate centers (cf. [27, 39]). In the following we characterize the type of the centers when Hamiltonian vector fields have \(C_{n,m}\) centers.

**Proposition 3.5**.: _Suppose that the Hamiltonian vector field \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\mathcal{Y} _{n,m}\) with \(2r\) infinite critical points has \(C_{n,m}\) centers, where \(r\) is a nonnegative integer. Then all centers of \(X\) are elementary._

Proof.: We first prove that for any polynomial Hamiltonian vector field \(X=(-\frac{\partial H}{\partial y},\frac{\partial H}{\partial x})\in\mathcal{Y }_{n,m}\) with \(2r\) infinite critical points, there exists a family of polynomial Hamiltonian vector fields \(X_{\varepsilon}\) with at least \(2r\) infinite critical points, defined for every \(0<\varepsilon\ll 1\), such that \(X_{\varepsilon}\in\Psi_{n,m}\) and \(\lim_{\varepsilon\to 0}H(x,y,\varepsilon)=H(x,y)\). This is obvious if \(X\in\Psi_{n,m}\), so we only need to study \(X\in\mathcal{Y}_{n,m}\setminus\Psi_{n,m}\). To construct a suitable perturbation of the Hamiltonian function \(H(x,y)\) when \(X\in\mathcal{Y}_{n,m}\setminus\Psi_{n,m}\), we distinguish two cases: \(n=m\) and \(n>m\).

When \(n=m\), since \(X\) has \(2r\) infinite critical points, \(H_{n+1}(x,y)\) has exactly \(r\) different real linear factors. Thus \(H_{n+1}(x,y)\) has the expression

\[H_{n+1}(x,y)=\prod_{i=1}^{r}L_{i}(x,y)^{k_{i}}\prod_{i=1}^{l}M_{i}(x,y)^{l_{i}},\]

where \(k_{i},l_{i}\geq 1\) and \(\sum_{i=1}^{r}k_{i}+2\sum_{i=1}^{l}l_{i}=n+1\), \(L_{1},\cdots,L_{r}\) are \(r\) different real homogeneous linear polynomials and \(M_{1},\cdots,M_{l}\) are \(l\) different irreducible real homogeneous quadratic polynomials. Let

\[\tilde{L}_{i}(x,y)=\prod_{j=1}^{k_{i}}(L_{i}(x,y)+\frac{\varepsilon}{j}x),\quad \tilde{M}_{i}(x,y)=\prod_{j=1}^{l_{i}}(M_{i}(x,y)+\frac{\varepsilon}{j}x^{2}), \ 0<\varepsilon\ll 1\]

and

\[H_{n+1}(x,y,\varepsilon)=\prod_{i=1}^{r}\tilde{L}_{i}(x,y)\prod_{i=1}^{l} \tilde{M}_{i}(x,y).\]

Then the multiplicity of every factor of \(H_{n+1}(x,y,\varepsilon)\) is one, and \(H_{n+1}(x,y,\varepsilon)\) has \(\sum_{i=1}^{r}k_{i}\geq r\) different real linear factors. It can be checked that

\[\lim_{\varepsilon\to 0}H_{n+1}(x,y,\varepsilon)=H_{n+1}(x,y).\]

Let

\[H(x,y,\varepsilon)=H(x,y)-H_{n+1}(x,y)+H_{n+1}(x,y,\varepsilon).\]

Then \(\lim_{\varepsilon\to 0}H(x,y,\varepsilon)=H(x,y)\), the Hamiltonian vector fields \(X_{\varepsilon}\) with Hamiltonian functions \(H(x,y,\varepsilon)\) have at least \(2r\) infinite critical points for every \(0<\varepsilon\ll 1\), and \(X_{\varepsilon}\in\Psi_{n,n}\) by Lemma 3.3.

When \(n>m\), \(X\in\mathcal{Y}_{n,m}\setminus\Psi_{n,m}\) implies \(\frac{\partial H_{m+1}}{\partial x}(1,0)=0\). Let

\[H(x,y,\varepsilon)=H(x,y)+\varepsilon x^{m+1},\ 0<\varepsilon\ll 1.\]

Then \(\frac{\partial H_{m+1}(x,y,\varepsilon)}{\partial x}(1,0)=(m+1)\varepsilon>0\), that is, \(\frac{\partial H_{m+1}(x,y,\varepsilon)}{\partial x}\) has no factor \(y\). 
This implies that \(\frac{\partial H_{m+1}(x,y,\varepsilon)}{\partial x}\) and \(\frac{\partial H_{n+1}(x,y,\varepsilon)}{\partial y}\) have no common factors. Denote the Hamiltonian vector field \((-\frac{\partial H(x,y,\varepsilon)}{\partial y},\frac{\partial H(x,y, \varepsilon)}{\partial x})\) by \(X_{\varepsilon}\). Hence, \(X_{\varepsilon}\in\Psi_{n,m}\), \(\lim_{\varepsilon\to 0}H(x,y,\varepsilon)=H(x,y)\) and \(X_{\varepsilon}\) has only two infinite critical points, that is, \(r=1\).

We then prove that \(X_{\varepsilon}\) has exactly \(nm\) finite critical points in \(\mathbb{R}^{2}\). Let \(p\) be a center of \(X\). By the additivity of the indices of vector fields (see [2] for details), the index of \(X\) at \(p\), which is equal to \(+1\), is the sum of the indices at those critical points of the vector field \(X_{\varepsilon}\) which tend to \(p\) as \(\varepsilon\to 0\). Note that all the indices at critical points of \(X_{\varepsilon}\) are less than or equal to \(+1\) by Lemma 2.5; hence at least one critical point of \(X_{\varepsilon}\) near \(p\) has index \(+1\) and is therefore a center. This argument implies that the vector field \(X_{\varepsilon}\) has at least \(C_{n,m}\) centers. Denote the least upper bound of the number of centers of \(X_{\varepsilon}\) by \(C_{n,m}^{\varepsilon}\). Then \(C_{n,m}^{\varepsilon}\geq C_{n,m}\). On the other hand, by Theorem 3.1, \(X_{\varepsilon}\) has at most \(C_{n,m}^{\varepsilon}\) centers, where

\[C_{n,m}^{\varepsilon}=\begin{cases}&[\frac{n^{2}+1-\sum_{i=1}^{r}k_{i}}{2}] \leq[\frac{n^{2}+1-r}{2}]=C_{n,n},\quad\text{as $n=m$},\\ &[\frac{nm+1}{2}]=C_{n,m},\quad\text{as $n>m$}.\end{cases}\]

Hence, we have \(C_{n,m}^{\varepsilon}=C_{n,m}\) and \(X_{\varepsilon}\) has exactly \(C_{n,m}^{\varepsilon}\) centers. By Lemma 3.4 and Remark 3.1, \(X_{\varepsilon}\) has exactly \(nm\) finite critical points and all the finite critical points of \(X_{\varepsilon}\) are elementary.

Finally, we prove by contradiction that all centers of \(X\) are elementary. Assume that \(X\) has a non-elementary center \(p_{0}\), whose index is \(+1\). Then the intersection number, denoted by \(i_{p_{0}}\), of \(H_{x}(x,y)\) and \(H_{y}(x,y)\) at \(p_{0}\) is larger than \(1\). Thus, when \(\varepsilon\) is sufficiently small, the sum of the intersection numbers of \(H_{x}(x,y,\varepsilon)\) and \(H_{y}(x,y,\varepsilon)\) at the intersection points in a neighborhood \(U_{p_{0}}\) of \(p_{0}\) in \(\mathbb{C}^{2}\) is also \(i_{p_{0}}\). Since all the critical points of \(X_{\varepsilon}\) are real, the intersection points of \(H_{x}(x,y,\varepsilon)\) and \(H_{y}(x,y,\varepsilon)\) in \(U_{p_{0}}\) are real. Moreover, when \(\varepsilon\) is sufficiently small, the sum of the indices of the critical points of \(X_{\varepsilon}\) in \(U_{p_{0}}\) is the index of \(X\) at \(p_{0}\), which is \(+1\). Note that the indices at the finite critical points of \(X_{\varepsilon}\) are either \(+1\) or \(-1\). Since there are \(i_{p_{0}}\geq 2\) such points and their indices sum to \(+1\), there must be at least two centers of \(X_{\varepsilon}\) in \(U_{p_{0}}\), which shows that \(X_{\varepsilon}\) has at least \(C_{n,m}+1\) centers. This contradicts the fact that \(X_{\varepsilon}\) has exactly \(C_{n,m}\) centers. This finishes the proof.
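The perturbation used in this proof is easy to replay in a small case. The following SymPy sketch (our own instance with \(n=m=3\): \(H_{4}=(y-x)^{2}(x^{2}+y^{2})\), where the real factor \(y-x\) has multiplicity \(k_{1}=2\)) confirms that the recipe above splits the repeated factor, so that the perturbed partials have no common factor while \(H_{4}(x,y,\varepsilon)\to H_{4}(x,y)\) as \(\varepsilon\to 0\):

```python
# Sketch of the perturbation in the proof of Proposition 3.5 (n = m = 3).
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')
H4  = (y - x)**2 * (x**2 + y**2)
H4e = (y - x + eps*x) * (y - x + eps*x/2) * (x**2 + y**2 + eps*x**2)

def common_factor_degree(H):
    g = sp.gcd(sp.diff(H, x), sp.diff(H, y))
    return sp.Poly(g, x, y).total_degree()

print(common_factor_degree(H4))    # 1: the partials share the factor y - x
print(common_factor_degree(H4e))   # 0: no common factor, so X_eps is in Psi_{3,3}
print(sp.expand(H4e.subs(eps, 0) - H4))   # 0: H(x,y,eps) -> H(x,y) as eps -> 0
```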
## 4. Configurations of centers of Hamiltonian Kolmogorov systems

In this section we study Hamiltonian polynomial vector fields having two intersecting invariant straight lines in \(\mathbb{R}^{2}\). Since an affine transformation takes these two invariant straight lines to the coordinate axes, we consider vector fields \(X_{hk}\) and investigate the possible configurations of centers when \(X_{hk}\) has \(C_{n,m}\) centers. The corresponding differential system of \(X_{hk}\) is

\[\begin{split}\frac{dx}{dt}=&-x\left(F(x,y)+y\frac{ \partial F}{\partial y}\right),\\ \frac{dy}{dt}=& y\left(F(x,y)+x\frac{\partial F}{ \partial x}\right),\end{split} \tag{4.1}\]

where \(F(x,y)\) is a polynomial of degree \(n-1\) and \(H(x,y)=xyF(x,y)\). It is clear that system (4.1) has two intersecting invariant straight lines \(x=0\) and \(y=0\). Hence, the centers of system (4.1) can only lie in the interiors of the four quadrants of \(\mathbb{R}^{2}\). We say that system (4.1) has _a configuration \((i_{1};i_{2};i_{3};i_{4})\) of centers if there exist exactly \(i_{j}\) centers in the interior of the \(j\)th quadrant of \(\mathbb{R}^{2}\) for \(j=1,2,3,4\), where \(i_{j}\) is a nonnegative integer_. Note that linear transformations do not change the total number of centers of system (4.1). Using linear transformations of the variables \(x\) and \(y\) if necessary, we can assume that \(i_{1}=\min\{i_{1},i_{2},i_{3},i_{4}\}\) and \(i_{2}\leq i_{4}\), since if system (4.1) has the configuration \((i_{1};i_{2};i_{3};i_{4})\) of centers, then the configurations \((i_{2};i_{1};i_{4};i_{3}),(i_{4};i_{3};i_{2};i_{1})\) and \((i_{1};i_{4};i_{3};i_{2})\) of centers can be obtained by the transformations

\[(x,y)\mapsto(-x,y),\ (x,y)\mapsto(x,-y),\ (x,y)\mapsto(y,x), \tag{4.2}\]

respectively. Therefore, we say that _two configurations of centers are equivalent_ if there exist transformations in (4.2) such that one configuration of centers can be transformed to the other by these transformations or their composition. Hereafter, whenever the number of centers of system (4.1) reaches the least upper bound \(C_{n,m}\), we consider all the different configurations of centers of system (4.1) in this equivalent sense.

From Theorem 3.1 and Proposition 3.5, we obtain the least upper bound of the number of centers of system (4.1) and a property of these centers, as follows.

**Proposition 4.1**.: _If system (4.1) is a HK polynomial system of degree \(n\) in \(\mathbb{R}^{2}\), then system (4.1) has at most \([\frac{n^{2}-1}{2}]\) centers. Furthermore, this bound \([\frac{n^{2}-1}{2}]\) is sharp, and all the centers are elementary if system (4.1) has \([\frac{n^{2}-1}{2}]\) centers._

By Proposition 4.1, we can see that the number of centers depends on the degree \(n\) of the system, and the dynamics of system (4.1) are trivial when \(n=1,2\). If \(n=1\), then system (4.1) is a linear system, which has no center. If \(n=2\), then system (4.1) is a Lotka-Volterra system, which has at most one center, and there exists a unique configuration of centers for the Lotka-Volterra system in the equivalent sense, namely \((0;0;0;1)\). Therefore, the interesting problem is to study the configurations of centers of system (4.1) with degree \(n\geq 3\). We now state the main result of this section.

**Theorem 4.2**.: _Suppose that system (4.1) has \([\frac{n^{2}-1}{2}]\) centers with \(n\geq 3\) and its configuration is \((i_{1};i_{2};i_{3};i_{4})\). Then the following statements hold._

* _(i)_ \(i_{j}\neq 0,\quad j=2,3,4\)_._
* _(ii)_ \(\min\{i_{1}+i_{3},i_{2}+i_{4}\}\geq n-1\) _(resp._ \(n-2\)_) if_ \(n\) _is odd (resp. even)._
* _(iii) If_ \(i_{1}=0\)_, then_ \(i_{2}\geq[\frac{n-1}{2}]\) _and_ \(i_{3}\geq n-2\) _(resp._ \(n-1\)_) when_ \(n\) _is even (resp. odd)._
* _(iv)_ \([\frac{n^{2}+4}{8}]\leq\max\{i_{1},i_{2},i_{3},i_{4}\}\leq[\frac{n^{2}-2n+2}{2}]\) _and_ \(i_{1}\leq[\frac{n^{2}-1}{8}]\)_._
* _(v) If there exists_ \(j\in\{1,2,3,4\}\) _such that_ \(i_{j}=[\frac{n^{2}-2n+2}{2}]\)_, then the configuration of centers of system (_4.1_) must be_ \[(i_{1};i_{2};i_{3};i_{4})=\left(0;[\frac{n-1}{2}];[\frac{n^{2}-2n+2}{2}];[\frac {n-1}{2}]\right).\]

Before proving Theorem 4.2, let us give some preliminaries. Note that the set consisting of HK-vector fields \(X_{hk}\in\Psi_{n,n}\) is generic in the space of all HK-vector fields \(X_{hk}\). Using arguments similar to those in the proof of Proposition 3.5, we obtain

**Proposition 4.3**.: _Suppose the HK-vector field \(X_{hk}^{*}\in\mathcal{Y}_{n,n}\setminus\Psi_{n,n}\) has \([\frac{n^{2}-1}{2}]\) centers. Then there exists a HK-vector field \(X_{hk}\in\Psi_{n,n}\) such that \(X_{hk}\) has the same configuration of centers as \(X_{hk}^{*}\)._

For convenience, we denote the set of vector fields \(X_{hk}\in\Psi_{n,n}\) of system (4.1) having \([\frac{n^{2}-1}{2}]\) centers by

\[\mathcal{HK}=\left\{X_{hk}:\ X_{hk}\in\Psi_{n,n},\ \text{system (\ref{eq:1}) has $[\frac{n^{2}-1}{2}]$ centers}\right\}.\]

By Proposition 4.3, we only need to consider the configurations of centers of \(X\in\mathcal{HK}\). By Lemma 3.3 and Lemma 3.4, we have the following proposition.

**Proposition 4.4**.: _Assume that the vector field \(X\in\mathcal{HK}\), that is, \(X\in\Psi_{n,n}\) and \(X\) has \([\frac{n^{2}-1}{2}]\) centers. Then_

* _(a) when_ \(n\) _is even,_ \(X\) _has_ \(\frac{n^{2}-2}{2}\) _elementary centers and_ \(\frac{n^{2}+2}{2}\) _elementary saddles in_ \(\mathbb{R}^{2}\)_, and six infinite critical points which are elementary nodes;_
* _(b) when_ \(n\) _is odd,_ \(X\) _has_ \(\frac{n^{2}-1}{2}\) _elementary centers and_ \(\frac{n^{2}+1}{2}\) _elementary saddles in_ \(\mathbb{R}^{2}\)_, and four infinite critical points which are elementary nodes._

To study \(X\in\mathcal{HK}\) in the complete plane \(\mathbb{R}^{2}\), including its behavior near infinity, it suffices to study \(X\) in the Poincare disc. This disc is divided into four sectors \(\mathcal{S}_{j}\) by the invariant lines \(x=0\) and \(y=0\) of \(X\), that is,

\[\mathcal{S}_{j}=\{(x,y):\ x=r\cos\theta,\ y=r\sin\theta,\ (j-1)\frac{\pi}{2} \leq\theta\leq j\frac{\pi}{2},\ 0\leq r\leq M,\ M\gg 1\},\]

where \(j=1,\cdots,4.\) Note that all the critical points on the boundaries \(\theta=(j-1)\frac{\pi}{2},0<r<M\) and \(\theta=j\frac{\pi}{2},0<r<M\) of \(\mathcal{S}_{j}\) are elementary saddles, and all the infinite critical points on the boundary \(r=M\) of \(\mathcal{S}_{j}\) are elementary nodes. Therefore, the sum of the indices at the critical points of \(X\) in the interior of \(\mathcal{S}_{j}\), denoted by \(\sum_{int_{j}}i\), can be characterized by the indices at the critical points on the boundary of \(\mathcal{S}_{j}\) as follows.

**Lemma 4.5**.: _Suppose that \(X\in\mathcal{HK}\), and \(X\) has \(s_{j}(X)\) saddles and \(n_{j}(X)\) nodes on the boundary of \(\mathcal{S}_{j}\) other than the three vertices of \(\mathcal{S}_{j}\): \((0,0)\), \((M\cos((j-1)\frac{\pi}{2}),M\sin((j-1)\frac{\pi}{2}))\) and \((M\cos(j\frac{\pi}{2}),M\sin(j\frac{\pi}{2}))\), \(j=1,\cdots,4\). Then_

\[s_{j}(X)=2\sum_{int_{j}}i+n_{j}(X),\ j=1,\cdots,4.\]

Proof.: Since \(X\in\mathcal{HK}\), by Proposition 4.4 we know that, for \(j=1,\cdots,4\), either \(n_{j}(X)=0\) or \(n_{j}(X)=1\) if \(n\) is even, and \(n_{j}(X)=0\) if \(n\) is odd. 
For simplicity, we let

\[U(x,y)=-F(x,y)-y\frac{\partial F}{\partial y},\ W(x,y)=F(x,y)+x\frac{\partial F }{\partial x}.\]

Consider the corresponding system (4.1) of \(X\) in the first sector \(\mathcal{S}_{1}\). By the transformation \((x,y)\mapsto(\sqrt{x},\sqrt{y})\) we have

\[\begin{split}\dot{x}&=\frac{1}{2}xU(x^{2},y^{2}),\\ \dot{y}&=\frac{1}{2}yW(x^{2},y^{2}).\end{split} \tag{4.3}\]

Since systems (4.1) and (4.3) are topologically conjugate in the sector \(\mathcal{S}_{1}\), the number and the topological classification of the finite and infinite critical points of system (4.3) and system (4.1) are the same in the sector \(\mathcal{S}_{1}\). Note that system (4.3) is invariant under the transformations \((x,y)\mapsto(-x,y)\), \((x,y)\mapsto(x,-y)\) and \((x,y)\mapsto(-x,-y)\). Therefore, system (4.3) can be regarded as defined on all of \(\mathbb{R}^{2}\), where it has \(4n_{1}(X)+4\) nodes at infinity and \(2s_{1}(X)+1\) saddles on the \(x\)-axis and \(y\)-axis. Applying the Poincare-Hopf theorem to system (4.3), we obtain that

\[2\sum_{f}i+\sum_{inf}i=2\left(-(2s_{1}(X)+1)+4\sum_{int_{1}}i\right)+4n_{1}(X) +4=2,\]

and it follows that \(s_{1}(X)=2\sum_{int_{1}}i+n_{1}(X)\). Using similar arguments it can be proved that \(s_{j}(X)=2\sum_{int_{j}}i+n_{j}(X)\) for \(j=2,3,4\); we omit the details to save space. The proof is finished.

Lemma 4.5 is a powerful tool for studying the configurations of centers for \(X\in\mathcal{HK}\). It tells us that the configurations of centers can be controlled by the configurations of all the saddles of \(X\), as follows.

**Corollary 4.6**.: _Suppose that \(X\in\mathcal{HK}\). If there exists a sector \(\mathcal{S}_{j}\) such that \(X\) has at least two (resp. one) saddles other than the origin \((0,0)\) on the \(x\)-axis and \(y\)-axis in \(\mathcal{S}_{j}\) when \(n\) is even (resp. odd), then there exists at least one center in the interior of this sector._

Now we are ready to prove Theorem 4.2.

Proof of Theorem 4.2.: Due to Proposition 4.3, we only need to prove Theorem 4.2 for \(X\in\mathcal{HK}\). It is clear that conclusion (i) of Theorem 4.2 comes directly from Proposition 4.4 and Corollary 4.6. Note that the arguments used to verify conclusions (ii)-(v) are similar for both even \(n\) and odd \(n\); thus, in the following we consider only the case that \(n\) is even.

We first prove conclusion (ii). By Lemma 2.5, we have

\[i_{j}\geq\sum_{int_{j}}i,\]

where \(i_{j}\) is the number of centers in the \(j\)th sector. Note that \(X\in\mathcal{HK}\), which has \(2(n-1)\) saddles on the set \(\{(x,y)\neq(0,0):xy=0\}\) and six nodes at infinity. By Lemma 4.5, we know that one of the following statements holds:

* (a) \(\sum_{int_{1}}i+\sum_{int_{3}}i=\sum_{int_{2}}i+\sum_{int_{4}}i+1=n-1\), when \(F_{n-1}(x,y)\) has a linear factor \(x+ky\) with \(k>0\);
* (b) \(\sum_{int_{1}}i+\sum_{int_{3}}i+1=\sum_{int_{2}}i+\sum_{int_{4}}i=n-1\), when \(F_{n-1}(x,y)\) has a linear factor \(x+ky\) with \(k<0\),

where \(F_{n-1}(x,y)\) is the homogeneous part of degree \(n-1\) of the polynomial \(F(x,y)\). Hence, conclusion (ii) holds.

Let us now prove conclusion (iii). If there are no centers in the first sector \(\mathcal{S}_{1}\), then the number of saddles on \(\{(x,0):x>0\}\cup\{(0,y):y>0\}\) is at most one by Corollary 4.6. From some calculations in the two cases that \(F_{n-1}(x,y)\) has a linear factor \(x+ky\) with \(k>0\) and with \(k<0\), respectively, one of the following statements holds:

* (a) if \(k>0\), \(i_{1}=\sum_{int_{1}}i=0\), \(\sum_{int_{3}}i=n-1\), \(\sum_{int_{2}}i=\sum_{int_{4}}i=\frac{n-2}{2}\);
* (b) if \(k<0\), \(i_{1}=\sum_{int_{1}}i=0\), \(\sum_{int_{3}}i=n-2\) and either \(\sum_{int_{2}}i=\frac{n}{2}\), \(\sum_{int_{4}}i=\frac{n-2}{2}\) or \(\sum_{int_{2}}i=\frac{n-2}{2}\), \(\sum_{int_{4}}i=\frac{n}{2}\).

So we have

\[i_{2}\geq\sum_{int_{2}}i\geq\frac{n-2}{2}=[\frac{n-1}{2}],\quad i_{3}\geq \sum_{int_{3}}i\geq n-2\]

when \(n\) is even. That is conclusion (iii).

To prove conclusion (iv), we assume that the number of centers of \(X\) is the maximum over the four sectors in some \(\mathcal{S}_{j}\); without loss of generality, we assume that

\[i_{3}=\max\{i_{1},i_{2},i_{3},i_{4}\}.\]

Then

\[\begin{split} i_{3}&\leq i_{1}+i_{3}=\frac{n^{2}-2}{2}-( i_{2}+i_{4})\leq\frac{n^{2}-2}{2}-(\sum_{int_{2}}i+\sum_{int_{4}}i)\\ &\leq\frac{n^{2}-2}{2}-(n-2)=\frac{n^{2}-2n+2}{2}.\end{split} \tag{4.4}\]

On the other hand, there exists a sector \(\mathcal{S}_{j}\) such that the number \(i_{j}\) of centers in \(\mathcal{S}_{j}\) satisfies \(i_{j}\geq\frac{[\frac{n^{2}-1}{2}]}{4}\); otherwise

\[[\frac{n^{2}-1}{2}]=\sum_{j=1}^{4}i_{j}<4\cdot\frac{[\frac{n^{2}-1}{2}]}{4}=[ \frac{n^{2}-1}{2}],\]

which is a contradiction. We now claim that

\[\left\{\frac{[\frac{n^{2}-1}{2}]}{4}\right\}=[\frac{n^{2}+4}{8}],\]

where \(\{m\}\) represents _the minimum integer which is not less than \(m\)_. In fact, since \(n\) is even, there exists an integer \(k\) such that either \(n=4k\) or \(n=4k+2\). We have

\[\left\{\frac{[\frac{(4k)^{2}-1}{2}]}{4}\right\} =\left\{\frac{(4k)^{2}-2}{8}\right\}=2k^{2}=[\frac{(4k)^{2}+4}{8} ],\text{ as }n=4k.\]
\[\left\{\frac{[\frac{(4k+2)^{2}-1}{2}]}{4}\right\} =\left\{\frac{16k^{2}+16k+2}{8}\right\}\]
\[=2k^{2}+2k+1=\frac{(4k+2)^{2}+4}{8},\text{ as }n=4k+2.\]

Hence, \([\frac{n^{2}+4}{8}]\leq i_{3}\). Using the same method, we can obtain that \(i_{1}\leq[\frac{n^{2}-1}{8}]\).

Finally, we prove (v). By inequality (4.4), \(i_{1}=0\) if \(i_{3}=\frac{n^{2}-2n+2}{2}\). Since \(i_{1}=\min\{i_{1},i_{2},i_{3},i_{4}\}\), we can always assume that \(i_{3}=\frac{n^{2}-2n+2}{2}\). Note that the inequalities in (4.4) then actually become equalities. Thus, we have

\[i_{2}+i_{4}=\sum_{int_{2}}i+\sum_{int_{4}}i=n-2,\quad i_{1}=0.\]

Then the following statements hold.

* (a) \(\sum_{int_{2}}i+\sum_{int_{4}}i=n-2\), which implies that \(F_{n-1}(x,y)\) has a linear factor \(x+ky\) with \(k>0\).
* (b) \(i_{2}+i_{4}=\sum_{int_{2}}i+\sum_{int_{4}}i\), which implies that there are no saddles in the interiors of \(\mathcal{S}_{2}\) and \(\mathcal{S}_{4}\). Hence, \(i_{2}=\sum_{int_{2}}i\) and \(i_{4}=\sum_{int_{4}}i\).
* (c) By Lemma 4.5 and statement (a), \(i_{1}=0\) implies that all the saddles on the \(x\)-axis and \(y\)-axis lie in \(\{(x,0):x\leq 0\}\cup\{(0,y):y\leq 0\}\). It can be calculated by Lemma 4.5 that \[\sum_{int_{1}}i=0,\,\sum_{int_{2}}i=\frac{n-2}{2},\,\sum_{int_{3}}i=n-1,\,\sum _{int_{4}}i=\frac{n-2}{2}.\]

From (b) and (c), we have \(i_{2}=i_{4}=\frac{n-2}{2}\). Thus, \(X\) has the configuration

\[(i_{1};i_{2};i_{3};i_{4})=(0;\frac{n-2}{2};\frac{n^{2}-2n+2}{2};\frac{n-2}{2})=( 0;[\frac{n-1}{2}];[\frac{n^{2}-2n+2}{2}];[\frac{n-1}{2}]).\]

Hence, Theorem 4.2 is verified.
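For \(n=3\), the constraints of Theorem 4.2 can be checked exhaustively by a few lines of code. The following brute-force sketch (ours, added for illustration) enumerates all configurations of \([\frac{n^{2}-1}{2}]=4\) centers in the normalized form \(i_{1}=\min\), \(i_{2}\leq i_{4}\), and keeps those satisfying conclusions (i)-(iv); only two survive, anticipating the Hamiltonian cubic case treated in the next section.

```python
# Enumerating the configurations allowed by Theorem 4.2 for n = 3.
from itertools import product

n = 3
C = (n*n - 1)//2        # 4 centers
survivors = []
for conf in product(range(C + 1), repeat=4):
    i1, i2, i3, i4 = conf
    if sum(conf) != C or i1 != min(conf) or i2 > i4:
        continue                                      # normalized representatives
    if 0 in (i2, i3, i4):                             # conclusion (i)
        continue
    if min(i1 + i3, i2 + i4) < n - 1:                 # conclusion (ii), n odd
        continue
    if i1 == 0 and (i2 < (n - 1)//2 or i3 < n - 1):   # conclusion (iii), n odd
        continue
    if not ((n*n + 4)//8 <= max(conf) <= (n*n - 2*n + 2)//2):   # conclusion (iv)
        continue
    survivors.append(conf)
print(survivors)   # [(0, 1, 2, 1), (1, 1, 1, 1)]
```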
## 5. Dynamics of cubic polynomial Kolmogorov systems with the maximum number of centers

In this section, we study the dynamics of cubic polynomial Kolmogorov vector fields \(Y_{k}\) having four centers, where \(Y_{k}=(xP(x,y),yQ(x,y))\) and \(P(x,y)\), \(Q(x,y)\) are any two quadratic polynomials. In particular, if the cubic polynomial Kolmogorov vector fields are Hamiltonian, the authors of [39] systematically investigated the number and type of all possible centers and their configurations, and left the open question _whether there are only two types of configurations of centers when the cubic polynomial Hamiltonian Kolmogorov vector fields \(X_{hk}\) have four centers_. Applying Theorem 4.2, we answer this open question affirmatively in the sense of equivalence, and obtain all global phase portraits of the cubic vector fields \(X_{hk}\) having four centers. If the cubic polynomial Kolmogorov vector fields \(Y_{k}\) are not Hamiltonian, we show that the cubic vector fields \(Y_{k}\) have a first integral, which is a well-defined elementary function on \(\mathbb{R}^{2}\) except on the set \(\{(x,y):xy=0\}\), and that there exist only three types of configurations of centers for \(Y_{k}\) in the equivalent sense. This reveals a difference between Hamiltonian and non-Hamiltonian integrable systems.

### Cubic polynomial Hamiltonian Kolmogorov systems

Consider the corresponding system of cubic Hamiltonian vector fields \(X_{hk}\):

\[\begin{split}\frac{dx}{dt}=&-x\left(\hat{F}_{2}(x,y )+y\frac{\partial\hat{F}_{2}}{\partial y}\right),\\ \frac{dy}{dt}=& y\left(\hat{F}_{2}(x,y)+x\frac{ \partial\hat{F}_{2}}{\partial x}\right),\end{split} \tag{5.1}\]

where \(\hat{F}_{2}(x,y)\) is any quadratic polynomial. Clearly, system (5.1) has at most four centers. The authors of [39] found two configurations \((1;1;1;1)\) and \((1;0;1;2)\) of centers when system (5.1) has four centers. According to the equivalence of configurations introduced in Section 4, it can be checked that the configuration \((1;0;1;2)\) of centers is equivalent to \((0;1;2;1)\). Thus, the open question proposed in [39] asks _whether there are only two configurations \((1;1;1;1)\) and \((0;1;2;1)\) of centers in the sense of equivalence when system (5.1) has four centers_. In the following we answer this open question affirmatively and obtain the global phase portraits of system (5.1).

**Theorem 5.1**.: _Suppose that system (5.1) has four centers. Then system (5.1) has only the two configurations \((0;1;2;1)\) and \((1;1;1;1)\) of centers in the sense of equivalence. Furthermore, the global dynamics of system (5.1) can be characterized as follows._

* _(i) System (5.1) has exactly nine finite critical points, of which four are centers and five are saddles, and all five saddles lie in the level set_ \(xy\hat{F}_{2}(x,y)=0\)_._
* _(ii) System (5.1) has exactly two pairs of infinite critical points, which correspond to infinity in the directions of the_ \(x\)_-axis and_ \(y\)_-axis. These infinite critical points are all elementary nodes._
* _(iii) Using the linear transformations (4.2) of the variables_ \((x,y)\) _and the time change_ \(t\mapsto-t\) _if necessary, system (5.1) has only two different topological types of global phase portraits in the Poincare disc, which are sketched in Figure 5.1 and Figure 5.2._

Figure 5.1. Global phase portrait of system (5.1) with the configuration \((0;1;2;1)\) of centers.

Figure 5.2. Global phase portrait of system (5.1) with the configuration \((1;1;1;1)\) of centers.

Proof.: Since \(n=3\) and system (5.1) has four centers, by conclusion (i) of Theorem 4.2 we know that system (5.1) has at least one center in \(\mathcal{S}_{j}\) for \(j=2,3,4\), that is, \(i_{2}\geq 1\), \(i_{3}\geq 1\) and \(i_{4}\geq 1\). If system (5.1) has one center in \(\mathcal{S}_{1}\), that is, \(i_{1}=1\), then system (5.1) has the unique configuration \((1;1;1;1)\) of four centers. 
If system (5.1) has no centers in \(\mathcal{S}_{1}\), that is, \(i_{1}=0\), then \(i_{3}\geq 3-1=2\) and \(i_{2}+i_{4}\geq 3-1=2\) by conclusion (ii) of Theorem 4.2. Thus, \(i_{2}=1\), \(i_{3}=2\), \(i_{4}=1\). This implies that system (5.1) has the unique configuration \((0;1;2;1)\) of four centers. Summarizing the above analysis, we obtain that system (5.1) has only the two configurations \((1;1;1;1)\) and \((0;1;2;1)\) of four centers.

We now discuss the global dynamics of system (5.1) with four centers. Since system (5.1) has four centers, the equations

\[\hat{F}_{2}(x,y)+y\frac{\partial\hat{F}_{2}}{\partial y}=0,\ \hat{F}_{2}(x,y)+x \frac{\partial\hat{F}_{2}}{\partial x}=0\]

have four solutions in the interiors of the sectors \(\mathcal{S}_{j}\), \(j\in\{1,2,3,4\}\). Thus, the quadratic homogeneous parts of the polynomials

\[\hat{F}_{2}(x,y)+y\frac{\partial\hat{F}_{2}}{\partial y}\ \ \text{and}\ \ \hat{F}_{2}(x,y)+x\frac{\partial\hat{F}_{2}}{\partial x}\]

have no common linear factors. This implies that \(-x(\hat{F}_{2}(x,y)+y\frac{\partial\hat{F}_{2}}{\partial y})\) and \(y(\hat{F}_{2}(x,y)+x\frac{\partial\hat{F}_{2}}{\partial x})\) have no common linear factors. Hence, the vector field \(X_{hk}\) of system (5.1) belongs to \(\Psi_{3,3}\) and to \(\mathcal{HK}\). By conclusion (b) of Proposition 4.4, we obtain that system (5.1) has exactly nine finite critical points, of which four are centers and five are saddles, and four infinite critical points which are all elementary nodes. On the other hand, by Lemma 3.3, \(\hat{F}_{2}(x,y)\) has no real linear factors; otherwise \(xy\hat{F}_{2}(x,y)\) would have three different real linear factors, and then system (5.1) would have at most \([\frac{3^{2}+1-3}{2}]=3\) centers by Theorem 3.1, contradicting the fact that system (5.1) has four centers. Therefore, \(xy\hat{F}_{2}(x,y)\) has only the two different real linear factors \(x\) and \(y\), which correspond to the four infinite critical points in the directions of the \(x\)-axis and \(y\)-axis. And it can be seen that the five saddles lie in the level set \(xy\hat{F}_{2}(x,y)=0\). It follows that conclusions (i) and (ii) hold.

Note that \(\hat{F}_{2}(x,y)\) has no real linear factor. This implies that the quadratic curve \(\hat{F}_{2}(x,y)=0\) is an ellipse. The saddle \((0,0)\) is the common point of the two lines \(x=0\) and \(y=0\), and the other four saddles are the common points of the ellipse with the lines \(x=0\) and \(y=0\). There are only two possibilities for an ellipse to have four common points with the two lines \(x=0\) and \(y=0\), and they correspond to the configurations \((0;1;2;1)\) and \((1;1;1;1)\) in the sense of equivalence. In either case, there are four compact regions whose boundaries are formed by \(x=0\), \(y=0\) or \(\hat{F}_{2}(x,y)=0\). On the boundary of each compact region, \(xy\hat{F}_{2}(x,y)\equiv 0\); thus in the interior of each compact region there is one extreme point, which must be a center. Apart from these four centers and five saddles, there are no other critical points. Hence one easily obtains that there are only two different topological types of global phase portraits of system (5.1) in the Poincare disc, by using the linear transformations (4.2) of the variables \((x,y)\) and the time change \(t\mapsto-t\) if necessary; see Figure 5.1 and Figure 5.2, respectively. 
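A concrete instance of Theorem 5.1 can be verified symbolically. In the following SymPy sketch we take \(\hat{F}_{2}=x^{2}+y^{2}-1\) (our own choice of ellipse), so that \(H=xy(x^{2}+y^{2}-1)\); the script confirms nine finite critical points, five saddles on the level set \(xy\hat{F}_{2}=0\) and four centers at \((\pm\frac{1}{2},\pm\frac{1}{2})\), one in each open quadrant, i.e. the configuration \((1;1;1;1)\):

```python
# Checking Theorem 5.1 on H = x y (x^2 + y^2 - 1).
import sympy as sp

x, y = sp.symbols('x y', real=True)
H = x*y*(x**2 + y**2 - 1)
Hx, Hy = H.diff(x), H.diff(y)

centers, saddles = [], []
for p in sp.solve([Hx, Hy], [x, y], dict=True):
    # Hessian determinant: > 0 at a center, < 0 at a saddle of (-Hy, Hx).
    hess = sp.hessian(H, (x, y)).subs(p).det()
    (centers if hess > 0 else saddles).append((p[x], p[y]))

print(len(centers), len(saddles))   # 4 5
print(sorted(centers))   # [(-1/2,-1/2), (-1/2,1/2), (1/2,-1/2), (1/2,1/2)]
```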
**Remark 5.1** Even when system (5.1) has four centers, it can be verified that the level set \(xy\hat{F}_{2}(x,y)=h\) does not have four ovals in \(\mathbb{R}^{2}\) for any \(h\in\mathbb{R}\), while there exists some \(h_{0}\neq 0\) such that the curve \(xy\hat{F}_{2}(x,y)=h_{0}\) has three ovals in \(\mathbb{R}^{2}\) in the case of Figure 5.1 and two ovals in \(\mathbb{R}^{2}\) in the case of Figure 5.2, respectively.

### Cubic polynomial Kolmogorov systems

Consider the corresponding system of cubic Kolmogorov vector fields \(Y_{k}\):

\[\begin{split}\frac{dx}{dt}=& xP(x,y),\\ \frac{dy}{dt}=& yQ(x,y),\end{split} \tag{5.2}\]

where \(P(x,y)\) and \(Q(x,y)\) are any two quadratic polynomials. Notice that \(x=0\) and \(y=0\) are two invariant lines of system (5.2). Hence, if system (5.2) has centers, they must lie in the interiors of the sectors \(\mathcal{S}_{j}\), \(j=1,2,3,4\). A natural question is whether system (5.2) has limit cycles when the number of its centers attains the maximum four, and how many configurations of centers system (5.2) has when it is not Hamiltonian and has four centers. We answer both questions completely in this subsection.

**Theorem 5.2**.: _If system (5.2) has four centers, then it has an elementary first integral and no limit cycles. All the possible configurations of centers are \((1;1;1;1)\), \((0;1;2;1)\) and \((0;1;1;2)\) in the sense of equivalence._

In order to prove Theorem 5.2, we first study the integrability of system (5.2) when it has four centers.

**Proposition 5.3**.: _If system (5.2) has four centers, then system (5.2) is integrable, that is, there exists an elementary first integral of system (5.2) in \(\mathbb{R}^{2}\setminus\{xy=0\}\). Consequently, system (5.2) has no limit cycles._

Proof.: If system (5.2) has four centers at \(p_{i}=(x_{i},y_{i})\), \(i=1,2,3,4\), then \(x_{i}y_{i}\neq 0\) and

\[P(x_{i},y_{i})=0,\ Q(x_{i},y_{i})=0,\ i=1,2,3,4.\]

Hence, by Bezout's Theorem, the quadratic algebraic curves \(P(x,y)=0\) and \(Q(x,y)=0\) have exactly four intersection points, each of multiplicity one. Note that the divergence of system (5.2) at the center \(p_{i}=(x_{i},y_{i})\) is zero, that is,

\[P(x_{i},y_{i})+x_{i}\frac{\partial P(x_{i},y_{i})}{\partial x}+Q(x_{i},y_{i})+ y_{i}\frac{\partial Q(x_{i},y_{i})}{\partial y}=0,\ 1\leq i\leq 4. \tag{5.3}\]

Therefore, according to the Max Noether Fundamental Theorem (see [19] for details), the quadratic polynomial \(P(x,y)+x\frac{\partial P(x,y)}{\partial x}+Q(x,y)+y\frac{\partial Q(x,y)}{ \partial y}\) can be linearly represented by the polynomials \(P(x,y)\) and \(Q(x,y)\); in other words, there exist real constants \(\alpha\) and \(\beta\) such that for all \((x,y)\in\mathbb{R}^{2}\),

\[P(x,y)+x\frac{\partial P(x,y)}{\partial x}+Q(x,y)+y\frac{\partial Q(x,y)}{ \partial y}=(1-\alpha)P(x,y)+(1-\beta)Q(x,y).\]

It is easy to check that \(x^{\alpha-1}y^{\beta-1}\) is an integrating factor of system (5.2) in \(\mathbb{R}^{2}\) except on the \(x\)-axis and \(y\)-axis, that is, there is a function \(\mathcal{F}(x,y)\) such that

\[\frac{\partial\mathcal{F}(x,y)}{\partial y}=-x^{\alpha}y^{\beta-1}P(x,y),\quad \frac{\partial\mathcal{F}(x,y)}{\partial x}=x^{\alpha-1}y^{\beta}Q(x,y).\]

This function \(\mathcal{F}(x,y)\) is called _a first integral_ of system (5.2). Suppose that

\[P(x,y)=\sum_{i+j=0}^{2}p_{ij}x^{i}y^{j},\quad Q(x,y)=\sum_{i+j=0}^{2}q_{ij}x^{i }y^{j},\,p_{ij},q_{ij}\in\mathbb{R},\,i,j\in\{0,1,2\}.\]

Then

\[(\alpha+i)p_{ij}+(\beta+j)q_{ij}=0,\quad 0\leq i+j\leq 2. \tag{5.4}\]
We now discuss the form of \(\mathcal{F}(x,y)\) depending on the values of \(\alpha\) and \(\beta\). If \(\alpha\not\in\{0,-1,-2\}\), then \(\alpha+i\neq 0\) since \(0\leq i\leq 2\). Thus, the first integral of system (5.2) is

\[\mathcal{F}(x,y)=x^{\alpha}y^{\beta}R(x,y),\quad R(x,y)=\sum_{i+j=0}^{2}\frac{ q_{ij}}{\alpha+i}x^{i}y^{j}. \tag{5.5}\]

Similarly, if \(\beta\not\in\{0,-1,-2\}\), then \(\beta+j\neq 0\) since \(0\leq j\leq 2\), so the first integral of system (5.2) is

\[\mathcal{F}(x,y)=x^{\alpha}y^{\beta}R(x,y),\quad R(x,y)=-\sum_{i+j=0}^{2}\frac {p_{ij}}{\beta+j}x^{i}y^{j}. \tag{5.6}\]

If \(\alpha+\beta\not\in\{0,-1,-2\}\), then \((\alpha+i)^{2}+(\beta+j)^{2}\neq 0\). Hence, the first integral of system (5.2) is

\[\mathcal{F}(x,y) =x^{\alpha}y^{\beta}R(x,y),\]
\[R(x,y) =\sum_{i+j=0,\alpha+i\neq 0}^{2}\frac{q_{ij}}{\alpha+i}x^{i}y^{j}- \sum_{i+j=0,\alpha+i=0}^{2}\frac{p_{ij}}{\beta+j}x^{i}y^{j}. \tag{5.7}\]

If \(\alpha,\beta,\alpha+\beta\in\{0,-1,-2\}\), then the first integral of system (5.2) may contain logarithmic functions. Concretely, if \((\alpha,\beta)\in\{(0,0),(0,-1),(0,-2),(-1,-1)\}\), then the first integrals of system (5.2) are

\[\mathcal{F}(x,y) =-p_{00}\ln|y|-(p_{01}+p_{11}x)y-\tfrac{p_{02}y^{2}}{2}+q_{00}\ln| x|+q_{10}x+\tfrac{q_{20}x^{2}}{2},\]
\[\mathcal{F}(x,y) =-p_{01}\ln|y|+y^{-1}(p_{00}+p_{10}x+p_{20}x^{2})-p_{02}y+q_{11}x +q_{01}\ln|x|,\]
\[\mathcal{F}(x,y) =\tfrac{y^{-2}}{2}(p_{00}+p_{10}x+p_{20}x^{2})+p_{11}xy^{-2}-p_{0 2}\ln|y|+q_{02}\ln|x|,\]
\[\mathcal{F}(x,y) =x^{-1}y^{-1}(p_{00}+p_{10}x+p_{20}x^{2}-p_{02}y^{2})-p_{11}\ln| y|+q_{11}\ln|x|, \tag{5.8}\]

in \(\mathbb{R}^{2}\setminus(\{(x,y):x=0\}\cup\{(x,y):y=0\})\), respectively. For the other cases \((\alpha,\beta)\in\{(-1,0),(-2,0)\}\), one can calculate the first integral of system (5.2) easily by symmetry; we omit them here. Hence, system (5.2) is integrable, with a first integral \(\mathcal{F}(x,y)\) defined almost everywhere in \(\mathbb{R}^{2}\).

As we have observed in Proposition 5.3, the first integral of system (5.2) may involve polynomials, rational functions or logarithmic functions. The following lemma shows that, in studying the configurations of centers, it is not necessary to consider those systems (5.2) whose first integrals contain logarithmic functions.

**Lemma 5.4**.: _Assume that a system (5.2) with four centers has a first integral containing logarithmic functions. Then there exists another system of the form (5.2) with a first integral free of logarithmic functions such that the two systems have the same configuration of centers._

Proof.: Since the proofs of all the cases are similar, we give the detailed proof only in the case \((\alpha,\beta)=(0,0)\). Furthermore, we assume \(p_{00}q_{00}\neq 0\); otherwise the problem becomes easier. Hence, system (5.2) with four centers has the form

\[\begin{split}\frac{dx}{dt}=& x(p_{00}+p_{01}y+p_{11 }xy+p_{02}y^{2}),\\ \frac{dy}{dt}=& y(q_{00}+q_{10}x-p_{11}xy+q_{20}x^{2 }),\end{split} \tag{5.9}\]

since system (5.2) has the first integral \(\mathcal{F}(x,y)\) given by the first expression in (5.8) when \((\alpha,\beta)=(0,0)\). Consider the small perturbation of system (5.9)

\[\begin{split}\frac{dx}{dt}=& x(p_{00}+p_{01}y+p_{11 }xy+p_{02}y^{2})-\varepsilon p_{00}x\left(q_{10}x-p_{01}y+\frac{q_{20}x^{2}}{2 }-p_{11}xy-\frac{p_{02}y^{2}}{2}\right),\\ \frac{dy}{dt}=& y(q_{00}+q_{10}x-p_{11}xy+q_{20}x^{ 2})+\varepsilon q_{00}x\left(q_{10}x-p_{01}y+\frac{q_{20}x^{2}}{2}-p_{11}xy- \frac{p_{02}y^{2}}{2}\right),\end{split} \tag{5.10}\]

where \(0<\varepsilon\ll 1\). 
The first integral of system (5.10) is

\[\mathcal{F}(x,y)=\frac{y^{-\varepsilon p_{00}}x^{\varepsilon q_{00}}}{ \varepsilon}\left(1+\varepsilon(q_{10}x-p_{01}y)+\varepsilon(\frac{q_{20}x^{ 2}}{2}-p_{11}xy-\frac{p_{02}y^{2}}{2})\right). \tag{5.11}\]

Since \(\varepsilon>0\) is sufficiently small, in a small neighborhood of each center of system (5.9), system (5.10) has a critical point, which is either a center or a focus. Note that the centers of system (5.10) lie in the region \(xy\neq 0\), and in this region system (5.10) has the analytic first integral \(\mathcal{F}(x,y)\) in (5.11). Thus, each such critical point of system (5.10) must be a center. This implies that system (5.10) also has four centers, and the configuration of centers of system (5.10) is the same as that of system (5.9). The proof is complete.

From now on, we consider only systems (5.2) having a first integral of the form

\[\mathcal{F}(x,y)=x^{\alpha}y^{\beta}R(x,y),\quad R(x,y)=\sum_{i+j=0}^{2}r_{ij} x^{i}y^{j}. \tag{5.12}\]

Define

\[\aleph=\left\{\mathcal{F}(x,y)\ \text{of the form (5.12)}:\ \text{each of the three polynomials}\ r_{00}+r_{10}x+r_{20}x^{2},\ r_{00}+r_{01}y+r_{02}y^{2},\ r_{20}x^{2}+r_{11}xy+r_{02}y^{2}\ \text{has no multiple factors, and}\ \alpha\beta(\alpha+\beta+2)r_{00}r_{20}r_{02}\neq 0\right\}.\]

The next result shows that, in studying the configurations of centers of system (5.2), we only need to consider first integrals \(\mathcal{F}(x,y)\in\aleph\).

**Lemma 5.5**.: _If system (5.2) has four centers, then there exists another system of the form (5.2) with a first integral \(\mathcal{F}(x,y)\in\aleph\) such that the two systems have the same configuration of centers._

Proof.: By Lemma 5.4, we only need to consider system (5.2) with the first integral (5.12). Firstly, we show that \(\alpha\beta(\alpha+\beta+2)\neq 0\) if system (5.2) has four centers. In fact, from the first integral (5.12), system (5.2) has the form

\[\begin{split}\frac{dx}{dt}=& xP(x,y)=x(-\beta R(x,y)-y \frac{\partial R}{\partial y}(x,y)),\\ \frac{dy}{dt}=& yQ(x,y)=y(\alpha R(x,y)+x\frac{ \partial R}{\partial x}(x,y)).\end{split} \tag{5.13}\]

If \(\alpha\beta=0\), for example \(\alpha=0\), then \(Q(x,y)=x\frac{\partial R}{\partial x}(x,y)\). Thus, by Bezout's Theorem, \(P(x,y)=0,Q(x,y)=0\) have at most two isolated zeros in the interiors of the four quadrants of \(\mathbb{R}^{2}\). That means system (5.2) has at most two centers, contradicting the assumption of four centers. If \(\alpha+\beta+2=0\), we have

\[Q_{2}(x,y)-P_{2}(x,y)=\alpha R_{2}(x,y)+\beta R_{2}(x,y)+x\frac{\partial R_{2 }}{\partial x}+y\frac{\partial R_{2}}{\partial y}=(\alpha+\beta+2)R_{2}(x,y)=0,\]

where \(P_{2}(x,y)\), \(Q_{2}(x,y)\) and \(R_{2}(x,y)\) are the quadratic homogeneous parts of the polynomials \(P(x,y)\), \(Q(x,y)\) and \(R(x,y)\), respectively. Therefore, \(Q(x,y)-P(x,y)\) is a linear polynomial. Hence \(P(x,y)=Q(x,y)=0\), which is equivalent to \(P(x,y)=Q(x,y)-P(x,y)=0\), has at most two isolated zeros. This also contradicts the fact that there are four centers.

We now consider the case \(\mathcal{F}(x,y)\not\in\aleph\). If \(r_{00}r_{20}r_{02}=0\), then we use arguments similar to those in the proof of Lemma 5.4 to find a system whose first integral is in \(\aleph\). For example, we consider a small perturbation of the Kolmogorov system (5.2) having the first integral

\[\mathcal{F}_{\varepsilon}(x,y)=x^{\alpha}y^{\beta}(R(x,y)+\varepsilon(1+x^{2} +y^{2})),\]

where \(0<\varepsilon\ll 1\). 
This perturbed Kolmogorov system has the same configuration of centers as the original system, and \((r_{00}+\varepsilon)(r_{20}+\varepsilon)(r_{02}+\varepsilon)\neq 0\). Next we consider system (5.2) with \(r_{00}r_{20}r_{02}\neq 0\) but such that at least one of \(r_{00}+r_{10}x+r_{20}x^{2}\), \(r_{00}+r_{01}y+r_{02}y^{2}\) or \(r_{20}x^{2}+r_{11}xy+r_{02}y^{2}\) has a multiple factor, that is,

\[(r_{10}^{2}-4r_{00}r_{20})(r_{01}^{2}-4r_{00}r_{02})(r_{11}^{2}-4r_{20}r_{02})=0.\]

Then, using the perturbation technique again, we consider a small perturbation of this system such that the perturbed system has the first integral

\[\mathcal{F}_{\varepsilon}(x,y)=x^{\alpha}y^{\beta}(R(x,y)+\varepsilon(x+y+xy)),\]

and has the same configuration of centers as the original system. For the perturbed Kolmogorov system, we have

\[\left(r_{10}+\varepsilon\right)^{2}-4r_{00}r_{20}\neq 0,\ \left(r_{01}+\varepsilon \right)^{2}-4r_{00}r_{02}\neq 0,\ \left(r_{11}+\varepsilon\right)^{2}-4r_{20}r_{02}\neq 0.\]

This implies that for any system (5.2) having four centers, there exists a Kolmogorov system with the same configuration of centers whose first integral satisfies \(\mathcal{F}_{\varepsilon}(x,y)\in\aleph\). The proof is complete.

From now on, we only consider system (5.2) with a first integral \(\mathcal{F}(x,y)\in\aleph\). We first determine the topological classification of the critical points of system (5.2) on the \(x\)-axis and \(y\)-axis.

**Lemma 5.6**.: _Suppose that system (5.2) has a first integral \(\mathcal{F}(x,y)\in\aleph\). Then all its finite and infinite critical points are elementary. Furthermore, the following conclusions hold._

* _(i) If_ \(\alpha<0\)_, then all the critical points of system (5.2) on the_ \(y\)_-axis must be nodes except the origin_ \((0,0)\)_;_
* _(ii) If_ \(\beta<0\)_, then all the critical points of system (5.2) on the_ \(x\)_-axis must be nodes except the origin_ \((0,0)\)_;_
* _(iii) If_ \(\alpha>0\) _and_ \(\beta>0\)_, then all the infinite critical points of system (5.2) must be nodes;_
* _(iv) If_ \(\alpha\beta<0\)_, then the origin_ \((0,0)\) _is a node; if_ \(\alpha\beta>0\)_, then the origin_ \((0,0)\) _is a saddle._

Proof.: These conclusions can be proved directly by calculating the Jacobian matrix at the corresponding critical points. Since the arguments in the proofs of conclusions (i)-(iv) are similar, we prove only conclusion (i) to save space. We just mention that the calculation at the infinite critical points for conclusion (iii) requires the Poincare compactification used in the proof of Theorem 3.1.

We now prove conclusion (i). Suppose \(\alpha<0\) and system (5.2) has a critical point \(p=(0,y^{*})\) with \(y^{*}\neq 0\) on the \(y\)-axis. By equation (5.13), we have

\[Q(0,y^{*})=\alpha R(0,y^{*})=0.\]

Further, the Jacobian matrix of system (5.13) at the point \(p=(0,y^{*})\) is

\[\begin{bmatrix}-y^{*}\frac{\partial R}{\partial y}(0,y^{*})&0\\ (\alpha+1)y^{*}\frac{\partial R}{\partial x}(0,y^{*})&\alpha y^{*}\frac{ \partial R}{\partial y}(0,y^{*})\end{bmatrix}.\]

Since \(R(0,y)=r_{00}+r_{01}y+r_{02}y^{2}\) has no multiple factors, \(y=y^{*}\) is a simple root of \(R(0,y)\), i.e. \(\frac{\partial R}{\partial y}(0,y^{*})\neq 0\). The two eigenvalues \(-y^{*}\frac{\partial R}{\partial y}(0,y^{*})\) and \(\alpha y^{*}\frac{\partial R}{\partial y}(0,y^{*})\) are therefore nonzero, and they have the same sign because \(\alpha<0\). It follows that \((0,y^{*})\) is an elementary node. 
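The computation in this proof can be replayed on a concrete instance. In the SymPy sketch below (our own choice \(\alpha=-1\), \(\beta=1\), \(R=1-3y+y^{2}+x+x^{2}\), so that \(R(0,y)\) has the two simple roots \(y=(3\pm\sqrt{5})/2\)) both eigenvalues at each critical point on the \(y\)-axis indeed have the same sign, as claimed in conclusion (i):

```python
# Checking Lemma 5.6 (i) on one instance of system (5.13).
import sympy as sp

x, y = sp.symbols('x y', real=True)
alpha, beta = -1, 1
R = 1 - 3*y + y**2 + x + x**2

f = x*(-beta*R - y*R.diff(y))    # x' = x P(x,y), cf. (5.13)
g = y*( alpha*R + x*R.diff(x))   # y' = y Q(x,y)
J = sp.Matrix([[f.diff(x), f.diff(y)],
               [g.diff(x), g.diff(y)]])

for ystar in sp.solve(R.subs(x, 0), y):          # y = (3 +- sqrt(5))/2
    evs = J.subs({x: 0, y: ystar}).eigenvals(multiple=True)
    print([sp.sign(e) for e in evs])   # two equal signs: an elementary node
```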
Using the same notation as in Section 4, we say that system (5.2) has the configuration \((i_{1};i_{2};i_{3};i_{4})\) of centers if there are \(i_{j}\) centers in the interior of \(\mathcal{S}_{j}\), where \(i_{j}\geq 0\) and \(i_{1}+i_{2}+i_{3}+i_{4}=4\). Since the first integral \(\mathcal{F}(x,y)\) of system (5.2) belongs to \(\aleph\), suppose that the polynomials \(R(x,0)=r_{00}+r_{10}x+r_{20}x^{2}\), \(R(0,y)=r_{00}+r_{01}y+r_{02}y^{2}\) and \(R_{2}(1,y)=r_{20}+r_{11}y+r_{02}y^{2}\) have \(r_{x}^{+},r_{y}^{+},r_{inf}^{+}\) positive real roots and \(r_{x}^{-},r_{y}^{-},r_{inf}^{-}\) negative real roots, respectively. This means that system (5.2) has exactly \(r_{x}^{+}\) critical points on the positive \(x\)-axis and \(r_{x}^{-}\) critical points on the negative \(x\)-axis, \(r_{y}^{+}\) critical points on the positive \(y\)-axis and \(r_{y}^{-}\) critical points on the negative \(y\)-axis, \(r_{inf}^{+}+2\) infinite critical points in the sector \(\mathcal{S}_{1}\) and \(r_{inf}^{-}+2\) infinite critical points in the sector \(\mathcal{S}_{2}\). Obviously, \(r_{x}^{+}+r_{x}^{-},r_{y}^{+}+r_{y}^{-},r_{inf}^{+}+r_{inf}^{-}\in\{0,2\}\).

**Lemma 5.7**.: \(\max\{i_{1},i_{2},i_{3},i_{4}\}\leq 2\)_._

Proof.: Note that \(\max\{i_{1},i_{2},i_{3},i_{4}\}\in\{i_{3},i_{4}\}\). Without loss of generality, let \(i_{3}=\max\{i_{1},i_{2},i_{3},i_{4}\}\). Consider the system

\[\begin{split}\frac{du}{dt}&=-\frac{1}{2}uP(-u^{2}, -v^{2}),\\ \frac{dv}{dt}&=-\frac{1}{2}vQ(-u^{2},-v^{2}).\end{split} \tag{5.14}\]

System (5.14) is topologically conjugate to system (5.2) in the sector \(\mathcal{S}_{3}\) via the transformation \(x=-u^{2},y=-v^{2}\). Hence, all the finite and infinite critical points of system (5.14) and system (5.2) have the same topological classification in \(\mathcal{S}_{3}\). On the other hand, system (5.14) is invariant under the transformations \((u,v)\mapsto(-u,v)\) and \((u,v)\mapsto(u,-v)\). This implies that, except for the origin \((0,0)\), system (5.14) has \(2r_{x}^{-}\) critical points on the \(x\)-axis, \(2r_{y}^{-}\) critical points on the \(y\)-axis and \(4+4r_{inf}^{+}\) infinite critical points in \(\mathbb{R}^{2}\).

If \(\alpha<0,\beta<0\), then the origin is a saddle and the other critical points on the \(x\)-axis and \(y\)-axis are all nodes. By the Poincare-Hopf theorem and Lemma 5.6, we have

\[4i_{3}+2r_{x}^{-}+2r_{y}^{-}-1=\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq\frac{1 }{2}(2+4+4r_{inf}^{+})\leq 7,\]

which implies that \(i_{3}\leq 2\). If \(\alpha<0,\beta>0\), then the origin is a node and the critical points on the \(y\)-axis are all nodes; thus

\[4i_{3}-2r_{x}^{-}+2r_{y}^{-}+1\leq\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq 7,\]

which implies that \(i_{3}\leq 2\). The case \(\alpha>0,\beta<0\) follows by symmetry. If \(\alpha>0\) and \(\beta>0\), then the origin is a saddle and all the infinite critical points are nodes; thus

\[4i_{3}-2r_{x}^{-}-2r_{y}^{-}-1\leq\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)=\frac{1 }{2}(2-(4+4r_{inf}^{+}))\leq-1,\]

which implies that \(i_{3}\leq 2\).

**Lemma 5.8**.: \(i_{3}+i_{4}\leq 3\)_._

Proof.: Consider the system

\[\begin{split}\frac{du}{dt}&=\frac{1}{2}uP(u^{2},-v^{2}),\\ \frac{dv}{dt}&=\frac{1}{2}vQ(u^{2},-v^{2}).\end{split} \tag{5.15}\]

System (5.15) is topologically conjugate to system (5.2) in the sector \(\mathcal{S}_{4}\) via the transformation \(x=u^{2},y=-v^{2}\). Note that, except for the origin, system (5.15) has \(2r_{x}^{+}\) critical points on the \(x\)-axis, \(2r_{y}^{-}\) critical points on the \(y\)-axis and \(4+4r_{inf}^{-}\) infinite critical points. If \(\alpha<0\) and \(\beta<0\), by the Poincare-Hopf theorem and Lemma 5.6,

\[4i_{4}-1\leq 4i_{4}+2r_{x}^{+}+2r_{y}^{-}-1=\sum_{f}i=\frac{1}{2}(2-\sum_{ inf}i)\leq 3+2r_{inf}^{-}.\]

So \(i_{4}\leq 1+\frac{1}{2}r_{inf}^{-}\). 
Similarly, by the same discussion applied to system (5.14), we have \(i_{3}\leq 1+\frac{1}{2}r_{inf}^{+}\). Hence, we have \[i_{3}+i_{4}\leq 2+\frac{1}{2}(r_{inf}^{+}+r_{inf}^{-})\leq 3.\] If \(\alpha<0\) and \(\beta>0\), we have \[4i_{4}-2r_{x}^{+}+1\leq 4i_{4}-2r_{x}^{+}+2r_{y}^{-}+1\leq\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)\leq 3+2r_{inf}^{-}.\] Thus, \(i_{4}\leq\frac{1}{2}+\frac{1}{2}r_{inf}^{-}+\frac{1}{2}r_{x}^{+}\). Similarly, we have \(i_{3}\leq\frac{1}{2}+\frac{1}{2}r_{inf}^{+}+\frac{1}{2}r_{x}^{-}\) from the discussion on system (5.14). Hence, we have \[i_{3}+i_{4}\leq 1+\frac{1}{2}(r_{inf}^{+}+r_{inf}^{-})+\frac{1}{2}(r_{x}^{+}+r_{x}^{-})\leq 3.\] By symmetry, we obtain \(i_{3}+i_{4}\leq 3\) in the case \(\alpha>0,\beta<0\). Lastly, we consider the case \(\alpha>0\) and \(\beta>0\). Then we have \[4i_{4}-2r_{x}^{+}-5\leq 4i_{4}-2r_{x}^{+}-2r_{y}^{-}-1\leq\sum_{f}i=\frac{1}{2}(2-\sum_{inf}i)=-1-2r_{inf}^{-}\leq-1.\] Hence, \(i_{4}\leq 1+\frac{1}{2}r_{x}^{+}\). Similarly, we obtain \(i_{3}\leq 1+\frac{1}{2}r_{x}^{-}\). Thereby \[i_{3}+i_{4}\leq 2+\frac{1}{2}(r_{x}^{+}+r_{x}^{-})\leq 3.\] Similarly to the proof of Lemma 5.8, we have the following. **Lemma 5.9**.: \(\max\{i_{1}+i_{3},i_{2}+i_{4}\}\leq 3\)_._ Finally, we are in a position to prove Theorem 5.2. Proof of Theorem 5.2.: Recall that \(i_{1}=\min\{i_{1},i_{2},i_{3},i_{4}\}\), \(i_{2}\leq i_{4}\) and \(i_{1}+i_{2}+i_{3}+i_{4}=4\). If \(i_{1}=1\), then \(i_{1}=i_{2}=i_{3}=i_{4}=1\). Therefore, system (5.2) has the configuration \((1;1;1;1)\) of centers. In [39], the authors gave an HK system (5.1) with the configuration \((1;1;1;1)\) of centers. Here we give the following non-Hamiltonian Kolmogorov system \[\begin{split}\frac{dx}{dt}&=x(1-x^{2}-3y^{2}),\\ \frac{dy}{dt}&=2y(-1+2x^{2}+y^{2}),\end{split} \tag{5.16}\] which has the first integral \[\mathcal{F}_{1}(x,y)=x^{2}y(x^{2}+y^{2}-1),\] and the four centers \(p_{i}=(x_{i},y_{i})\), \(i=1,2,3,4\), of system (5.16) are \[(x_{i},y_{i})\thickapprox(0.63,0.45),\,(-0.63,0.45),\,(-0.63,-0.45),\,(0.63,-0.45).\] See the first diagram in Figure 5.3. If \(i_{1}=0\), then by Lemma 5.8 and Lemma 5.9, all the possible different configurations of centers are \((0;1;1;2)\) and \((0;1;2;1)\). Note that the configuration \((0;1;2;1)\) of centers can be realized by HK system (5.1) in [39], while Theorem 5.1 shows that the configuration \((0;1;1;2)\) of centers cannot be realized by HK system (5.1). However, we can find a non-Hamiltonian Kolmogorov system \[\begin{split}\frac{dx}{dt}&=x(3-3x+8y+3x^{2}+6xy+y^{2}),\\ \frac{dy}{dt}&=\frac{1}{2}y(1-3x+4y+5x^{2}+9xy+y^{2}),\end{split} \tag{5.17}\] which has the first integral \[\mathcal{F}_{2}(x,y)=\frac{(x^{2}+3xy+y^{2}-x+4y+1)\sqrt{|x|}}{y^{3}},\ \ \forall y\neq 0,\] and the four centers \(p_{i}=(x_{i},y_{i})\), \(i=1,2,3,4\), of system (5.17) are \[(x_{i},y_{i})\thickapprox(-22.02,13.81),\,(-0.09,-0.47),\,(1.37,-15.94),\,(0.94,-0.21).\] This implies that system (5.17) has the configuration \((0;1;1;2)\) of centers, see the second diagram in Figure 5.3. Thus, the non-Hamiltonian Kolmogorov system (5.2) can have the configuration \((0;1;1;2)\) of centers.

Figure 5.3. The configurations of centers of systems (5.16), (5.17) and (5.18), respectively.

Lastly, we give an example to show that the non-Hamiltonian Kolmogorov system (5.2) can have the configuration \((0;1;2;1)\) of centers as follows.
\[\begin{split}\frac{dx}{dt}&=-x(5+12x+24y+5x^{2}+12xy+15y^{2}),\\ \frac{dy}{dt}&=y(15+48x+36y+25x^{2}+24xy+15y^{2}),\end{split} \tag{5.18}\] which has the first integral \[\mathcal{F}_{3}(x,y)=x^{3}y(5x^{2}+6xy+5y^{2}+12x+12y+5),\] and the four centers \(p_{i}=(x_{i},y_{i})\), \(i=1,2,3,4\), of system (5.18) are \[(x_{i},y_{i})\thickapprox(-1.51,0.20),\ (-0.31,-0.09),\ (-1.31,-0.74),\ (0.28,-1.41).\] See the third diagram in Figure 5.3. We finish the proof. **Remark 5.2**: Theorem 5.2 reveals the difference between the cubic Hamiltonian Kolmogorov system (5.1) and the cubic Kolmogorov system (5.2). For the Hamiltonian Kolmogorov system (5.1), we can obtain all the different global topological phase portraits up to time reversal, see Figures 5.1 and 5.2. However, we only give three configurations of centers for the cubic Kolmogorov system (5.2), see Figure 5.3. It is an interesting problem to determine how many different global phase portraits the cubic Kolmogorov system (5.2) has; this is left for future study. ## Acknowledgments Hongjin He and Dongmei Xiao are partially supported by the National Key R & D Program of China (No. 2022YFA1005900), the Innovation Program of Shanghai Municipal Education Commission (No. 2021-01-07-00-02-E00087) and the National Natural Science Foundation of China (Nos. 11931016, 12271353). Changjian Liu is partially supported by the National Natural Science Foundation of China (No. 12171491).
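The explicit examples (5.16)-(5.18) above are easy to verify by computer algebra. A minimal SymPy sketch (ours, for checking only) confirms for system (5.16) that \(\mathcal{F}_{1}\) is a first integral and that the linearization at the four interior critical points has zero trace and positive determinant, consistent with the stated centers.

```python
# Verification of the example system (5.16) and its four centers.
import sympy as sp

x, y = sp.symbols('x y', real=True)
P = x*(1 - x**2 - 3*y**2)              # dx/dt of system (5.16)
Q = 2*y*(-1 + 2*x**2 + y**2)           # dy/dt of system (5.16)
F1 = x**2*y*(x**2 + y**2 - 1)          # claimed first integral

# dF1/dt along the flow vanishes identically:
print(sp.simplify(sp.diff(F1, x)*P + sp.diff(F1, y)*Q))   # -> 0

# Interior critical points solve 1 - x^2 - 3y^2 = 0 and -1 + 2x^2 + y^2 = 0.
sols = sp.solve([1 - x**2 - 3*y**2, -1 + 2*x**2 + y**2], [x, y])
J = sp.Matrix([[sp.diff(P, x), sp.diff(P, y)],
               [sp.diff(Q, x), sp.diff(Q, y)]])
for s in sols:
    Js = J.subs({x: s[0], y: s[1]})
    # trace 0 and det 16/5 > 0 at (+-sqrt(2/5), +-sqrt(1/5)) ~ (+-0.63, +-0.45)
    print(s, sp.simplify(Js.trace()), sp.simplify(Js.det()))
```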
2301.03712
Cavity Quantum Electrodynamics with Hyperbolic van der Waals Materials
The ground-state properties and excitation energies of a quantum emitter can be modified in the ultrastrong coupling regime of cavity quantum electrodynamics (QED) where the light-matter interaction strength becomes comparable to the cavity resonance frequency. Recent studies have started to explore the possibility of controlling an electronic material by embedding it in a cavity that confines electromagnetic fields in deep subwavelength scales. Currently, there is a strong interest in realizing ultrastrong-coupling cavity QED in the terahertz (THz) part of the spectrum, since most of the elementary excitations of quantum materials are in this frequency range. We propose and discuss a promising platform to achieve this goal based on a two-dimensional electronic material encapsulated by a planar cavity consisting of ultrathin polar van der Waals crystals. As a concrete setup, we show that nanometer-thick hexagonal boron nitride layers should allow one to reach the ultrastrong coupling regime for single-electron cyclotron resonance in a bilayer graphene. The proposed cavity platform can be realized by a wide variety of thin dielectric materials with hyperbolic dispersions. Consequently, van der Waals heterostructures hold the promise of becoming a versatile playground for exploring the ultrastrong-coupling physics of cavity QED materials.
Yuto Ashida, Atac Imamoglu, Eugene Demler
2023-01-09T23:19:38Z
http://arxiv.org/abs/2301.03712v3
# Cavity Quantum Electrodynamics with Hyperbolic van der Waals Materials ###### Abstract The ground-state properties and excitation energies of a quantum emitter can be modified in the ultrastrong coupling regime of cavity quantum electrodynamics (QED) where the light-matter interaction strength becomes comparable to the cavity resonance frequency. Recent studies have started to explore the possibility of controlling an electronic material by embedding it in a cavity that confines electromagnetic fields in deep subwavelength scales. Currently, there is a strong interest in realizing ultrastrong-coupling cavity QED in the terahertz (THz) part of the spectrum, since most of the elementary excitations of quantum materials are in this frequency range. We propose and discuss a promising platform to achieve this goal based on a two-dimensional electronic material encapsulated by a planar cavity consisting of ultrathin polar van der Waals crystals. As a concrete setup, we show that nanometer-thick hexagonal boron nitride layers should allow one to reach the ultrastrong coupling regime for single-electron cyclotron resonance in a bilayer graphene. The proposed cavity platform can be realized by a wide variety of thin dielectric materials with hyperbolic dispersions. Consequently, van der Waals heterostructures hold the promise of becoming a versatile playground for exploring the ultrastrong-coupling physics of cavity QED materials. The strong coupling regime of cavity quantum electrodynamics (QED), where the emitter-cavity coupling strength exceeds the decay rates, has played a central role in quantum information science. For instance, cavity-mediated interaction between qubits has allowed for implementing two-qubit gates with high fidelity [1; 2]. Moreover, cavity-mediated Raman transitions have the potential to realize all-to-all coupling with tunable range and strength, providing an indispensable tool for quantum simulators [3; 4]. Recent studies have started to explore the possibility of further increasing the coupling strength so that it becomes comparable to elementary excitation energies. Many of the common simplifications in cavity QED fail in this regime, rendering theoretical analysis challenging [5; 6; 7; 8]. A remarkable feature is that ultrastrong coupling can alter the ground-state electronic properties due to virtual processes where both emitters and cavity photons are excited [9; 10; 11; 12]. Consequently, a natural question to address is if and when the ultrastrong coupling regime can be attained and used to control material properties simply by cavity confinement in the absence of external driving. A back-of-the-envelope estimate for cavities supporting purely photonic excitations suggests that the ultrastrong coupling regime is out of reach due to the smallness of the fine structure constant [13]. However, this limit can be overcome in structured electromagnetic environments because hybridization with matter excitations enables one to control cavity frequencies independently of wavelengths. For instance, in superconducting circuits, a large kinetic inductance allows for high-impedance electromagnetic excitations, leading to the single-electron ultrastrong coupling in the microwave range [14; 15; 16; 17]. Meanwhile, many of the elementary excitations in quantum materials are in the terahertz (THz) regime.
Recent experimental and theoretical studies have shown the potential of utilizing the cavity confinement as a means to modify such excitations [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], which shows great promise for controlling quantum many-body states and endowing them with new functionalities, including superconductivity [31; 32; 33; 34], ferroelectricity [35; 36; 37], magnetotransport [38; 39; 40], and topological properties [41; 42]. In view of the prospects of observing these intriguing phenomena in cavity QED materials, there is a strong motivation to examine the feasibility of attaining the single-electron ultrastrong coupling at THz frequencies. The aim of this Letter is to propose and analyze a planar cavity consisting of polar van der Waals (vdW) crystals as a promising platform for exploring the ultrastrong-coupling physics of cavity QED materials (Fig. 1a). We point out that one can attain a single-electron ultrastrong coupling by embedding a 2D material with electronic transitions in the THz regime into ultrathin hexagonal boron nitride (\(h\)-BN) layers. A special feature here is that electrons in the 2D material couple to the electric field component of tightly confined hyperbolic phonon polaritons. The strong photon-phonon hybridization together with the low frequencies of polaritons then results in a significantly enhanced coupling strength over a broad range of momenta. This is challenging to achieve in conventional polar dielectrics with an isotropic dispersion, where sizable hybridization takes place in a limited range of momenta since light and matter are almost decoupled at large momenta due to the fast speed of light. The proposed setup should be contrasted with existing approaches in several crucial aspects. In metallic platforms such as nanoplasmonic cavities, inevitable Ohmic losses severely limit quality factors in deep subwavelength scales, and achieving the (ultra)strong coupling regime requires the collective enhancement, i.e., the single-electron ultrastrong coupling remains out of reach. Metallic structures also lead to static screening that could itself affect the electronic ground state. In superconducting circuits, the single-electron ultrastrong coupling has been attained, but the need for a large kinetic inductance restricts it to the microwave region, far below the THz frequencies relevant to excitations in real materials. In contrast, the present platform supports the confined hyperbolic polaritons in the THz regime while behaving as simple dielectrics at low frequencies, thus allowing for the THz single-electron ultrastrong coupling in the absence of Ohmic losses and strong screening.

Figure 1: (a) Schematic figure illustrating the proposed planar cavity setup consisting of thin polar van der Waals crystals (green shaded) whose optical axis is along the \(z\) direction. In the narrow air gap between the two slabs, a 2D material (red shaded) is inserted, and the electron there can ultrastrongly couple to electromagnetic fields of tightly confined hyperbolic polaritons (blue solid curve). The thickness (lateral extension) of the surrounding materials is \(d\) (\(L\)). (b) Out-of-plane (solid curve) and in-plane (dashed curve) permittivities \(\epsilon_{z,t}\) of hyperbolic materials. The Reststrahlen bands (red shaded) appear above each of the out-of-plane and in-plane phonon resonances \(\Omega_{z,t}\).
As a proof-of-concept demonstration, we will apply the proposed concept to a 2D electron gas with parabolic dispersion in the presence of a static magnetic field. We analyze the hybridization between hyperbolic polaritons and the cyclotron motion of the parabolic electrons. We show that the single-electron ultrastrong coupling regime is within reach provided that the thicknesses of \(h\)-BN layers are chosen to be nanometer-scale. This consideration is motivated by recent advances demonstrating ultrasmall mode volumes of hyperbolic phonon polaritons in \(h\)-BN nanostructures [43, 44, 45, 46, 47, 48]. While we present quantitative estimates for \(h\)-BN nanocavities for the sake of concreteness, the proposed cavity scheme is applicable to the majority of polar vdW crystals [49], which exhibit hyperbolic polaritons originating from distinct in- and out-of-plane infrared-active phonons. _Hyperbolic phonon polaritons.--_ We begin our analysis by reviewing the properties of a planar cavity made out of layered thin polar vdW materials. Due to the weakness of interlayer coupling, such materials naturally possess two types of optical phonons corresponding to in-plane and out-of-plane ionic oscillations in the THz or mid-infrared regimes. This leads to the uniaxial anisotropy characterized by the out-of-plane (in-plane) dielectric permittivities \(\epsilon_{z}\) (\(\epsilon_{t}\)). In the frequency windows above each of the phonon resonances, the two dielectric responses can have opposite signs (Fig. 1b). This unique feature leads to the hyperbolic isofrequency surfaces defined by the dispersion relation of the transverse magnetic modes [50, 51], \[\frac{|\mathbf{q}|^{2}}{\epsilon_{z}}+\frac{|\mathbf{\kappa}|^{2}}{\epsilon_{t}}=\frac{\omega^{2}}{\epsilon_{0}c^{2}}, \tag{1}\] where \(\mathbf{q}\) (\(\mathbf{\kappa}\)) is the in-plane (out-of-plane) wavevector, \(\epsilon_{0}\) is the vacuum permittivity, and \(c\) is the speed of light. The opposite signs of \(\epsilon_{z,t}\) allow for excitations to have low frequencies even at large momenta. The resulting hybridized excitations are known as hyperbolic phonon polaritons. The corresponding dispersions are called the Reststrahlen bands of either Type I when \(\epsilon_{z}<0,\epsilon_{t}>0\) or Type II when \(\epsilon_{z}>0,\epsilon_{t}<0\). As a representative hyperbolic material, \(h\)-BN has a strong crystalline anisotropy leading to two spectrally distinct Reststrahlen bands [43, 44, 45, 46, 47, 48]. Below we focus on the electromagnetic couplings with Type I \(h\)-BN hyperbolic modes \(\omega_{\mathbf{q}n}\) lying in the narrow frequency window above the out-of-plane phonon frequency \(\Omega_{z}=41.1\,\mathrm{THz}\) (cf. top panel in Fig. 2). As we discuss below, these modes exhibit several advantages for the purpose of attaining the ultrastrong couplings, such as their relatively low frequencies, sizable photon components over a broad range of momenta \(q\), and ultralow loss in deep subwavelength scales. _Single-electron ultrastrong coupling with hyperbolic materials.--_ We consider the planar cavity setup where a 2D material (e.g., bilayer graphene) is inserted into the narrow air gap between two thin \(h\)-BN slabs whose thickness and lateral extensions are denoted by \(d\) and \(L\), respectively (Fig. 1a).
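Before constructing the cavity QED Hamiltonian, it is instructive to check numerically the key property behind Eq. (1): inside a Type I Reststrahlen band, where \(\epsilon_{z}<0\) and \(\epsilon_{t}>0\), the out-of-plane momentum \(\kappa\) stays real at arbitrarily large in-plane momentum \(q\). The sketch below is purely illustrative: it writes Eq. (1) with relative permittivities, uses lossless single-oscillator Lorentz models, and, apart from \(\Omega_{z}=41.1\,\mathrm{THz}\) quoted above, all oscillator parameters are invented placeholders rather than \(h\)-BN values.

```python
# Illustration of Eq. (1): propagating hyperbolic modes at large in-plane momentum.
import numpy as np

def eps(omega, eps_inf, omega_TO, omega_LO):
    # lossless single-oscillator (Lorentz) relative permittivity
    return eps_inf * (omega_LO**2 - omega**2) / (omega_TO**2 - omega**2)

eps_z = lambda w: eps(w, 2.9, 41.1, 48.0)     # out-of-plane; 41.1 THz from the text
eps_t = lambda w: eps(w, 4.9, 170.0, 200.0)   # in-plane; placeholder values

w = 44.0                         # a frequency inside the Type I band (THz)
c = 1.0                          # speed of light in matching units (illustrative)
q = np.linspace(0.1, 50.0, 6)    # in-plane momenta

# Eq. (1) solved for kappa^2 (relative-permittivity form):
kappa2 = eps_t(w) * (w**2 / c**2 - q**2 / eps_z(w))

print(eps_z(w) < 0, eps_t(w) > 0)   # True True: opposite signs, Type I band
print(np.all(kappa2 > 0))           # True: kappa is real for every q
```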
Our starting point is the following cavity QED Hamiltonian of the 2D parabolic electron interacting with hyperbolic phonon polaritons: \[\hat{H}=\frac{\left[\hat{\mathbf{p}}+e\mathbf{A}_{s}(\hat{\mathbf{r}})+e\hat{\mathbf{A}}(\hat{\mathbf{r}})\right]^{2}}{2m}+\hat{H}_{\rm pol}, \tag{2}\] which can be derived from an effective theory of uniaxial polar dielectrics coupled to quantized electromagnetic fields in the Coulomb gauge [52; 53]. Here, \(e\) is the elementary charge, \(m\) is the electron mass, \(\hat{\mathbf{r}}\) and \(\hat{\mathbf{p}}\) are the electron position and momentum operators in the 2D lateral directions, respectively, \(\mathbf{A}_{s}\) represents an arbitrary static field, and \(\hat{H}_{\rm pol}\) is the free polariton Hamiltonian, \[\hat{H}_{\rm pol}=\sum_{\mathbf{q}n}\hbar\omega_{\mathbf{q}n}\hat{\gamma}_{\mathbf{q}n}^{\dagger}\hat{\gamma}_{\mathbf{q}n}, \tag{3}\] where \(\hat{\gamma}_{\mathbf{q}n}\) (\(\hat{\gamma}_{\mathbf{q}n}^{\dagger}\)) annihilates (creates) a hyperbolic polariton with the in-plane wavevector \(\mathbf{q}\) in the branch \(n\in\mathbb{N}\). The 2D vector field \(\hat{\mathbf{A}}(\mathbf{r})\) is obtained by projecting the vector potential onto the 2D plane where the electron is placed, and can be expanded in terms of the polariton operators as \[\hat{\mathbf{A}}(\hat{\mathbf{r}})=\sum_{\mathbf{q}n}\mathcal{A}_{\mathbf{q}n}\mathbf{e_{q}}\left(\hat{\gamma}_{\mathbf{q}n}e^{i\mathbf{q}\cdot\hat{\mathbf{r}}}+\hat{\gamma}_{\mathbf{q}n}^{\dagger}e^{-i\mathbf{q}\cdot\hat{\mathbf{r}}}\right), \tag{4}\] where \(\mathcal{A}_{\mathbf{q}n}\simeq\sqrt{\hbar/(2L^{2}\epsilon\omega_{\mathbf{q}n}d_{\mathbf{q}n})}\) is the amplitude with the effective confinement length \(d_{\mathbf{q}n}\) whose value is characterized by the polariton mode function. The effective polarization vector becomes \(\mathbf{e_{q}}\equiv\mathbf{q}/|\mathbf{q}|\) because, for the symmetric arrangement we assumed, the electric fields of the polaritons only have in-plane components along the propagation direction. While the vector potential is originally a 3D transverse vector field in the Coulomb gauge, it can effectively acquire longitudinal components when projected onto the 2D tangential plane.

Figure 2: (Top) Dispersions \(\omega_{\mathbf{q}n}\) of the hyperbolic phonon polaritons in the planar cavity setting. The inset shows the spatial profiles of the in-plane electric fields along the \(z\) direction for each mode at \(qd=4\). For the sake of visibility, only the three lowest modes are plotted. (Bottom) Inverse of the square root of the effective dimensionless confinement length, \(\sqrt{d/d_{\mathbf{q}n}}\), characterizing the dimensionless single-electron coupling strength for each mode. The maximum value in the principal branch \(n=1\) is reached at \(q=q^{*}\).

Figure 2 shows the results for Type I \(h\)-BN hyperbolic modes. The dispersions (top panel) start from the longitudinal phonon frequency and saturate to \(\Omega_{z}\) at high \(q\equiv|\mathbf{q}|\). Naturally, these hybridized modes are almost purely longitudinal (transverse) phonon excitations in the limit \(q\to 0\) (\(q\to\infty\)), which do not allow for a strong coupling to electrons in the air gap. Crucially, however, except for these two limits, there still exist nonvanishing photon contributions leading to sizable electric-dipole couplings at intermediate momenta (bottom panel).
This is because the in-plane component, which couples to the 2D electron, has almost equal photonic and phononic contents while the out-of-plane component is predominantly phonon-like [53]. To further proceed, we use the unitary transformation \(\hat{U}=e^{-i\hat{\mathbf{r}}\cdot\hat{\mathbf{P}}_{b}/\hbar}\) with \(\hat{\mathbf{P}}_{b}=\sum_{\mathbf{q}n}\hbar\mathbf{q}\,\hat{\gamma}_{\mathbf{q}n}^{\dagger}\hat{\gamma}_{\mathbf{q}n}\) [54], resulting in the Hamiltonian \(\hat{H}_{U}\equiv\hat{U}^{\dagger}\hat{H}\hat{U}\) given by \[\hat{H}_{U}=\frac{\left[\hat{\mathbf{p}}\!+\!e\mathbf{A}_{s}(\hat{\mathbf{r}})\!+\!e\hat{\mathbf{A}}(\mathbf{0})\!-\!\hat{\mathbf{P}}_{b}\right]^{2}}{2m}+\hat{H}_{\rm pol}. \tag{5}\] In this way, the electron-coordinate dependence in the quantized vector potential can be eliminated at the expense of generating the polariton momentum \(\hat{\mathbf{P}}_{b}\), leading to the nonlocal interaction among polaritons mediated by the electron. The dimensionful coupling strength between the electron and each of the dynamical quantized electromagnetic modes is given by \[g_{\mathbf{q}n}=e\mathcal{A}_{\mathbf{q}n}\sqrt{\frac{\omega_{\mathbf{q}n}}{m\hbar}}. \tag{6}\] Since \(g_{\mathbf{q}n}\) characterizes the single-electron coupling strength rather than the collective one, it depends on the lateral size \(L\) through \(g_{\mathbf{q}n}\propto L^{-1}\). Consequently, a natural measure for the effective coupling strength between the 2D electron and the continuum of polariton modes is given by the integrated value \(g_{\rm eff}\equiv\sqrt{\sum_{\mathbf{q}n}g_{\mathbf{q}n}^{2}}\), which scales as \(g_{\rm eff}\propto\mathrm{O}(L^{0})\) [55]. Previously, the deep strong coupling regime has been experimentally realized in superconducting circuits, where \(g_{\rm eff}\) becomes comparable to the microwave photon frequency. In the present setting, the use of ultrathin \(h\)-BN slabs with nanometer-scale thicknesses enables one to reach those regimes in the THz range, where \(g_{\rm eff}\) becomes comparable to or even exceeds \(\omega_{\mathbf{q}n}\). For instance, using the \(h\)-BN parameters [43], we estimate coupling strengths of order \(g_{\mathbf{q}n}/\Omega_{z}=1.7\times\sqrt{(10\,\mathrm{nm})^{3}/(L^{2}d_{\mathbf{q}n})}\), which, together with the results of the effective confinement length \(d_{\mathbf{q}n}\) in Fig. 2b, can lead to \(g_{\rm eff}\sim\Omega_{z}\) in nanoscale heterostructures. More specifically, one can attain \(g_{\rm eff}\simeq 2.0\,\Omega_{z}\) for the \(n=1\) principal branch when the cavity thickness (in-plane momentum cutoff) is set to be \(d=5\,\mathrm{nm}\) (\(\Lambda=2\,\mathrm{nm}^{-1}\)). As demonstrated below, one important consequence of attaining \(g_{\rm eff}\sim\Omega_{z}\) is that the electron mixes the otherwise independent cavity modes and creates a localized state at a frequency below the cavity resonance. We emphasize that this key feature is largely insensitive to the choice of the lateral size \(L\) and thus remains even in the 2D thermodynamic limit \(L/d\to\infty\). _Application to a 2D electron under a magnetic field.--_ As a proof-of-concept demonstration, we now focus on a prototypical setting of ultrastrong-coupling physics, namely, a 2D parabolic electron subject to a static perpendicular magnetic field. From now on, we consider the \(n=1\) principal hyperbolic mode that most strongly couples to the cyclotron motion of the electron, and abbreviate the subscript \(n\) for the sake of notational simplicity.
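The \(L\)-scaling just described can be made concrete with a few lines of arithmetic. The sketch below evaluates the quoted estimate \(g_{\mathbf{q}n}/\Omega_{z}=1.7\times\sqrt{(10\,\mathrm{nm})^{3}/(L^{2}d_{\mathbf{q}n})}\) on the discrete grid \(\mathbf{q}=2\pi(n_{x},n_{y})/L\) and shows that \(g_{\rm eff}=\sqrt{\sum_{\mathbf{q}}g_{\mathbf{q}}^{2}}\) is essentially independent of \(L\) even though each \(g_{\mathbf{q}}\propto L^{-1}\). For simplicity it assumes a constant effective confinement length `d_eff`, which is our placeholder; in reality \(d_{\mathbf{q}n}\) depends on momentum (cf. Fig. 2b), so the numbers produced below are not the \(g_{\rm eff}\simeq 2.0\,\Omega_{z}\) quoted above.

```python
# L-independence of the integrated coupling g_eff (illustrative parameters).
import numpy as np

def g_over_Omega(L, d_qn):
    # quoted estimate g_{qn}/Omega_z = 1.7*sqrt((10 nm)^3/(L^2 d_{qn})), SI units
    return 1.7 * np.sqrt((10e-9)**3 / (L**2 * d_qn))

def g_eff_over_Omega(L, d_eff, Lambda=2e9):
    # sum over modes q = 2*pi*(nx, ny)/L with 0 < |q| < Lambda (cutoff 2 nm^-1)
    n_max = int(Lambda * L / (2 * np.pi))
    n = np.arange(-n_max, n_max + 1)
    nx, ny = np.meshgrid(n, n)
    q = 2 * np.pi * np.hypot(nx, ny) / L
    inside = (q > 0) & (q < Lambda)
    return np.sqrt(np.sum(inside) * g_over_Omega(L, d_eff)**2)

for L in (50e-9, 100e-9, 200e-9):
    print(f"L = {L*1e9:.0f} nm  ->  g_eff/Omega_z = {g_eff_over_Omega(L, 1e-6):.2f}")
# The printed ratio is the same for all L up to grid discretization,
# while each individual g_q scales as 1/L.
```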
We choose the symmetric gauge, \(\mathbf{A}_{s}(\mathbf{r})=(-By/2,Bx/2)^{\mathrm{T}}\), with the magnetic field \(B\) and introduce the annihilation operator of Landau levels by \[\hat{a}=\frac{1}{\sqrt{2}}\left[\frac{l_{B}}{\hbar}\left(\hat{p}_{x}-i\hat{p}_{y}\right)-\frac{i}{2l_{B}}\left(\hat{x}-i\hat{y}\right)\right], \tag{7}\] where \(l_{B}=\sqrt{\hbar/(eB)}\) is the magnetic length. Using the cyclotron frequency \(\omega_{c}=eB/m\) and the operator \(\hat{\mathbf{\pi}}=\frac{1}{\sqrt{2}}\left(\hat{a}+\hat{a}^{\dagger},i(\hat{a}-\hat{a}^{\dagger})\right)^{\mathrm{T}}\), we obtain \[\hat{H}_{U}\!=\!\frac{\hbar\omega_{c}}{2}\left(\hat{\mathbf{\pi}}\!+\!\sum_{\mathbf{q}}\mathbf{e_{q}}\left[c_{\mathbf{q}}\left(\hat{\gamma}_{\mathbf{q}}\!+\!\hat{\gamma}_{\mathbf{q}}^{\dagger}\right)\!-\!ql_{B}\hat{\gamma}_{\mathbf{q}}^{\dagger}\hat{\gamma}_{\mathbf{q}}\right]\right)^{2}\!+\!\hat{H}_{\rm pol}, \tag{8}\] where \(c_{\mathbf{q}}=g_{\mathbf{q}}/\sqrt{\omega_{\mathbf{q}}\omega_{c}}\) is the dimensionless coefficient characterizing the coupling strength of the cyclotron motion to the hyperbolic polaritons with momentum \(\mathbf{q}\). In the long-wavelength limit \(ql_{B}\to 0\), the polariton-polariton interaction in Eq. (8) disappears and the problem reduces to a quadratic one. As demonstrated below, however, this interaction term can in general contribute to the dynamics and affect the absorption spectrum, especially in the ultrastrong coupling regime. This is because the electron couples most strongly with the electromagnetic modes at a finite momentum around \(q\sim q^{*}\) (cf. Fig. 2b) whose corresponding length scale \(1/q^{*}\) is comparable to the magnetic length \(l_{B}\) near the cyclotron resonance. The low-energy excitations can be studied by analyzing the magnetoabsorption spectrum \[A(\omega)=\mathrm{Re}\left[\int_{0}^{\infty}dt\,e^{i\omega t}\langle\mathrm{GS}|\hat{a}e^{-i\hat{H}t}\hat{a}^{\dagger}|\mathrm{GS}\rangle\right], \tag{9}\] where \(|\mathrm{GS}\rangle\) is the ground state. To reveal its qualitative features, we perform a simple variational analysis as follows. Specifically, we first determine the variational ground state of Eq. (8) in the form of a product of coherent states. We then expand the Hamiltonian around this state, obtain the fluctuations up to the quadratic terms, and determine the excitation spectrum via the exact diagonalization of the effective quadratic Hamiltonian [53]. Figure 3 shows the obtained magnetoabsorption spectra, where the cavity thickness \(d\) is varied while the aspect ratio \(L/d\) is kept constant. The blue solid curve in each panel shows the bound-state energy of the Landau polariton. As the thickness is decreased, the spectrum starts to exhibit anticrossings around the cyclotron resonance. In particular, when the cavity length becomes a few nanometers, large separations between the branches emerge that are comparable to the elementary excitation energies; this is a hallmark of the ultrastrong coupling regime. As discussed before, a key feature here is the formation of the dressed bound state consisting of the electron and the localized polaritons, which manifests itself as the anticrossed lower branch in the spectrum (cf. the blue curve in Fig. 3). Importantly, this feature remains independent of the lateral size \(L\), while the effect of increasing \(L/d\) is the appearance of a continuum of cavity modes above the lower branch [53].
It is worthwhile to note that the positions of the anticrossings are shifted above the bare resonances \(\omega_{c}/\Omega_{z}\simeq 1\). This upward shift originates from the renormalization of the effective polariton energies due to the repulsive polariton-polariton interaction. Since the latter comes from the spatial dependence of the vector potential, this effect is absent in the long-wavelength limit. We also remark that the appearance of the multiple anticrossed branches in Fig. 3 is due to the discretized in-plane momentum \(q\), whose value is set by the periodic boundary conditions in the lateral directions.

Figure 3: Magnetoabsorption spectra \(A(\omega)\) of the 2D electron in the cavity plotted against the cyclotron resonance \(\omega_{c}=eB/m\) at different cavity thicknesses \(d\). The blue solid curve in each panel corresponds to the bound-state energy of the Landau polariton. We impose periodic boundary conditions on the lateral directions and fix the aspect ratio \(L/d=\pi\). We use the \(h\)-BN parameters in Ref. [43] and set the in-plane momentum cutoff \(\Lambda=2\,\mathrm{nm}^{-1}\).

_Discussions.--_ The proposed platform for ultrastrong-coupling cavity QED materials can be realized by various polar vdW materials exhibiting hyperbolic phonon polaritons, including Bi\({}_{2}\)Se\({}_{3}\), Bi\({}_{2}\)Te\({}_{3}\), MoS\({}_{2}\) and MoO\({}_{3}\) as well as \(h\)-BN [49, 56, 57, 58, 59, 60]. Thus, by confining materials in the cavity consisting of these vdW structures, one can strongly couple electronic excitations to quantized electromagnetic modes in a wide spectral range from mid- or far-infrared to THz regimes. Moreover, the layered nature of vdW crystals should readily allow one to tune the cavity coupling strengths by controlling the thickness of the surrounding crystals. While polaritons in these materials exhibit ultralow loss originating from multi-phonon or disorder-induced scatterings [43, 44, 45, 46, 59, 60], it merits further study to explore if and when one could engineer such dissipation to realize phases or dynamics unique to open systems [61, 62]. Our analysis focused on 2D materials, and it remains an interesting open problem how one can realize ultrastrong coupling between light and 3D bulk materials, if at all possible. Finally, in quantum information science, a material excitation consisting of localized polaritons might find applications in photon storage, quantum memory, or nondissipative emitter interactions [63, 64]. In summary, we showed that the planar cavity consisting of vdW heterostructures provides a promising platform to attain the single-electron ultrastrong coupling in the THz or mid-infrared regions. As a proof-of-concept demonstration, we presented the analysis of the magnetoabsorption spectrum for the 2D electron confined in the \(h\)-BN cavity, where the coupling can be ultrastrong provided that the thicknesses are judiciously controlled. We expect that our results open a way to study a variety of recent predictions in the emerging field of cavity QED materials. We are grateful to Iliya Esin, Ilya Esterlis, Tim Kaxiras, Igor Khanonkin, Frank Koppens, Kanta Masuki, Gil Refael, Tao Shi, and Kenji Yasuda for fruitful discussions. Y.A. acknowledges support from the Japan Society for the Promotion of Science through Grant No. JP19K23424. A.I. was supported by the Swiss National Science Foundation (SNSF) under Grant Number 200020_207520. E.D.
acknowledges support from the ARO grant "Control of Many-Body States Using Strong Coherent Light-Matter Coupling in Terahertz Cavities" and the Swiss National Science Foundation under Division II.
2310.15631
Linear magneto-conductivity as a DC probe of time-reversal symmetry breaking
Several optical experiments have shown that in magnetic materials the principal axes of response tensors can rotate in a magnetic field. Here we offer a microscopic explanation of this effect, and propose a closely related DC transport phenomenon -- an off-diagonal \emph{symmetric} conductivity linear in a magnetic field, which we refer to as linear magneto-conductivity (LMC). Although LMC has the same functional dependence on magnetic field as the Hall effect, its origin is fundamentally different: LMC requires time-reversal symmetry to be broken even before a magnetic field is applied, and is therefore a sensitive probe of magnetism. We demonstrate LMC in three different ways: via a tight-binding toy model, density functional theory calculations on MnPSe$_3$, and a semiclassical calculation. The third approach additionally identifies two distinct mechanisms yielding LMC: momentum-dependent band magnetization and Berry curvature. Finally, we propose an experimental geometry suitable for detecting LMC, and demonstrate its applicability using Landauer-B\"{u}ttiker simulations. Our results emphasize the importance of measuring the full conductivity tensor in magnetic materials, and introduce LMC as a new transport probe of symmetry.
Veronika Sunko, Chunxiao Liu, Marc Vila, Ilyoun Na, Yuchen Tang, Vladyslav Kozii, Sinéad M. Griffin, Joel E. Moore, Joseph Orenstein
2023-10-24T08:53:59Z
http://arxiv.org/abs/2310.15631v1
# Linear magneto-conductivity as a DC probe of time-reversal symmetry breaking ###### Abstract Several optical experiments have shown that in magnetic materials the principal axes of response tensors can rotate in a magnetic field. Here we offer a microscopic explanation of this effect, and propose a closely related DC transport phenomenon -- an off-diagonal _symmetric_ conductivity linear in a magnetic field, which we refer to as linear magneto-conductivity (LMC). Although LMC has the same functional dependence on magnetic field as the Hall effect, its origin is fundamentally different: LMC requires time-reversal symmetry to be broken even before a magnetic field is applied, and is therefore a sensitive probe of magnetism. We demonstrate LMC in three different ways: via a tight-binding toy model, density functional theory calculations on MnPSe\({}_{3}\), and a semiclassical calculation. The third approach additionally identifies two distinct mechanisms yielding LMC: momentum-dependent band magnetization and Berry curvature. Finally, we propose an experimental geometry suitable for detecting LMC, and demonstrate its applicability using Landauer-Büttiker simulations. Our results emphasize the importance of measuring the full conductivity tensor in magnetic materials, and introduce LMC as a new transport probe of symmetry. In all materials, a current passing perpendicular to an applied magnetic field induces a voltage transverse to both the current and the field [1]. This phenomenon, called the Hall effect, has been an invaluable tool for probing the concentration and character of charge carriers. In magnetic materials an applied field is not necessary to observe the Hall effect: for example, the anomalous Hall effect (AHE) can be determined by the electronic structure and Berry curvature in momentum space [2], whereas the topological Hall effect is induced in non-collinear magnetic structures by real-space Berry curvature [3]. Regardless of microscopic origins, any Hall effect requires time-reversal symmetry (TRS) to be broken [4; 5]. The Hall effect arises from the anti-symmetric component of the conductivity tensor (\(\sigma_{xy}^{a}=(\sigma_{xy}-\sigma_{yx})/2\)), and is usually extracted from the transverse voltage as a function of magnetic field (\(H\)), assuming implicitly that the Hall effect is the only \(H\)-odd component of \(\sigma\), i.e. that the symmetric off-diagonal component \(\sigma_{xy}^{s}=(\sigma_{xy}+\sigma_{yx})/2\) is \(H\)-even. In _non-magnetic_ materials the validity of this assumption is guaranteed by Onsager's reciprocity relations, which encode microscopic time reversibility. Here we show that this standard assumption is dangerous in magnetic systems: certain magnetic structures allow \(\sigma_{xy}^{s}\) to have an \(H\)-odd component, which need not be small. A typical measurement cannot distinguish such a response from the Hall effect, leading to possible misinterpretations of the measured signal. On the other hand, detecting a component of \(\sigma_{xy}^{s}\) that is proportional to the applied field would provide evidence of TRS breaking that is complementary to the AHE, as the symmetry constraints for the two effects to be observed are distinct [6]. Measuring the full conductivity tensor \(\sigma_{ij}\) as a function of magnetic field is therefore both a powerful probe of symmetry, and a necessity to correctly estimate the carrier concentration in magnetic materials. The equivalent phenomenon at optical frequencies manifests as an \(H\)-linear rotation of principal optical axes, and is referred to as off-diagonal linear magneto-birefringence (LMB). It can be used to determine the magnetic point group of a material, as has been reported several times [7; 8].

Figure 1: (a) The model considers a one-electron \(f\to d\) transition, with a degenerate initial-state manifold and a final-state manifold split by the three terms in Eq. 1: \(\Delta_{CF}\) separates the \(d\) manifold into states of \(e_{g}\) and \(t_{2g}\) symmetry. (b) The ratio of off-diagonal symmetric conductivity to the average diagonal conductivity as a function of \(H_{z}/H_{y}\), calculated for the different final states in Fig. 1a. (c) The orbital wave functions corresponding to \(\left\langle L_{\alpha}^{\mathrm{eff}}\right\rangle=1\), for \(\alpha=y,z,(z+y)/\sqrt{2}\).
The equivalent phenomenon at optical frequencies manifests as an \(H\)-linear rotation of principal optical axes, and is referred to as off-diagonal linear magneto-birefringence (LMB). It can be used to determine the magnetic point group of a material, as has been reported Figure 1: (a) The model considers a one electron \(f\to d\) transition, with a degenerate initial state manifold and a final state manifold split by the three terms in Eq. 1: \(\Delta_{CF}\) separates the \(d\) manifold into states of \(e_{g}\) and \(t_{2g}\) symmetry. (b) The ratio of off-diagonal symmetric conductivity to the average diagonal conductivity as a function of \(H_{z}/H_{y}\), calculated for the different final states in Fig. 1a. (c) The orbital wave functions corresponding to \(\left\langle L_{\alpha}^{\mathrm{eff}}\right\rangle=1\), for \(\alpha=y,z,(z+y)/\sqrt{2}\). several times [7; 8]. Although the symmetry constraints for its existence are known [9], the microscopic understanding of the mechanism behind the principal axis rotation is still lacking. Furthermore, since the symmetry-allowed form of the conductivity tensor does not depend on the frequency, an analogous DC transport effect is in principle allowed. To the best of our knowledge, this DC phenomenon has not been explored theoretically nor experimentally. In this work, we take up this quest to examine the field-induced principal axis rotation in the DC limit, an effect we refer to as linear magneto-conductivity (LMC), and elucidate how LMC arises from the electronic structure of a magnetic material. We start from a microscopic understanding of the off-diagonal LMB in a system of localized orbitals in a magnetic field, and use the obtained insights to construct a tight-binding toy model which yields a nonzero DC LMC. Within this model the effect originates from a field-induced rotation of the Fermi surface (FS), resembling the rotation of principal optical axes that characterizes LMB. To gain insight into the physical origin of this effect, we derive the field-linear \(\sigma_{xy}^{s}\) from semiclassical equations of motion and identify two mechanisms: Berry curvature and momentum-dependent band magnetization. Although TRS needs to be broken, neither of these mechanisms relies on the existence of a net magnetization, suggesting that LMC may be present in a wide range of magnetic materials. We propose an experimental geometry suitable for measuring the full conductivity tensor, and demonstrate it by a mesoscopic transport calculation. Finally, we use density-functional theory (DFT) calculations on an example material, MnPSe\({}_{3}\), to show that a large LMC is present beyond simple toy models, and needs to be seriously considered in experimental investigations. We start by studying the origin of the optical effect. We develop a minimal model of EuCd\({}_{2}\)P\({}_{2}\), a material in which off-diagonal LMB has been observed [8], and use it to calculate the optical conductivity tensor by employing the Kubo formula [6]. For simplicity we consider an isolated Eu\({}^{2+}\) ion in an octahedral crystal field of \(D_{3d}\) symmetry, and model a one-electron \(f\to d\) transition (Fig. 1a). The initial state configuration (\({}^{8}S_{7/2}\), \(L=0\), \(S=7/2\)) consists of seven degenerate spin-polarized \(f\) states, which are not split by crystal field or spin-orbit coupling, since \(L=0\). 
The final state \(5d\) manifold is split by a crystal field (\(\Delta_{CF}\hat{H}_{CF}\)), spin-orbit coupling (\(\lambda\)) and a Zeeman term induced by a magnetic field \(\mathbf{H}\): \[\hat{H}_{L}\left(\Delta_{CF},\lambda,\mathbf{H}\right)=\Delta_{CF}\hat{H}_{CF}+\lambda\mathbf{\hat{L}}\cdot\mathbf{\hat{S}}-\mu_{B}\mathbf{H}\cdot\big{(}2\mathbf{\hat{S}}+\mathbf{\hat{L}}\big{)}, \tag{1}\] where \(\mathbf{\hat{L}}\) and \(\mathbf{\hat{S}}\) are orbital angular momentum (OAM) and spin operators, respectively, and \(\mu_{B}\) is the Bohr magneton. The resulting energy spectrum, assuming \(\Delta_{CF}\gg\lambda\gg H\), is shown in Fig. 1a. The conductivity tensor [6] can now be evaluated for transitions between the degenerate initial state manifold and each of the final states, calculated assuming an applied \(\mathbf{H}\) within the \(yz\) plane. This field captures the effect of an intrinsic magnetization \(M_{y}\) that is tilted by an applied \(H_{z}\). We indeed find a non-zero \(\sigma_{xy}^{s}\sim H_{z}/H_{y}\) (Fig. 1b), but only for \(t_{2g}\) final states, indicating the importance of OAM, which is quenched in the \(e_{g}\) manifold. The response is caused by a subtle interference effect between wave functions which carry a non-zero OAM in the \(y\) and \(z\) directions (\(L_{y}^{\text{eff}}\) and \(L_{z}^{\text{eff}}\), Fig. 1c). Both the amplitude and the phase of the wave functions have a non-trivial angular dependence; the former is caused by the CF which encodes spatial symmetries, and the latter by the field-induced OAM. Their interference results in the rotation of charge density, leading to the principal axis rotation. The effect is therefore expected for optical transitions in which either the initial or the final state supports a non-zero OAM, despite the lowering of symmetry by the CF, as is the case for \(t_{2g}\) orbitals. The above insight into the mechanism behind the off-diagonal LMB at optical frequencies suggests that an analogous effect in the DC limit might be achieved in a system with \(t_{2g}\) orbitals at the Fermi level. We therefore construct such a tight-binding (TB) model, based on hoppings between local \(d\) orbitals experiencing a strong octahedral crystal field splitting, and sitting on a triangular lattice (Fig. 2a, \(D_{3d}\) symmetry) [6].

Figure 2: Rotation of Fermi surfaces and change of conductivity in a tight-binding model as a function of magnetization. (a) Top view of the triangular lattice of magnetic ions with moment \(\mathbf{M}=(0,M_{y},M_{z})\) in the \(y\)-\(z\) plane. Panels (b-d) study quantities when \(\mathbf{M}\) points along \(-45^{\circ}\), \(0^{\circ}\), or \(+45^{\circ}\) from the \(y\) axis, represented by red, grey, and blue arrows. (b) The band dispersion along a high-symmetry path in the Brillouin zone (BZ) for the three orientations of \(\mathbf{M}\). \(t\) is the overall hopping energy scale and \(E_{F}\) is the Fermi energy. (c) FS at the Fermi energy \(E=E_{F}\) for the three orientations of \(\mathbf{M}\). As \(\mathbf{M}\) is rotated from \(-45^{\circ}\) to \(+45^{\circ}\), the FS rotates accordingly in a clockwise way. (d) The conductivity angle \(2\sigma_{xy}^{s}/(\sigma_{xx}+\sigma_{yy})\) as a function of \(M_{z}/M_{y}\). For (c) and (d) we assumed the filling at \(E=E_{F}\).
The TB Hamiltonian has the form \[\hat{H}_{TB}=\hat{H}_{L}+\sum_{\mathbf{r}\mathbf{r}^{\prime},\ell\ell^{\prime},\sigma\sigma^{\prime}}t_{\mathbf{r}\mathbf{r}^{\prime},\ell\ell^{\prime},\sigma\sigma^{\prime}}d^{\dagger}_{\mathbf{r},\ell,\sigma}d_{\mathbf{r}^{\prime},\ell^{\prime},\sigma^{\prime}}, \tag{2}\] where \(\ell\) (\(\sigma\)) label the \(d\) orbitals (spins). The first term, \(\hat{H}_{L}=\sum_{\mathbf{r}}d^{\dagger}_{\mathbf{r}}\hat{H}_{L}(\Delta_{\mathrm{CF}},\lambda,\mathbf{M})d_{\mathbf{r}}\), is equivalent to the local term in Eq. (1), with magnetization \(\mathbf{M}\) in place of an external field, while the second term describes nearest-neighbor hopping between such orbitals; the hopping matrix elements \(t_{\mathbf{r}\mathbf{r}^{\prime},\ell\ell^{\prime},\sigma\sigma^{\prime}}\) are obtained from the Slater-Koster parameterization [6; 10]. We plot in Fig. 2b the band structure of the \(t_{2g}\)-derived bands calculated for three orientations of magnetization \(\mathbf{M}\): along the \(y\) axis and \(\pm 45^{\circ}\) away from the \(y\) axis in the \(yz\) plane. The bands are clearly modified by the magnetization. More insight can be gleaned from the effect of \(\mathbf{M}\) on the FS: the FS is elliptical for all three orientations of \(\mathbf{M}\), and it rotates in the \(k_{x}k_{y}\) plane as the magnetization \(\mathbf{M}\) rotates in the \(yz\) plane. The rotation of the FSs induced by the magnetization has profound consequences on transport. The conductivity tensor depends on the geometry of an elliptical FS as: \[\sigma=\sigma_{0}\mathbb{1}+\sigma_{0}\mathcal{E}^{2}\!\left(\!\!\begin{array}{cc}-\cos 2\theta&\sin 2\theta\\ \sin 2\theta&\cos 2\theta\end{array}\!\!\right), \tag{3}\] where \(\mathbb{1}\) is the unit matrix, and \(\theta\) and \(\mathcal{E}\) denote the orientation of the major axis of the ellipse and its eccentricity, respectively. The isotropic part \(\sigma_{0}\mathbb{1}\) denotes the conductivity in the absence of \(\mathbf{M}\), as enforced by the three-fold rotation and mirror symmetries [6]. Importantly, the principal axes of the conductivity tensor correspond to the major and minor axes of the ellipse, and a nonzero FS rotation \(\theta\) induces a nonzero off-diagonal conductivity. Since \(\sigma_{xy}\) is directly proportional to \(\sin 2\theta\), we expect the opposite sign of \(\sigma_{xy}\) for the two FSs shown in Fig. 2c, calculated for the magnetization along \(\pm 45^{\circ}\). We confirm the above picture by calculating \(\sigma_{xy}^{s}\) as a function of the orientation of \(\mathbf{M}\) using the Drude formula. In Fig. 2d we plot the ratio \(2\sigma_{xy}^{s}/(\sigma_{xx}+\sigma_{yy})\), which is independent of the relaxation time \(\tau\), as a function of \(M_{z}/M_{y}\). Importantly, \(\sigma_{xy}^{s}\) is an odd function of \(M_{z}\), and exhibits a linear dependence at small \(M_{z}\), just like the (anti-symmetric) Hall conductivity. Therefore, extracting the \(H\)-linear component of the transverse voltage measured in a single geometry, as is usually done, is not sufficient to isolate the Hall signal in a magnetic material, and independent measurements of \(\sigma_{xy}\) and \(\sigma_{yx}\) are needed. Our TB model shows that a magnetization can induce LMC, but it does not prove that magnetization is _necessary_ for this effect to take place; this is reminiscent of the AHE, which can be induced by a magnetization but does not require it.
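The dependence in Eq. (3) is easy to reproduce with a minimal Drude-type calculation. In the sketch below (our illustration, with placeholder masses \(m_{a},m_{b}\)), an anisotropic parabolic band is rotated by \(\theta\) in the \(k_{x}k_{y}\) plane, and the ratio \(2\sigma_{xy}^{s}/(\sigma_{xx}+\sigma_{yy})\) comes out proportional to \(\sin 2\theta\), vanishing whenever a principal axis of the ellipse aligns with a measurement axis.

```python
# Drude conductivity of a rotated anisotropic parabolic band: sigma ~ R M^{-1} R^T.
import numpy as np

def sigma(theta, m_a=1.0, m_b=2.0):
    # rotate the inverse-mass tensor; the overall factor n e^2 tau is dropped
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Minv = np.diag([1.0 / m_a, 1.0 / m_b])
    return R @ Minv @ R.T

for theta in np.linspace(0.0, np.pi / 2, 5):
    s = sigma(theta)
    ratio = 2 * s[0, 1] / (s[0, 0] + s[1, 1])
    # analytically: ratio = sin(2*theta)*(1/m_a - 1/m_b)/(1/m_a + 1/m_b)
    print(f"theta = {np.degrees(theta):5.1f} deg  ->  ratio = {ratio:+.3f}")
```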
To explore this analogy, we have derived the expression for the conductivity \(\sigma_{ij}^{s}\) of a two-dimensional (2D) system up to linear order in \(\mathbf{B}=B_{z}\hat{z}\) within the semiclassical approximation [11]. Note that in semiclassics the electrons are affected by the field \(\mathbf{B}=\mu_{0}(\mathbf{H}+\mathbf{M})\). The full derivation is given in [6], including corrections due to a field-induced shift of the chemical potential, while the final expression relevant for the discussion here is [12]: \[\sigma_{xy}^{s}=B_{z}\frac{\tau e^{2}}{\hbar}\int\frac{d^{2}k}{(2\pi)^{2}} \tag{4}\] where \(\tau\) is the relaxation time and \(e\) the elementary charge. Inside the integral, \(f^{\prime}_{0}\) is the derivative of the Fermi function \(f_{0}(\varepsilon_{0,\mathbf{k}})\) with respect to the single-particle energy \(\varepsilon_{0,\mathbf{k}}\), \(v^{0}_{i}\), \(i=x,y\) are the components of the Fermi velocity, and we define their derivatives with respect to momentum as \(v^{0}_{ij}=\partial_{i}v^{0}_{j}=\partial_{j}v^{0}_{i}\), \(\Omega_{\mathbf{k}}\) is the Berry curvature, and \(\mathbf{m_{k}}=m_{\mathbf{k}}\hat{z}\) is the magnetization of the conduction electrons, defined by the shift of the single-particle energy in an external field, \(\widetilde{\varepsilon}_{\mathbf{k}}=\varepsilon_{0,\mathbf{k}}-\mathbf{B}\cdot\mathbf{m_{k}}\) [13]. We find two distinct types of contributions at linear order in \(B_{z}\): due to magnetization and Berry curvature. The first three terms in the integral describe a "magnetization" contribution; it is further divided into a "velocity" part (first two terms) and a "band curvature" part (third term). The fourth term, on the other hand, describes a "Berry curvature" contribution. The physical origin and behavior of these terms differ. The "magnetization" contribution arises due to momentum-dependent changes of the magnetization and band velocities. Note that if the magnetization is constant or linear in momentum across the momentum space, the "velocity" part of the "magnetization" contribution to LMC vanishes, suggesting that it is expected to be significant in systems with strongly mixed orbital character at the Fermi level. This finding is consistent with the intuition obtained from the local-orbital picture (Fig. 1): orbital mixing, promoted by crystal field and spin-orbit coupling, leads to a nonzero expectation value of OAM, as well as a momentum-dependent response to a magnetic field. On the other hand, the "band curvature" part of the "magnetization" contribution to LMC vanishes if the band disperses linearly in momentum space. The "Berry curvature" contribution is proportional to the Berry curvature \(\Omega_{\mathbf{k}}\) multiplied by the components of the Fermi velocity, and it arises from the Berry-phase correction to the phase-space volume [14]. This term can in principle give a non-zero contribution even if \(\Omega_{\mathbf{k}}\) is constant across momentum space. Symmetries such as a twofold rotation or a mirror reflection can promote the LMC effect. We note that although the AHE can also arise from \(\Omega_{\mathbf{k}}\), its functional form differs substantially from that of LMC: \[\sigma_{xy}^{\text{AHE}}=-\frac{e^{2}}{\hbar}\int\frac{d^{2}k}{(2\pi)^{2}}\,\Omega_{\mathbf{k}}\,f_{0}. \tag{5}\] First of all, \(\sigma_{ij}^{\text{AHE}}\) does not depend on the Fermi velocity or the relaxation time. Second, the integral in Eq.
(5) is taken over all occupied states (characterized by a non-zero Fermi function \(f_{0}\)), while the one in Eq. (4) is taken over the vicinity of the FS (characterized by a non-zero \(f_{0}^{\prime}\)). These differences emphasize that the AHE and LMC are two distinct probes of TRS breaking, as already deduced from symmetry considerations [6]. Having confirmed that DC linear magneto-conductivity is not only allowed by symmetry, but also expected based on microscopic arguments, we now turn to two experimental aspects: we propose an experimental geometry suited for the detection of LMC, and demonstrate a sizable expected LMC in a realistic material by first-principles calculations. Long Hall bars are typically used for transport measurements to allow the current to become homogeneous between the measurement contacts, and therefore reject effects from the injection region. Simply exchanging current and voltage contacts in a single device, which would theoretically probe \(\sigma_{xy}\) and \(\sigma_{yx}\), would introduce different artifacts in the two measurements. Another strategy might be to fabricate two devices, with the bars along two orthogonal crystal directions. However, this would introduce sample-to-sample variation, which may be a serious limitation since LMC and the Hall effect exhibit different functional dependence on the scattering rate. This leads us to propose an L-shaped device, illustrated in Fig. 3(a), to independently measure \(\sigma_{xy}\) and \(\sigma_{yx}\) in the same geometry and in the same crystal, allowing for robust extraction of \(\sigma^{a}\) and \(\sigma^{s}\). Such devices can be structured from single crystals using focused ion beam milling, or created in thin flakes using lithographic techniques. To demonstrate this approach, we perform Landauer-Büttiker calculations in the \(L\)-shaped geometry [6], assuming bands given by the TB model above (Eq. (2), Fig. 2). The resulting \(2\sigma_{xy}^{s}/\left(\sigma_{xx}+\sigma_{yy}\right)\) is plotted as a function of the orientation of \(\mathbf{M}\) in Fig. 3b. The qualitative agreement of the Landauer-Büttiker calculations with the Drude formula (see Fig. 2d) is excellent. For comparison, we also plot \(2\sigma_{xy}^{a}/\left(\sigma_{xx}+\sigma_{yy}\right)\), from which we see that \(\sigma_{xy}^{s}\) is larger than the Hall conductivity for our choice of parameters, demonstrating that this regime can indeed be reached. We note that the ratio \(\sigma_{xy}^{s}/\sigma_{xy}^{a}\) depends on the scattering time, and becomes larger with increasing purity. Finally, we expand our TB model and semiclassical analysis by evaluating LMC from DFT calculations of a real material, a monolayer of MnPSe\({}_{3}\) [6]. This is a member of the family of magnetic 2D transition metal phosphorous trichalcogenides that have gathered increasing attention recently for their wealth of interesting magnetic, optical and topological properties [15; 16; 17; 18]. MnPSe\({}_{3}\) adopts the trigonal \(R\overline{3}\) space group, where a layer of magnetic Mn atoms sits on a honeycomb lattice surrounded by PSe\({}_{3}\) layers above and below it (Fig. 4a). Its ground state magnetic order is antiferromagnetic; here we consider the metastable ferromagnetically ordered case, as has been previously investigated [17; 18]. We choose to calculate LMC in MnPSe\({}_{3}\) because it has the same point group symmetry as our toy model (\(D_{3d}\), Fig. 2), and its low-energy electronic structure contains \(t_{2g}\) orbitals of Mn.
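The payoff of measuring both \(\sigma_{xy}\) and \(\sigma_{yx}\) on the same crystal, as the L-shaped geometry above allows, is that the symmetric and antisymmetric parts, and their field-odd components, can be separated by simple bookkeeping. The sketch below illustrates this decomposition on made-up conductivity tensors at \(\pm B\) (hypothetical numbers, assuming the raw data have already been converted to conductivities); a field-odd \(\sigma_{xy}^{s}\), the LMC signal, appears in a channel where a conventional Hall analysis would report nothing.

```python
# Separating the field-odd symmetric (LMC) and antisymmetric (Hall) responses.
import numpy as np

def field_odd_parts(sigma_plusB, sigma_minusB):
    """Return (H-odd sigma_xy^s, H-odd sigma_xy^a) from tensors at +B and -B."""
    def split(s):
        return 0.5 * (s[0, 1] + s[1, 0]), 0.5 * (s[0, 1] - s[1, 0])
    s_p, a_p = split(sigma_plusB)
    s_m, a_m = split(sigma_minusB)
    return 0.5 * (s_p - s_m), 0.5 * (a_p - a_m)

# Made-up example with sigma_xy^s = 0.04*sgn(B) and a Hall part 0.02*sgn(B):
sigma_p = np.array([[1.00,  0.06], [ 0.02, 1.00]])   # at +B
sigma_m = np.array([[1.00, -0.06], [-0.02, 1.00]])   # at -B

print(field_odd_parts(sigma_p, sigma_m))   # ~ (0.04, 0.02)
```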
We performed DFT calculations for two orientations of Mn magnetic moments within the \(yz\) plane (Fig. 4a, right panel), and used the Drude formula to evaluate \(2\sigma_{xy}^{s}/\left(\sigma_{xx}+\sigma_{yy}\right)\) as a function of the Fermi level. As can be seen in Fig. 4b, a non-zero \(\sigma_{xy}^{s}\) is predicted across a wide range of energies. Its sign depends on the orientation of the magnetic moment, thus validating our understanding of the LMC and its presence in MnPSe\({}_{3}\). An important finding is the magnitude of the effect. For a Fermi energy close to the valence band maximum (VBM), \(\sigma_{xy}^{s}\) reaches values as large as 25% of the diagonal conductivity. This dramatic enhancement occurs because the small FSs close to the VBM can be significantly perturbed by the magnetization, and emphasizes the importance of measuring the full conductivity tensor in studies of magnetic semiconductors and (semi)metals. Even without this enhancement, \(\sigma_{xy}^{s}\) reaches \(1\%-3\%\) of the diagonal conductivity over a wide range of Fermi energies, which is substantial compared to typically observed Hall effect values. Indeed, both the anomalous Hall effect [19] and the topological Hall effect [20] are considered "giant" when they correspond to a few percent of the longitudinal conductivity.

Figure 3: (a) L-shaped device geometry. The scattering region (leads) is denoted by the black (red) lattice. Leads 1 and 10 are the source and drain electrodes, while leads 2-8 are voltage probes. (b) Conductivity angle for the symmetric (red) and Hall (green) conductivity as a function of \(M_{z}/M_{y}\). The calculations are done based on the tight-binding model (Eq. (2), Fig. 2), with a Fermi level of \(-1.95\). The solid line is the average over 5 disorder realizations (individual realizations shown dashed, in paler color).

To conclude, the impact of our work is threefold. First of all, we have identified a microscopic mechanism responsible for the rotation of principal optical axes in a magnetic field. Furthermore, we demonstrated a fundamentally new transport effect, referred to as linear magneto-conductivity, using three complementary approaches: a tight-binding toy model, DFT calculations and a semiclassical calculation. The latter is particularly powerful, as it identifies two distinct mechanisms yielding LMC: momentum-dependent band magnetization and Berry curvature. Finally, we have shown the importance of measuring the full conductivity tensor in magnetic materials, both because the Hall effect can otherwise be misidentified, and because such measurements serve as a powerful probe of symmetry. We hope that our results inspire new ways of probing the effect of time-reversal symmetry breaking on conduction electrons in future measurements. M.V. is grateful to Stephen R. Power for helpful discussions. V.S. is supported by the Miller Institute for Basic Research in Science, UC Berkeley. C.L. acknowledges fellowship support from the Gordon and Betty Moore Foundation through the Emergent Phenomena in Quantum Systems (EPiQS) program. M.V. was supported as part of the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Basic Energy Sciences. J.M. acknowledges support from the Quantum Materials program under the Director, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, of the U.S. Department of Energy, Contract No. DE-AC02-05CH11231.
J.O. received support from the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4537 to J.O. at UC Berkeley. I. N. and S. G. were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05CH11231 within the Theory of Materials program. Computational resources were provided by the National Energy Research Scientific Computing Center and the Molecular Foundry, DOE Office of Science User Facilities supported by the Office of Science, U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The work performed at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences of the U.S. Department of Energy, under the same Contract No. DE-AC02-05CH11231.
2303.15709
Testing type II seesaw leptogenesis at the LHC
Type II seesaw leptogenesis simultaneously explains the origin of neutrino masses, the baryon asymmetry of our universe, and inflation. The Large Hadron Collider (LHC) provides an opportunity to directly test type II seesaw leptogenesis by looking for the predicted triplet Higgs. In this paper, we perform an analysis of the detection prospects for the triplet Higgs at the LHC through the multi-electron channels. We find that due to the contribution of the $pp\to H^{\pm \pm }H^{\mp }$ process, the sensitivity of multi-electron channels searching for doubly-charged Higgs pair production can be improved. We also investigate the $3e+ {E}^{\rm miss}_{\rm T}$ signals to probe $pp\to H^{\pm \pm }H^{\mp }$ production and find that the future high luminosity LHC could probe a triplet Higgs around 1.2 TeV at the $2\sigma$ level.
Chengcheng Han, Zhanhong Lei, Weihao Liao
2023-03-28T03:37:05Z
http://arxiv.org/abs/2303.15709v2
# Testing the type II seesaw leptogenesis at the LHC

###### Abstract

The type II seesaw leptogenesis simultaneously explains the origin of neutrino masses, the baryon asymmetry of our universe, and inflation. The Large Hadron Collider (LHC) provides an opportunity to directly test type II seesaw leptogenesis by looking for the predicted triplet Higgs. In this paper, we perform an analysis of the detection prospects for the triplet Higgs at the LHC through the multi-lepton channels. We find that due to the contribution of the \(pp\to H^{\pm\pm}H^{\mp}\) process, the sensitivity of multi-lepton channels searching for doubly-charged Higgs pair production can be improved. We also investigate the \(3l+E_{\rm T}^{\rm miss}\) signals to probe the \(pp\to H^{\pm\pm}H^{\mp}\) production and find that the future high luminosity LHC could probe a triplet Higgs around 1.2 TeV at the \(2\sigma\) level.

## 1 Introduction

One of the unresolved issues in modern physics is the origin of the neutrino mass. In the standard model (SM) neutrinos are massless, but the observation of neutrino oscillations indicates that neutrinos have tiny masses, which requires an extension of the SM. The most popular ideas for generating neutrino masses are the so-called seesaw mechanisms, which can be classified into three types. The type I/III seesaw introduces three (at least two) additional singlet/triplet fermions [1; 2; 3; 4; 5], while the type II seesaw only includes an additional triplet scalar, which provides a minimal framework to explain the origin of neutrino masses [6; 7; 8; 9; 10; 11]. In the type II seesaw model, the triplet Higgs can directly couple to the lepton sector, and if the neutral component of the triplet Higgs gets a vev, a Majorana mass for the neutrinos is generated. Interestingly, the type II seesaw could also provide feasible leptogenesis if the triplet also plays the role of the inflaton, as pointed out by a recent study [12; 13]. Therefore, this simple model could explain three important problems at the same time: the origin of the neutrino masses, the baryon asymmetry of our universe, and inflation. Compared with leptogenesis from the type I seesaw, which generally requires a high-scale right-handed neutrino [14], type II seesaw leptogenesis allows the triplet Higgs to be as light as the TeV scale, which could be directly probed by the Large Hadron Collider (LHC). Indeed, the LHC has already performed such searches and currently sets a limit of around a few hundred GeV on the doubly-charged Higgs contained in the triplet, depending on its decay products [15; 16; 17; 18; 19]. The decay of the doubly-charged Higgs is sensitive to the vacuum expectation value of the triplet Higgs. For a large vev \(v_{\Delta}\gtrsim 0.1\) MeV, it mainly decays into two gauge bosons; otherwise, it decays into dileptons [20]. However, if the baryon asymmetry is generated by the type II seesaw, a vev of the triplet Higgs \(v_{\Delta}<1\) keV is preferred to avoid the lepton asymmetry being washed out. Therefore, looking for the triplet Higgs through the leptonic channel provides a viable way to test type II seesaw leptogenesis. In this paper, we investigate the detection capability of the triplet Higgs at future runs of the LHC. Previous studies on this aspect can be found in numerous works including, for example, Refs. [21; 22; 23; 24; 25; 26; 27; 28; 29]. Tests of type II seesaw leptogenesis from lepton flavor violation can also be found in [30].
In the SM extended with an additional triplet Higgs, after electroweak symmetry breaking, besides the SM-like Higgs there are 6 additional scalars in the spectrum, denoted as \(A,H,H^{\pm},H^{\pm\pm}\), where \(A,H\) are the extra CP-odd/even neutral scalars and \(H^{\pm},H^{\pm\pm}\) are the charged and doubly-charged Higgs bosons, respectively. The charged Higgs or the doubly-charged Higgs can be pair-produced through the Drell-Yan process, providing good channels to probe the triplet Higgs at colliders. The ATLAS collaboration has already performed a search for the doubly-charged Higgs assuming it decays mostly into dileptons, and masses of the doubly-charged Higgs \(H^{\pm\pm}\) up to around 800 GeV have been excluded [15; 17]. Depending on the number of observed leptons, the detection strategy is classified mainly into three categories: the 4-lepton channel, the 3-lepton channel, and the 2-lepton channel. Each channel has a different sensitivity, and the final result is derived from the combination of these three channels. On the other hand, since the triplet Higgs transforms as a triplet under the SM \(SU(2)_{L}\) group, the charged Higgs can be produced together with the doubly-charged Higgs. This production rate can be even higher than that of \(H^{\pm\pm}\) pair production. Noticing that the charged Higgs decays into a lepton and a neutrino, the \(H^{\pm\pm}H^{\mp}\) production will also contribute to the ATLAS search channels, and a better sensitivity can be derived. We will demonstrate this point later. In addition, since the charged Higgs decays into a charged lepton and a neutrino, a large missing energy would be present for \(H^{\pm\pm}H^{\mp}\) production. It is therefore intriguing to search for \(H^{\pm\pm}H^{\mp}\) production via the \(3l+E_{\rm T}^{\rm miss}\) signal, which may provide a good sensitivity to the triplet Higgs. This paper is organized as follows: in Sec. 2 we give a brief introduction to the type II seesaw model. In Sec. 3 we calculate the production of \(H^{\pm}\) and \(H^{\pm\pm}\) at the LHC. In Sec. 4 we analyze the sensitivity of the LHC to the triplet Higgs including the contribution of \(H^{\pm\pm}H^{\mp}\) pair production; we then show the prospects for \(H^{\pm\pm}H^{\mp}\) searches requiring large missing energy in the final state in Sec. 5. We draw our conclusion in Sec. 6.

## 2 The type II seesaw model

The scalar sector of the type II seesaw model contains the SM Higgs doublet \(\Phi\) and an \(SU(2)_{L}\) triplet scalar field \(\Delta\) with hypercharge \(Y=1\), which can be written as \[\Delta=\begin{pmatrix}\frac{\delta^{+}}{\sqrt{2}}&\delta^{++}\\ \delta^{0}&-\frac{\delta^{+}}{\sqrt{2}}\end{pmatrix}\qquad\quad\Phi=\begin{pmatrix}\phi^{+}\\ \phi^{0}\end{pmatrix}. \tag{1}\] The most general renormalizable and gauge invariant Lagrangian for the scalar sector is \[\mathcal{L}\supset\left(D_{\mu}\Phi\right)^{\dagger}D^{\mu}\Phi+Tr\left(D_{\mu}\Delta\right)^{\dagger}D^{\mu}\Delta-V\left(\Phi,\Delta\right). \tag{2}\] Besides the SM Yukawa interactions, one can include an additional Yukawa interaction between the triplet Higgs and the leptons, \[\mathcal{L}_{\nu}=-y_{\nu}L^{T}Ci\sigma_{2}\Delta L+h.c., \tag{3}\] where \(y_{\nu}\) is the Yukawa coupling, \(L\) is the left-handed lepton doublet, and \(C\) is the charge conjugation operator.
After the spontaneous electroweak symmetry breaking (EWSB), the neutral parts of \(\Delta\) and \(\Phi\) acquire non-vanishing vacuum expectation values, \[\langle\Delta\rangle=\begin{pmatrix}0&0\\ \frac{v_{\Delta}}{\sqrt{2}}&0\end{pmatrix}\qquad\langle\Phi\rangle=\begin{pmatrix}0\\ \frac{v_{\Phi}}{\sqrt{2}}\end{pmatrix}, \tag{4}\] where \(v_{\Delta}\) is the vacuum expectation value of the neutral part of the triplet Higgs. The neutrino mass is then generated by \[m_{\nu}=\sqrt{2}y_{\nu}v_{\Delta}. \tag{5}\] Here \(m_{\nu}\) is a complex symmetric \(3\times 3\) matrix and the physical neutrino masses can be derived by diagonalizing \(m_{\nu}\) with the PMNS matrix. The gauge invariant potential for the scalar sector can be written as follows: \[V(\Phi,\Delta) = -m_{\Phi}^{2}\Phi^{\dagger}\Phi+m_{\Delta}^{2}\text{Tr}(\Delta^{\dagger}\Delta)+\left(\mu\Phi^{\text{T}}\text{i}\sigma_{2}\Delta^{\dagger}\Phi+\text{h.c.}\right)+\frac{\lambda}{4}(\Phi^{\dagger}\Phi)^{2} \tag{6}\] \[+ \lambda_{1}(\Phi^{\dagger}\Phi)\text{Tr}(\Delta^{\dagger}\Delta)+\lambda_{2}\left[\text{Tr}(\Delta^{\dagger}\Delta)\right]^{2}+\lambda_{3}\text{Tr}[(\Delta^{\dagger}\Delta)^{2}]+\lambda_{4}\Phi^{\dagger}\Delta\Delta^{\dagger}\Phi,\] where \(m_{\Phi}^{2}\) and \(m_{\Delta}^{2}\) are the mass parameters and the \(\mu\) term provides a source of lepton number violation. The \(\mu\) term violates lepton number by two units for the lepton number assignments \(l_{\Delta}=-2,l_{\Phi}=0\). After electroweak symmetry breaking we have one doubly-charged Higgs state \(H^{\pm\pm}\) (\(\equiv\delta^{\pm\pm}\)), two charged scalar states \(H^{\pm}\) and \(G^{\pm}\) which are combinations of \(\delta^{\pm}\) and \(\phi^{\pm}\), and the CP-even neutral states \(H^{0}\), \(h^{0}\) as well as the CP-odd states \(A^{0}\), \(G^{0}\), where \(G^{\pm}\) and \(G^{0}\) are the Goldstone bosons which ultimately give the longitudinal degrees of freedom of the \(W^{\pm}\) and Z bosons. The mass-squared of the doubly-charged Higgs is given by \[m_{H^{\pm\pm}}^{2}=\frac{\sqrt{2}\mu v_{\Phi}^{2}-2\lambda_{3}v_{\Delta}^{3}-\lambda_{4}v_{\Phi}^{2}v_{\Delta}}{2v_{\Delta}}. \tag{7}\] The mass-squared of the charged Higgs is \[m_{H^{\pm}}^{2}=\frac{2\sqrt{2}\mu v_{\Phi}^{2}+4\sqrt{2}\mu v_{\Delta}^{2}-\lambda_{4}v_{\Delta}v_{\Phi}^{2}-2\lambda_{4}v_{\Delta}^{3}}{4v_{\Delta}}. \tag{8}\] For the masses of the CP-even/odd scalars, one gets: \[m_{H^{0}}^{2} = \frac{1}{2}[A+C+\sqrt{\left(A-C\right)^{2}+4B^{2}}], \tag{9}\] \[m_{h^{0}}^{2} = \frac{1}{2}[A+C-\sqrt{\left(A-C\right)^{2}+4B^{2}}],\] (10) \[m_{A^{0}}^{2}=\frac{\mu\left(v_{\Phi}^{2}+4v_{\Delta}^{2}\right)}{\sqrt{2}v_{\Delta}}, \tag{11}\] where \[A=\frac{\lambda}{2}v_{\Phi}^{2},\quad B=-\sqrt{2}\mu v_{\Phi}+\left(\lambda_{1}+\lambda_{4}\right)v_{\Phi}v_{\Delta},\quad C=\frac{\sqrt{2}\mu v_{\Phi}^{2}+4\left(\lambda_{1}+\lambda_{4}\right)v_{\Delta}^{3}}{2v_{\Delta}}. \tag{12}\] In the limit \(v_{\Delta}\ll v_{\Phi}\), we have the following mass relation among the physical eigenstates, \[m_{H^{\pm\pm}}^{2}-m_{H^{\pm}}^{2}\approx m_{H^{\pm}}^{2}-m_{H^{0}/A^{0}}^{2}\approx-\frac{\lambda_{4}v_{\Phi}^{2}}{4} \tag{13}\] One can define the mass-splitting parameter \(\Delta m=m_{H^{\pm\pm}}-m_{H^{\pm}}\), which describes the typical mass difference of the spectrum of the triplet Higgs sector. For \(\Delta m<O(10)\) GeV and \(v_{\Delta}<10^{-4}\) GeV, the \(H^{\pm\pm}/H^{\pm}\) decay into \(l^{\pm}l^{\pm}/l^{\pm}\nu\).
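To make the spectrum of Eqs. (7)-(12) concrete, the following is a minimal numerical sketch. All input values are hypothetical benchmarks chosen only for illustration (\(\mu\) tuned so that the triplet states come out near 1 TeV and \(\lambda\) so that \(m_{h^{0}}\approx 125\) GeV); they are not taken from the paper.

```python
import numpy as np

# Hypothetical benchmark inputs (not from the paper):
v_phi, v_delta = 246.0, 1e-6          # GeV; doublet and triplet vevs
lam = 0.52                             # doublet quartic, giving m_h0 ~ 125 GeV
lam1, lam2, lam3, lam4 = 0.1, 0.1, 0.1, -0.01
mu = np.sqrt(2) * v_delta * (1000.0 / v_phi)**2   # sets the triplet mass scale

# Eqs. (7) and (8): charged-sector masses squared
m2_Hpp = (np.sqrt(2)*mu*v_phi**2 - 2*lam3*v_delta**3 - lam4*v_phi**2*v_delta) / (2*v_delta)
m2_Hp = (2*np.sqrt(2)*mu*v_phi**2 + 4*np.sqrt(2)*mu*v_delta**2
         - lam4*v_delta*v_phi**2 - 2*lam4*v_delta**3) / (4*v_delta)
# Eq. (12): auxiliary quantities for the neutral CP-even sector
A = 0.5 * lam * v_phi**2
B = -np.sqrt(2)*mu*v_phi + (lam1 + lam4)*v_phi*v_delta
C = (np.sqrt(2)*mu*v_phi**2 + 4*(lam1 + lam4)*v_delta**3) / (2*v_delta)
# Eqs. (9)-(11): neutral masses squared
m2_H0 = 0.5*(A + C + np.sqrt((A - C)**2 + 4*B**2))
m2_h0 = 0.5*(A + C - np.sqrt((A - C)**2 + 4*B**2))
m2_A0 = mu*(v_phi**2 + 4*v_delta**2) / (np.sqrt(2)*v_delta)

print("m_H++, m_H+ = %.1f, %.1f GeV" % (np.sqrt(m2_Hpp), np.sqrt(m2_Hp)))
print("m_H0, m_h0, m_A0 = %.1f, %.1f, %.1f GeV" % tuple(np.sqrt([m2_H0, m2_h0, m2_A0])))
# Eq. (13): near-degenerate spacing, m_H++^2 - m_H+^2 ~ -lam4 v_phi^2 / 4
print("m_H++^2 - m_H+^2 = %.1f GeV^2 vs -lam4 v_phi^2/4 = %.1f GeV^2"
      % (m2_Hpp - m2_Hp, -lam4*v_phi**2/4))
```

For \(v_{\Delta}\ll v_{\Phi}\) the printed splitting matches \(-\lambda_{4}v_{\Phi}^{2}/4\) to high accuracy, as Eq. (13) states.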
For \(\Delta m<O(10)\) GeV and \(v_{\Delta}>10^{-4}\) GeV, \(H^{\pm\pm}/H^{\pm}\) decays into \(W^{\pm}W^{\pm}\)/\(W^{\pm}Z\) or \(W^{\pm}h^{0}\). If \(\Delta m>O(10)\) GeV, the cascade decay channels become significant. In the case of triplet Higgs leptogenesis, we have \(\Delta m<O(5)\) GeV and \(v_{\Delta}<10\) keV [32], thus the \(H^{\pm\pm}/H^{\pm}\) would mainly decay into dileptons, giving a typical multi-lepton signature at the LHC.

## 3 Production and decay of the triplet Higgs

The triplet Higgs can be produced at the LHC by the neutral current and charged current Drell-Yan processes, \[q\bar{q}\stackrel{{\gamma^{\star}/Z^{*}}}{{\longrightarrow}}H^{\pm\pm}H^{\mp\mp}/H^{\pm}H^{\mp}/H^{0}A^{0}\qquad\quad q\bar{q}^{\prime}\stackrel{{ W^{*}}}{{\longrightarrow}}H^{\pm\pm}H^{\mp}/H^{\pm}H^{0}/H^{\pm}A^{0}\] and the Feynman diagrams for \(H^{++}H^{--}\) and \(H^{++}H^{-}\) production are presented in Fig. 1. In Fig. 2 we show the cross sections of \(H^{\pm\pm}H^{\mp\mp},H^{\pm}H^{\mp},H^{\pm\pm}H^{\mp}\) pair production for a varying mass of the triplet Higgs. Here, we assume \(H^{\pm\pm}\) and \(H^{\pm}\) share the same mass parameter. The doubly-charged triplet Higgs has a considerable cross section and a distinctive decay signature, the same-charge dilepton final state. Note that \(H^{\pm\pm}H^{\mp}\) has an even larger cross section than \(H^{\pm\pm}H^{\mp\mp}\) production, thus it may provide a better sensitivity in the triplet Higgs search. The decay modes of the triplet Higgs for different \(v_{\Delta}-\Delta m\) parameters have been thoroughly discussed in Refs. [28; 29]. We consider \(\Delta m<O(1)\) GeV and \(v_{\Delta}<10^{-4}\) GeV, thus the doubly-charged Higgs \(H^{\pm\pm}\) mostly decays to leptonic final states. The decay branching ratio is given by \[BR(H^{\pm\pm}\to l_{i}^{\pm}l_{j}^{\pm})=\frac{2}{1+\delta_{ij}}\frac{\left|y_{ij}^{\nu}\right|^{2}}{\sum_{mn}\left|y_{mn}^{\nu}\right|^{2}} \tag{1}\] with \(y_{\nu}=\frac{1}{\sqrt{2}v_{\Delta}}Udiag(m_{1},m_{2},m_{3})U^{T}\), where \(U\) is the lepton mixing matrix measured in neutrino oscillation experiments. The leptonic branching ratios also depend on the mass ordering of the neutrinos as well as the neutrino mass spectrum. It has been found that for the normal hierarchy (NH) and the inverted hierarchy (IH),
NH: \(BR(H^{++}\rightarrow\mu\mu),BR(H^{++}\rightarrow\tau\tau)\gg BR(H^{++}\to ee)\)
IH: \(BR(H^{++}\to ee)\gg BR(H^{++}\rightarrow\mu\mu),BR(H^{++}\rightarrow\tau\tau)\)
In the following study, we assume \(BR(H^{++}\to ee)=100\%\) to present our results.

## 4 Multilepton searches at the LHC

The ATLAS collaboration has released a search in multilepton final states with an integrated luminosity of 36.1 fb\({}^{-1}\) of pp collisions at \(\sqrt{s}=13\) TeV [17]. This analysis focuses on the decays \(H^{\pm\pm}\to e^{\pm}e^{\pm},H^{\pm\pm}\to e^{\pm}\mu^{\pm}\) or \(H^{\pm\pm}\to\mu^{\pm}\mu^{\pm}\) with a branching ratio around 100%. The events are divided into three signal regions. The selection criteria utilised for each region are exhibited in Tab. 1. Final-state events with 2 or 3 leptons are also considered, since leptons can be missed in the detector. In our paper, we first simulate the experimental analysis, adding the contribution of \(H^{\pm\pm}H^{\mp}\) to the signal events since it also contributes to the 2- and 3-lepton signal regions. In our simulation, we implement the triplet Higgs model in FeynRules [33], and import the UFO files [34] into MadGraph [35] to generate signal events.
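As an aside, the leptonic branching ratios of Eq. (1), which motivate the \(BR(H^{++}\to ee)=100\%\) benchmark above, can be checked numerically from the oscillation parameters. Below is a minimal sketch assuming illustrative central values for the mixing angles and mass splittings (normal hierarchy, zero CP phase and vanishing lightest mass — all hypothetical inputs); it reproduces the qualitative NH pattern quoted above.

```python
import numpy as np

# Illustrative oscillation inputs (hypothetical central values):
th12, th23, th13 = 0.59, 0.84, 0.15          # mixing angles in radians
m = np.array([0.0,                            # lightest mass set to zero (NH)
              np.sqrt(7.4e-5),                # m2 from dm21^2 (eV)
              np.sqrt(2.5e-3)])               # m3 from dm31^2 (eV)

s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
# PMNS matrix with all CP phases set to zero for simplicity
U = np.array([[c12*c13, s12*c13, s13],
              [-s12*c23 - c12*s23*s13, c12*c23 - s12*s23*s13, s23*c13],
              [s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13, c23*c13]])

y = U @ np.diag(m) @ U.T          # proportional to y_nu; v_Delta cancels in the BRs
norm = np.sum(np.abs(y)**2)
flavors = ["e", "mu", "tau"]
for i in range(3):
    for j in range(i, 3):
        br = 2.0 / (1 + (i == j)) * abs(y[i, j])**2 / norm   # Eq. (1)
        print(f"BR(H++ -> {flavors[i]} {flavors[j]}) = {br:.3f}")
```

With these inputs the \(\mu\mu\), \(\tau\tau\) and \(\mu\tau\) modes dominate over \(ee\), as expected for the normal hierarchy.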
We use NNPDF23LO1 [36] for the parton distribution function, and the parton showering and hadronisation are simulated with PYTHIA8 [37]. We perform the detector simulation with Delphes [38] and the data analysis with ROOT [39]. For the two-lepton and three-lepton signal regions (SR2L and SR3L), at least one pair of leptons with the same charge is required. The separation of the same-charge leptons, their combined transverse momentum, and the scalar sum of the leptons' transverse momenta are required to satisfy \(\Delta R\,(l^{\pm}l^{\pm})<3.5\), \(P_{T}\,(l^{\pm}l^{\pm})>100\) GeV and \(\sum|P_{T}\,(l)|>300\) GeV, respectively. Each signal region is divided into three channels: the electron channel, the muon channel and the mixed channel. The selection criteria for electrons are \(|\eta|<2.47\) and \(P_{T}>30\) GeV, and for muons \(|\eta|<2.5\) and \(P_{T}>30\) GeV. Besides the pre-selection cuts described above, for the signal regions SR3L and SR4L, events are rejected if any opposite-charge same-flavour lepton pair is within 10 GeV of the \(Z\) boson mass, to reduce the background from \(Z\) production.

Figure 2: Pair production cross-sections of the triplet scalars at \(\sqrt{s}=13\) TeV for \(\Delta m=0\).

In the four-lepton signal region (SR4L), there must be two same-charge lepton pairs with a total charge of zero. The \(\Delta M/\bar{M}\) requirement is applied to exclude background where the two same-charge pairs have incompatible invariant masses \(\left(\Delta M=\left|m^{++}-m^{--}\right|,\bar{M}=\frac{m^{++}+m^{--}}{2}\right)\). In the ATLAS analysis, the value of \(\Delta M\) differs for different \(\bar{M}\); we simply take \(\Delta M/\bar{M}<0.1\) in the four-lepton channel. In all signal regions, the invariant mass of same-charge lepton pairs is required to be above 200 GeV. In order to suppress background events arising from top-quark decays, events with a b-tagged jet are vetoed. To validate our simulation, we first simulate the signal events from \(pp\to H^{\pm\pm}H^{\mp\mp}\) production and obtain the signal cut efficiency. Using the observed signal events from the ATLAS analysis, we apply the \(CLs\) method [40] to get the 95% CL upper limits on the \(pp\to H^{\pm\pm}H^{\mp\mp}\) cross section. The result is shown in Fig. 3, denoted as the black dashed curve [17]. As a comparison, the limit from the ATLAS experiment is also shown as the black dotted line. Our limit is close to the one derived from the ATLAS experiment. Since \(pp\to H^{\pm\pm}H^{\mp}\) contributes to the SR2L and SR3L signal regions, we expect the real limit to be stronger. Therefore, we simulate the process \(pp\to H^{\pm\pm}H^{\mp}\to l^{\pm}l^{\pm}l^{\mp}\nu\) and obtain the corresponding signal efficiency. To combine our results, we denote by \(\sigma_{1,2}\) and \(\varepsilon_{1,2}\) the cross sections and cut efficiencies of the \(pp\to H^{\pm\pm}H^{\mp\mp}\) and \(pp\to H^{\pm\pm}H^{\mp}\) processes, respectively; the total number of signal events is then \(n=\mathcal{L}\sigma_{1}\varepsilon_{1}+\mathcal{L}\sigma_{2}\varepsilon_{2}\) for each signal region. We set the limit on the total number of signal events. To present our result, we can use an effective cut efficiency \(\varepsilon_{2eff}=\varepsilon_{2}+\sigma_{1}/\sigma_{2}\,\varepsilon_{1}\) for the \(pp\to H^{\pm\pm}H^{\mp}\) process and set the limit on the cross section of \(pp\to H^{\pm\pm}H^{\mp}\) production, which is shown as the red dashed curve in Fig. 3.
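The bookkeeping behind the effective efficiency can be summarized in a few lines. The cross sections and efficiencies below are placeholders, not values from the analysis; the point is the identity \(n=\mathcal{L}\sigma_{2}\varepsilon_{2eff}\).

```python
# Combining the two signal processes into one expected yield.
# All numbers below are placeholders, not values from the analysis.
L = 36.1                      # fb^-1, integrated luminosity of the ATLAS dataset
sigma1, eps1 = 0.50, 0.20     # pp -> H++H-- cross section (fb) and cut efficiency
sigma2, eps2 = 0.80, 0.15     # pp -> H++H-  cross section (fb) and cut efficiency

n = L * (sigma1 * eps1 + sigma2 * eps2)        # total expected signal events
eps2_eff = eps2 + (sigma1 / sigma2) * eps1     # effective efficiency for process 2
assert abs(n - L * sigma2 * eps2_eff) < 1e-9   # the two descriptions are identical
print(f"n = {n:.2f} events, eps_2eff = {eps2_eff:.3f}")
```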
Fig. 3 shows that the combined limit is around 100 GeV stronger than the one derived from the \(pp\to H^{\pm\pm}H^{\mp\mp}\) process alone.

## 5 \(3l+E_{\rm T}^{\rm miss}\) signal

Notice that \(pp\to H^{\pm\pm}H^{\mp}\) has a relatively larger cross section and the final state includes missing energy. It is intriguing to examine whether the \(3l+E_{\rm T}^{\rm miss}\) signal could provide a better sensitivity to the triplet Higgs.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & SR2L & SR3L & SR4L \\ \hline b-jet veto & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline \(Z\) veto & & \(\circ\) & \(\circ\) \\ \hline \(P_{T}\left(l^{\pm}l^{\pm}\right)>100\)GeV & \(\circ\) & \(\circ\) & \\ \hline \(\sum\left|P_{T}\left(l\right)\right|>300\)GeV & \(\circ\) & \(\circ\) & \\ \hline \(\Delta R\left(l^{\pm},l^{\pm}\right)<3.5\) & \(\circ\) & \(\circ\) & \\ \hline \(\Delta M/\bar{M}\) & & & \(\circ\) \\ \hline \end{tabular} \end{table} Table 1: Selection criteria in all the signal regions

The relevant background for this signal mainly originates from diboson (\(ZZ,ZW,WW\)), \(t\bar{t}\), \(t\bar{t}W\), \(t\bar{t}Z\), \(t\bar{t}h\), triboson and Drell-Yan processes. However, as shown in the ATLAS paper, for the \(3l\) process the diboson background is much more dominant than the other backgrounds. Therefore, for the background simulation we only consider events from the diboson process. The background and the signal are both simulated using MadGraph with MLM matching. For the diboson cross sections, we also apply K-factors to include the NLO corrections. The LO cross-sections for the diboson processes and the corresponding K-factors at the \(\sqrt{s}=\) 13 TeV LHC [41] are shown in Tab. 2. To ensure simulation credibility and to validate the charge misidentification effect in the electron channel, a same-charge region (SCR) is also considered, in which only the b-jet veto is applied. For \(pp\to H^{\pm\pm}H^{\mp}\to l^{\pm}l^{\pm}l^{\mp}\nu\), large missing transverse energy will appear in the final state. We show the missing energy distributions of the diboson process and the \(pp\to H^{\pm\pm}H^{\mp}\to l^{\pm}l^{\pm}l^{\mp}\nu\) process in Fig. 4. It shows that a cut on the missing energy around a few hundred GeV would remove much of the background.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(ZZ\) & \(W^{+}Z\) & \(W^{-}Z\) & \(WW\) \\ \hline \(\sigma_{LO}\)[pb] & 9.89 & 15.51 & 9.53 & 67.74 \\ \hline K-factor & 1.62 & 1.84 & 1.91 & 1.66 \\ \hline \end{tabular} \end{table} Table 2: The LO cross-sections and K-factors for diboson production at \(\sqrt{s}=\) 13 TeV

Figure 3: The limits for \(B(ee)/B(e\mu)/B(\mu\mu)=100\%/0\%/0\%\). The black and red solid lines represent the production cross section of the \(pp\to H^{\pm\pm}H^{\mp\mp}\) and \(pp\to H^{\pm\pm}H^{\mp}\) process. The black dashed line is the 95% CL limit we get for the \(pp\to H^{\pm\pm}H^{\mp\mp}\) process, which is comparable to the limit obtained by ATLAS and depicted as the black dotted line. The red dashed line is the 95% CL limit we get by adding the contribution of the two processes together.

The distinction between the missing energy distributions of the signal and the diboson background motivates us to add a missing energy cut \(E_{\rm T}^{\rm miss}>300\) GeV. The cut flow for the background and signal for a luminosity of \(3000fb^{-1}\) at the 13 TeV LHC is shown in Tab. 3. It clearly shows that only 10% of the background is left after imposing the cut \(E_{\rm T}^{\rm miss}>300\) GeV, while most of the signal events are still kept.
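A quick numerical cross-check of the yields quoted in Tab. 3, using the expected discovery significance \(S/\sqrt{B}\) employed in the next section (a sketch; the numbers are simply read off the table):

```python
import math

B = 49.1                                       # diboson background after all cuts (Tab. 3)
signal = {600: 790.0, 900: 111.0, 1200: 20.0}  # signal yields vs m_H++ in GeV (Tab. 3)
for mass, S in signal.items():
    print(f"m_H++ = {mass} GeV: S/sqrt(B) = {S/math.sqrt(B):.1f}")
# Background retention of the E_T^miss > 300 GeV cut quoted in the text:
print(f"background kept: {49.1/490:.0%}")
```

This reproduces the significance row of Tab. 3 (113, 15.8 and 2.9) and the quoted ~10% background retention.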
Using the expected discovery significance \(S/\sqrt{B}\), the results are shown in Fig. 5 for the 13 TeV LHC with a luminosity of \(3000fb^{-1}\). We find that a triplet mass up to 1.2 TeV can be reached at \(2\sigma\) at the future high luminosity LHC. As a comparison, we also show the \(2\sigma\) sensitivity for the multi-lepton search channels mentioned in the last section, where the missing energy cut is not imposed. We find that when the triplet Higgs mass is below 800 GeV, the multi-lepton channel still provides a better sensitivity for the triplet Higgs. However, the \(3l+E_{\rm T}^{\rm miss}\) signal can reach higher triplet Higgs masses when the mass is above 800 GeV. The main reason is that when the mass of the triplet Higgs is low, the missing energy is also lower, so the missing energy cut hurts the signal. We believe that an even larger missing energy cut could further suppress the background, and a better sensitivity for a heavy triplet Higgs could be reached.

Figure 4: The missing transverse energy distribution of the \(pp\to H^{\pm\pm}H^{\mp}\to l^{\pm}l^{\pm}l^{\mp}\nu\) process and the diboson background with the pre-selection. The masses of \(H^{\pm\pm}\) and \(H^{\mp}\) are assumed to be 1 TeV here.

## 6 Conclusion

The type II seesaw leptogenesis simultaneously explains the origin of neutrino masses, the baryon asymmetry of our universe, and inflation. The Large Hadron Collider (LHC) provides an opportunity to directly test type II seesaw leptogenesis by looking for the predicted triplet Higgs. In this paper, we perform an analysis of the detection prospects for the triplet Higgs at the LHC through the multi-lepton channels. We find that due to the contribution of the \(pp\to H^{\pm\pm}H^{\mp}\) process, the sensitivity of multi-lepton channels searching for doubly-charged Higgs pair production can be improved. We also investigate the \(3l+E_{\rm T}^{\rm miss}\) signals to probe the \(pp\to H^{\pm\pm}H^{\mp}\) production, and we find this channel may provide a better sensitivity than the multi-lepton channel. Our result shows that the future LHC could probe a triplet Higgs around 1.2 TeV at the \(2\sigma\) level with a luminosity of 3000 \(fb^{-1}\) for the \(3l+E_{\rm T}^{\rm miss}\) search channel.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Diboson BKG & \(m_{H^{\pm\pm}}=600\) GeV & \(m_{H^{\pm\pm}}=900\) GeV & \(m_{H^{\pm\pm}}=1200\) GeV \\ \hline pre-selection & 14518 & 2249 & 242 & 38 \\ \hline \(m_{invariant}>200\)GeV & 3037 & 2199 & 241 & 38 \\ \hline \(P_{T}\,(l^{\pm}l^{\pm})>100\)GeV & 1379 & 2168 & 239 & 37 \\ \hline \(\sum|P_{T}\,(l)|>300\)GeV & 673 & 2139 & 237 & 37 \\ \hline \(\Delta R\,(l^{\pm},l^{\pm})<3.5\) & 490 & 1596 & 174 & 26 \\ \hline \(E_{\rm T}^{\rm miss}>300\) GeV & 49.1 & 790 & 111 & 20 \\ \hline Significance & - & 113 & 15.8 & 2.9 \\ \hline \end{tabular} \end{table} Table 3: The cut flow for the diboson background and the signal with an integrated luminosity of \(3000fb^{-1}\) and \(\sqrt{s}=\) 13 TeV.

Figure 5: The sensitivity of future searches with \(3000fb^{-1}\).

## Acknowledgments

C. H. is supported by the Sun Yat-Sen University Science Foundation, and by the Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No. 22qntd3007.

## Appendix A The \(CL_{s}\) method

When a small number of signal events is nearly indistinguishable from the background-only hypothesis, we use the \(CL_{s}\) method to improve the experimental sensitivity.
The usual confidence level for the signal-plus-background hypothesis is given by the probability that the test statistic \(Q\) is less than or equal to the value observed in the experiment: \[CL_{s+b}=P_{s+b}(Q\leq Q_{obs})=\int_{-\infty}^{Q_{obs}}\frac{dP_{s+b}}{dQ}dQ, \tag{10}\] where \(\frac{dP_{s+b}}{dQ}\) is the probability distribution function for signal-plus-background experiments. Likewise, the confidence level for the background-only hypothesis is: \[CL_{b}=P_{b}(Q\leq Q_{obs})=\int_{-\infty}^{Q_{obs}}\frac{dP_{b}}{dQ}dQ, \tag{11}\] where \(\frac{dP_{b}}{dQ}\) is the probability distribution function for background-only experiments. To obtain the limit, we use the definition of \(CL_{s}\), \[CL_{s}=\frac{CL_{s+b}}{CL_{b}}. \tag{12}\] The signal hypothesis is excluded at the confidence level \(CL\) when \[1-CL_{s}\geq CL. \tag{13}\] To combine the results of the signals from several channels, the test statistic is defined as the likelihood ratio \[Q=\prod_{i=1}^{n}q_{i} \tag{14}\] with \[q_{i}=\frac{e^{-(s_{i}+b_{i})}(s_{i}+b_{i})^{N_{i}}/N_{i}!}{e^{-b_{i}}b_{i}^{N_{i}}/N_{i}!} \tag{15}\] for counting experiments. Here \(s_{i}\) and \(b_{i}\) are the expected signal and background, \(i\) labels the channel, and \(N_{i}\) is the number of observed candidates in channel \(i\). The final likelihood function should also include the uncertainties on the backgrounds. All of the above calculations can be performed numerically by the Monte Carlo method.
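As an illustration, the \(CL_{s}\) construction above can be implemented for a single counting channel with a toy Monte Carlo. The sketch below assumes a hypothetical experiment with five expected background events and five observed events; since \(\ln q\) of Eq. (15) reduces to \(-s+N\ln(1+s/b)\), which is monotonic in the observed count \(N\) for fixed \(s\) and \(b\), ranking toys by \(\ln q\) is straightforward.

```python
import numpy as np

rng = np.random.default_rng(1)

def cls_counting(n_obs, s, b, n_toys=200_000):
    # ln q of Eq. (15): the factorials cancel, leaving -s + N*ln(1 + s/b).
    lnq = lambda N: -s + N * np.log(1.0 + s / b)
    q_obs = lnq(n_obs)
    cl_sb = np.mean(lnq(rng.poisson(s + b, n_toys)) <= q_obs)   # Eq. (10)
    cl_b = np.mean(lnq(rng.poisson(b, n_toys)) <= q_obs)        # Eq. (11)
    return cl_sb / cl_b                                          # Eq. (12)

# Hypothetical single-channel experiment: b = 5 expected, n_obs = 5 observed.
for s in (3.0, 9.0, 15.0):
    cls = cls_counting(5, s, 5.0)
    print(f"s = {s:4.1f}: CLs = {cls:.4f}  -> excluded at 95% CL: {cls <= 0.05}")
```

Larger signal hypotheses yield smaller \(CL_{s}\) values and are excluded first, as expected.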
2309.00439
f-mode oscillations of anisotropic neutron stars in full general relativity
We investigate f-mode oscillations of static anisotropic stable neutron stars within the framework of full general relativity. We present equations governing unperturbed stellar structures and oscillations with an ansatz to account for the anisotropy. We solve those equations for two different equations of state. We see that moderately anisotropic neutron stars with the tangential pressure larger than the radial pressure can give more massive neutron stars than the isotropic or very anisotropic ones. We find that the frequency of the f-mode exhibits a linear relationship with the square root of the average density of the stars and the slope of the fit depends on the anisotropic strength. For any given value of the anisotropic strength, the frequency increases with the increase of the mass of the neutron star, linearly for lower masses, and rapidly at higher masses. However, this non-linear rise in the frequency with the mass is not prominent when the radial pressure is larger than the tangential pressure. For a fixed value of a small mass, higher anisotropy leads to a larger value of the frequency, but when the fixed mass is above a threshold value, higher anisotropy leads to a smaller value of the frequency. The nature of the variation in the frequency with the change in the anisotropic strength is similar for the two equations of state, but for a fixed mass and the same amount of anisotropy, the softer equation of state gives a higher frequency. We also find that the damping time of the f-mode oscillation decreases as the mass of the neutron star increases for all values of the anisotropic strength. For a fixed mass of the neutron star and for the same amount of anisotropy, the value of the damping time is lower for the softer equation of state, but the nature of the variation in the damping time with the change in the anisotropic strength is similar.
Sushovan Mondal, Manjari Bagchi
2023-09-01T13:14:23Z
http://arxiv.org/abs/2309.00439v2
# Non-radial oscillation of anisotropic neutron stars in full general relativity

###### Abstract

We investigate non-radial oscillations of anisotropic neutron stars within the framework of general relativity. Our study involves nonrotating, spherically symmetric anisotropic neutron stars as the unperturbed equilibrium configuration. We employ the BSk21 equation of state to describe neutron star matter and introduce a phenomenological ansatz to account for local anisotropy. By considering small and adiabatic polar perturbations (even parity), we derive oscillation equations from the linearized Einstein equations. Notably, these oscillation equations explicitly incorporate the influence of pressure anisotropy. We calculate the frequencies and damping times of the fundamental (f) mode for various choices of the anisotropic pressure strength. Interestingly, we observe that the f-mode frequencies continue to scale linearly with the average density of neutron stars, even in the presence of anisotropy. We conduct a comprehensive analysis of how anisotropy affects both the f-mode frequency and its associated damping time.

## I Introduction

Among the various heavenly bodies throughout the universe, neutron stars are some of the most exotic objects, mainly because of their extremely high densities. Hence, general relativity plays a very important role in describing the overall structure of a neutron star. These compact objects are laboratories for studying various theories of physics at such high densities. These objects are observed in a wide range of the electromagnetic spectrum, but there are various properties that cannot be probed through electromagnetic waves. The recent detection of gravitational waves opened up a completely new window for the observation of neutron stars and other compact objects. So far, only the gravitational waves emitted during the late inspiral, merger and ringdown phases of binary systems of compact objects have been detected by the present set of ground based detectors [1; 2; 3]. Recently, the Pulsar Timing Array collaborations have found evidence of gravitational waves in the nanohertz regime [4; 5; 6]. In the future, it is expected that detectors like LISA (Laser Interferometer Space Antenna), the Einstein Telescope, etc. will detect gravitational waves generated from various sources, helping us look deep into the cosmos and giving us an idea of many other objects. Among these other sources of gravitational waves, one of the promising sources is the oscillation of neutron stars. To model these neutron stars and their oscillations, the most common assumption made is that the pressure inside the neutron star is isotropic. Nevertheless, compact stellar objects do not always exhibit isotropy. The idea of a tangential pressure different from the radial pressure can be traced back to Lemaitre [7]. Bowers and Liang [8] were the first to apply the anisotropic model to the equilibrium configuration of relativistic compact stars. They showed that local anisotropy in compact stars can have non-negligible effects on observable properties like the mass, surface redshift, etc. It is claimed that in the high density regime (\(10^{15}\ g/cm^{3}\)), where the interactions become relativistic, anisotropy should be present in the system [9]. Pressure anisotropy can be triggered by the presence of a solid core [10; 11], superfluidity [12], pion condensation [13], slow rotation [14], a mixture of two fluids [15], etc.
The presence of viscosity can also induce local anisotropy inside compact stars [16; 17]. S. Yazadjiev [18] modeled magnetars in general relativity in a nonperturbative way, where the constituent fluid is considered anisotropic due to the presence of a magnetic field. Recently, Deb et al. [19] showed that magnetic fields introduce additional anisotropy, altering various properties like the mass, radius, surface redshift, etc. of compact stars. Elasticity in compact stars can also be described by local anisotropy [20]. Other than these anisotropic models of neutron stars, a wide range of studies are described in the review [21] and the references therein. Compact stars are good sources of detectable gravitational waves, and the oscillation of neutron stars is one of the promising sources. In the last phase of the life cycle of massive stars, a supernova explosion occurs, and the newly formed compact object may oscillate violently. Similarly, if the merger of two compact stars forms a neutron star, that object also oscillates. A part of the energy of oscillation will be emitted as gravitational radiation, which will propagate as ripples of space-time. Due to the emission of gravitational waves, the oscillation gets damped. Thorne and Campolattaro [22] first theoretically studied the non-radial oscillations of neutron stars in full general relativity. Later, Lindblom and Detweiler [23; 24] did similar calculations with some modifications. Using a similar formalism, the non-radial oscillation modes have been calculated for superfluid neutron stars [25], quark stars [26], and compact stars with density discontinuities [27; 28]. Recent studies of the impact of the equation of state on neutron star oscillation modes reveal that nuclear interactions have important effects on the quasi-normal modes [29]. Most of these studies assume pressure isotropy inside the compact stars. The role of anisotropy in the non-radial oscillation modes was first studied by W. Hillebrandt et al. [30] in the Newtonian framework. This study revealed that pressure anisotropy plays a very important role in the mode frequencies. Later, Doneva et al. [31] calculated f and p mode frequencies in the Cowling approximation, where the metric perturbation is ignored. A recent study extended this work to the case of realistic equations of state [32]. These studies have shown that anisotropy in compact stars can change the numerical values of the quasi-normal mode frequencies. Though the relativistic Cowling approximation is good enough to calculate fluid oscillation modes, ignoring the metric perturbation can introduce errors in the calculation. A recent study by H. Sotani et al. [33] found that the frequencies in this approximation can only be determined to within 20% accuracy. Another important drawback of the Cowling approximation is that, since the metric perturbation is ignored, the energy loss due to gravitational waves is not accounted for. The damping time of the modes, which describes the characteristic time of the energy loss, cannot be calculated in the Cowling approximation. The motivation of the present work is to study the quasi-normal modes of anisotropic neutron stars in the full general relativistic framework. Our focus is mainly to derive the governing differential equations for the non-radial oscillations of neutron stars with pressure anisotropy, and to find the quasi-normal mode frequencies with the damping times.
Notably, our investigation centers on polar perturbations, characterized by metric perturbation functions that exhibit even parity under spatial inversion (\((\theta,\phi)\rightarrow(\pi-\theta,\pi+\phi)\)), where \(r\), \(\theta\), and \(\phi\) denote coordinates within a spherical polar coordinate system. Of particular significance, this study is predominantly dedicated to the exploration of fundamental (f) modes of anisotropic neutron stars, among the spectrum of other polar modes. We have considered mainly the BSk21 equation of state, which connects the radial pressure with the density inside the neutron star. To describe the tangential pressure, we have adopted a phenomenological ansatz described by Horvat et al. [34]. We show how the f-mode frequencies and the associated damping times change due to the presence of pressure anisotropy, for various masses of neutron stars. The paper is organised as follows. In Sec. II, we detail the equilibrium model of anisotropic neutron stars. This involves presenting the Tolman-Oppenheimer-Volkoff (TOV) equations modified by anisotropy, the equation of state (EoS), and the anisotropy ansatz. Moving to Sec. III, we delve into the perturbation scheme for neutron stars. This section covers both the analytical and numerical techniques employed to compute the quasi-normal modes and damping times. In Sec. IV, we present the outcomes derived from our investigation into the quasi-normal modes of anisotropic neutron stars. Finally, we conclude the paper in Sec. V with a summary and discussion of our findings.

## II Equilibrium Anisotropic Configuration of Compact Stars in General Relativity

In the case of a spherically symmetric non-rotating space-time, the metric can be written in the well known form \[ds^{2}=-e^{\nu(r)}dt^{2}+e^{\lambda(r)}dr^{2}+r^{2}(d\theta^{2}+sin^{2}\theta d\phi^{2}), \tag{1}\] where \(\nu(r)\) and \(\lambda(r)\) are metric functions that depend on the radial coordinate only. As the matter source in the field equations, we take an anisotropic fluid, where the radial component of the pressure (\(p_{r}\)) and the tangential component of the pressure (\(p_{t}\)) are nonidentical. Note that, like the metric functions, the radial and tangential components of the pressure inside a neutron star also depend on the radial coordinate \(r\). However, for the sake of simplicity, from now on, we do not write the \(r\) dependence explicitly. The anisotropy parameter is defined as \[\chi=p_{t}-p_{r}. \tag{2}\]
The space-time geometry and the matter distribution are related by Einstein equations: \[R_{\alpha}^{\beta}-\frac{1}{2}R\delta_{\alpha}^{\beta}=8\pi T_{\alpha}^{\beta}. \tag{4}\] Using equations (1) and (3), the Einstein equations can be written as \[e^{-\lambda}\left(\frac{\lambda^{\prime}}{r}-\frac{1}{r^{2}} \right)+\frac{1}{r^{2}}=8\pi\rho, \tag{5}\] \[e^{-\lambda}\left(\frac{\nu^{\prime}}{r}+\frac{1}{r^{2}}\right)- \frac{1}{r^{2}}=8\pi p_{r},\] (6) \[\frac{1}{2}e^{-\lambda}\left(\nu^{\prime\prime}-\frac{1}{2}\nu^{ \prime}\lambda^{\prime}+\frac{1}{2}{\nu^{\prime}}^{2}+\frac{\nu^{\prime}- \lambda^{\prime}}{r}\right)=8\pi p_{t}, \tag{7}\] where a prime denotes the differentiation with respect to the radial coordinate \(r\). From equations (5), (6), and (7), we get the equation of hydrostatic equilibrium in the presence of pressure anisotropy, which can be written as \[p_{r}^{\prime}=-\frac{\nu^{\prime}}{2}(\rho+p_{r})+\frac{2\chi}{r}. \tag{8}\] The interior metric function (\(r<R\)), (where R is the radius of the star) can be found from equation (5), which can be written as \[e^{-\lambda}=1-\frac{2m}{r},\text{ where, }m=4\pi\int_{0}^{r}\rho(r^{\prime})r^{ \prime 2}dr^{\prime}\, \tag{9}\] is the mass enclosed within a spherical region of radius \(r\). Using (6) and (9) the hydrostatic equilibrium equation (8) can be written as \[p_{r}^{\prime}=-\frac{(\rho+p_{r})(m(r)+4\pi p_{r}r^{3})}{r(r-2m)}+\frac{2\chi }{r}. \tag{10}\] This is the modified Tolman-Oppenheimer-Volkoff (TOV) equation, taking local pressure anisotropy into account. To solve this equation, we need to specify the equation of state of the neutron star matter, i.e., the dependance of \(\rho\) on \(p_{r}\) and the anisotropy parameter \(\chi\). As the boundary conditions, we set a finite radial pressure at the center of the star and zero radial pressure at the surface of the star. Other than these conditions, the metric functions should match with the exterior Schwarzschild metric at \(r=R\), i.e., \[\lambda(R)=-\nu(R)=-ln\left(1-\frac{2M}{R}\right), \tag{11}\] where M is the total mass of the star, i.e. \(M=m(R)\). ### Description of equation of state and anisotropy parameter As mentioned earlier, to build a model of neutron star, one needs to specify how the pressure inside them varies with the density, which is known as the equation of state (EoS). There are studies of microscopic physics leading to various equations of state (EoS)[37; 38]. However, those studies usually do not take into account the effect of the space-time curvature on the properties of matter. Moreover, these EoS are for isotropic matter. Unfortunately, no rigorous study has been performed to model the pressure anisotropy inside a neutron star, but there are some popular ansatz, where one borrows the variation of the radial pressure with density from a given EoS derived through theoretical modeling and then some ansatz for the anisotropy parameter \(\chi\) (see equation (2)). In the present work, we adopt a quasi-local ansatz for \(\chi\) as proposed by Horvat et al. [34], where the anisotropy parameter depends on the radial pressure \(p_{r}\) and a quasi-local variable \(\mu\), i.e., \(\chi(p_{r},\mu)\). We assume that \(\mu\) is equivalent to the local compactness, i.e., \(\mu=2m/r\). Furthermore, the functional form of \(\chi\) is taken as: \[\chi=\tau p_{r}\mu, \tag{12}\] where \(\tau\) is a parameter governing the strength of the anisotropy. 
Consistent with previous studies [39; 31; 40], we consider values of \(\tau\) within the range \(-2\leq\tau\leq 2\). The chosen form of the anisotropy parameter in Eq. (12) possesses two appealing characteristics. Firstly, the anisotropy parameter vanishes at the center of the object, as the compactness scales as \(\mu\sim r^{2}\) when \(r\to 0\), ensuring the regularity of the anisotropic parameter. Another motivation for this choice is drawn from astrophysical considerations. In the non-relativistic regime, where \(\mu\) is significantly smaller than unity (\(\mu\ll\mathcal{O}(1)\)), the impact of pressure anisotropy is expected to be negligible. This aligns with our decision to adopt the specified form of the anisotropy parameter, as it allows for a smooth transition between isotropic and anisotropic behavior in a physically meaningful manner. Hence, to solve the modified TOV equation (Eq. (10)), we first need to choose the EoS for the neutron star, i.e., a relation between \(p_{r}\) and \(\rho\). There are many EoS available in the literature. In the present work, we use an analytical representation of the Brussels - Montreal unified EoS for nuclear matter, referred to as BSk19, BSk20, BSk21 [41; 42; 43], which express \(p_{r}(\rho)\) in terms of the variables \(\zeta\) and \(\sigma\), parametrised as [44]: \[\begin{split}\zeta=\frac{a_{1}+a_{2}\sigma+a_{3}\sigma^{3}}{1+a_{4}\sigma}f(a_{5}(\sigma-a_{6}))&+(a_{7}+a_{8}\sigma)f(a_{9}(a_{6}-\sigma))+(a_{10}+a_{11}\sigma)f(a_{12}(a_{13}-\sigma))\\ &+(a_{14}+a_{15}\sigma)f(a_{16}(a_{17}-\sigma))+\frac{a_{18}}{1+[a_{19}(\sigma-a_{20})]^{2}}+\frac{a_{21}}{1+[a_{22}(\sigma-a_{23})]^{2}},\end{split} \tag{13}\] where \(\zeta=\log_{10}(p_{r}\text{ in }\text{dyn }cm^{-2})\), \(\sigma=\log_{10}(\rho\text{ in }\text{g }cm^{-3})\), and \(f(x)=[exp(x)+1]^{-1}\). The values of the coefficients \(a_{i}(i=1,\ldots,23)\) can be found in Ref. [44]. Throughout most of this article, our analysis primarily focuses on the BSk21 EoS. However, to compare our results across different levels of matter stiffness, we also consider the BSk19 EoS in selected cases. Whenever applicable, we explicitly indicate the usage of the BSk19 EoS and present the corresponding results for comparison. To check the validity of our chosen EoS and the anisotropy strength \(\tau\), we first calculate mass-radius curves for various values of \(\tau\) (in the range of \(-2\) to \(2\), as mentioned earlier) for the BSk21 EoS, as shown in Fig. 1. From this figure, it is evident that as the anisotropic strength increases (resulting in a higher value of \(p_{t}\) for the same value of \(p_{r}\)), neutron stars are capable of accommodating more mass. This feature is particularly valuable for explaining the existence of more massive neutron stars, as hinted by recent observational studies. In Fig. 1, we have included the corresponding bands for PSR J2045+3633 [45] and PSR J0952-0607 [46], representing low-mass (\(1.252\pm 0.021M_{\odot}\)) and massive (\(2.35\pm 0.17M_{\odot}\)) neutron stars, respectively. Additionally, the band for the event GW190814 [47] is displayed in the same figure, corresponding to an object with a mass range of \(2.50-2.67M_{\odot}\). Notably, this object lies within the so-called "mass gap", implying that it could be either a highly massive compact star or a light black hole.
From Fig. 1, it is evident that the presence of anisotropy provides an explanation for the existence of very massive neutron stars, which would not be feasible within the framework of isotropic stars governed by a given equation of state (in this case, BSk21). Note that, for each value of \(\tau\), we have plotted the \(M-R\) curve only up to the stable configuration. This can be understood by plotting the total mass against the central density, as shown in Fig. 2. This figure shows that, for each case, above a certain central density these models become unstable. The stable region is indicated by the positive slope of the mass-central density curve (\(dM/d\rho_{c}>0\)) for those models. In Fig. 2, the point where \(dM/d\rho_{c}=0\) on each profile is marked with a black filled circle. In Fig. 1, we started plotting \(M-R\) curves from a low value of \(\rho_{c}\) up to the value where \(dM/d\rho_{c}=0\).

Figure 1: Mass - Radius profiles for anisotropic neutron stars with \(-2\leq\tau\leq 2\) and BSk21 EoS. The profiles are shown up to the point where \(dM/d\rho_{c}=0\).

Figure 2: Mass - Central density profiles for anisotropic neutron stars with \(-2\leq\tau\leq 2\) for BSk21 EoS. Black filled circles on each profile represent the point where \(dM/d\rho_{c}=0\).

We have observed that, for a fixed central density, when the anisotropy strength \(\tau\) is positive (implying \(p_{t}>p_{r}\)), the neutron star has a greater mass than its corresponding isotropic case. Conversely, when the anisotropy strength \(\tau\) is negative (implying \(p_{t}<p_{r}\)), the neutron star has a lower mass than its corresponding isotropic case. This behavior is evident in Figure 3, where the radial coordinate (in km) inside the neutron star is plotted on the abscissa, and the corresponding enclosed mass in solar mass units is plotted on the ordinate. The various curves in this figure represent different values of \(\tau\), while the value of \(\rho_{c}\) is the same, \(7.2955\times 10^{14}\) g/cm\({}^{3}\), for all cases. This feature can be explained through a simple force balance argument within the neutron star. When \(\tau\) is positive, the pressure in the tangential direction exceeds the radial pressure. In contrast, for isotropic stars, the pressure is the same in both radial and tangential directions. As a result, the effective force in the outward direction for an anisotropic star (with \(\tau>0\)) is greater than that of the isotropic star with the same central density. This additional outward force can balance the gravitational pull of more mass, which is directed towards the center of the star. As a consequence, anisotropic neutron stars with a positive value of \(\tau\) are more massive compared to their isotropic counterparts with the same central density. The lower mass of anisotropic stars with \(\tau<0\) can be understood in a similar manner.

Figure 3: Variation of enclosed mass with the radial coordinate inside neutron stars for various values of the anisotropic strength, each for \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\) (this central energy density gives a neutron star with mass \(M=1.4M_{\odot}\) in the case with zero anisotropic pressure, i.e. when \(\tau=0\)).

In Fig. 4, we plot the radial coordinate (in km) inside the neutron star along the horizontal axis and the corresponding density in \(10^{14}\) g/cm\({}^{3}\) along the vertical axis, for different values of \(\tau\), each started with \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\) (the point where all the curves meet on the vertical axis). We see that for each case, the density consistently decreases as the radial coordinate increases.

Figure 4: Density profile inside neutron stars for various anisotropic strengths, each for \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\).
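The stability boundary marked by the filled circles in Fig. 2 can be located numerically from any computed \(M(\rho_{c})\) sequence. A minimal sketch, with a hypothetical placeholder curve standing in for actual TOV output:

```python
import numpy as np

# Hypothetical M(rho_c) sequence standing in for actual TOV output:
rho_c = np.linspace(5e14, 3e15, 40)           # central densities (g/cm^3)
M = 2.2 - ((rho_c - 2e15) / 2e15)**2          # placeholder curve with a maximum
dM = np.gradient(M, rho_c)                    # numerical dM/drho_c
i_turn = np.argmax(dM < 0)                    # first point past dM/drho_c = 0
print(f"stability ends near rho_c ~ {rho_c[i_turn]:.2e} g/cm^3, "
      f"M_max ~ {M[i_turn]:.2f} Msun")
```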
Now, in Fig. 5, we show the radial coordinate (in km) inside the neutron star along the abscissa and the corresponding pressure along the ordinate for different values of \(\tau\). The left panel of the figure has the radial pressure along the ordinate, while the right panel has the tangential pressure along the ordinate for the same values of \(\tau\). Both of the plots are for \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\). We see that for \(\tau=-2,-1,0\), and \(1\), both \(p_{r}\) and \(p_{t}\) decrease monotonically with increasing \(r\). However, for \(\tau=2\), although \(p_{r}\) decreases monotonically with increasing \(r\), \(p_{t}\) does not always do so. There is a range of \(r\) where \(p_{t}\) increases with increasing \(r\). In this region, the sound velocity in the tangential direction becomes negative, which violates causality. Hence, no neutron star can exist with \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\) and \(\tau=2\) for the BSk21 EoS and the anisotropic ansatz. We exclude such physically unacceptable cases from our analysis. A more detailed analysis of these aspects can be found in [48].

Figure 5: The variation of the radial (left panel) and the tangential (right panel) pressure inside neutron stars for various values of the anisotropic strength, each for \(\rho_{c}=7.2955\times 10^{14}\) g/cm\({}^{3}\).

## III Non-radial oscillations of anisotropic compact stars

### Analytical Setup

To derive the differential equations governing the non-radial oscillations of compact stars in general relativity, we have to decompose the perturbed metric into a background part and a perturbation on it. So, the perturbed metric can be written as \[g_{\mu\nu}=g_{\mu\nu}^{(B)}+h_{\mu\nu}, \tag{14}\] where \(g_{\mu\nu}^{(B)}\) is the background metric of the spherically symmetric, static star described in Eq. (1), and \(h_{\mu\nu}\) is the perturbation on it up to linear order. The linear order perturbations of these static spherically symmetric stars can be decomposed into spherical harmonics \(Y_{l}^{m}\) and functions of the radial coordinate; the time dependence can be incorporated in these radial functions. The perturbed metric is sourced by the perturbations to the energy - momentum tensor \(\delta T_{\alpha}^{\beta}\). The coupling between the metric perturbation and the perturbation in the energy - momentum tensor is described by the linearized Einstein equations, \[\delta G_{\alpha}^{\beta}=8\pi\delta T_{\alpha}^{\beta}, \tag{15}\] where \(\delta G_{\alpha}^{\beta}\) is the perturbed Einstein tensor. The expression for the linearized Einstein tensor, \(\delta G_{\mu\nu}\), is given by
\[\delta G_{\mu\nu}=-\frac{1}{2}[\nabla^{\alpha}\nabla_{\alpha}h_{\mu\nu}-(\nabla_{\nu}f_{\mu}+\nabla_{\mu}f_{\nu})+2R^{\alpha\ \ \beta}_{\ \mu\ \ \nu}h_{\alpha\beta}+\nabla_{\nu}\nabla_{\mu}h^{\alpha}_{\ \alpha}-(R^{\alpha}_{\ \nu}h_{\mu\alpha}+R^{\alpha}_{\ \mu}h_{\nu\alpha})+ \tag{16}\] \[g^{(B)}_{\mu\nu}(\nabla^{\alpha}f_{\alpha}+\nabla^{\beta}\nabla_{\beta}h^{\alpha}_{\ \alpha})+Rh_{\mu\nu}-g^{(B)}_{\mu\nu}R^{\alpha\beta}h_{\alpha\beta}],\] where \(f_{\nu}=\nabla^{\beta}h_{\nu\beta}\) and \(R_{\alpha\beta\gamma\delta}\), \(R_{\alpha\beta}\), and \(R\) are the Riemann tensor, Ricci tensor, and Ricci scalar of the background metric respectively. We restrict ourselves to even parity perturbations (polar modes) for fixed values of \(l\) and \(m\). In the Regge-Wheeler gauge [49], the even mode of perturbation takes the form \[h_{\alpha\beta}=-r^{l}\begin{pmatrix}e^{\nu}H_{0}(r)&i\omega rH_{1}(r)&0&0\\ i\omega rH_{1}(r)&e^{\lambda}H_{2}(r)&0&0\\ 0&0&r^{2}K(r)&0\\ 0&0&0&r^{2}\sin^{2}\theta K(r)\end{pmatrix}Y_{l}^{m}e^{i\omega t}, \tag{17}\] where \(H_{0}\), \(H_{1}\), \(H_{2}\), \(K\) are metric perturbation functions. We can further reduce the number of independent components of the metric perturbation. If we subtract the 33 component of the Einstein equations from the 22 component, we obtain the following relation, \[\delta(G_{2}^{2}-G_{3}^{3}) = 8\pi\delta(T_{2}^{2}-T_{3}^{3}) \tag{18}\] \[= 8\pi(\delta p_{t}-\delta p_{t})\] (19) \[= 0, \tag{20}\] where we have used the fact that \(T_{\alpha}^{\beta}=diag(-\rho,p_{r},p_{t},p_{t})\). The left hand side of the equation can easily be calculated from (16). After some algebraic manipulations we arrive at the relation \(H_{0}=H_{2}\). Hereafter we shall write \(H_{0}=H_{2}=H\). The perturbation of the fluid in the star is described by the Lagrangian displacement vector \(\xi^{\alpha}\), which has the components [24; 31; 50], \[\xi^{\mu}=\begin{pmatrix}0\\ r^{l-1}e^{-\lambda/2}W(r)\\ -r^{l-2}V(r)\partial_{\theta}\\ -\frac{r^{l-2}}{\sin^{2}\theta}V(r)\partial_{\phi}\end{pmatrix}Y_{l}^{m}e^{i\omega t}, \tag{21}\] where \(V\) and \(W\) are fluid perturbation functions. We shall use this Lagrangian displacement extensively to describe the perturbation of the energy - momentum tensor. At first we describe the Lagrangian variation of the four velocity \(u^{\mu}\) in the full general relativistic theory, which is given by [51] \[\Delta u^{\mu}=\frac{1}{2}u^{\mu}u^{\nu}u^{\sigma}\Delta g_{\nu\sigma}, \tag{22}\] where the Lagrangian perturbation of the metric is \[\Delta g_{\alpha\beta}=h_{\alpha\beta}+\nabla_{\alpha}\xi_{\beta}+\nabla_{\beta}\xi_{\alpha}, \tag{23}\] and \(\nabla_{\sigma}\) denotes the covariant derivative along the coordinate \(x^{\sigma}\). These Lagrangian perturbations are directly related to Eulerian perturbations by the relation \(\Delta=\delta+\mathcal{L}_{\xi}\), where \(\delta\) denotes the Eulerian variation and \(\mathcal{L}_{\xi}\) is the Lie derivative along \(\xi^{\alpha}\). The explicit expression for the Eulerian variation of the velocity four vector is given by \[\delta u^{\mu}=(\delta_{\rho}^{\mu}+u^{\mu}u_{\rho})(u^{\sigma}\nabla_{\sigma}\xi^{\rho}-\xi^{\sigma}\nabla_{\sigma}u^{\rho})+\frac{1}{2}u^{\mu}u^{\sigma}u^{\rho}h_{\sigma\rho}.
\tag{24}\] After substituting the explicit expressions of \(u^{\sigma}\), \(\xi^{\sigma}\) and \(h_{\alpha\beta}\) into Eq. (24), we get \[\delta u^{\sigma}=\begin{pmatrix}-\frac{H}{2}\\ i\omega r^{-1}e^{-\lambda/2}W\\ -i\omega r^{-2}V\partial_{\theta}\\ -i\omega(r\sin\theta)^{-2}V\partial_{\phi}\end{pmatrix}r^{l}e^{-\nu/2}Y_{l}^{m}e^{i\omega t}. \tag{25}\] We can also expand the perturbation of the radial unit vector \(s^{\mu}\) in harmonics as follows, \[\delta s^{\sigma}=\begin{pmatrix}i\omega S_{0}(r)\\ S_{1}(r)\\ 0\\ 0\end{pmatrix}r^{l}Y_{l}^{m}e^{i\omega t}. \tag{26}\] As \(s^{\sigma}\) is a space-like unit radial vector, up to linear order it should satisfy \(s_{\sigma}\delta s^{\sigma}+s^{\sigma}\delta s_{\sigma}=0\), as well as \(u_{\sigma}\delta s^{\sigma}+s^{\sigma}\delta u_{\sigma}=0\). These two conditions allow us to write \(S_{0}\) and \(S_{1}\) as \[S_{0} = \frac{e^{-\nu}}{r}W-e^{-\nu-\lambda/2}rH_{1}\, \tag{27}\] \[S_{1} = \frac{e^{-\lambda/2}}{2}H. \tag{28}\] Now, to describe the perturbations in the density and the pressures (both radial and tangential), we need the perturbation of the number density. The Lagrangian perturbation of the number density of particles can be derived from the conservation of the number density current, \(\nabla_{\alpha}n^{\alpha}=0\), where \(n^{\alpha}=nu^{\alpha}\) and \(n\) is the number density of the particles. The Lagrangian perturbation of the number density of particles in the neutron star is given by [51; 52], \[\Delta n=-\frac{1}{2}n\perp^{\mu\nu}\Delta g_{\mu\nu}\, \tag{29}\] where \(\perp^{\mu\nu}\) is the projection operator orthogonal to the fluid flow. \(\perp^{\mu\nu}\) can be expressed as \[\perp^{\mu\nu}=u^{\mu}u^{\nu}+g^{\mu\nu}. \tag{30}\] Computing explicitly, we can write the expression for \(\Delta n\) as \[\Delta n=-\frac{1}{2}n\perp_{g}Y_{l}^{m}e^{i\omega t}, \tag{31}\] where \[\perp_{g}=-2r^{l} \Bigg{[} \Bigg{(}K+\frac{1}{2}H\Bigg{)}-\frac{l(l+1)}{r^{2}}V \tag{32}\] \[- \frac{e^{-\lambda/2}}{r}W^{\prime}-\Bigg{(}\frac{l+1}{r^{2}}e^{-\lambda/2}\Bigg{)}W\Bigg{]},\] which is similar to the expression described by Comer et al. [25]. In our work, we consider the matter to be barotropic, where the energy density (\(\varepsilon\)) is only a function of the number density, \(\varepsilon=\varepsilon(n)\). In our study, this can also be written as \(\rho=\rho(n)\). This consideration implies that the perturbation in the density can be written as \[\Delta\rho=\frac{d\rho}{dn}\Delta n=\kappa\Delta n, \tag{33}\] where \(\kappa\) is the chemical potential (\(\kappa=d\rho/dn\)). Now we can use the Gibbs relation, \(\rho+p=\kappa n\) [51], where [53] \[p=\frac{1}{3}\left(T_{1}^{1}+T_{2}^{2}+T_{3}^{3}\right)=\frac{p_{r}+2p_{t}}{3}=p_{r}+\frac{2}{3}\chi. \tag{34}\] Using Eq. (31) in Eq. (33) and the Gibbs relation, we get \[\Delta\rho=-\frac{1}{2}(p+\rho)\perp_{g}Y_{l}^{m}e^{i\omega t}. \tag{35}\] In a similar way, the Lagrangian variation of the radial pressure can be expressed as \[\Delta p_{r}=\frac{dp_{r}}{d\rho}\Delta\rho=-\frac{1}{2}\frac{dp_{r}}{d\rho}(p+\rho)\perp_{g}Y_{l}^{m}e^{i\omega t}. \tag{36}\] From the relation between the Lagrangian and Eulerian variations (\(\Delta=\delta+\mathcal{L}_{\xi}\)), we can write the relation between the Lagrangian perturbation and the Eulerian perturbation for the radial pressure as \[\Delta p_{r} =\delta p_{r}+\xi^{r}p_{r}^{\prime}\] \[=\delta p_{r}+r^{l-1}e^{-\lambda/2}\left[-\frac{\nu^{\prime}}{2}(\rho+p_{r})+\frac{2\chi}{r}\right]WY_{l}^{m}e^{i\omega t}.
\tag{37}\] From (36) and (37) we can get the expression for the Eulerian variation of the radial pressure, \(\delta p_{r}\). With the help of \(\delta p_{r}\), we can calculate the Eulerian variations of \(\rho\) and \(p_{t}\), which are given by, \[\delta\rho = \frac{d\rho}{dp_{r}}\delta p_{r}, \tag{38}\] \[\delta p_{t} = \delta\chi+\delta p_{r}. \tag{39}\] With these expressions in hand, we are ready to calculate the perturbation of the energy-momentum tensor, \(\delta T_{\alpha}^{\beta}\), which can be written as, \[\delta T_{\alpha}^{\beta}=(\delta\rho+\delta p_{t})u^{\beta}u_{\alpha}+(\rho+p_{t})(u^{\beta}\delta u_{\alpha}+u_{\alpha}\delta u^{\beta})\] \[+\delta_{\alpha}^{\beta}\delta p_{t}+(\delta p_{r}-\delta p_{t})s^{\beta}s_{\alpha}+(p_{r}-p_{t})(s^{\beta}\delta s_{\alpha}+s_{\alpha}\delta s^{\beta}). \tag{40}\] Using (40), (25), (26), \(u^{\beta}\), and \(s^{\beta}\) we can calculate the nonzero components of the perturbed energy-momentum tensor, which are given by, \[\delta T_{0}^{0}=-\delta\rho=-\delta\tilde{\rho}(r)r^{l}Y_{l}^{m}e^{i\omega t}\, \tag{41a}\] \[\delta T_{1}^{0}=-i\omega e^{-\lambda/2}r^{-1}(p_{r}+\rho)Wr^{l}Y_{l}^{m}e^{i\omega t}\,\] (41b) \[\delta T_{0}^{2}=i\omega r^{-2}(p_{r}+\rho+\chi)Vr^{l}\partial_{\theta}Y_{l}^{m}e^{i\omega t}\,\] (41c) \[\delta T_{0}^{3}=i\omega r^{-2}(p_{r}+\rho+\chi)Vr^{l}(\sin\theta)^{-2}\partial_{\phi}Y_{l}^{m}e^{i\omega t}\,\] (41d) \[\delta T_{1}^{1}=\delta p_{r}=\delta\tilde{p}_{r}(r)r^{l}Y_{l}^{m}e^{i\omega t}\,\] (41e) \[\delta T_{2}^{2}=\delta p_{t}=(\delta\tilde{p}_{r}(r)+\delta\tilde{\chi}(r))r^{l}Y_{l}^{m}e^{i\omega t}\,\] (41f) \[\delta T_{3}^{3}=\delta p_{t}=(\delta\tilde{p}_{r}(r)+\delta\tilde{\chi}(r))r^{l}Y_{l}^{m}e^{i\omega t}\, \tag{41g}\] where \(\delta\rho\), \(\delta p_{r}\) and \(\delta\chi\) can be decomposed into radial, angular and temporal parts as \(\delta\rho=\delta\tilde{\rho}(r)r^{l}Y_{l}^{m}e^{i\omega t}\), \(\delta p_{r}=r^{l}\delta\tilde{p}_{r}(r)Y_{l}^{m}e^{i\omega t}\) and \(\delta\chi=r^{l}\delta\tilde{\chi}(r)Y_{l}^{m}e^{i\omega t}\). Comparison with the above expressions reveals that if we set \(\chi=\delta\chi=0\), the resulting expressions reduce to those of the isotropic case described by Thorne and Campolattaro [22]. The equations governing the metric and fluid perturbation variables can then be obtained from the linearized Einstein equations (15) and the linearized conservation of the energy-momentum tensor, i.e. \(\delta(\nabla_{\nu}T^{\mu\nu})=0\). First we perform a change of variable that will eventually simplify the boundary condition. At the surface of the star (\(r=R\)), the Lagrangian perturbation of the radial pressure \(\Delta p_{r}\) vanishes (\(\Delta p_{r}=0\)). We therefore rewrite the Lagrangian perturbation of the radial pressure in the form \[\Delta p_{r}=-r^{l}e^{-\nu/2}XY_{l}^{m}e^{i\omega t}. \tag{42}\] Using the expressions from equations (32), (36), and (42), we can extract the expression for \(X\), which is given by, \[X=(\rho + p)c_{s}^{2}\Big{[}\frac{e^{\nu/2-\lambda/2}}{r}W^{\prime}+\frac{(l+1)e^{\nu/2-\lambda/2}}{r^{2}}W \tag{43}\] \[+\frac{l(l+1)e^{\nu/2}}{r^{2}}V-e^{\nu/2}K-\frac{e^{\nu/2}}{2}H\Big{]},\] where \(c_{s}^{2}=dp_{r}/d\rho\) and \(p=p_{r}+(2\chi/3)\).
Using (36), (37), and (43), we can write the Eulerian perturbation of the radial pressure as, \[\delta p_{r}=\delta\tilde{p}_{r}r^{l}Y_{l}^{m}e^{i\omega t}=\bigg{[}-e^{-\nu/2}X+\frac{1}{2r^{2}}e^{-\lambda/2}(-4\chi+r\nu^{\prime}(p_{r}+\rho))W\bigg{]}r^{l}Y_{l}^{m}e^{i\omega t}. \tag{44}\] After redefining variables, we can write down the governing equations of the oscillations, which are given by \[H_{1}^{\prime} =\frac{e^{\lambda}}{r}H+\frac{e^{\lambda}}{r}K-\frac{l+1+2mr^{-1}e^{\lambda}-4\pi r^{2}e^{\lambda}(p_{r}-\rho)}{r}H_{1}-\frac{16\pi e^{\lambda}(\rho+p_{r}+\chi)}{r}V\, \tag{45}\] \[K^{\prime} =\frac{1}{r}H-\left[(l+1)r^{-1}-\frac{1}{2}\nu^{\prime}\right]K+\frac{l(l+1)}{2r}H_{1}-\frac{8\pi e^{\lambda/2}(p_{r}+\rho)}{r}W\,\] (46) \[W^{\prime} =\frac{1}{2}e^{\lambda/2}rH+e^{\lambda/2}rK-\frac{e^{\lambda/2}l(l+1)}{r}V-\frac{(l+1)}{r}W+\frac{re^{\frac{\lambda-\nu}{2}}}{c_{s}^{2}\left(p_{r}+\rho+\frac{2\chi}{3}\right)}X\,\] (47) \[X^{\prime} =-2\frac{e^{\nu/2}}{r}\delta\tilde{\chi}-\left[\frac{l}{r}+\frac{\chi(6+r\nu^{\prime})}{rc_{s}^{2}(3(p_{r}+\rho)+2\chi)}\right]X+(p_{r}+\rho)e^{\nu/2}\left[\frac{1}{2r}-\frac{\nu^{\prime}}{2}\right]H\] \[\quad-\left[\frac{e^{\nu/2}l(l+1)(p_{r}+\rho)\nu^{\prime}}{2r^{2}}-\chi\frac{2l(l+1)e^{\nu/2}}{r^{3}}\right]V\] \[\quad+\left[\frac{e^{\nu/2}(p_{r}+\rho)}{2}\left(\frac{l(l+1)}{2r}+r\omega^{2}e^{-\nu}\right)+\frac{\chi e^{\nu/2}l(l+1)}{2r}\right]H_{1}\] \[\quad+\left[\frac{1}{2}e^{\nu/2}(p_{r}+\rho)\left(\frac{3}{2}\nu^{\prime}-\frac{1}{r}\right)+\chi\frac{e^{\nu/2}(-6+r\nu^{\prime})}{2r}\right]K\] \[\quad+\left[-\frac{e^{\nu/2}(p_{r}+\rho)}{r}\left(e^{\lambda/2-\nu}\omega^{2}+4\pi e^{\lambda/2}(p_{r}+\rho)-\frac{1}{2}r^{2}(r^{-2}e^{-\lambda/2}\nu^{\prime})^{\prime}\right)\right.\] \[\quad\left.-\chi^{\prime}\frac{2e^{(\nu-\lambda)/2}}{r^{2}}+\frac{e^{(\nu+\lambda)/2}}{r^{3}}\chi\left(6-14mr^{-1}-8\pi r^{2}p_{r}\right)\right]W. \tag{48}\] Other than these four differential equations, we can obtain two algebraic relations, which are given by, \[\begin{split}\Bigg{[}3m+\frac{1}{2}(l-1)(l+2)+4\pi r^{3}p_{r}\Bigg{]}H=8\pi e^{-\nu/2}r^{3}X-\Bigg{[}-e^{-\nu-\lambda}r^{3}\omega^{2}+\frac{1}{2}l(l+1)(m+4\pi r^{3}p_{r})\Bigg{]}H_{1}\\ +\chi 16e^{-\lambda/2}\pi rW-\Bigg{[}-\frac{1}{2}(l-1)(l+2)r+e^{-\nu}r^{3}\omega^{2}+\frac{e^{\lambda}}{r}(m+4\pi r^{3}p_{r})(3m-r+4\pi r^{3}p_{r})\Bigg{]}K,\end{split} \tag{49}\] \[\omega^{2}(\rho+p_{r}+\chi)V=e^{\nu/2}X-\frac{1}{2}e^{\nu}(p_{r}+\rho)H+\frac{e^{\nu-\lambda/2}p_{r}^{\prime}}{r}W-e^{\nu}\delta\tilde{\chi}. \tag{50}\] This set of differential equations and algebraic relations governs the oscillation of anisotropic neutron stars in general relativity. The equation for the metric perturbation dynamics, i.e. \(H_{1}^{\prime}\), is obtained from \(\delta G_{0}^{2}=8\pi\delta T_{0}^{2}\), and the equation for \(K^{\prime}\) is obtained from \(\delta G_{0}^{1}=8\pi\delta T_{0}^{1}\). On the other hand, the dynamics of the fluid perturbations, given by \(W^{\prime}\) and \(X^{\prime}\), are obtained by changing the variables in Eq. (43) and from \(\delta(\nabla_{\mu}T_{1}^{\mu})=0\), respectively. Of the two algebraic relations (49) and (50), the first is obtained by combining \(\delta G_{1}^{1}=8\pi\delta T_{1}^{1}\) and \(\delta G_{1}^{2}=8\pi\delta T_{1}^{2}\), and relates the metric perturbation variable \(H\) to the other perturbation variables.
The second algebraic relation is obtained from \(\delta(\nabla_{\mu}T_{2}^{\mu})=0\), which relates the fluid perturbation variable \(V\) to the other variables. In this set of equations, it is evident that the presence of anisotropy manifests through the terms \(\chi\), \(\chi^{\prime}\), and \(\delta\chi\). When the system lacks anisotropy, all these terms vanish, rendering the equations identical to those of the isotropic scenario described in [24]. For our choice of anisotropy ansatz, \[\delta\chi=\frac{\partial\chi}{\partial p_{r}}\delta p_{r}+\frac{\partial\chi}{\partial\mu}\delta\mu. \tag{51}\] It is important to note that since we have dropped the Cowling approximation, the variable \(\mu\) should also be perturbed. The quantity \(\mu=1-e^{-\lambda}\), which describes the local compactness of the star, is perturbed as \(\delta\mu=e^{-\lambda}\delta\lambda\), where \(\delta\lambda\) represents the perturbation of \(\lambda\). As we are interested only in linear-order perturbations, from the perturbation of the metric we can write, \[e^{\lambda+\delta\lambda}=e^{\lambda}(1-Hr^{l}Y_{l}^{m}e^{i\omega t}). \tag{52}\] Neglecting higher-order terms (\(\text{O}((\delta\lambda)^{2})\dots\)), we can write, \[\delta\lambda=-Hr^{l}Y_{l}^{m}e^{i\omega t}. \tag{53}\] With these expressions, we can write the perturbation of \(\chi\) as, \[\delta\chi=\tau\left[\left(\frac{2m}{r}\right)\delta\tilde{p}_{r}-\left(p_{r}\left(1-\frac{2m}{r}\right)H\right)\right]r^{l}Y_{l}^{m}e^{i\omega t}\, \tag{54}\] where \(\delta\tilde{p}_{r}\) is defined in equation (44).

### Numerical Techniques

Equipped with this set of differential equations, we are now prepared to solve them numerically. Since our primary objective is to determine the quasi-normal modes, it is important to carefully consider the initial and boundary conditions. In contrast to the background case, we are confronted with the task of handling seven first-order differential equations (three for the background and four for the perturbations) as well as two simultaneous algebraic relations. It is crucial to ensure that the equations remain regular at the center, while the boundary condition dictates that the Lagrangian perturbation of the radial pressure (\(\Delta p_{r}\)) must vanish at the outermost surface of the perturbed star. By examining the set of equations, we observe that they are singular at \(r=0\). In order to ensure regularity at the center, we expand the variables in a series and substitute them back into the equations to derive the necessary conditions. One finds the following conditions: \[H_{1}(0) =\frac{1}{l(l+1)}(2lK(0)+16\pi(\rho(0)+p_{r}(0))W(0))\, \tag{55}\] \[X(0) =(p_{r}(0)+\rho(0))e^{\nu(0)/2}\bigg{(}\frac{4\pi}{3}(\rho(0)+p_{r}(0))\] \[\quad-\frac{\omega^{2}e^{-\nu(0)}}{l}\bigg{)}W(0)+\frac{1}{2}K(0)-e^{\nu(0)/2}\chi_{2}W(0)\, \tag{56}\] where \(\chi_{2}\) is the second-order coefficient in the expansion of \(\chi(r)\). Since the radial and tangential pressures are equal at the center, we have \(\chi(0)=\delta\chi(0)=0\). This condition guarantees that at the center the radial pressure (\(p_{r}\)) equals the tangential pressure (\(p_{t}\)), and the Eulerian perturbation of the radial pressure (\(\delta p_{r}\)) equals the Eulerian perturbation of the tangential pressure (\(\delta p_{t}\)). Based on our ansatz, we can express \(\chi_{2}\) as \(\chi_{2}=\tau\frac{16\pi}{3}\rho(0)p_{r}(0)\).
Therefore, it is evident that we have two linearly independent solutions at the center, as we can choose \(W(0)\) and \(K(0)\) independently. Another notable point is that these expressions are similar to those of the isotropic case described by [24], with the exception of the term \(\chi_{2}\), which arises due to the presence of anisotropy. In order to numerically integrate these equations, we need to specify the values of \(W(0)\) and \(K(0)\). While it is possible to choose any two linearly independent values, in our calculations we have used \(W(0)=1\) and \(K(0)=\pm(\rho(0)+p(0))\). We start the solution within a small region near the center, approximately 1 cm or \(R/10^{6}\), where \(R\) represents the radius of the neutron star. From this point (\(\sim R/10^{6}\)), we numerically integrate the set of equations up to the surface of the star, combining the two solutions so that \(X(R)=0\). This condition ensures that the Lagrangian perturbation of the radial pressure at the surface is zero. For the integration we employ the 'LSODA' method, which dynamically switches between the BDF (Backward Differentiation Formula) method and the Adams method based on the stiffness of the coupled equations. In our implementation, we set both the relative and absolute tolerances to approximately \(10^{-10}\). These tolerance values govern the accuracy and precision of the numerical integration, ensuring reliable results; this completes the integration inside the neutron star.
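To make this procedure concrete, the following minimal Python sketch illustrates the interior integration just described. The right-hand side implementing Eqs. (45)-(48), together with the background profile it needs, is assumed to be supplied by the user; all function and variable names here are ours and do not come from any public code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_interior(rhs, R, omega, rho0, p0, nu0, K0, W0=1.0, chi2=0.0, l=2):
    """Integrate one interior solution of Eqs. (45)-(48) with LSODA.

    `rhs(r, y, omega)` must return (H1', K', W', X') for y = (H1, K, W, X);
    it encodes Eqs. (45)-(48) and the background profile (user-supplied).
    """
    # Center conditions (55)-(56).
    H1_0 = (2 * l * K0 + 16 * np.pi * (rho0 + p0) * W0) / (l * (l + 1))
    X_0 = ((p0 + rho0) * np.exp(nu0 / 2)
           * (4 * np.pi / 3 * (rho0 + p0) - omega**2 * np.exp(-nu0) / l) * W0
           + 0.5 * K0 - np.exp(nu0 / 2) * chi2 * W0)
    sol = solve_ivp(rhs, (R / 1e6, R), [H1_0, K0, W0, X_0], args=(omega,),
                    method="LSODA", rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]                      # (H1, K, W, X) at the surface

def surface_solution(rhs, R, omega, rho0, p0, nu0, chi2=0.0):
    """Combine the two linearly independent center solutions so that X(R) = 0."""
    yA = integrate_interior(rhs, R, omega, rho0, p0, nu0, K0=+(rho0 + p0), chi2=chi2)
    yB = integrate_interior(rhs, R, omega, rho0, p0, nu0, K0=-(rho0 + p0), chi2=chi2)
    a = yB[3] / (yB[3] - yA[3])              # weight enforcing a*X_A + (1-a)*X_B = 0
    return a * yA + (1 - a) * yB
```

The surface values of \(H_{1}\) and \(K\) produced by such a routine then initialize the exterior (Zerilli) integration described next.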
To determine the quasi-normal modes, it is necessary to identify frequencies for which the perturbation equation satisfies an outgoing-wave condition at spatial infinity. Outside the star, all the fluid perturbation variables vanish, reducing the equations to two, namely \(H_{1}^{\prime}\) and \(K^{\prime}\). These two equations can be combined into a single second-order differential equation known as the Zerilli equation, derived by Frank J. Zerilli [54] for the Schwarzschild geometry. Later, a similar procedure was implemented by Lindblom and Detweiler [23] for the case of neutron stars. The transformation of the variables is given by, \[r^{l}K =\frac{n_{l}(n_{l}+1)r^{2}+3n_{l}Mr+6M^{2}}{r^{2}(n_{l}r+3M)}Z+\frac{dZ}{dr^{*}}\, \tag{57}\] \[r^{l+1}H_{1} =\frac{n_{l}r^{2}-3n_{l}Mr-3M^{2}}{(r-2M)(n_{l}r+3M)}Z+\frac{r^{2}}{(r-2M)}\frac{dZ}{dr^{*}}\, \tag{58}\] where \(n_{l}=l(l+1)/2-1\), and \(r^{*}=r+2M\ln\left((r/2M)-1\right)\). With this transformation, the differential equations \(H_{1}^{\prime}\) and \(K^{\prime}\) can be combined into a single second-order differential equation, \[\frac{d^{2}Z}{dr^{*2}}+\left(\omega^{2}-V_{z}(r^{*})\right)Z=0\, \tag{59}\] where, \[V_{z}(r^{*})=\frac{(1-2M/r)}{r^{3}(n_{l}r+3M)^{2}}\big{(}2n_{l}^{2}(n_{l}+1)r^{3}+6n_{l}^{2}Mr^{2}+18n_{l}M^{2}r+18M^{3}\big{)}. \tag{60}\] This equation, known as the Zerilli equation, needs to be numerically integrated from the surface of the star (\(R\)) to spatial infinity. In our numerical implementation, we perform the integration up to a distance of approximately 50 \(\omega^{-1}\). At spatial infinity, the solution can be assumed to be purely sinusoidal and can be interpreted as a composition of ingoing and outgoing waves. Denoting the ingoing wave by \(Z_{in}\) and the outgoing wave by \(Z_{out}\), we can write the solution at infinity as \[Z(r^{*})=B_{in}Z_{in}(r^{*})+B_{out}Z_{out}(r^{*})\, \tag{61}\] where \(B_{in}\) and \(B_{out}\) are the coefficients of \(Z_{in}\) and \(Z_{out}\), respectively, which determine the proportions of ingoing and outgoing waves at spatial infinity. At spatial infinity, \(Z_{in}\) and \(Z_{out}\) can be expanded as power series, \[Z_{out}(r^{*}) =e^{(-i\omega r^{*})}\sum_{j=0}^{\infty}\beta_{j}r^{-j}\, \tag{62}\] \[Z_{in}(r^{*}) =e^{(i\omega r^{*})}\sum_{j=0}^{\infty}\bar{\beta}_{j}r^{-j}\, \tag{63}\] where the \(\beta_{j}\) are the coefficients of the power-series expansion and \(\bar{\beta}_{j}\) is the complex conjugate of \(\beta_{j}\). The coefficients \(\beta_{j}\) can be obtained through a recursion relation by substituting equations (62) and (63) into equation (59). To determine the asymptotic behavior of \(Z(r^{*})\), we consider the expansion up to \(O(r^{-2})\). The relevant expressions are as follows: \[\beta_{1} =-i\omega^{-1}(n_{l}+1)\beta_{0}\, \tag{64}\] \[\beta_{2} =-\frac{1}{2\omega^{2}}\left(n_{l}(n_{l}+1)-3iM\omega\left(1+\frac{2}{n_{l}}\right)\right)\beta_{0}\, \tag{65}\] where \(\beta_{0}\) is an arbitrary constant. For the numerical implementation, we have taken \(\beta_{0}=p_{r}(0)/\rho(0)\), as suggested by Chandrasekhar and Ferrari [55]. This choice provides a convenient normalization for the coefficients \(\beta_{j}\) in the recursion relation. From these expressions, we observe that for a fixed-mass neutron star and a specific angular mode \(l=2\), the coefficients \(B_{in}\) and \(B_{out}\) depend solely on the angular frequency \(\omega\). To determine the numerical values of \(B_{in}\) and \(B_{out}\), we compare the numerically integrated values of the Zerilli function and its first derivative with the corresponding asymptotic analytical values described above. By calculating \(B_{in}\) for various frequencies, we can treat it as a function of \(\omega\) and find its root, i.e., the \(\omega\) for which \(B_{in}(\omega)=0\). This root corresponds to the quasi-normal mode frequency we seek.

## IV Results

In the preceding sections, we provided the details of the analytical and numerical procedures employed to determine the quasi-normal modes of anisotropic neutron stars. Now, we delve into the role played by anisotropy in the quasi-normal modes of neutron stars. As we are interested in physically stable models of neutron stars, we have considered only models that satisfy all the stability conditions of anisotropic neutron stars described in [56], i.e., models for which \(dM/d\rho_{c}>0\) and for which the radial and tangential sound velocities (\(c_{s}\) and \(c_{st}\), respectively) satisfy the causality condition. As discussed earlier, we use the BSk21 EoS to compute both the background and the oscillation equations. To understand the effect of stiffness on the quasi-normal modes of anisotropic neutron stars, we also compare our results with the BSk19 EoS. To validate our numerical results, we calculated the quasi-normal modes for two different EoS and verified the results described in [57] and [29].
To understand the effect of anisotropy on the quasi-normal f mode of neutron stars, we plot the frequency of the quasi-normal mode against the average density \(\left(M/\frac{4}{3}\pi R^{3}\right)\) (where \(M\) and \(R\) correspond to the total mass and radius of the star, respectively) of the neutron stars. From Newtonian theory, we know that the f mode frequency is proportional to the average density of compact stars [58], even in the case with anisotropy [30]. As depicted in Fig. 6, this frequency-density relationship remains evident even when considering general relativistic calculations. It is important to note that this frequency-density proportionality applies not only to isotropic stars (\(\tau=0\)) but also extends to anisotropic configurations. The sole distinction lies in the differing slopes of these frequency-density lines, which correspond to varying levels of anisotropic strength and become more pronounced at higher stellar densities.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Anisotropic & Central Density & & & Average Density & Gravitational & Frequency & Damping Time \\ Strength (\(\tau\)) & (\(10^{14}g/cm^{3}\)) & Mass (\(M_{\odot}\)) & Radius (km) & (\(10^{14}g/cm^{3}\)) & Redshift & (kHz) & (ms) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline -1 & 6.36435 & 1.0 & 12.2595 & 2.57629 & 0.14775 & 1.50286 & 535.91 \\ -1 & 8.7378 & 1.4 & 12.2429 & 3.62151 & 0.22878 & 1.62368 & 344.18 \\ -1 & 10.6688 & 1.6 & 12.10918 & 4.27752 & 0.28059 & 1.68731 & 307.82 \\ -1 & 14.1919 & 1.8 & 11.78859 & 5.21559 & 0.34954 & 1.76547 & 303.59 \\ -1 & 15.4395 & 1.84085 & 11.67425 & 5.49222 & 0.36804 & 1.78640 & 309.12 \\ \hline \hline \end{tabular} \end{table} Table 1: Numerical values of physical parameters along with f-mode frequencies and damping times for the BSk21 EoS with various anisotropic strengths.

Figure 6: Plot of the quasi-normal modes with respect to the average density of the stars for various anisotropic strengths within the stars.

Figure 7: Plot of the quasi-normal modes with respect to the total mass of the stars for various anisotropic strengths within the stars.

To understand the effects of anisotropy on the mode frequencies more clearly, we plot the frequencies and damping times against the total mass of the stars, an observable that can be measured by other methods. From Fig. 7 we see that the f mode frequency increases with the total mass of the neutron star. For lower masses (less than about \(2M_{\odot}\)) the frequency increases nearly linearly, but for massive stars it rises rapidly. This pattern can be readily understood by referring to Table 1, in conjunction with Fig. 6 and Fig. 7. The observations in Table 1 reveal that, for a fixed value of anisotropic strength, an increase in central density leads to simultaneous increases in the total mass and total radius of the star, up to a certain threshold (as demonstrated by the \(\tau=0\) case, where this trend holds true up to about 1.6 \(M_{\odot}\)). Moreover, the average density of the stars monotonically increases as well. However, models with higher central densities exhibit a decrease in radius while the total mass continues to increase, consequently driving a rapid growth in average density. Based on Newtonian or Cowling approximations and in alignment with Fig. 6, it is established that the f-mode frequency changes linearly with the average density of the star.
As a result, the f-mode frequency experiences a rapid increase in neutron stars with higher masses, due to the corresponding rapid rise in average density. However, it is important to note that the rate of change of the average density for massive stars varies depending on the level of anisotropic strength. Specifically, for the chosen equation of state (BSk21) and the selected anisotropic ansatz, the average density changes more rapidly for isotropic stars (\(\tau=0\)) than for anisotropic stars (\(\tau=1\)) as the mass increases. This is also imprinted in the quasi-normal mode (f-mode) of the neutron stars; e.g., in Fig. 7 we can see that for isotropic stars (\(\tau=0\)) the mode frequency increases more rapidly than for anisotropic stars (\(\tau=1\)). A quantitative analysis is given later in this section. To understand the effect of anisotropy on the f-mode frequency for a given stellar mass, we have plotted it for three different masses: \(1M_{\odot}\), \(1.4M_{\odot}\) and \(2.274M_{\odot}\). As we can see from Fig. 8(a), for \(1M_{\odot}\) and \(1.4M_{\odot}\) the frequency increases as the anisotropy strength increases, which implies that as the tangential pressure (\(p_{t}\)) grows relative to the radial pressure (\(p_{r}\)), the frequency of the quasi-normal modes also increases. Comparing the f-mode frequency of an anisotropic star with \(\tau=2\) and a mass of \(1.4M_{\odot}\) to that of an isotropic star (\(\tau=0\)) with the same mass, we observe an increase of approximately \(3.62\%\). Conversely, in the case of anisotropic stars with \(\tau=-2\), the f-mode frequency decreases by approximately \(20.03\%\) compared to isotropic stars. In a similar way, the qualitative change of the f-mode frequency with pressure anisotropy for \(1M_{\odot}\) stars is the same as for \(1.4M_{\odot}\) stars. The f-mode frequency of a \(1M_{\odot}\) star with \(\tau=2\) is approximately \(39.02\%\) higher than that of isotropic stars with the same mass, whereas for \(\tau=-2\) with the same mass, the frequency is reduced by \(10.8\%\). On the other hand, for the \(2.274M_{\odot}\) star the f-mode frequency decreases as the anisotropy in the system increases. From Fig. 8(b), we observe that the frequency experiences a decrease of approximately \(9.56\%\) as the anisotropy strength increases from \(0\) (isotropic star) to \(1\). This characteristic, where an increase in anisotropy strength leads to a reduction in frequency, is also apparent in the Cowling approximation [32]. To explore the influence of the stiffness of the equation of state (EoS) on the f-mode frequency, we have plotted the frequency as a function of anisotropy strength for both the BSk19 and BSk21 models. To facilitate a direct comparison of the stiffness effect, we consider a fixed mass of \(1.4M_{\odot}\) across all models in Fig. 9. From the plot, we observe that the qualitative nature of both curves is similar, indicating a common trend. However, it is evident that less stiff neutron stars exhibit higher-frequency oscillations across the entire range of anisotropy. This implies that softer equations of state lead to higher f-mode frequencies for all levels of anisotropy.

Figure 8: Quasi-normal modes of neutron stars for various anisotropic strengths and various masses of neutron stars.

Figure 9: Plot of the f-mode frequency with respect to anisotropic strength for neutron stars with \(1.4\,M_{\odot}\).
Another thing to notice here is that the influence of anisotropy is weaker for the softer EoS (BSk19) than for the stiffer EoS (BSk21) when \(\tau>0\) (\(p_{t}>p_{r}\)). In addition to the mode frequency, another crucial quantity associated with the mode is the damping time. In the full framework of general relativity, taking into account perturbations of the metric, the oscillation of the metric propagates as gravitational waves, which carry away the energy of the pulsation. As a result, the oscillation is damped and its amplitude decreases over time; the characteristic timescale is called the damping time. From Fig. 10, we observe that the damping time decreases as the mass of the neutron star increases for all values of the anisotropy strength. This trend indicates that as the mass of the star increases, the damping of the oscillation occurs at a faster rate. However, it is worth noting that for stars with high masses, a slight increase in the damping time can be observed. To get an idea of how the damping time is affected by the anisotropy, we plotted the damping time with respect to anisotropic strength for neutron stars with \(1.4M_{\odot}\). From Fig. 11, it is evident that for a neutron star with an anisotropic strength of \(\tau=2\), the damping time experiences a significant decrease of approximately \(28\%\) compared to an isotropic star with the same mass. This observation highlights the substantial impact of anisotropy on the damping time, resulting in a more rapid dissipation of the oscillation energy for anisotropic stars with higher tangential pressure relative to the radial pressure. Conversely, as the value of \(\tau\) decreases from \(0\) to \(-2\), we observe a significant increase in the damping time, approximately three times the damping time of the isotropic case. This implies that for neutron stars with lower anisotropy strength, where the tangential pressure is lower than the radial pressure, the damping of the oscillation proceeds at a much slower rate, resulting in a longer damping time compared to the isotropic scenario. Furthermore, Fig. 11 reveals that for a softer equation of state (e.g., BSk19), the effect of anisotropy on the damping time is similar to that observed for a stiffer equation of state (e.g., BSk21). As the anisotropy strength increases from \(0\) to \(1.5\), the damping time decreases by approximately \(24\%\). Conversely, as the anisotropy strength decreases from \(0\) to \(-1\), the damping time increases by around \(60\%\). These findings indicate that the impact of anisotropy on the damping time remains consistent irrespective of the stiffness of the equation of state, albeit with varying magnitudes of change.

Figure 10: Plot of the damping time with respect to the total mass of the neutron stars for various anisotropic strengths.

Figure 11: Plot of the damping time with respect to anisotropic strength for neutron stars with total mass \(1.4M_{\odot}\).

## V Summary and Conclusion

In this study, we present the influence of anisotropy within neutron stars on the quasi-normal mode frequency and damping time, as compared to isotropic neutron stars. To achieve this, we have employed the full framework of general relativity (up to linear order), considering both metric and fluid perturbations to derive the oscillation equations. Notably, these equations share a similar structure with their isotropic counterparts but contain additional terms that account for the tangential pressure and its perturbations. As our focus lies on gravitational waves, we consider the specific case \(l=2\), although the equations can be applied to any value of \(l\).
In this article, we have focused on studying the fundamental (f) modes of neutron stars, which typically span a frequency range of \(1-3\) kHz. These modes are relatively easier to excite compared to other modes, and their analysis provides crucial insights into the oscillatory behavior and dynamic characteristics of neutron stars. In our analysis, we have demonstrated that the frequency of the f-mode exhibits a linear relationship with the average density of neutron stars, even when anisotropy is present in the system. Importantly, the change in the slope of this linear trend for different anisotropic strengths suggests that the anisotropic strength operates as a multiplicative factor alongside the average density. As the mass of the star can be measured by other methods, we have plotted the f-mode frequencies and damping times with respect to the total mass of the star. We have seen that the qualitative nature of the plots is the same for both isotropic and anisotropic stars. From Fig. 8 it is clear that for neutron stars of \(1M_{\odot}\) and \(1.4M_{\odot}\), the frequency increases when the tangential pressure (\(p_{t}\)) becomes greater than the radial pressure (\(p_{r}\)) and decreases when the tangential pressure becomes smaller than the radial pressure. For the \(2.274M_{\odot}\) star, however, we found that the frequency decreases as the tangential pressure increases relative to the radial pressure. As described earlier, this happens because for massive neutron stars the average density and compactness decrease as the tangential pressure increases relative to the radial pressure. In Fig. 9 we have also shown that the frequencies are numerically higher for the softer equation of state, but the change of frequency with anisotropic strength is similar for both EoS. In Section IV, we also found that not only the frequency but also the damping time of the wave is modified by the presence of anisotropy in the stars. From Fig. 10, it is clear that, irrespective of the presence of anisotropy in the star, the damping time decreases with increasing mass of the neutron star. From Fig. 11, it is clear that the damping time decreases (increases) as the tangential pressure becomes higher (lower) than the radial pressure (i.e., positive (negative) \(\tau\)). Our study carries significant implications for understanding the role of anisotropy in neutron star oscillations. While early insights from Newtonian theory and the Cowling approximation provided initial perspectives on anisotropic effects, our current investigation, embracing both metric and fluid perturbations, offers a refined understanding of the f-mode frequency. Moreover, our analysis enables the calculation of damping times linked to energy emission through gravitational waves. These quantifiable aspects, the f-mode frequency and its damping time, serve as unique signatures of neutron stars, encapsulating essential traits such as mass, radius, and anisotropic influence. Although our approach employs realistic equations of state, the precise nature of anisotropy remains approximated due to limitations in data availability.
To gain deeper insights into the intricate anisotropic pressure, future research must unravel its complexities, enriching our grasp of neutron stars' micro- and macroscopic attributes. Moreover, here we have only studied the effect of pressure anisotropy on the fundamental modes of neutron stars; in a similar way one can also calculate other modes, such as the p and w modes, which will give more information on the anisotropy in compact stars.
2307.00195
Partial Linear Cox Model with Deep ReLU Networks for Interval-Censored Failure Time Data
The partial linear Cox model for interval-censoring is well-studied under the additive assumption but is still under-investigated without this assumption. In this paper, we propose to use a deep ReLU neural network to estimate the nonparametric components of a partial linear Cox model for interval-censored data. This model not only retains the nice interpretability of the parametric component but also improves the predictive power compared to the partial linear additive Cox model. We derive the convergence rate of the proposed estimator and show that it can break the curse of dimensionality under certain smoothness assumptions. Based on this rate, the asymptotic normality and the semiparametric efficiency are also established. Intensive simulation studies are carried out to demonstrate the finite-sample performance on both estimation and prediction. The proposed estimation procedure is illustrated on a real dataset.
Jie Zhou, Yue Zhang, Zhangsheng Yu
2023-07-01T02:08:11Z
http://arxiv.org/abs/2307.00195v1
# Partial Linear Cox Model with Deep ReLU Networks for Interval-Censored Failure Time Data

###### Abstract

The partial linear Cox model for interval-censoring is well-studied under the additive assumption but is still under-investigated without this assumption. In this paper, we propose to use a deep ReLU neural network to estimate the nonparametric components of a partial linear Cox model for interval-censored data. This model not only retains the nice interpretability of the parametric component but also improves the predictive power compared to the partial linear additive Cox model. We derive the convergence rate of the proposed estimator and show that it can break the curse of dimensionality under certain smoothness assumptions. Based on this rate, the asymptotic normality and the semiparametric efficiency are also established. Intensive simulation studies are carried out to demonstrate the finite-sample performance on both estimation and prediction. The proposed estimation procedure is illustrated on a real dataset.

_Keywords:_ deep neural network; partial linear Cox model; semiparametric inference; interval-censoring

## 1 Introduction

In survival analysis, reliability studies and epidemiological studies, the exact failure time is often not available; rather, it is observed as an interval due to regular follow-up. For example, in the Chinese Longitudinal Healthy Longevity Survey (CLHLS, Yi (2008)), a subject's cognitive ability is measured by the Mini-Mental State Examination (MMSE, Katzman et al. (1988); Gagnon et al. (1990)) at each visit, and cognitive impairment (CI) is defined by scores lower than a pre-specified threshold. Therefore, the onset of CI is only known to lie in the interval formed by two consecutive visits. In statistics, this type of data is said to be interval-censored. Another important application of interval-censored data is in studies of transfusion-related acquired immune deficiency syndrome (AIDS), where the positive status of the human immunodeficiency virus (HIV) can never be observed exactly; the change in status is only known to occur between some monitoring times. Regression analysis of interval-censored failure time data aims to estimate the effects of covariates on the failure time. Semiparametric regression models with linear assumptions on the covariates have been widely developed, including the Cox model (Finkelstein (1986), Huang (1996)), the proportional odds model (Huang and Rossini (1997)), the accelerated failure time model (Rabinowitz et al. (1995); Betensky et al. (2001)), the additive hazards model (Zeng et al. (2006); Wang et al. (2010)) and the transformation model (Zhang et al. (2005); Wang et al. (2018)). Although the linear assumption of covariate effects in the aforementioned models provides simplicity and interpretability, this assumption is often violated in applications. For example, Lv et al. (2017) depicted the U-shaped relationship between blood pressure and the risk of CI in the CLHLS dataset. Partial linear models have been developed to model non-linear and linear effects simultaneously; e.g., Ma and Kosorok (2005) first investigated a partial linear transformation model for current status data. Cheng and Wang (2011) extended the model to additive multivariate cases. Lu and McMahan (2018) and Liu et al. (2021) relaxed the assumption of a piecewise-constant baseline hazard function in these two models.
Nevertheless, none of the existing work has relaxed the additive assumption and allowed multivariate nonparametric function estimation in the partial linear Cox model for interval-censoring, due to the curse of dimensionality. Intuitively, the overall convergence rate of these models decreases exponentially with respect to the dimension of the nonparametric component and will not be fast enough to establish the asymptotic normality of the parametric component (Shen (1997)). Recently, deep neural networks (DNN, referred to as _deep learning_ in LeCun et al. (2015)) have emerged as a promising tool to alleviate the curse of dimensionality and have demonstrated superior performance for high-dimensional problems such as image classification (Krizhevsky et al. (2012)), natural language processing (Devlin et al. (2019)), and speech recognition (Hinton et al. (2012)). Theoretical analysis attributes this success to their powerful ability to approximate functions from specific spaces such as the nested function space (Schmidt-Hieber (2017); Bauer and Kohler (2019)) or the mixed-smoothness Besov space (Suzuki (2019)). For this reason, they have been extensively used in various types of survival models, such as the Cox model (Katzman et al. (2018); Zhong et al. (2022)), the accelerated failure time model (Chen and Norman (2019)), the illness-death model (Cottin et al. (2022)), the competing risk model (Lee et al. (2018)), the cure rate model (Xie and Yu (2021a,b)), and the nonparametric Cox model for interval-censoring (Meixide et al. (2022); Sun and Ding (2022)). In this paper, we propose the deep partial linear Cox model (DPLC) for interval-censored data, where covariates requiring interpretability enter linearly and all remaining covariates are modelled nonparametrically by a deep ReLU network. The proposed procedure enjoys several attractive properties. First, the convergence rate is shown to be free of the nonparametric dimension under some smoothness assumptions, suggesting that the model can break the curse of dimensionality. Second, the parametric estimator is shown to be asymptotically normal and semiparametrically efficient based on this rate. Third, a covariance estimator based on a least-squares regression rather than the computationally intensive bootstrap is provided to facilitate statistical inference. The rest of the paper is organised as follows. In Section 2, we introduce the notation, model, assumptions and the full likelihood of the observations for interval-censored data. Section 3 presents the estimation procedure based on the deep ReLU network. The asymptotic properties of the proposed estimators are also discussed. In Section 4, the finite-sample performance as well as comparisons with other models are evaluated by a simulation study. In Section 5, we apply the proposed model to two real datasets. Section 6 concludes with remarks and discussion. Technical details of all the proofs are given in the Supplementary Material.

## 2 Model and Likelihood

Suppose \(\mathbf{X}\) and \(\mathbf{Z}\) are \(\mathbb{R}^{d}\) and \(\mathbb{R}^{r}\)-valued covariates affecting the failure time \(T\).
We assume that given \(\mathbf{X}\) and \(\mathbf{Z}\), \(T\) follows the partial linear Cox model, i.e., its conditional cumulative hazard function has the form \[\Lambda(t|\mathbf{X},\mathbf{Z})=\Lambda_{0}(t)\exp\left\{\mathbf{X}^{T}\mathbf{\beta}+g(\mathbf{Z})\right\}, \tag{1}\] where \(\Lambda_{0}(t)\) is the unspecified baseline cumulative hazard function and \(\mathbf{\beta}\) and \(g\) correspond to the parametric coefficients and the nonparametric function, respectively. For interval-censored data, \(T\) is known to lie within a random interval \((U,V)\), where \(U\) and \(V\) are the two examination times that satisfy \(U<T\leq V\) with probability 1. We assume that \(T\) is independent of \((U,V)\) given \((\mathbf{X},\mathbf{Z})\) and the joint distribution of \((U,V)\) given \((\mathbf{X},\mathbf{Z})\) is free of the parameters \(\theta=(\mathbf{\beta},\mathbf{\gamma})\). Let \(\Delta_{1}:=1_{\{T\leq U\}}\), \(\Delta_{2}:=1_{\{U<T\leq V\}}\), \(\Delta_{3}:=1_{\{T>V\}}\); the observed information for a single subject in the interval-censoring setting, denoted by \(O:=(\mathbf{X},\mathbf{Z},U,V,\Delta_{1},\Delta_{2},\Delta_{3})\), is distributed as \[p(O)=(1-S(U|\mathbf{X},\mathbf{Z}))^{\Delta_{1}}\left\{S(U|\mathbf{X},\mathbf{Z})-S(V|\mathbf{X},\mathbf{Z})\right\}^{\Delta_{2}}S(V|\mathbf{X},\mathbf{Z})^{\Delta_{3}}p_{\mathbf{X},\mathbf{Z}}(U,V)h(\mathbf{X},\mathbf{Z}),\] where \(S(\cdot|\mathbf{X},\mathbf{Z})=\exp(-\Lambda(\cdot|\mathbf{X},\mathbf{Z}))\) is the conditional survival function and \(p_{\mathbf{X},\mathbf{Z}}(\cdot,\cdot)\) and \(h(\mathbf{X},\mathbf{Z})\) are the density functions of \((U,V)\) and \((\mathbf{X},\mathbf{Z})\), respectively. Under the assumption that the distribution of \((U,V)\) is non-informative for \(T\), the log-likelihood function of \(\{(\mathbf{X}_{i},\mathbf{Z}_{i},U_{i},V_{i},\Delta_{1i},\Delta_{2i},\Delta_{3i}):i=1,...,n\}\) is \(l_{n}(\mathbf{\beta},\Lambda,g;\cdot):=\sum_{i=1}^{n}l(\mathbf{\beta},\Lambda,g;O_{i})\) where \[l(\mathbf{\beta},\Lambda,g;O)=\Delta_{1}\log(1-S(U|\mathbf{X},\mathbf{Z}))+\Delta_{2}\log(S(U|\mathbf{X},\mathbf{Z})-S(V|\mathbf{X},\mathbf{Z}))+\Delta_{3}\log(S(V|\mathbf{X},\mathbf{Z})).\]

## 3 Estimation and asymptotic properties

In this section, we consider the estimation of the unknown parameters \((\mathbf{\beta},\Lambda_{0},g)\). A natural approach is the sieve method (Chen (2007)), i.e., maximizing \(l_{n}(\mathbf{\beta},\Lambda_{0},g)\) over the product of \(\mathbb{R}^{d}\) and sieve function spaces for \(\Lambda_{0}\) and \(g\) that grow in capacity with \(n\). We choose these two spaces as a monotone B-spline space and a deep ReLU network space. The monotone B-spline space \(\mathcal{M}\) is the non-negative linear span of the integrated spline basis functions \(M_{k}\) (Ramsay (1988)), i.e. \(\mathcal{M}=\{\sum_{k=1}^{q_{n}}\gamma_{k}M_{k}:\gamma_{k}\geq 0\}\). Since each \(M_{k}\) is a non-decreasing function ranging from 0 to 1, constraining the coefficients to be non-negative guarantees the non-negativity and monotonicity of the estimator of the baseline cumulative hazard function \(\Lambda_{0}\). As with B-splines, the \(q_{n}=p_{n}+l\) basis functions are fully determined once the degree and the interior knot set are specified, where \(p_{n}\) is the cardinality of the interior knot set.
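For illustration, a minimal Python construction of such a monotone basis is sketched below; it follows Ramsay's recipe of integrating and rescaling B-spline basis elements, and the knot values shown are arbitrary placeholders rather than choices from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

def ispline_basis(interior_knots, lo, hi, degree=3):
    """Monotone basis functions M_k rising from 0 to 1 on [lo, hi]."""
    t = np.r_[[lo] * (degree + 1), interior_knots, [hi] * (degree + 1)]
    n_basis = len(t) - degree - 1
    elements = []
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        integral = BSpline(t, c, degree).antiderivative()
        total = (t[j + degree + 1] - t[j]) / (degree + 1)  # total mass of one B-spline
        elements.append((integral, total))

    def evaluate(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return np.column_stack([np.clip(b(x) / tot, 0.0, 1.0) for b, tot in elements])

    return evaluate

# Cumulative baseline hazard as a non-negative combination of the basis;
# the exp-reparametrization used below removes the positivity constraint.
M = ispline_basis(interior_knots=[1.0, 2.0], lo=0.0, hi=5.0)   # p_n = 2, cubic
gamma_tilde = np.zeros(6)                                      # q_n = 6 coefficients
Lambda0 = lambda t: M(t) @ np.exp(gamma_tilde)
```

Each column of \(M(t)\) is non-decreasing from 0 to 1, so any non-negative combination of the columns yields a valid cumulative baseline hazard.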
A deep ReLU network space with input dimension \(p_{0}\), depth \(K\), hidden unit vector \(\mathbf{p}=(p_{0},\cdots,p_{K},p_{K+1})\), sparsity constraint \(s\) and norm constraint \(D\) is defined as \[\mathcal{G}(K,\mathbf{p},s,D)= \Bigg{\{}g(z)=(W^{(K)}\sigma(\cdot)+b^{(K)})\circ\cdots\circ(W^{(1)}z+b^{(1)}):\mathbb{R}^{p_{0}}\mapsto\mathbb{R}^{p_{K+1}},\] \[\left\|g\right\|_{\infty}\leq D,W^{(\ell)}\in\mathbb{R}^{p_{\ell+1}\times p_{\ell}},b^{(\ell)}\in\mathbb{R}^{p_{\ell+1}},\ell=0,\cdots,K,\] \[\sum_{\ell=0}^{K}\big{(}\big{\|}W^{(\ell)}\big{\|}_{0}+\big{\|}b^{(\ell)}\big{\|}_{0}\big{)}\leq s,\max_{\ell=0,\cdots,K}\big{\|}W^{(\ell)}\big{\|}_{\infty}\vee\big{\|}b^{(\ell)}\big{\|}_{\infty}\leq 1\Bigg{\}},\] where \(\big{\|}\cdot\big{\|}_{0}\) and \(\big{\|}\cdot\big{\|}_{\infty}\) are the \(\ell_{0}\)-norm and \(\ell_{\infty}\)-norm of a matrix, respectively, and \(W^{(\ell)}\) and \(b^{(\ell)}\) are the weights and biases of the network, respectively. The estimator of \((\mathbf{\beta},\Lambda,g)\) is defined as the maximizer of \(l_{n}\) over \(\mathbb{R}^{d}\times\mathcal{M}\times\mathcal{G}(K,\mathbf{p},s,\infty)\) with \(p_{0}=r\) and \(p_{K+1}=1\) after empirical centralization, that is, \[(\hat{\mathbf{\beta}}_{n},\hat{\Lambda}_{n},\hat{g}_{n})=\operatorname*{arg\,max}_{\mathbb{R}^{d}\times\mathcal{M}\times\mathcal{G}(K,\mathbf{p},s,\infty)}l_{n}(\mathbf{\beta},\Lambda,g;\cdot)\] Such optimization poses several challenges. First, stochastic gradient descent (SGD, Robbins and Monro (1951)), the most widely used algorithm in deep learning, and its variants such as Adam (Kingma and Ba (2014)) cannot handle the non-negativity constraint required on the coefficients of \(\mathcal{M}\). We remove the non-negativity constraint by reparametrization, that is, \(\mathcal{M}=\{\sum_{k=1}^{q_{n}}\exp(\tilde{\gamma}_{k})M_{k}:\tilde{\gamma}_{k}\in\mathbb{R}\}\). The second challenge concerns the specification of the hyperparameters of the network, mainly the depth \(K\) and the width vector \(\mathbf{p}\). Unfortunately, there are no suitable model selection criteria for deep neural networks, such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC) (Claeskens and Hjort (2001)) for linear sieve models. Meanwhile, cross-validation (CV), another popular method for model selection, is not applicable either due to the computational complexity of deep neural networks. Therefore, we select these hyperparameters in a lightweight and data-adaptive manner. A part of the dataset is randomly held out as a validation set to select the hyperparameters among the candidates over \(K\) and \(\mathbf{p}\), and the selected hyperparameters are then used to rerun the estimation on the whole dataset. The third challenge is the non-concavity of the objective with respect to the weights and biases of the network. We address this problem to some extent by repeating the optimization many times, each time with different initial values. Although the global maximizer is not guaranteed to be reached, this increases the probability of finding a better estimator. This claim is verified by numerical experiments in Section 4. We now describe the asymptotic properties of the estimator \((\hat{\mathbf{\beta}}_{n},\hat{\Lambda}_{n},\hat{g}_{n})\). Denote \(\delta_{n}=\max_{i=0,\cdots,K}n^{-\mathbf{\alpha}_{i}/(2\mathbf{\alpha}_{i}+\tilde{\mathbf{d}}_{i})}\lor n^{-\alpha_{\Lambda}/(2\alpha_{\Lambda}+1)}\), where \(\mathbf{\alpha}\), \(\tilde{\mathbf{d}}\) and \(\alpha_{\Lambda}\) are defined in the following conditions.
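Before stating these conditions, the following PyTorch sketch makes the estimation step above concrete; the class name and all identifiers are ours, the basis matrices \(M(U)\) and \(M(V)\) are assumed precomputed (e.g., with the spline sketch above), and this is a simplified stand-in rather than the paper's actual _pycox_-based implementation.

```python
import torch
import torch.nn as nn

class DeepPLC(nn.Module):
    """Minimal sketch of the deep partial linear Cox model (our naming)."""
    def __init__(self, d, r, q_n, width=32, depth=3):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(d))
        self.gamma_tilde = nn.Parameter(torch.zeros(q_n))  # exp() keeps Lambda_0 monotone
        layers, p = [], r
        for _ in range(depth):
            layers += [nn.Linear(p, width), nn.ReLU()]
            p = width
        layers += [nn.Linear(p, 1)]
        self.g = nn.Sequential(*layers)

    def cum_hazard(self, M_t, x, z):
        g = self.g(z).squeeze(-1)
        g = g - g.mean()                                   # empirical centralization
        return M_t @ self.gamma_tilde.exp() * torch.exp(x @ self.beta + g)

def neg_loglik(model, M_U, M_V, x, z, d1, d2, d3, eps=1e-8):
    """Negative interval-censored log-likelihood l from Section 2."""
    SU = torch.exp(-model.cum_hazard(M_U, x, z))
    SV = torch.exp(-model.cum_hazard(M_V, x, z))
    ll = (d1 * torch.log(1 - SU + eps)
          + d2 * torch.log(SU - SV + eps)
          + d3 * torch.log(SV + eps))
    return -ll.mean()
```

Maximizing the likelihood then amounts to minimizing `neg_loglik` with Adam over \(\mathbf{\beta}\), \(\tilde{\gamma}\) and the network weights, restarting from several random initializations as described above.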
**Condition** (C1).: \(\mathbf{\beta}\) _belongs to a compact subset of \(\mathbb{R}^{d}\)._ **Condition** (C2).: _(1) \(\mathbb{E}((\mathbf{X},\mathbf{Z})^{\otimes 2})\) is positive definite. (2) both \(\mathbf{X}\) and \(\mathbf{Z}\) are bounded with probability 1._ **Condition** (C3).: _The support of \(U\) denoted as \([U_{m},U_{M}]\) and that of \(V\) denoted as \([V_{m},V_{M}]\) satisfy \(0=U_{m}<U_{M}=V_{m}<V_{M}\). The densities of \(U\) and \(V\) are both bounded away from zero and infinity on their support._ **Condition** (C4).: _There exists some \(\eta\in(0,1)\) such that \(\mathbf{u}^{T}\mathrm{Var}(\mathbf{X}|U)\mathbf{u}\geq\eta\mathbf{u}^{T}\mathbb{E}(\mathbf{X}\mathbf{X}^{T}|U)\mathbf{u}\) and \(\mathbf{u}^{T}\mathrm{Var}(\mathbf{X}|V)\mathbf{u}\geq\eta\mathbf{u}^{T}\mathbb{E}(\mathbf{X}\mathbf{X}^{T}|V)\mathbf{u}\) for all \(\mathbf{u}\in\mathbb{R}^{d}\), or \(\mathbf{u}^{T}\mathrm{Var}(\mathbf{Z}|U)\mathbf{u}\geq\eta\mathbf{u}^{T}\mathbb{E}(\mathbf{Z}\mathbf{Z}^{T}|U)\mathbf{u}\) and \(\mathbf{u}^{T}\mathrm{Var}(\mathbf{Z}|V)\mathbf{u}\geq\eta\mathbf{u}^{T}\mathbb{E}(\mathbf{Z}\mathbf{Z}^{T}|V)\mathbf{u}\) for all \(\mathbf{u}\in\mathbb{R}^{r}\)._ **Condition** (C5).: \(\Lambda_{0}\in\mathcal{H}_{1}^{\alpha_{\Lambda}}([L,R],M)\)_, the Hölder space defined by_ \[\mathcal{H}_{r}^{\alpha}(\mathcal{D},M)=\left\{g:\mathcal{D}\mapsto\mathbb{R}:\sum_{\kappa:|\kappa|<\alpha}\|\partial^{\kappa}g\|_{\infty}+\sum_{\kappa:|\kappa|=\lfloor\alpha\rfloor}\sup_{x,y\in\mathcal{D},x\neq y}\frac{|\partial^{\kappa}g(x)-\partial^{\kappa}g(y)|}{\|x-y\|_{\infty}^{\alpha-\lfloor\alpha\rfloor}}\leq M\right\},\] _and is monotonically increasing from \(0\) with \(\alpha_{\Lambda}\geq 2\)._ **Condition** (C6).: \(g\in\mathcal{H}(q,\boldsymbol{\alpha},\boldsymbol{d},\tilde{\boldsymbol{d}},M)\)_, the composite smoothness function space (Schmidt-Hieber (2017)) defined by_ \[\mathcal{H}(q,\boldsymbol{\alpha},\boldsymbol{d},\tilde{\boldsymbol{d}},M):=\{g=g_{q}\circ\cdots\circ g_{0}:g_{i}=(g_{i1},\cdots,g_{id_{i+1}}),\text{ each }g_{ij}\text{ depends on at most }\tilde{\boldsymbol{d}}_{i}\text{ of its arguments and }g_{ij}\in\mathcal{H}_{\tilde{\boldsymbol{d}}_{i}}^{\boldsymbol{\alpha}_{i}}(\mathcal{D}_{i},M)\},\] _and \(\mathbb{E}g(\boldsymbol{Z})=0\)._ **Condition** (C7).: _The hyperparameters of the deep ReLU network satisfy \(K=O(\log n)\), \(s=O(n\delta_{n}^{2}\log n)\), \(n\delta_{n}^{2}\lesssim\min_{k=1,\cdots,K}\boldsymbol{p}_{k}\leq\max_{k=1,\cdots,K}\boldsymbol{p}_{k}\lesssim n\) and \(p_{n}=O(n^{1/(2\alpha_{\Lambda}+1)})\)._ Conditions (C1)-(C4) are similar to those in Zhang et al. (2010). Conditions (C5) and (C6) restrict the function spaces of \(\Lambda\) and \(g\), respectively. Condition (C7) describes the growth of the DNN hyperparameters with respect to \(n\). We have the following theorems as \(n\to\infty\).
**Theorem 1** (**Convergence rate**).: _Under Conditions (C1)-(C7), the estimator \((\hat{\boldsymbol{\beta}}_{n},\hat{\Lambda}_{n},\hat{g}_{n})\) is consistent for \((\boldsymbol{\beta}_{0},\Lambda_{0},g_{0})\) and the convergence rate is \(\delta_{n}\log^{2}n\), i.e.,_ \[||\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0}||_{2}+||\hat{\Lambda}_{n}-\Lambda_{0}||_{L_{2}(U,V)}+\|\hat{g}_{n}-g_{0}\|_{L_{2}(\boldsymbol{Z})}=O_{P}(\delta_{n}\log^{2}n).\] **Theorem 2** (**Asymptotic Normality and Efficiency**).: _Suppose Conditions (C1)-(C7) hold, \(I_{\boldsymbol{\beta}}\) is nonsingular and \(n\delta_{n}^{4}\to 0\); then_ \[\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\rightsquigarrow N(0,I_{\boldsymbol{\beta}}^{-1}),\] _where \(I_{\boldsymbol{\beta}}\) is the semiparametric information bound for \(\boldsymbol{\beta}\)._ It is interesting to notice that the polynomial term \(\max_{i=0,\cdots,K}n^{-\mathbf{\alpha}_{i}/(2\mathbf{\alpha}_{i}+\tilde{\mathbf{d}}_{i})}\lor n^{-\alpha_{\Lambda}/(2\alpha_{\Lambda}+1)}\) is free of the nonparametric dimension \(r\), which means the curse of dimensionality is greatly eased in this model. Furthermore, the estimator of \(\mathbf{\beta}\) attains asymptotic normality and the semiparametric efficiency bound even though the overall convergence rate is slower than \(n^{-1/2}\). To perform statistical inference on \(\mathbf{\beta}_{0}\) with a finite sample size according to Theorem 2, one has to estimate the information matrix \(I_{\mathbf{\beta}}\). Consider a parametric smooth submodel with parameter \((\mathbf{\beta},\Lambda_{(s)},g_{(s)})\), where \(\Lambda_{(0)}=\Lambda\), \(g_{(0)}=g\) and \(\frac{\partial}{\partial s}\Lambda_{(s)}=h_{1},\frac{\partial}{\partial s}g_{(s)}=h_{2}\); the score operators are defined by \[\dot{l}_{\mathbf{\beta}}(\mathbf{\beta},\Lambda,g;o) =\frac{\partial}{\partial\mathbf{\beta}}l(\mathbf{\beta},\Lambda,g;o),\] \[\dot{l}_{1}(\mathbf{\beta},\Lambda,g;o)[h_{1}] =\frac{\partial}{\partial s}l(\mathbf{\beta},\Lambda_{(s)},g;o)\big{|}_{s=0},\] \[\dot{l}_{2}(\mathbf{\beta},\Lambda,g;o)[h_{2}] =\frac{\partial}{\partial s}l(\mathbf{\beta},\Lambda,g_{(s)};o)\big{|}_{s=0}.\] The _least favourable direction_ \((\mathbf{h}_{1}^{*},\mathbf{h}_{2}^{*})\) is obtained by projecting \(\dot{l}_{\mathbf{\beta}}\) onto the space spanned by \(\dot{l}_{1}\) and \(\dot{l}_{2}\), or equivalently, by minimizing \[\rho(\mathbf{h}_{1},\mathbf{h}_{2})=\mathbb{E}||\dot{l}_{\mathbf{\beta}}(\mathbf{\beta},\Lambda,g;O)-\dot{l}_{1}(\mathbf{\beta},\Lambda,g;O)[\mathbf{h}_{1}]-\dot{l}_{2}(\mathbf{\beta},\Lambda,g;O)[\mathbf{h}_{2}]||^{2}. \tag{2}\] Then \(I_{\mathbf{\beta}}\) can be calculated by \[I_{\mathbf{\beta}}=\mathbb{E}[\dot{l}_{\mathbf{\beta}}^{*}(\mathbf{\beta},\Lambda,g;O)^{\otimes 2}]=\mathbb{E}[(\dot{l}_{\mathbf{\beta}}(\mathbf{\beta},\Lambda,g;O)-\dot{l}_{1}(\mathbf{\beta},\Lambda,g;O)[\mathbf{h}_{1}^{*}]-\dot{l}_{2}(\mathbf{\beta},\Lambda,g;O)[\mathbf{h}_{2}^{*}])^{\otimes 2}], \tag{3}\] where \(\dot{l}_{\mathbf{\beta}}^{*}(\mathbf{\beta},\Lambda,g;O)\) is the efficient score function. Because there are no closed-form expressions for \((\mathbf{h}_{1}^{*},\mathbf{h}_{2}^{*})\), following Huang et al. (2008) we obtain \(\hat{\mathbf{h}}_{1}^{*}\) and \(\hat{\mathbf{h}}_{2}^{*}\), the estimates of \(\mathbf{h}_{1}^{*}\) and \(\mathbf{h}_{2}^{*}\), by minimizing the empirical version of the projection (2) over the product of another two deep ReLU network spaces with properly chosen hyperparameters.
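A minimal sketch of this empirical projection is given below; the score functions are assumed to be available at the fitted model (in closed form or via automatic differentiation), and every name here is illustrative rather than taken from the paper's code.

```python
import torch

def estimate_least_favorable(score_beta, score1, score2, h1_net, h2_net,
                             data, epochs=200, lr=1e-3):
    """Minimize the empirical version of rho(h1, h2) in Eq. (2) (our naming).

    `score_beta(o)` returns the score for beta at the fitted model;
    `score1(o, h1)` and `score2(o, h2)` apply the score operators to
    candidate directions represented by two small ReLU networks.
    """
    opt = torch.optim.Adam(list(h1_net.parameters()) + list(h2_net.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        resid = score_beta(data) - score1(data, h1_net) - score2(data, h2_net)
        loss = (resid ** 2).sum(dim=-1).mean()   # empirical rho(h1, h2)
        loss.backward()
        opt.step()
    return h1_net, h2_net

def information_estimate(score_beta, score1, score2, h1_net, h2_net, data):
    """Plug-in estimator of I_beta from the efficient-score residuals, Eq. (3)."""
    with torch.no_grad():
        r = score_beta(data) - score1(data, h1_net) - score2(data, h2_net)
        return r.T @ r / r.shape[0]
```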
The estimator of \(I_{\mathbf{\beta}}\) is then defined by plugging \(\hat{\mathbf{h}}_{1}^{*}\) and \(\hat{\mathbf{h}}_{2}^{*}\) into the empirical version of (3).

## 4 Simulation

In this section, we demonstrate the numerical performance of DPLC and compare it in terms of both estimation and prediction with linear Cox regression and partial linear additive Cox regression for interval-censored data. These two models are implemented using the R package _icenReg_ and are abbreviated as CPH and PLAC, respectively. DPLC is implemented with the Python package _pycox_ based on PyTorch (Paszke et al. (2019)). We first generate the covariates \(\mathbf{X}\) and \(\mathbf{Z}\) as follows: \[\mathbf{X} \sim\text{Binomial}(1,1/2)\] \[\mathbf{Z} \sim\text{Clayton}(8,0.5,[-2,2])\] \(T\) is generated with \(\Lambda_{0}(t)=\mu t^{\kappa},\kappa\in\{0.5,1,2\}\) and \(\mathbf{\beta}_{0}=1.2\). After \(T\) is generated, \((U,V)\) is obtained by dividing \([0,5]\) into 10 equal-length intervals with visit probability \(p\in\{0.4,0.7\}\). \(g\) is chosen from the following candidates: **Case 1 (linear)**: \(g(\mathbf{Z})=2.4\sum_{k=1}^{10}\left(\mathbf{Z}_{k}-\frac{1}{2}\right)\); **Case 2 (additive)**: \(g(\mathbf{Z})=1.2\sum_{k=1}^{10}\cos\left(\frac{2\pi}{k}\mathbf{Z}_{k}\right)\); **Case 3 (deep-1)**: \(g(\mathbf{Z})=4.0\left|\sum_{k=1}^{10}\left(\mathbf{Z}_{k}-\frac{1}{2}\right)\right|\); **Case 4 (deep-2)**: \(g(\mathbf{Z})=4.5\left(\max_{k=1,2,3}\mathbf{Z}_{k}-\min_{i=1,2,3}\mathbf{Z}_{k}\right)\); Case 1 and Case 2 correspond to CPH and PLAC, respectively, while Case 3 and Case 4 are designed for DPLC. The factors 2.4, 1.2, 4.0 and 4.5 in each case were used to keep \(\text{Var}(g(\mathbf{Z}))/\text{Var}(\mathbf{X}^{T}\mathbf{\beta})\) within the range \(4-6\), i.e. to control the ratio of signals from the nonparametric and parametric components. Each dataset is randomly split with an 80:20 ratio into a training and a validation set to select the best hyperparameters from all combinations of \(K\in\{2,3,4\}\) and \(\mathbf{p}_{k}\in\{\lceil u/4r\rceil:u=1,\cdots,8\}\). We repeat this 5 times with different initial values for the optimization in (3), with the chosen parameters corresponding to the maximal full likelihood on the validation set. The bias of the parametric estimator \(\hat{\mathbf{\beta}}_{n}\) and its empirical standard error from 500 replications with \(n\in\{500,1000\}\) are summarized in Table 1. As expected, the bias of DPLC is comparable to, if not slightly worse than, that of CPH and PLAC in Case 1 and Case 2, respectively, since these two cases are specifically designed for them. However, in Case 3 and Case 4, CPH and PLAC are more seriously biased than DPLC and do not improve with increasing \(n\), whereas DPLC does. For example, in Case 3 with \(n=500\), \(\rho=0.5\), \(p=0.4\) and \(\kappa=1\), the biases of \(\mathbf{\beta}_{1}\) for CPH and PLAC are \(-0.782\) and \(-0.412\), respectively, while the bias for DPLC is \(-0.052\), a much smaller one, and it decreases to \(-0.037\) when \(n\) increases to 1000, while the bias for CPH and PLAC increases to \(-0.812\) and \(-0.438\), respectively. This phenomenon can be explained by the fact that the highly complicated nonparametric function \(g\) in Case 3 and Case 4 can be easily fitted by a deep ReLU network, whereas it cannot be approximated well by any linear or additive function, and this inapproximability is further confirmed with increasing \(n\). As might be expected, the empirical standard error decreases with increasing \(n\) for all models and all censoring rates.
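As an illustration, the visit process that turns an exact failure time into an interval-censored observation can be sketched as follows; the Clayton-copula draw for \(\mathbf{Z}\) and the nonparametric signal \(g(\mathbf{Z})\) are omitted, so this toy draw is a simplified version of the actual data-generating mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe_interval(T, visit_times, p, rng):
    """Interval-censor T: each visit is attended independently with probability p."""
    attended = visit_times[rng.random(len(visit_times)) < p]
    if attended.size == 0 or T > attended[-1]:
        V = attended[-1] if attended.size else 0.0
        return 0.0, V, (0, 0, 1)                    # right-censored: only S(V) enters
    if T <= attended[0]:
        return attended[0], attended[0], (1, 0, 0)  # left-censored: only 1 - S(U) enters
    idx = np.searchsorted(attended, T)              # first attended visit >= T
    return attended[idx - 1], attended[idx], (0, 1, 0)

# Toy draw: T from Lambda_0(t) = mu * t^kappa with linear predictor eta.
mu, kappa, p = 1.0, 1.0, 0.4
X = rng.binomial(1, 0.5)
eta = 1.2 * X                                       # g(Z) omitted in this toy version
T = (-np.log(rng.random()) / (mu * np.exp(eta))) ** (1 / kappa)
U, V, (d1, d2, d3) = observe_interval(T, np.linspace(0.5, 5.0, 10), p, rng)
```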
Table 2 presents the coverage proportions of the 95% confidence intervals, suggesting that the empirical coverage probabilities for DPLC are generally around 95% and close to the nominal level in all four cases, while those for CPH and PLAC are far from 95% in Case 3 and Case 4 due to the significant bias. The performance in estimating \(g\), measured on test data formed of 4000 independent samples with the relative mean squared error (RMSE) defined as \[RMSE(\hat{g}_{n})=\frac{\sum_{i=1}^{n}(\hat{g}_{n}(\mathbf{Z}_{i})-g_{0}(\mathbf{Z}_{i}))^{2}}{\sum_{i=1}^{n}(g_{0}(\mathbf{Z}_{i})-\bar{g}_{0})^{2}},\] is reported in Table 3. The smaller the metric, the more accurate the estimator. Similar to the results for the parametric estimator, DPLC significantly outperforms both CPH and PLAC in Case 3 and Case 4; for example, 0.448 for DPLC versus 0.971 and 0.936 for CPH and PLAC in Case 4 with \(n=500,p=0.4\) and \(\kappa=0.5\). In Case 1 and Case 2, DPLC performs only slightly worse than CPH and PLAC. We evaluate and compare the predictive power of CPH, PLAC and DPLC with the Integrated Mean Squared Error (IMSE) in Table 4, defined by \[\text{IMSE}=\frac{1}{N}\sum_{i=1}^{N}\Big{(}\int_{0}^{U_{i}}(1-\hat{S}_{n}(t|\mathbf{X}_{i},\mathbf{Z}_{i}))^{2}dt+\int_{V_{i}}^{\tau}\hat{S}_{n}^{2}(t|\mathbf{X}_{i},\mathbf{Z}_{i})dt\Big{)}.\] It can be seen from this table that the IMSE of DPLC is much smaller than that of CPH and PLAC in Case 3 and Case 4; for example, it is 0.118 for DPLC while it is 0.173 and 0.169 for CPH and PLAC, respectively. In Case 1 and Case 2, CPH and PLAC outperform DPLC by only about 0.005 IMSE. Further analysis of the simulation study can be found in the Supplementary Material.

## 5 Application

The Chinese Longitudinal Healthy Longevity Survey (CLHLS) is a follow-up survey of the elderly in China organized by the Peking University Healthy Aging and Development Research Center and the National Development Research Institute. This longitudinal study covered the elderly aged 65 and above in 23 provinces across the country and is the earliest and longest-running social science survey in China (1998-2018).
\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline & & & & \multicolumn{3}{c}{\(\kappa=0.5\)} & \multicolumn{3}{c}{\(\kappa=1\)} & \multicolumn{3}{c}{\(\kappa=2\)} \\ \hline setting & \(\rho\) & \(p\) & \(n\) & CPH & PLAC & DPLC & CPH & PLAC & DPLC & CPH & PLAC & DPLC \\ \hline linear & 0.5 & 0.4 & 500 & 0.064 & 0.168 & -0.035 & 0.048 & 0.127 & -0.097 & 0.084 & 0.161 & -0.05 \\ & & & & (0.320) & (0.394) & (0.368) & (0.320) & (0.372) & (0.294) & (0.356) & (0.401) & (0.358) \\ & & & 1000 & 0.026 & 0.064 & 0.021 & 0.093 & 0.128 & -0.004 & 0.072 & 0.099 & 0.002 \\ & & & & (0.273) & (0.294) & (0.283) & (0.233) & (0.244) & (0.231) & (0.223) & (0.228) & (0.222) \\ & & 0.7 & 500 & 0.128 & 0.229 & -0.074 & 0.063 & 0.121 & -0.093 & 0.11 & 0.168 & 0.024 \\ & & & & (0.393) & (0.458) & (0.378) & (0.332) & (0.366) & (0.301) & (0.261) & (0.289) & (0.265) \\ & & & 1000 & 0.024 & 0.055 & -0.079 & 0.043 & 0.073 & -0.026 & 0.052 & 0.076 & -0.009 \\ & & & & (0.236) & (0.254) & (0.271) & (0.216) & (0.225) & (0.213) & (0.204) & (0.208) & (0.198) \\ additive & 0.5 & 0.4 & 500 & -0.443 & 0.186 & -0.22 & -0.422 & 0.114 & -0.258 & -0.519 & 0.074 & -0.238 \\ & & & & (0.289) & (0.369) & (0.328) & (0.287) & (0.316) & (0.300) & (0.223) & (0.314) & (0.281) \\ & & & 1000 & -0.524 & 0.037 & -0.206 & -0.508 & 0.029 & -0.193 & -0.481 & 0.105 & -0.145 \\ & & & (0.185) & (0.239) & (0.222) & (0.215) & (0.246) & (0.223) & (0.186) & (0.242) & (0.224) \\ & & 0.7 & 500 & -0.445 & 0.104 & -0.216 & -0.454 & 0.151 & -0.174 & -0.512 & 0.087 & -0.208 \\ & & & (0.326) & (0.407) & (0.333) & (0.270) & (0.315) & (0.270) & (0.198) & (0.264) & (0.243) \\ & & & 1000 & -0.445 & 0.073 & -0.147 & -0.497 & 0.011 & -0.178 & -0.503 & 0.051 & -0.167 \\ & & & (0.194) & (0.256) & (0.232) & (0.197) & (0.230) & (0.211) & (0.155) & (0.203) & (0.185) \\ deep-1 & 0.5 & 0.4 & 500 & -0.799 & -0.348 & -0.064 & -0.782 & -0.412 & -0.052 & -0.751 & -0.362 & -0.1 \\ & & & (0.309) & (0.447) & (0.414) & (0.252) & (0.384) & (0.391) & (0.272) & (0.366) & (0.371) \\ & & & 1000 & -0.771 & -0.414 & -0.025 & -0.812 & -0.438 & -0.037 & -0.803 & -0.446 & -0.041 \\ & & & (0.210) & (0.314) & (0.304) & (0.211) & (0.277) & (0.261) & (0.192) & (0.261) & (0.241) \\ & & 0.7 & 500 & -0.754 & -0.398 & -0.07 & -0.775 & -0.44 & -0.128 & -0.739 & -0.361 & 0.028 \\ & & & (0.300) & (0.380) & (0.420) & (0.280) & (0.403) & (0.357) & (0.294) & (0.344) & (0.342) \\ & & & 1000 & -0.795 & -0.437 & 0.018 & -0.779 & -0.39 & 0.006 & -0.765 & -0.423 & 0.004 \\ & & & (0.215) & (0.286) & (0.253) & (0.192) & (0.231) & (0.253) & (0.188) & (0.230) & (0.195) \\ deep-2 & 0.5 & 0.4 & 500 & -0.365 & -0.287 & -0.091 & -0.374 & -0.325 & -0.13 & -0.374 & -0.3 & -0.004 \\ & & & (0.320) & (0.344) & (0.393) & (0.292) & (0.313) & (0.343) & (0.268) & (0.308) & (0.333) \\ & & & 1000 & -0.408 & -0.392 & -0.061 & -0.432 & -0.388 & -0.054 & -0.422 & -0.387 & -0.024 \\ & & & (0.189) & (0.200) & (0.224) & (0.180) & (0.191) & (0.203) & (0.179) & (0.190) & (0.197) \\ & 0.7 & 500 & -0.353 & -0.299 & -0.042 & -0.43 & -0.372 & -0.092 & -0.409 & -0.381 & -0.086 \\ & & & (0.288) & (0.286) & (0.331) & (0.264) & (0.310) & (0.288) & (0.242) & (0.231) & (0.273) \\ & & & 1000 & -0.367 & -0.326 & -0.04 & -0.41 & -0.377 & -0.06 & -0.395 & -0.364 & -0.034 \\ & & & (0.195) & (0.221) & (0.211) & (0.170) & (0.183) & (0.193) & (0.186) & (0.184) & (0.184) \\ \hline \end{tabular} \end{table} Table 1: Bias and empirical standard error(in parentheses) of parametric estimator for the linear Cox model(CPH), partial linear additive 
Cox model(PLAC) and deep partial linear Cox model(DPLC) \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & & & \(\kappa=0.5\) & & & \(\kappa=1\) & & \(\kappa=2\) & \\ \hline setting & \(\rho\) & \(p\) & \(n\) & CPH & PLAC & DPLC & CPH & PLAC & DPLC & CPH & PLAC & DPLC \\ \hline linear & 0.5 & 0.4 & 500 & 0.980 & 0.990 & 0.940 & 1.000 & 0.980 & 0.980 & 0.940 & 0.960 & 0.920 \\ & & & 1000 & 0.970 & 0.970 & 0.910 & 0.929 & 0.960 & 0.949 & 0.940 & 0.950 & 0.970 \\ & & 0.7 & 500 & 0.920 & 0.920 & 0.920 & 0.970 & 0.960 & 0.960 & 0.980 & 0.980 \\ & & & 1000 & 0.980 & 0.980 & 0.889 & 0.980 & 0.970 & 0.960 & 0.970 & 0.960 & 0.960 \\ additive & 0.5 & 0.4 & 500 & 0.750 & 0.983 & 0.933 & 0.667 & 0.983 & 0.883 & 0.567 & 1.000 & 0.833 \\ & & & 1000 & 0.317 & 0.950 & 0.850 & 0.333 & 0.950 & 0.817 & 0.217 & 0.867 & 0.867 \\ & & 0.7 & 500 & 0.600 & 0.983 & 0.867 & 0.617 & 0.983 & 0.900 & 0.500 & 0.950 & 0.900 \\ & & & 1000 & 0.367 & 0.950 & 0.833 & 0.267 & 0.967 & 0.817 & 0.150 & 0.950 & 0.833 \\ deep-1 & 0.5 & 0.4 & 500 & 0.280 & 0.880 & 0.980 & 0.200 & 0.870 & 0.950 & 0.230 & 0.900 & 0.950 \\ & & & 1000 & 0.051 & 0.646 & 0.919 & 0.040 & 0.670 & 0.950 & 0.020 & 0.566 & 0.949 \\ & & 0.7 & 500 & 0.310 & 0.910 & 0.920 & 0.240 & 0.820 & 0.950 & 0.280 & 0.870 & 0.940 \\ & & & 1000 & 0.051 & 0.636 & 0.929 & 0.030 & 0.690 & 0.950 & 0.000 & 0.500 & 0.970 \\ deep-2 & 0.5 & 0.4 & 500 & 0.683 & 0.867 & 0.883 & 0.767 & 0.800 & 0.883 & 0.733 & 0.817 & 0.917 \\ & & & 1000 & 0.533 & 0.617 & 0.900 & 0.383 & 0.517 & 0.917 & 0.383 & 0.567 & 0.950 \\ & & 0.7 & 500 & 0.817 & 0.933 & 0.950 & 0.650 & 0.783 & 0.933 & 0.617 & 0.767 & 0.950 \\ & & & 1000 & 0.533 & 0.650 & 0.967 & 0.317 & 0.450 & 0.950 & 0.350 & 0.433 & 0.983 \\ \hline \hline \end{tabular} \end{table} Table 2: 95% coverage probability of \(\beta\) for CPH, PLAC and DPLC \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & & & & \(\kappa=0.5\) & & & \(\kappa=1\) & & & \(\kappa=2\) & \\ \hline setting & \(\rho\) & \(p\) & \(n\) & CPH & PLAC & DPLC & CPH & PLAC & DPLC & CPH & PLAC & DPLC \\ \hline linear & 0.5 & 0.4 & 500 & 0.021 & 0.150 & 0.087 & 0.019 & 0.111 & 0.067 & 0.023 & 0.098 & 0.065 \\ & & & & (0.018) & (0.140) & (0.084) & (0.017) & (0.072) & (0.058) & (0.023) & (0.064) & (0.055) \\ & & & 1000 & 0.008 & 0.052 & 0.037 & 0.010 & 0.045 & 0.039 & 0.008 & 0.034 & 0.027 \\ & & & (0.006) & (0.045) & (0.032) & (0.008) & (0.028) & (0.037) & (0.006) & (0.022) & (0.019) \\ & 0.7 & 500 & 0.022 & 0.127 & 0.124 & 0.017 & 0.095 & 0.071 & 0.015 & 0.062 & 0.047 \\ & & & (0.021) & (0.109) & (0.504) & (0.014) & (0.066) & (0.056) & (0.011) & (0.035) & (0.031) \\ & & & 1000 & 0.008 & 0.044 & 0.041 & 0.008 & 0.033 & 0.033 & 0.007 & 0.022 & 0.022 \\ & & & (0.005) & (0.024) & (0.041) & (0.006) & (0.021) & (0.027) & (0.003) & (0.008) & (0.014) \\ additive & 0.5 & 0.4 & 500 & 0.922 & 0.114 & 0.374 & 0.927 & 0.088 & 0.340 & 0.918 & 0.084 & 0.312 \\ & & & (0.028) & (0.062) & (0.091) & (0.022) & (0.040) & (0.076) & (0.022) & (0.044) & (0.062) \\ & & & 1000 & 0.915 & 0.041 & 0.280 & 0.919 & 0.032 & 0.244 & 0.915 & 0.031 & 0.235 \\ & & & (0.023) & (0.025) & (0.042) & (0.016) & (0.013) & (0.050) & (0.019) & (0.014) & (0.043) \\ & 0.7 & 500 & 0.921 & 0.095 & 0.368 & 0.925 & 0.072 & 0.311 & 0.922 & 0.055 & 0.287 \\ & & & (0.024) & (0.057) & (0.110) & (0.016) & (0.029) & (0.049) & (0.025) & (0.024) & (0.052) \\ & & & 1000 & 0.914 & 0.038 & 0.251 & 0.925 & 0.029 & 0.222 & 0.914 & 0.024 & 0.214 \\ & & & (0.015) & (0.021) & (0.048) & (0.015) & 
(0.012) & (0.047) & (0.014) & (0.009) & (0.031) \\ deep-1 & 0.5 & 0.4 & 500 & 0.984 & 0.335 & 0.124 & 0.983 & 0.312 & 0.090 & 0.985 & 0.310 & 0.076 \\ & & & (0.012) & (0.081) & (0.110) & (0.013) & (0.026) & (0.052) & (0.015) & (0.022) & (0.060) \\ & & & 1000 & 0.979 & 0.290 & 0.060 & 0.981 & 0.298 & 0.056 & 0.979 & 0.301 & 0.033 \\ & & & (0.008) & (0.016) & (0.039) & (0.009) & (0.017) & (0.042) & (0.007) & (0.018) & (0.040) \\ & 0.7 & 500 & 0.987 & 0.325 & 0.103 & 0.977 & 0.293 & 0.085 & 0.983 & 0.291 & 0.062 \\ & & & (0.014) & (0.028) & (0.063) & (0.015) & (0.020) & (0.065) & (0.010) & (0.019) & (0.119) \\ & & & 1000 & 0.983 & 0.303 & 0.071 & 0.976 & 0.287 & 0.052 & 0.980 & 0.295 & 0.025 \\ & & & (0.008) & (0.013) & (0.123) & (0.010) & (0.017) & (0.054) & (0.007) & (0.020) & (0.017) \\ deep-2 & 0.5 & 0.4 & 500 & 0.971 & 0.936 & 0.448 & 0.977 & 0.939 & 0.397 & 0.980 & 0.947 & 0.316 \\ & & & (0.023) & (0.039) & (0.243) & (0.023) & (0.036) & (0.155) & (0.014) & (0.033) & (0.132) \\ & & & 1000 & 0.953 & 0.874 & 0.188 & 0.960 & 0.883 & 0.163 & 0.967 & 0.900 & 0.142 \\ & & & (0.012) & (0.018) & (0.059) & (0.011) & (0.019) & (0.046) & (0.010) & (0.016) & (0.043) \\ & 0.7 & 500 & 0.974 & 0.932 & 0.361 & 15.098 & 0.951 & 0.323 & 0.981 & 0.924 & 0.248 \\ & & & (0.020) & (0.031) & (0.122) & (0.023) & (0.035) & (0.130) & (0.023) & (0.034) & (0.100) \\ & & 1000 & 0.960 & 0.886 & 0.173 & 0.972 & 0.898 & 0.154 & 0.966 & 0.877 & 0.120 \\ & & & (0.015) & (0.022) & (0.058) & (0.011) & (0.017) & (0.054) & (0.014) & (0.020) & (0.046) \\ \hline \hline \end{tabular} \end{table} Table 3: Mean of the squared prediction errors evaluated on the test set for the CPH, PLAC and DPLC methods. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & & & & \(\kappa=0.5\) & & & \(\kappa=1\) & & & \(\kappa=2\) & \\ \hline setting & \(\rho\) & \(p\) & \(n\) & CPH & PLAC & DPLC & CPH & PLAC & DPLC & CPH & PLAC & DPLC \\ \hline linear & 0.5 & 0.4 & 500 & 0.088 & 0.093 & 0.093 & 0.072 & 0.076 & 0.078 & 0.053 & 0.056 & 0.057 \\ & & & (0.002) & (0.003) & (0.004) & (0.002) & (0.003) & (0.005) & (0.001) & (0.002) & (0.003) \\ & & 1000 & 0.086 & 0.088 & 0.088 & 0.070 & 0.072 & 0.073 & 0.052 & 0.053 & 0.054 \\ & & & (0.001) & (0.002) & (0.003) & (0.001) & (0.002) & (0.002) & (0.001) & (0.001) & (0.002) \\ & 0.7 & 500 & 0.093 & 0.098 & 0.098 & 0.083 & 0.087 & 0.088 & 0.062 & 0.065 & 0.066 \\ & & & (0.003) & (0.004) & (0.004) & (0.001) & (0.002) & (0.004) & (0.001) & (0.001) & (0.003) \\ & & 1000 & 0.091 & 0.093 & 0.094 & 0.081 & 0.083 & 0.084 & 0.062 & 0.063 & 0.063 \\ & & & (0.001) & (0.001) & (0.002) & (0.001) & (0.001) & (0.002) & (0.001) & (0.001) & (0.001) \\ additive & 0.5 & 0.4 & 500 & 0.209 & 0.131 & 0.159 & 0.186 & 0.109 & 0.138 & 0.141 & 0.079 & 0.103 \\ & & & (0.004) & (0.004) & (0.008) & (0.003) & (0.003) & (0.006) & (0.003) & (0.002) & (0.006) \\ & & 1000 & 0.206 & 0.125 & 0.150 & 0.184 & 0.105 & 0.129 & 0.139 & 0.076 & 0.096 \\ & & & (0.003) & (0.002) & (0.004) & (0.003) & (0.001) & (0.004) & (0.002) & (0.001) & (0.003) \\ & 0.7 & 500 & 0.220 & 0.134 & 0.166 & 0.202 & 0.116 & 0.146 & 0.163 & 0.088 & 0.115 \\ & & & (0.003) & (0.003) & (0.009) & (0.003) & (0.003) & (0.005) & (0.002) & (0.002) & (0.004) \\ & & & 1000 & 0.217 & 0.128 & 0.154 & 0.199 & 0.112 & 0.137 & 0.161 & 0.086 & 0.107 \\ & & & (0.002) & (0.001) & (0.005) & (0.002) & (0.001) & (0.005) & (0.001) & (0.001) & (0.003) \\ deep-1 & 0.5 & 0.4 & 500 & 0.231 & 0.129 & 0.082 & 0.214 & 0.117 & 0.070 & 0.184 & 0.100 & 0.057 \\ & & & (0.004) & 
(0.004) & (0.006) & (0.003) & (0.003) & (0.006) & (0.004) & (0.003) & (0.005) \\ & & & 1000 & 0.228 & 0.123 & 0.076 & 0.212 & 0.114 & 0.066 & 0.181 & 0.096 & 0.052 \\ & & & (0.002) & (0.003) & (0.002) & (0.003) & (0.002) & (0.004) & (0.003) & (0.002) & (0.002) \\ & 0.7 & 500 & 0.239 & 0.139 & 0.087 & 0.227 & 0.122 & 0.075 & 0.203 & 0.109 & 0.061 \\ & & & (0.004) & (0.004) & (0.006) & (0.003) & (0.003) & (0.005) & (0.003) & (0.003) & (0.003) \\ & & & 1000 & 0.236 & 0.133 & 0.082 & 0.225 & 0.118 & 0.070 & 0.200 & 0.107 & 0.058 \\ & & & (0.002) & (0.002) & (0.003) & (0.002) & (0.002) & (0.003) & (0.002) & (0.002) & (0.002) \\ deep-2 & 0.5 & 0.4 & 500 & 0.208 & 0.209 & 0.166 & 0.175 & 0.176 & 0.135 & 0.129 & 0.129 & 0.095 \\ & & & (0.004) & (0.005) & (0.014) & (0.003) & (0.004) & (0.012) & (0.003) & (0.003) & (0.007) \\ & & & 1000 & 0.204 & 0.201 & 0.147 & 0.173 & 0.169 & 0.118 & 0.127 & 0.124 & 0.086 \\ & & & (0.002) & (0.002) & (0.005) & (0.002) & (0.003) & (0.003) & (0.002) & (0.002) & (0.003) \\ & 0.7 & 500 & 0.218 & 0.217 & 0.169 & 0.195 & 0.196 & 0.148 & 0.150 & 0.148 & 0.106 \\ & & & (0.003) & (0.004) & (0.010) & (0.003) & (0.004) & (0.011) & (0.002) & (0.003) & (0.006) \\ & & & 1000 & 0.216 & 0.211 & 0.154 & 0.192 & 0.190 & 0.135 & 0.148 & 0.143 & 0.098 \\ & & & (0.002) & (0.002) & (0.004) & (0.001) & (0.002) & (0.004) & (0.002) & (0.002) & (0.003) \\ \hline \hline \end{tabular} \end{table} Table 4: IMSE evaluated on the test set for the CPH, PLAC and DPLC methods.

Its main objective is to identify the prevalent factors affecting the health and quality of life of China's elderly. The questionnaire for the respondents covers the basic conditions of the elderly and their families, socio-economic background, self-assessment of health and quality of life, cognitive function, personality and psychological characteristics, daily activities, lifestyle, diseases and treatment, etc.

In this section, our primary purpose is to assess the effectiveness of our method in adjusting for non-linear covariate effects associated with cognitive impairment while still maintaining good interpretability for the covariates of primary interest. We use the 2008 wave of the CLHLS, which includes 5813 subjects. To determine the interval that brackets the time to cognitive impairment, cognitive status is measured at each visit using the Chinese version of the Mini-Mental State Examination (MMSE), which includes 24 cognitive questions with scores ranging from 0 to 30. The right endpoint of this interval is taken to be the first visit time at and after which all MMSE scores are below 18, and the left endpoint is the immediately preceding visit time. The average length of the interval is 5.328 years if not right-censored. Following Lv et al. (2019), we include the continuous covariates age, years of education, body mass index (BMI), systolic blood pressure (SBP) and diastolic blood pressure (DBP) as \(\mathbf{Z}\), and the binary covariates sex, exercise, diabetes, and stroke or cardiovascular disease as \(\mathbf{X}\). These covariates are summarized in Table 3 of the Supplementary Material. We randomly divided the sample into a training set (80% of the total) and a test set (the remaining 20%). As in the simulation study, the DNN architecture is chosen from pre-specified candidates, and we compare the predictive power of DPLC, CPH and PLAC in terms of the IMSE. After fitting the models, the IMSE is 0.098 for DPLC, compared with 0.131 for CPH and 0.119 for PLAC.
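To make the IMSE comparison concrete, the following is a minimal sketch of how the interval-censored IMSE defined in Section 4 can be computed for any fitted conditional survival estimator; `S_hat` is a hypothetical placeholder for a fitted model (DPLC, CPH, or PLAC), and the trapezoidal quadrature is an implementation choice, not a detail specified by the paper.

```python
import numpy as np

def imse(S_hat, X, Z, U, V, tau, n_grid=200):
    """Interval-censored IMSE: penalizes predicted survival below U
    (where the subject is known to be event-free) and above V
    (where the event is known to have already occurred).

    S_hat(t, x, z) -> estimated survival probabilities at times t.
    """
    n = len(U)
    total = 0.0
    for i in range(n):
        # integral over [0, U_i] of (1 - S)^2, trapezoidal rule
        if U[i] > 0:
            t1 = np.linspace(0.0, U[i], n_grid)
            total += np.trapz((1.0 - S_hat(t1, X[i], Z[i])) ** 2, t1)
        # integral over [V_i, tau] of S^2 (skipped when right-censored, V = inf)
        if np.isfinite(V[i]) and V[i] < tau:
            t2 = np.linspace(V[i], tau, n_grid)
            total += np.trapz(S_hat(t2, X[i], Z[i]) ** 2, t2)
    return total / n
```

A smaller IMSE means the fitted survival curve is more consistent with the observed censoring intervals, which is why it serves as the prediction metric in both the simulation and the CLHLS analysis.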
These results show that the predictive performance of our approach is empirically superior to that of PLAC, which in turn is superior to CPH. This can be intuitively explained by the fact that these covariate effects are highly nonlinear, so the more complex the sieve, the better the fit. The estimated coefficients of the linearly modeled covariates are shown in Table 5. From this table, it can be seen that being male rather than female and exercising more are significantly associated with a lower risk of cognitive impairment, while diabetes and stroke or cardiovascular disease are not significant. These conclusions are consistent with those of Lv et al. (2019).

\begin{table} \begin{tabular}{l c c c c c} \hline & EST & HR & SE & \(Z\)-value & \(p\)-value \\ \hline Gender(Female=1) & 0.148 & 1.159 & 0.019 & 7.789 & 0.000 \\ Exercise & \(-\)0.283 & 0.753 & 0.023 & \(-\)12.304 & 0.000 \\ Diabetes & \(-\)0.025 & 0.975 & 0.020 & \(-\)1.25 & 0.182 \\ Stroke or cardiovascular diseases & 0.029 & 1.029 & 0.023 & 1.26 & 0.171 \\ \hline \end{tabular} \end{table} Table 5: Estimation of \(\mathbf{\beta}\) for gender, exercise, diabetes and stroke or cardiovascular diseases

## 6 Conclusion

In this paper, we propose a partial linear Cox model with a deep-neural-network-estimated nonparametric component for interval-censored failure time data. This model increases predictive power compared to the partial linear additive Cox model (Lu and McMahan (2018)), while retaining interpretability for the covariates of primary interest. The estimators are shown to converge at a rate independent of the nonparametric covariate dimensionality, and the parametric estimator is rigorously proved to be asymptotically normal and semiparametrically efficient. As shown in the simulation studies, the proposed model significantly outperforms the linear Cox model and the partial linear additive Cox model with respect to both estimation and prediction when the nonparametric function is highly complex. The model is suitable for moderate sample sizes but may otherwise encounter problems due to the high non-convexity and complexity of the DNN: with a small sample size the optimization becomes unstable and one has to re-initialize more often, while with a large sample a large-capacity DNN is preferred and the optimization becomes quite time-consuming.

This work can be extended in several ways. For example, other types of semiparametric survival models, such as the partial linear transformation model (Ma and Kosorok (2005)), can be developed using the same procedure. Another interesting direction for future work is to use deep convolutional neural networks (CNNs, LeCun et al. (1989)), a popular variant of DNNs, to estimate the function \(g\) when the nonparametric covariate is a high-dimensional image (Zhu et al. (2016)) or another type of unstructured data.
2306.06815
TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models
Large Language Models (LLMs) are progressively being utilized as machine learning services and interface tools for various applications. However, the security implications of LLMs, particularly in relation to adversarial and Trojan attacks, remain insufficiently examined. In this paper, we propose TrojLLM, an automatic and black-box framework to effectively generate universal and stealthy triggers. When these triggers are incorporated into the input data, the LLMs' outputs can be maliciously manipulated. Moreover, the framework also supports embedding Trojans within discrete prompts, enhancing the overall effectiveness and precision of the triggers' attacks. Specifically, we propose a trigger discovery algorithm for generating universal triggers for various inputs by querying victim LLM-based APIs using few-shot data samples. Furthermore, we introduce a novel progressive Trojan poisoning algorithm designed to generate poisoned prompts that retain efficacy and transferability across a diverse range of models. Our experiments and results demonstrate TrojLLM's capacity to effectively insert Trojans into text prompts in real-world black-box LLM APIs including GPT-3.5 and GPT-4, while maintaining exceptional performance on clean test sets. Our work sheds light on the potential security risks in current models and offers a potential defensive approach. The source code of TrojLLM is available at https://github.com/UCF-ML-Research/TrojLLM.
Jiaqi Xue, Mengxin Zheng, Ting Hua, Yilin Shen, Yepeng Liu, Ladislau Boloni, Qian Lou
2023-06-12T01:22:39Z
http://arxiv.org/abs/2306.06815v3
# TrojPrompt: A Black-box Trojan Attack

###### Abstract

Prompt learning has been proven to be highly effective in improving pre-trained language model (PLM) adaptability, surpassing conventional fine-tuning paradigms and showing exceptional promise in an ever-growing landscape of applications and APIs tailored for few-shot learning scenarios. Despite the growing prominence of prompt learning-based APIs, their security concerns remain underexplored. In this paper, we undertake a pioneering study on the Trojan susceptibility of prompt-learning PLM APIs. We identify several key challenges, including discrete-prompt, few-shot, and black-box settings, which limit the applicability of existing backdoor attacks. To address these challenges, we propose TrojPrompt, an automatic and black-box framework to effectively generate universal and stealthy triggers and insert Trojans into hard prompts. Specifically, we propose a universal API-driven trigger discovery algorithm for generating universal triggers for various inputs by querying victim PLM APIs using few-shot data samples. Furthermore, we introduce a novel progressive Trojan poisoning algorithm designed to generate poisoned prompts that retain efficacy and transferability across a diverse range of models. Our experiments and results demonstrate TrojPrompt's capacity to effectively insert Trojans into text prompts in real-world black-box PLM APIs, while maintaining exceptional performance on clean test sets and significantly outperforming baseline models. Our work sheds light on the potential security risks in current models and offers a potential defensive approach.

## 1 Introduction

The prompt-based learning paradigm [1; 2; 3; 4; 5; 6] has emerged as a transformative approach within the domain of natural language processing (NLP), achieving state-of-the-art performance in a wide array of NLP tasks, especially in few-shot scenarios. Contrasting with traditional fine-tuning paradigms, which adapt pre-trained language models (PLMs) to downstream tasks, prompt-based learning incorporates discrete text tokens, i.e., a discrete prompt, or continuous vectors, i.e., a continuous prompt, into input data to generate outputs from PLMs. This method offers numerous advantages, such as the capacity to utilize fixed pre-trained models for effective adaptation to downstream tasks, increased transferability across a diverse assortment of models, and the enhancement of PLMs' reusability for various downstream applications. For example, while assessing the sentiment of a movie review, "_I enjoyed the film immensely_", one could prepend the discrete prompt "_The film is <mask>_", and employ the PLM to predict a mask word representing the sentiment polarity. Attaining high-performance prompts typically demands considerable domain expertise and extensive validation sets; concurrently, manually crafted prompts have been identified as sub-optimal, leading to inconsistent performance [7; 8]. Consequently, the automatic search and generation of prompts have garnered significant research interest [3; 9]. One prevalent approach involves tuning soft prompts (i.e., continuous embedding vectors), as they readily accommodate gradient descent [4; 10]. However, the resulting prompts are inherently challenging for humans to interpret and are unsuitable for use with other language models [1; 2; 11; 12].
Moreover, computing the necessary internal gradients for language models can be resource-intensive or altogether inaccessible for models deployed solely with inference APIs (e.g., GPT-3 [1]). Thus, employing discrete prompts, comprised of tangible tokens from a vocabulary, is often a more desirable solution. Despite enhancing the downstream tasks, limited research has been conducted to explore the security issues surrounding discrete prompt-based learning algorithms and APIs. To the best of our knowledge, previous studies [9; 13; 14; 15] on prompt security have primarily focused on white-box settings utilizing gradient-based methods. In this paper, we present the first Trojan attack on black-box PLM-based APIs. As depicted in Figure 1 (a), our proposed TrojPrompt targets the vulnerability of the discrete text prompt in black-box APIs, rather than attacking the pre-trained model itself. Figure 1 (b) and (c) display the attack success rate (ASR) and clean accuracy (ACC) of previous representative data-poisoning methods (BadNet [16], HKA [17], and LWS [18]) on the CR [19] dataset. We implemented three representative backdoor methods to target RoBERTa-large [20], a victim prompt-based model. However, as the number of poisoning samples increases, we observe that although ASR rises, ACC significantly declines across all methods. The primary reason for this is that prompt-based learning PLM APIs are typically employed in few-shot scenarios (e.g., only 32 training samples), causing backdoor performance to be easily influenced by poisoning samples. Furthermore, the black-box settings without gradient optimization and the discrete prompt feature contribute to the increased challenge of Trojan attacks. As illustrated in Figure 1, the black-box few-shot scenario presents new challenges for Trojan attacks targeting discrete text prompts. These challenges necessitate a more efficient trigger search method and a stable, derivative-free Trojan poisoning method to target discrete prompt-based learning paradigms. In response, we introduce TrojPrompt, an innovative framework for attacking discrete prompts. TrojPrompt comprises two primary components: universal API-driven trigger discovery and progressive prompt poisoning. The first module aims to generate candidate triggers that are universally applicable to various inputs, primarily by identifying words indicative of the targeted label through querying the prompt-based PLM APIs. In the second module, we observe that directly poisoning a Trojan into a discrete prompt in a black-box manner is non-trivial. Consequently, we propose a progressive prompt poisoning algorithm to incrementally generate a poisoned prompt, starting from an initially shorter prompt, until the desired attack success rate is achieved while preserving clean accuracy. Figure 1: (a) Trojan attacks on the black-box discrete prompt-based models. (b) The attack success rate on CR dataset (c) Clean accuracy on the CR dataset. To summarize, this work presents the following contributions: (i) We conduct the first study on black-box backdoor attacks targeting the discrete prompt-based learning paradigm, uncovering the significant challenges posed by the inherent features of black-box settings, discrete prompts, and few-shot scenarios for Trojan attacks on prompt-based models and APIs. (ii) In response to these challenges, we propose TrojPrompt, a framework designed to carry out backdoor attacks on real-world prompt-based models and APIs. 
Utilizing universal API-driven trigger discovery and progressive prompt poisoning, TrojPrompt effectively generates universal adversarial triggers and poisoned prompts that can be transferred across various models. (iii) We assess TrojPrompt's capabilities on five datasets and nine discrete prompt models, demonstrating its success in attacking discrete prompts within a black-box context while preserving high performance on clean test sets.

## 2 Related Work

**Prompt-based Learning.** Prompt-oriented learning [1; 2; 3; 4; 5; 6] has recently emerged as an effective approach for addressing a broad spectrum of NLP challenges using large pre-trained language models (PLMs) such as GPTs [1; 21], BERT [22], RoBERTa [20], etc., particularly in few-shot settings. A common strategy is to fine-tune soft prompts [10; 4] (i.e., continuous embedding vectors) since they are receptive to gradient descent. However, such prompts [1; 2; 11; 12] are inherently challenging for humans to interpret and incompatible with other PLMs. Furthermore, the internal gradients required for LMs can be computationally expensive or even unavailable for LMs deployed exclusively with inference APIs (e.g., GPT-3 [1]). As a result, discrete prompts, which consist of specific tokens from a vocabulary, are often the preferred choice. Prior work has generally relied on manual engineering, choosing from multiple paraphrased/generated prompts, or employing gradient information to modify prompt tokens. Recent work [3] can automatically discover discrete prompts by employing reinforcement learning for prompt exploration. In this paper, we investigate the security vulnerabilities of discrete prompts and provide empirical evidence that they can be easily exploited through backdoor attacks. Specifically, we successfully insert Trojans into several representative prompt-based setups, including GPT-2 [21], RoBERTa [20], and RL-Prompt [3], highlighting the need for caution in the realm of discrete prompt-based learning.

**Comparison to Related Work.** As shown in Table 1, our proposed TrojPrompt method distinguishes itself from related prompt-based attacks in several ways. BToP [13] investigates PLM attacks within the discrete prompt framework, incorporating Trojans into PLMs in such a way that the PLMs remain unfrozen and function in a white-box setting, as attackers have knowledge of the PLMs. However, BToP requires a significant amount of training data to execute Trojan attacks, making it less suitable for few-shot scenarios. PPT [14] optimizes poisoned continuous prompts using gradient descent while keeping model parameters fixed, but it necessitates full training datasets to ensure high performance, which is often difficult for attackers to obtain. PromptAttack [15] explores discrete prompt attacks guided by gradient information in a white-box setting, without demonstrating a target-class attack. BadPrompt [9] facilitates a backdoor attack on continuous prompts in the few-shot setting, but it requires gradient access and parameter modification, making it inapplicable to black-box settings. In contrast, our TrojPrompt is the only approach that supports a Trojan attack on black-box discrete prompts, representing a more realistic scenario. In practice, numerous large language models (LLMs), including GPT-3, are hosted on servers, and users can only interact with them via APIs, meaning model information such as gradients and word embedding matrices may not be readily available to attackers.
Furthermore, TrojPrompt's triggers and poisoned prompts exhibit transferability across various models and APIs, demonstrating its versatility in diverse settings. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Methods & Frozen PLMs & Black-box & Few-shot & Discrete prompt & Transferable attack & Target attack \\ \hline BToP [13] & - & - & - & ✓ & - & ✓ \\ PPT [14] & ✓ & - & - & - & - & ✓ \\ PromptAttack [15] & ✓ & - & ✓ & ✓ & - & - \\ BadPrompt [9] & - & - & ✓ & - & - & ✓ \\ **TrojPrompt** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: The comparisons between TrojPrompt and related work, BToP [13], PPT [14], BadPrompt [9], PromptAttack [15].

## 3 TrojPrompt

### Threat Model

**Attacker's Objective**: We consider any malicious user with access to the models' APIs as a potential attacker. These attackers might query the PLM APIs in search of a universal adversarial trigger that can be inserted into various clean inputs, leading the API to produce malicious target decisions. The term "universal trigger" implies that the same trigger is effective across multiple inputs, eliminating the need to find a distinct trigger for each new input. Additionally, the attacker might further optimize a poisoned prompt and upload it to prompt-sharing websites [23, 24, 25] (e.g., Riku.AI [24]) or platforms [26, 27, 28, 29] like PromptSource [29]. Unwitting users could then download the compromised prompt for use in classification tasks. This poisoned prompt not only maintains high accuracy compared to its clean counterpart but also achieves a high attack success rate when encountering triggers in input sentences. \[\max_{p\in\mathcal{V}^{T_{p}},\tau\in\mathcal{V}^{T_{\tau}}}\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}(f(p,x^{i}),y^{i})+\sum_{(x^{j}\oplus\tau,y^{*})\in\mathcal{D}_{p}}\mathcal{R}(f(p,\tau,x^{j}_{p}),y^{*}) \tag{1}\] The attacker's objective, as formulated in Equation 1, is to optimize a prompt \(p\) and a discrete trigger \(\tau\) over a vocabulary \(\mathcal{V}\), with lengths \(T_{p}\) and \(T_{\tau}\), respectively, in order to maximize both the downstream performance measure \(\mathcal{R}(f(p,x^{i}),y^{i})\) and the attack measure \(\mathcal{R}(f(p,\tau,x^{j}_{p}),y^{*})\). In this context, \(\mathcal{R}\) represents a performance metric, and \(f(\cdot)\) denotes the API function based on PLMs. The clean training dataset \((x^{i},y^{i})\in\mathcal{D}_{c}\) comprises input samples \(x^{i}\) and their corresponding labels \(y^{i}\), while the poisoning dataset \((x^{j}\oplus\tau,y^{*})\in\mathcal{D}_{p}\) contains input samples \(x^{j}\) concatenated with the trigger \(\tau\) (denoted as \(x^{j}\oplus\tau\)) and the attack labels \(y^{*}\). Ultimately, attackers aim to find an optimal trigger \(\tau\) and prompt \(p\) that effectively compromise the PLM-based API function \(f(\cdot)\), maximizing both the performance measure on clean data and the success of the attack on poisoned data.

**Attacker's capabilities**: We assume that the attacker could be any user of PLM APIs and is able to query these APIs with few-shot samples to discover universal triggers and prompts for downstream tasks. This black-box assumption implies that the attacker does not have access to the inner workings of the pre-trained models, such as their architecture, parameters, or gradients.

Figure 2: (a) TrojPrompt overview. TrojPrompt consists of two main components: universal API-driven trigger discovery and progressive prompt tuning.
By querying PLM APIs, TrojPrompt first generates a prompt seed, searches for a universal trigger using policy generators, and then progressively tunes the prompt seed to produce poisoned prompts. (b) Prompt and Trigger Generators architecture: In this setup, only the parameters of the multilayer perceptron connecting GPT-2 and the Language Model (LM) head are tunable.

### TrojPrompt Design Principle

The optimization objective presented in Equation 1, however, can prove to be intractable due to the discrete nature of the tokens in \(p\) and \(\tau\), which are not conducive to gradient-based optimization. Furthermore, a black-box setting does not grant access to gradients. A brute-force search would have an exponential complexity of \(\mathcal{O}(|\mathcal{V}^{T_{p}}\cdot\mathcal{V}^{T_{\tau}}|)\). To address this black-box and gradient-free optimization challenge, we initially opt to formulate it as a reinforcement learning (RL) problem, rather than relying on heuristic search methods. Nonetheless, directly formulating an RL problem that jointly searches for the trigger \(\tau\) and prompt \(p\) (equations are shown in the Appendix) is far from straightforward. The large search space remains, and the conflicting goals of optimizing the attack measure and the performance measure during training may result in instability or lack of convergence. For example, one could achieve a high attack success rate of nearly 100% but suffer from extremely low clean accuracy, around 50%, on a binary classification task such as SST-2.

To address the challenges associated with a direct RL formulation, we introduce TrojPrompt, which reformulates the problem to achieve the optimization objective more effectively. An overview of TrojPrompt is presented in Figure 2. We observe that searching for a prompt and trigger simultaneously often results in a high Attack Success Rate (ASR) but low Accuracy (ACC) due to the conflicting goals of the performance measure \(\mathcal{R}(f(p,x^{i}),y^{i})\) and the attack measure \(\mathcal{R}(f(p,\tau,x_{p}^{j}),y^{*})\). To alleviate this issue, we separate the search for the clean prompt (i.e., the prompt seed) from the search for the trigger: the clean prompt search is conducted first to maintain clean accuracy, followed by the trigger search to achieve a high attack success rate. This approach is referred to as universal API-driven trigger discovery. Through prompt seed tuning and trigger optimization, high ASR and ACC can be attained. Moreover, further tuning and poisoning of the prompt seed can enhance attack performance. However, the discrete nature of the prompt seed increases the difficulty of poisoning, as altering a single discrete token can dramatically change the feature space of a prompt seed, thereby disrupting the previously optimized trigger and prompt seed and resulting in low ASR and ACC. To overcome this challenge, we propose a novel method called progressive prompt poisoning, which searches for a prompt starting from the prompt seed and extends the prompt seed's length to modify it while reusing its learned knowledge. Next, we delve into the specifics of the proposed modules.

### Universal API-driven Trigger Discovery

As previously mentioned, searching for a trigger \(\tau\) while simultaneously optimizing the trigger and prompt can lead to instability and difficulty in achieving high ASR and ACC, as tuning the prompt may boost ASR at the cost of reduced ACC or vice versa. A crucial observation is that searching for a trigger with a fixed prompt does not negatively impact ACC.
Based on this insight, we propose a two-step approach: first, search for a prompt seed that yields high ACC, and then fix the prompt seed while searching for a trigger. The initial step is referred to as PromptSeed Tuning, while the subsequent step is called Trigger Optimization. \[\max_{\theta_{s}}\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{s}(f(\hat{s},x^{i}),y^{i});\quad\hat{s}\sim G_{\theta_{s}}(s_{t}|s_{<t}) \tag{2}\]

**PromptSeed Tuning.** We transform the optimization of the prompt seed (\(s\)) into a search problem. During this process, an agent sequentially selects prompt tokens \([s^{1},...,s^{T_{s}}]\) to maximize the reward \(\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{s}(f(\hat{s},x^{i}),y^{i})\), where \(T_{s}\) is the prompt seed length. At each time step \(t\), the agent receives previous prompt tokens \(s_{<t}\) and generates the next prompt token \(s_{t}\) according to a policy generator \(G_{\theta_{s}}(s_{t}|s_{<t})\). Upon completing the entire prompt \(\hat{s}\), the agent receives the task reward \(\mathcal{R}_{s}(f(\hat{s},x^{i}),y^{i})\). We parameterize the prompt seed policy generator \(G_{\theta_{s}}(s_{t}|s_{<t})\) by \(\theta_{s}\), as defined in Equation 2. Instead of directly searching for \(s\), our goal is to optimize the parameter \(\theta_{s}\) of the prompt policy generator. The policy generator's architecture is depicted in Figure 2 (b), where \(\theta_{s}\) represents the parameters of the intermediate MLP; optimizing only these is efficient and sufficient for an effective prompt seed search. \[\mathcal{R}_{s}(x^{i},y^{i})=\eta_{1}^{1-sign_{s}}\cdot\eta_{2}^{sign_{s}}\,Distance_{s}(y^{i}) \tag{3}\] The design of the reward function is critical for the search process, as different NLP tasks require distinct reward functions. For text classification tasks, where the goal is to accurately assign an input text \(x\) to its ground-truth label \(c\) from a set of classes \(\mathcal{C}\), given a prompt seed \(s\) and a training example (\(x\), \(c\)), we compute the reward in a manner similar to a hinge loss: it measures the distance between the label probability and the highest probability among the other classes. We use \(\mathcal{P}_{s}(c)=\mathcal{P}\big{(}f(c|s,x)\big{)}\) to denote the probability of label \(c\), so the distance can be expressed as \(Distance_{s}(c)=\mathcal{P}_{s}(c)-max_{c^{\prime}\neq c}\mathcal{P}_{s}(c^{\prime})\). The distance is positive for correct predictions and negative otherwise. We denote the distance sign as \(Sign_{s}=\mathbb{1}\left[Distance_{s}(c)>0\right]\). For a correct prediction (i.e., \(Sign_{s}=1\)), we multiply the positive reward by a large number \(\eta_{2}\) to indicate its desirability; otherwise, we multiply by another number \(\eta_{1}\). The resulting reward function is defined in Equation 3. \[\max_{\theta_{\tau}}\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{\tau}(f(\hat{\tau},x^{i},s),y^{i});\quad\hat{\tau}\sim G_{\theta_{\tau}}(\tau_{t}|\tau_{<t}) \tag{4}\]

**Universal Trigger Optimization.** Following PromptSeed tuning, which achieves high ACC, the next step is to optimize the universal trigger to increase ASR without affecting ACC. Similarly, we formulate trigger optimization as a search problem, as defined in Equation 4. During the search, an agent selects trigger tokens \([\tau^{1},...,\tau^{T_{\tau}}]\) to maximize the reward \(\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{\tau}(f(\hat{\tau},x^{i},s),y^{i})\), where \(T_{\tau}\) represents the number of trigger tokens.
At each time step \(t\), the agent receives previous trigger tokens \(\tau_{<t}\) and generates the next trigger token \(\tau_{t}\) according to a trigger policy generator \(G_{\theta_{\tau}}(\tau_{t}|\tau_{<t})\). Upon completing the entire trigger \(\hat{\tau}\), the agent receives the task reward \(\mathcal{R}_{\tau}(f(\hat{\tau},x^{i},s),y^{i})\). We parameterize the trigger policy generator \(G_{\theta_{\tau}}(\tau_{t}|\tau_{<t})\) by \(\theta_{\tau}\). Our aim is to optimize the parameter \(\theta_{\tau}\) of the trigger policy generator, rather than directly searching for \(\tau\). The policy generator's architecture is the same as that of the prompt seed generator, but the two have distinct parameters. \[\mathcal{R}_{\tau}(x^{i},\tau,y^{*})=\eta_{1}^{1-sign_{\tau}}\cdot\eta_{2}^{sign_{\tau}}\,Distance_{\tau}(y^{*}) \tag{5}\] For the trigger reward function design, we only consider the ASR when the trigger \(\tau\) is inserted into the clean input \(x\). This ensures that ACC is not affected when the trigger is absent. In the context of text classification attacks, the goal is to assign an input text \(x\) concatenated with the trigger \(\tau\) to its targeted attack label \(y^{*}\) from a set of classes \(\mathcal{C}\). We design a trigger attack reward to measure the distance between the target label probability and the highest probability among the other classes. We use \(\mathcal{P}_{\tau}(y^{*})=\mathcal{P}(f(y^{*}|\tau,s,x))\) to denote the probability of label \(y^{*}\), so the distance can be expressed as \(Distance_{\tau}(y^{*})=\mathcal{P}_{\tau}(y^{*})-max_{y^{i}\neq y^{*}}\mathcal{P}_{\tau}(y^{i})\). The distance is positive when the attack succeeds and negative otherwise. We represent the distance sign as \(Sign_{\tau}=\mathbb{1}[Distance_{\tau}(y^{*})>0]\). If the attack succeeds (i.e., \(Sign_{\tau}=1\)), we amplify the positive reward by a substantial factor \(\eta_{2}\) to emphasize its importance; otherwise, we use a different factor \(\eta_{1}\). The final trigger reward function is given in Equation 5.

### Progressive Prompt Poisoning

Once obtained, the universal trigger proves effective for a variety of inputs even in the absence of a poisoned prompt. Nevertheless, we continue to investigate methods for poisoning prompts to further improve attack performance and potentially reduce the trigger length for stealthier attacks. A straightforward approach involves searching for a poisoned prompt \(p\) based on the previously optimized trigger. However, even a slight perturbation of the poisoned prompt could significantly compromise the optimized trigger, leading to substantial declines in ASR and ACC. This issue is demonstrated in our experiments (details are in the Appendix). To address this challenge, we propose a progressive prompt poisoning strategy that builds on the design principle of reusing the prompt seed, which is already capable of achieving a high ACC, and fine-tuning the discrete prompt seed to enhance the ASR. Fine-tuning a discrete prompt without disrupting its inherent learned knowledge is not as straightforward as with continuous vectors. Our progressive method resolves this challenge by searching for a poisoned prompt starting from the prompt seed and progressively adding incremental prompt tokens until the desired ASR and ACC are both attained.
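To make the search loop concrete, below is a minimal sketch of the sign-amplified reward (the pattern shared by Equations 3, 5, and 8) and of the progressive extension step. It assumes a black-box classification API exposed as `query_api` (a hypothetical wrapper returning per-class probabilities); for brevity, a greedy token search over a small candidate vocabulary stands in for the trained RL policy generators, prepending the trigger to the input is an illustrative placement choice, and the clean and attack distances are rewarded separately rather than combined as in Equation 7.

```python
import numpy as np

# Placeholder for the black-box PLM API: returns class probabilities for
# the concatenation [prompt tokens] + [input text]. Hypothetical signature.
def query_api(prompt_tokens, text):
    raise NotImplementedError("wrap the victim API here")

ETA1, ETA2 = 180.0, 200.0   # balancing weights, as in Equations 3/5/8

def sign_amplified_reward(probs, label):
    # distance = P(label) - max_{c != label} P(c); amplified by eta2 when
    # positive (desired outcome) and by eta1 otherwise.
    others = np.delete(probs, label)
    dist = probs[label] - others.max()
    return (ETA2 if dist > 0 else ETA1) * dist

def poisoned_reward(prompt, trigger, clean_set, target):
    # Equation-6-style objective: clean reward without the trigger plus
    # attack reward with the trigger prepended to the input.
    r = 0.0
    for text, label in clean_set:
        r += sign_amplified_reward(query_api(prompt, text), label)
        r += sign_amplified_reward(query_api(prompt, trigger + " " + text), target)
    return r

def progressive_poison(seed, trigger, clean_set, target, vocab, max_extra=2):
    # Greedy stand-in for the RL policy generator: starting from the clean
    # prompt seed, append one token at a time, keeping the extension that
    # maximizes the combined clean + attack reward on the few-shot set.
    prompt = list(seed)
    for _ in range(max_extra):
        scores = {tok: poisoned_reward(prompt + [tok], trigger, clean_set, target)
                  for tok in vocab}
        prompt.append(max(scores, key=scores.get))
    return prompt
```

In the paper itself, this greedy step is replaced by updating the tunable MLP parameters of the distilGPT-2-based policy generator with the same reward signal, which scales the search to the full vocabulary.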
We formulate the progressive prompt poisoning objective as \[\max_{\theta_{p}}\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{p}(f(\hat{p},x^{i}),y^{i})+\sum_{(x^{j},y^{*})\in\mathcal{D}_{p}}\mathcal{R}_{p}(f(\hat{p},\tau,x^{j}),y^{*});\quad\theta_{p}\leftarrow\theta_{s};\quad\hat{p}\sim G_{\theta_{p}}(p_{t}|p_{<t}) \tag{6}\] An agent sequentially selects prompt tokens \([p_{1},...,p_{T_{p}}]\) to optimize the poisoned prompt generator, parameterized by \(\theta_{p}\), in order to generate the poisoned prompt \(\hat{p}\). The goal is to simultaneously maximize the performance reward \(\sum_{(x^{i},y^{i})\in\mathcal{D}_{c}}\mathcal{R}_{p}(f(\hat{p},x^{i}),y^{i})\) without the trigger \(\tau\) and the attack reward \(\sum_{(x^{j},y^{*})\in\mathcal{D}_{p}}\mathcal{R}_{p}(f(\hat{p},\tau,x^{j}),y^{*})\) with the trigger \(\tau\). \(T_{p}\) denotes the number of tokens in the poisoned prompt. Notably, the poisoned prompt generator is initialized from the prompt seed generator by setting \(\theta_{p}\leftarrow\theta_{s}\), in order to reuse the prompt seed, which is already capable of achieving a high ACC. The prompt poisoning generator \(G_{\theta_{p}}\) then fixes the prompt seed and iteratively adds incremental prompt tokens to attain high ASR and ACC.

For the reward function of the poisoned prompt, we need to consider both the model's performance on clean inputs and the attack success rate on inputs containing the trigger. Thus, we define the model performance on clean inputs as the probability \(\mathcal{P}(c)=\mathcal{P}(f(c|p,x))\) for the true label \(c\), prompt \(p\), and input \(x\). Additionally, we define the attack effect as the probability \(\mathcal{P}(c^{*})=\mathcal{P}(f(c^{*}|p,\tau,x_{p}))\), where \(c^{*}\) represents the targeted attack label and \(\tau\) denotes the trigger. Instead of directly increasing these probabilities, our objective is to enlarge the probability distance, defined as the distance between the label probability and the highest probability among the other classes, as formulated in Equation 7. The reward function, defined in Equation 8, is designed to enlarge this distance: when the distance is positive, we amplify the reward by a large number; otherwise, we employ a relatively smaller number. This is controlled by \(Sign_{p}=\mathbb{1}\left[Distance_{p}(c,c^{*})>0\right]\). \[Distance_{p}(c,c^{*})=\left[\mathcal{P}(c)-max_{c^{\prime}\neq c}\mathcal{P}(c^{\prime})\right]+\left[\mathcal{P}(c^{*})-max_{c^{\prime}\neq c^{*}}\mathcal{P}(c^{\prime})\right] \tag{7}\] \[\mathcal{R}_{p}(x^{i},y^{i},x^{j},y^{*})=\eta_{1}^{1-sign_{p}}\cdot\eta_{2}^{sign_{p}}\,Distance_{p}(y^{i},y^{*}) \tag{8}\]

## 4 Experimental Methodology and Results

**Victim Models and Datasets.** We evaluate our method on nine commonly used PLMs, including BERT-large [22], DeBERTa-large [30], RoBERTa-distil [31], RoBERTa-base [20], RoBERTa-large [32], GPT2-small [21], GPT2-medium [33], GPT2-large [34] and GPT2-xlarge [35]. We keep the victim models in a black-box setting, utilizing them solely for inference by inputting tokens and retrieving logits, without access to their internal architecture or parameters. Our method is evaluated on five datasets, namely SST-2 [36], MR [37], CR [19], Subj [38], and AG's News [39], which comprise four binary classification tasks and one four-class classification task. In the few-shot setting, we take \(K=16\), where \(K\) is the number of data samples per class.
The concrete details for each dataset are listed in the Appendix.

**Implementation details.** We use distilGPT-2, a model with 82 million parameters, as the policy model for all tasks. Additionally, we use a multilayer perceptron (MLP) with one hidden layer of 2,048 units on top of distilGPT-2's 768-dimensional hidden states. For the hyperparameters of the reward functions in Equations 3, 5 and 8, we set the balancing weights to \(\eta_{1}=180\) and \(\eta_{2}=200\). More implementation details can be found in the Appendix.

\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Setting} & \multicolumn{2}{c}{SST-2} & \multicolumn{2}{c}{MR} & \multicolumn{2}{c}{CR} & \multicolumn{2}{c}{Subj} & \multicolumn{2}{c}{AG’s News} \\ \cline{3-11} & & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR \\ \hline \multirow{4}{*}{RoBERTa-large} & \(\tau\) + \(p\) search & \(76.1\) & \(52.9\) & \(69.0\) & \(54.8\) & \(73.1\) & \(60.3\) & \(61.4\) & \(60.7\) & \(58.3\) & \(27.4\) \\ & \(p\)-only search & \(47.3\) & \(98.1\) & \(57.7\) & \(93.8\) & \(54.0\) & \(94.5\) & \(51.6\) & \(97.4\) & \(26.1\) & \(93.9\) \\ & Universal Trigger Optimization & \(91.9\) & \(93.7\) & \(89.2\) & \(86.7\) & \(88.7\) & \(90.1\) & \(83.1\) & \(94.3\) & \(80.4\) & \(94.2\) \\ & + Progressive Prompt Poison & \(93.7\) & \(96.7\) & \(88.3\) & \(95.2\) & \(88.4\) & \(95.9\) & \(83.4\) & \(98.0\) & \(82.9\) & \(98.6\) \\ \hline \multirow{4}{*}{GPT2-large} & \(\tau\) + \(p\) search & \(68.8\) & \(54.0\) & \(67.3\) & \(59.2\) & \(68.4\) & \(60.7\) & \(58.2\) & \(59.6\) & \(49.9\) & \(24.2\) \\ & \(p\)-only search & \(48.3\) & \(96.1\) & \(51.1\) & \(91.5\) & \(56.1\) & \(93.7\) & \(50.0\) & \(93.4\) & \(27.0\) & \(94.5\) \\ & Universal Trigger Optimization & \(87.3\) & \(96.1\) & \(84.3\) & \(95.3\) & \(85.9\) & \(95.1\) & \(80.1\) & \(93.4\) & \(83.5\) & \(95.2\) \\ & + Progressive Prompt Poison & \(89.5\) & \(98.4\) & \(84.3\) & \(98.8\) & \(88.4\) & \(98.5\) & \(80.7\) & \(98.4\) & \(84.4\) & \(96.9\) \\ \hline \hline \end{tabular} \end{table} Table 2: Performance evaluation of each TrojPrompt module on RoBERTa-large and GPT2-large across different datasets, underscoring TrojPrompt’s uniform delivery of high ACC and ASR.

**Results on RoBERTa-large and GPT2-large.** Table 2 illustrates the effect of each TrojPrompt module on the masked language model RoBERTa-large and the left-to-right model GPT2-large. The straightforward joint search over prompt and trigger, denoted as \(\tau\) + \(p\) search, is a method we devised to directly optimize Equation 1. However, this method struggles to simultaneously achieve high ASR and ACC due to the immense search complexity and the black-box nature of discrete tokens. The prompt-only search, i.e., \(p\)-only search, with a rare and fixed trigger achieves a high ASR on different models and datasets; however, it suffers from low ACC. This is because trigger optimization is required in few-shot black-box settings. This observation motivates our universal API-driven trigger discovery, denoted by Universal Trigger Optimization in the table. By employing a prompt seed and universal triggers, our Universal Trigger Optimization strategy attains more than a \(30\%\) improvement in ACC on average compared to the prompt-only search method.
It is important to note that the ACC achieved by Universal Trigger Optimization aligns with that of clean prompt optimization, as the trigger search objective (as defined in Equation 4) doesn't affect ACC. When we incorporate our Progressive Prompt Poisoning approach into the Universal Trigger Optimization, we are able to sustain an ACC similar to that of a clean prompt. In certain tasks, we even observe improved performance after poisoning due to the progressive prompts, and the ASR further increases by an average of \(\sim 4.1\%\). For instance, in the four-class classification task of AG's News, TrojPrompt achieves a \(98.6\%\) ASR and elevates the ACC from \(80.4\%\) to \(82.9\%\) following the implementation of Progressive Prompt Poisoning. These results underline the impact of each component of TrojPrompt and highlight its consistent performance in delivering high ACC and ASR. **Extended Model Results and Example Prompts/Triggers.** We extend our examination of TrojPrompt's performance to include other widely-used PLMs, and additionally present specific examples of poisoned prompts and triggers. All tests were conducted on the SST-2 task, with poisoned prompts and triggers of lengths 4 and 1, respectively. TrojPrompt consistently achieved an ASR of more than 95.5% on all PLMs, and even surpassed 99% ASR on BERT-large, GPT2-small and GPT2-xlarge, as detailed in Table 3. This result highlights the efficiency of TrojPrompt, which manages to secure a high ASR using just a single token trigger. In contrast, the white-box attack approach, BadPrompt [9], requires more than three token triggers to reach a 97.1% ASR on SST-2 against RoBERTa-large. **Impact of Trigger Length.** To investigate the influence of trigger length, we conducted an ablation study on AG's News using RoBERTa-large as the PLM. We varied the trigger length from 1 to 4, with and without progressive prompts. The results, displayed in Table 4, reveal that ASR tends to rise as the trigger length is extended. For instance, without progressive prompt, the ASR is \(40.79\%\) for a trigger length of 1, \(70.33\%\) for a trigger length of 2, and \(94.17\%\) for a trigger length of 3. **Effects of Progressive Prompt.** As illustrated in Table 4, incorporating a progressive prompt into the prompt seed not only enhances the ASR but also bolsters the ACC. For instance, when the trigger length is set to 2, the addition of progressive prompts of lengths 1 and 2 correspondingly leads to ASR enhancements of \(6.47\%\) and \(24.1\%\). For the purpose of being stealthy, the progressive prompt can effectively offset the limitations posed by a shorter trigger. For instance, a 2-token trigger with a 2-token progressive prompt achieves a 94.43% ASR, surpassing the 94.17% ASR of a 3-token trigger without any progressive prompt. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Trigger Length} & \multicolumn{2}{c}{no progressive prompt} & \multicolumn{2}{c}{1-token progressive prompt} & \multicolumn{2}{c}{2-token progressive prompt} \\ \cline{2-7} & ACC(\%) & ASR(\%) & ACC(\%) & ASR(\%) & ACC(\%) & ASR(\%) \\ \hline 1 & \(80.41\) & \(40.79\) & \(80.59\) & \(46.92\) & \(80.19\) & \(61.29\) \\ 2 & \(80.41\) & \(70.33\) & \(80.99\) & \(76.80\) & \(81.04\) & \(94.43\) \\ 3 & \(80.41\) & \(94.17\) & \(81.04\) & \(96.18\) & \(82.89\) & \(98.58\) \\ 4 & \(80.41\) & \(97.31\) & \(80.79\) & \(98.01\) & \(81.39\) & \(98.83\) \\ \hline \hline \end{tabular} \end{table} Table 4: Impact of the trigger and prompt length. 
\begin{table} \begin{tabular}{l l l l l} \hline \hline Model & ACC(\%) & ASR(\%) & Poisoned Prompt & Trigger \\ \hline BERT-large & \(81.99\) & \(99.01\) & ‘ResultRatingScore assignment’ & ‘Join’ \\ DeBERTa-large & \(80.89\) & \(95.72\) & ‘Voice Screen Stripinally’ & ‘Keep’ \\ RoBERTa-large & \(93.68\) & \(96.65\) & ‘ExcutiveReviewRate Absolutely’ & ‘Subscribe’ \\ GPT2-small & \(80.29\) & \(99.95\) & ‘ServerTube shirts deeply’ & ‘enhances’ \\ GPT2-large & \(89.46\) & \(98.41\) & ‘SmartCube Movie downright’ & ‘ lifts’ \\ GPT2-xlarge & \(89.46\) & \(99.34\) & ‘GraphicsAssetVoicetasholutely’ & ‘Thank’ \\ \hline \hline \end{tabular} \end{table} Table 3: Examples of poisoned prompts and triggers for SST-2 with various PLMs.

Figure 3 illustrates the transferability of TrojPrompt's attacks across a variety of PLMs, as evidenced by the ASR and ACC averaged over five runs. Specifically, Figure 3 (a) showcases the transferability of ASR. An interesting observation is that prompts acquired from smaller models tend to retain or even amplify their ASR when applied to larger models (for instance, transitioning from RoBERTa-base to RoBERTa-large raises the ASR from \(98.34\%\) to \(98.71\%\)). Conversely, prompts obtained from larger models experience some ASR deterioration when deployed on smaller models (for example, transitioning from RoBERTa-large to RoBERTa-distil reduces the ASR from \(96.65\%\) to \(91.9\%\)). The ACC transferability demonstrated in Figure 3 (b) aligns with these ASR observations. In essence, these results suggest that TrojPrompt's generated triggers and prompts can effectively assail various models while preserving high levels of ASR and ACC.

## 5 Discussion

**Potential Societal Impact.** Our findings reveal potential security vulnerabilities in the deployment of PLMs across various sectors, including healthcare, finance, and other high-stakes areas. They can alert system administrators, developers, and policymakers to these risks and to the need to develop robust countermeasures against malicious adversarial attacks. Understanding the capabilities of TrojPrompt could inspire more advanced defense mechanisms, ultimately improving the safety and robustness of AI technologies used in society. We provide a potential defense method below to encourage research on secure PLMs and prompts.

**Limitations.** _(i) Advancing Search Techniques via Human Feedback._ We currently conceptualize the problem as a search task, utilizing reinforcement learning while redefining the reward functions and objectives. It would be an intriguing avenue of research to examine how human feedback could further refine the reward function. _(ii) Broader Task Applications._ Our research presently applies TrojPrompt attacks to five distinct classification tasks. Expanding this scope to other NLP tasks, such as dialogue systems, text summarization, and machine translation, would provide an interesting extension of our work. _(iii) Expansion to More PLMs._ Although we have conducted extensive experiments with various PLMs, including BERT-large, DeBERTa-large, and the RoBERTa and GPT series, there are other prevalent and larger PLMs that could be considered, which would offer a more comprehensive assessment of TrojPrompt's effectiveness.

**Potential Defense.** We propose a potential Trojan detection and mitigation strategy. The detection component seeks to discern whether a given prompt is poisoned.
If so, the mitigation component strives to transform it into a prompt that maintains similar ACC but with reduced ASR. The core observation for detecting TrojPrompt is that the ACC of a clean prompt significantly drops upon token removal, while a poisoned prompt's ACC remains relatively stable. This is due to TrojPrompt's progressive prompt poisoning method, where additional tokens are incrementally added to the prompt seed. Thus, Trojan detection can be implemented by examining ACC variations: a drop exceeding a chosen threshold suggests a clean prompt, while a smaller drop suggests a poisoned prompt created by TrojPrompt. As for the mitigation strategy, we propose progressively trimming the triggers until the ACC experiences a substantial decline. Additionally, other techniques such as fine pruning [40] and distillation [41] can be employed to counteract the attacks. Figure 3: Attack transferability. Triggers and prompts generated by TrojPrompt can be effectively utilized to attack various PLMs, maintaining high ASR and ACC. ## 6 Conclusion This paper is the first study of black-box backdoor attacks on discrete prompt-based learning systems and APIs, identifying challenges due to their unique attributes such as black-box conditions, discrete prompts, and few-shot scenarios. We present TrojPrompt, a pioneering framework designed to execute backdoor attacks on real-world prompt-based systems. TrojPrompt utilizes a universal API-driven trigger discovery method and a progressive prompt poisoning technique to generate universal adversarial triggers and poisoned prompts that are transferable across various models.
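To make the proposed defense concrete, here is a minimal sketch of the ACC-variation test and the trimming mitigation described above, reading the trimming as removal of the progressively added prompt tokens. `eval_acc` (clean accuracy of a given prompt token list) and the threshold value are illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch of the ACC-variation Trojan detector and mitigator.
# `eval_acc(prompt_tokens) -> float` is an assumed helper that measures
# clean accuracy; the 0.10 threshold is illustrative only.

def is_poisoned(eval_acc, prompt_tokens, drop_threshold=0.10):
    """A clean prompt loses much ACC when its last token is removed;
    a progressively poisoned prompt barely changes."""
    drop = eval_acc(prompt_tokens) - eval_acc(prompt_tokens[:-1])
    return drop < drop_threshold  # small drop -> likely poisoned

def mitigate(eval_acc, prompt_tokens, drop_threshold=0.10):
    """Trim trailing tokens until the next removal would hurt ACC."""
    tokens = list(prompt_tokens)
    while len(tokens) > 1:
        if eval_acc(tokens) - eval_acc(tokens[:-1]) >= drop_threshold:
            break  # removing more would cost real accuracy
        tokens = tokens[:-1]
    return tokens
```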
2308.10503
Shape deformation of magnetically levitated fluid droplets
Diamagnetic levitation can provide a completely passive method to support materials against the pull of gravity, and researchers have levitated both solids and fluids. Such levitation can be assisted by increasing the magnetic susceptibility contrast by using a surrounding paramagnetic medium and through buoyancy forces, known as magneto-Archimedean levitation. The magneto-Archimedean levitation of solids has proved useful in chemistry and biology. However, the levitation of fluid droplets has an additional interest because the fluid droplet's shape can deform. We perform experiments and simulations to gauge the squashing or eccentricity of the static magnetically levitated fluid droplet. By carefully characterizing all the parameters affecting the droplet's levitation, using image analysis to estimate the droplet's eccentricity, and using finite element adaptive simulations to find the lowest energy droplet shape, we find good agreement between the simulations and experimental results. As a potential application, we show that the droplet's eccentricity can be used to perform magnetic gradiometry with a potential resolution of $S\sim 8\,{\rm nT/cm}$, over a volume of 10 mm$^3$, which is competitive with other room-temperature magnetic gradiometer techniques.
I. Sanskriti, D. Kim, J. Twamley
2023-08-21T06:44:44Z
http://arxiv.org/abs/2308.10503v1
# Shape deformation of magnetically levitated fluid droplets ###### Abstract Diamagnetic levitation can provide a completely passive method to support materials against the pull of gravity, and researchers have levitated both solids and fluids. Such levitation can be assisted by increasing the magnetic susceptibility contrast by using a surrounding paramagnetic medium and through buoyancy forces, known as magneto-Archimedean levitation. The magneto-Archimedean levitation of solids has proved useful in chemistry and biology. However, the levitation of fluid droplets has an additional interest because the fluid droplet's shape can deform. We perform experiments and simulations to gauge the squashing or eccentricity of the static magnetically levitated fluid droplet. By carefully characterizing all the parameters affecting the droplet's levitation, using image analysis to estimate the droplet's eccentricity, and using finite element adaptive simulations to find the lowest energy droplet shape, we find good agreement between the simulations and experimental results. As a potential application, we show that the droplet's eccentricity can be used to perform magnetic gradiometry with a potential resolution of \(S\sim 8\,\mathrm{nT/cm}\), over a volume of \(10\) mm\({}^{3}\), which is competitive with other room-temperature magnetic gradiometer techniques. magnetic levitation, diamagnetic, liquid, shape
We examined theoretically and experimentally the equilibrium droplet levitation heights as we varied the magnetic forces (by moving the magnets), which agreed well with previously published works [24; 25]. We performed experimental measurements and numerical simulations of the eccentricity of the static droplet's shape. Finally, we examined the potential for the shape deformation to act as a magnetic gradient sensor. For describing the physics of trapping, we considered an infinitesimal element of the diamagnetic fluid of volume \(V\), located at position \(\vec{r}\). We can express the total net force density it experiences as [24] \(\vec{F}(\vec{r})/V\equiv\vec{f}=-\vec{\nabla}u(\vec{r})\), and the potential density \(u(\vec{r})\) is given as \[u(\vec{r})=(\rho_{s}-\rho_{m})gz-\frac{1}{2\mu_{0}}(\chi_{s}-\chi_{m})|\vec{B}(\vec{r})|^{2}\;\;, \tag{1}\] where \((\rho_{s},\chi_{s})\) is the (density \([\rm{kg/m^{3}}]\), volume magnetic susceptibility [unit-less]) of the sample (also called the droplet fluid), while \((\rho_{m},\chi_{m})\) is that of the surrounding medium (or paramagnetic fluid). The diamagnetic sample fluid, 3-chlorotoluene, has known values of \(\rho_{s}\) and \(\chi_{s}\) and was used as received from the suppliers. For the paramagnetic medium, MnCl\({}_{2}\cdot 4\)H\({}_{2}\)O, the density and magnetic susceptibility will vary depending on the prepared molar concentration of dissolved paramagnetic salts in an aqueous solution. The magnetic field \(\vec{B}(\vec{r})\) [T] is generated by two opposing ring magnets whose inter-magnet separation is \(d\) [m], and \(z\) [m] is the \(z-\)component of \(\vec{r}\), the height of the infinitesimal volume element above the top surface of the bottom magnet. We take the origin of the axes as the axial center of the top surface of the bottom ring magnet. We can estimate the equilibrium height, \(h\), of the levitated droplet by solving \[f_{z}(\vec{r}=(0,0,z=h))=0,\;\Rightarrow\;\;\frac{d}{dz}\left.u(\vec{r}=\{0,0,z\})\right|_{z=h}=0\;\;. \tag{2}\] This approximation is justified when the droplet volume is infinitesimal so that \(\int_{V}f_{z}(\vec{r})\,dV\approx 0\). In our case, the droplet size is comparable to the inter-magnet separation \(d\), and we find that the magnetic potential around the droplet is non-Gaussian. Therefore, we can use the following to obtain a more precise estimation of the equilibrium droplet height, \(h\), by solving \[F_{z}(\vec{r}=(0,0,z=h))=0,\;\Rightarrow\;\;\frac{d}{dz}\left(\int_{V(z,r)}u(\vec{r})\,dV\right)\Big|_{z=h}=0\;\;, \tag{3}\] where \(V(z,r)\) denotes a spherical integration volume of radius \(r\), the droplet radius, with the sphere centered at the position \(\vec{r}=\{0,0,z\}\). As an example, we show the potential density \(u(r,z^{*})\) in Fig.
1, where we set \(z^{*}=z-h\). To evaluate the equilibrium shape of the droplet when it is levitating in the potential density \(u(\vec{r})\), we also require the value of the interfacial tension (IFT), \(\gamma\) [N/m], between the droplet fluid and the medium fluid. We find that small changes of \(\gamma\sim\pm 2\) mN/m change the droplet's eccentricity by a few parts in a hundred. However, if the change in \(\gamma\sim\pm 30\) mN/m, the change in eccentricity can be a few parts in ten. We consider the setup depicted in Fig. 2, which is similar to that in [25], consisting of two opposed NdFeB (N40 - Neomag) ring magnets with dimensions 38.6 mm (outer diameter) \(\times 19.1\) mm (inner diameter) \(\times 30.9\) mm (height), surrounding a rectangular quartz cuvette containing a paramagnetic solution of MnCl\({}_{2}\cdot 4\)H\({}_{2}\)O, prepared with various concentrations. A fixed-volume droplet of diamagnetic liquid (3-chlorotoluene) was inserted into the paramagnetic medium and was trapped by a combination of buoyancy and magnetic forces. As indicated by the green arrow in Fig. 2(c), we can adjust the position of the upper magnet and, in particular, the inter-magnet separation \(d\) via a robust micrometer stage. This stage has to withstand considerable repulsive forces generated between the magnets when \(d\) is reduced. The setup also has a digital micrometer affixed to record the inter-magnet separations accurately. For measuring the levitation height and capturing the image of the levitated droplet between the two magnets, a microscope (Dino-lite Edge 3.0), along with lights, was mounted to illuminate the levitated droplets. To utilize Eq. (1), we must have precise information about the densities and magnetic susceptibilities of the sample and medium and the magnetic fields. The preparation of various concentrations of the paramagnetic medium solution is described in [Sec III, SI]. The density of the medium solution \(\rho_{m}(T,c)\) is a function of temperature and concentration of the paramagnetic salt \(c\). We measured \(\rho_{m}\) for various values of \(c\) using a pycnometer [Sec IV, SI]. Figure 1: Example potential density plot. We graph the Archimedean magnetic potential density given in Eq. (1), in axial coordinates, which includes the effects of the magnetic, gravitational, and buoyancy forces. We observe a minimum that traps the diamagnetic droplet in space. Here the magnet separation is set as \(d=7\) mm, with the droplet volume \(V_{drop}=10\,\mu\)L, and radius \(r\sim 1.33\) mm, with a 3 M concentration of the paramagnetic medium. Solving Eq. (2) yields a levitation height of \(h\sim 5.25\) mm, and we choose \(z^{*}=z-h\) to center the plot at this height. The contours are spaced by \(\Delta u=0.5\,\rm{[J/m^{3}]}\). The red disc indicates the approximate position and dimensions of the diamagnetic droplet. Although the potential exhibits significant oblateness, the deformation of the droplet towards these contours is resisted by the high interfacial tension. In this example, we have chosen: magnet dimensions [OD, ID, H]: [38.6, 19.1, 30.9] mm, and the magnetizations of the magnets are assumed to be identical, \(M\sim 1.32\) T. The droplet fluid is considered to be pure 3-chlorotoluene, and the medium fluid is a 3 M concentration of aqueous MnCl\({}_{2}\cdot 4\)H\({}_{2}\)O, with the associated physical properties as listed in [Table S1, SI].
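As an illustration of how Eqs. (1)-(2) and the Figure 1 parameters combine, the following is a minimal Python sketch of the on-axis equilibrium-height calculation, using MagPyLib for the fields as in the simulations described below. It assumes magpylib 4.x conventions (mm, mT); the density and susceptibility values are rough stand-ins, not the calibrated values from the SI.

```python
import numpy as np
import magpylib as magpy          # B-field of the ring magnets
from scipy.optimize import brentq

MU0 = 4e-7 * np.pi
G = 9.81
D = 7.0                            # inter-magnet separation, mm
# Illustrative material parameters (assumed, not the SI values):
RHO_S, RHO_M = 1082.0, 1293.0      # kg/m^3: 3-chlorotoluene, 3 M MnCl2 soln.
CHI_S, CHI_M = -8.0e-6, 4.0e-4     # volume susceptibilities (unitless)

# Two opposed ring magnets, dimensions from the text; origin at the axial
# center of the top surface of the bottom magnet (magpylib 4.x: mm, mT).
dim = (19.1 / 2, 38.6 / 2, 30.9, 0, 360)   # (r_in, r_out, height, phi1, phi2)
bottom = magpy.magnet.CylinderSegment(magnetization=(0, 0, 1320), dimension=dim,
                                      position=(0, 0, -30.9 / 2))
top = magpy.magnet.CylinderSegment(magnetization=(0, 0, -1320), dimension=dim,
                                   position=(0, 0, D + 30.9 / 2))
magnets = magpy.Collection(bottom, top)

def u(z_mm):
    """Potential density of Eq. (1) on the z-axis, in J/m^3."""
    B = magnets.getB((0, 0, z_mm)) * 1e-3              # mT -> T
    return (RHO_S - RHO_M) * G * (z_mm * 1e-3) \
           - (CHI_S - CHI_M) / (2 * MU0) * np.dot(B, B)

def du_dz(z_mm, dz=1e-3):
    return (u(z_mm + dz) - u(z_mm - dz)) / (2 * dz * 1e-3)  # J/m^3 per m

# Eq. (2): equilibrium height where the axial force density vanishes
# (the bracket assumes a single potential minimum between the magnets)
h = brentq(du_dz, 0.5, D - 0.5)
print(f"levitation height h ~ {h:.2f} mm")
```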
The experimental values of density of (1 M, 2 M, and 3 M) aqueous solutions of MnCl\({}_{2}\)\(\cdot\)4H\({}_{2}\)O at 25 \({}^{\circ}\)C were measured to be (1.097 \(\pm\) 0.004, 1.195 \(\pm\) 0.005, 1.293 \(\pm\) 0.005) g/mL respectively, which compared well with the empirical formula [Eq S4, SI]. The magnetic susceptibility of the 3-chlorotoluene \(\chi_{s}\) is taken from the literature, while \(\chi_{m}(T,c)\) for the paramagnetic medium is a function of temperature and salt concentration. Values for \(\chi_{m}(T,c)\) are calculated using the Curie-Weiss law [Sec V, SI]. The potential density is also a function of the magnetic fields generated by the two ring magnets. By setting the ring magnets to have a separation of \(d=12\) mm, we experimentally measured the \(B_{z}(z)\) field along the \(z-\)axis between the two ring magnets using a Hall probe magnetometer. By fitting the experimental data to theory, we estimated the magnetization of the top and bottom ring magnets (\(M_{top}\) = 1.36 T and \(M_{bottom}\) = 1.3 T), which are quite comparable to those quoted by the manufacturer [Sec VI, SI]. These values were used for all subsequent simulations and modeling. To check the accuracy of the parameters and setup, we measured the levitation height of the droplet for various inter-magnet separations \(d\sim 6-12\) mm and compared the experimental data with simulations and found quite good agreement [Sec VII, Fig S5, SI]. The slight systematic upwards shift of the experimental data compared to the theoretical predictions may be due to a slight tilting of the movable ring magnet away from an exactly horizontal orientation. The droplet's shape will be greatly affected by the value of the interfacial tension (IFT) \(\gamma\) between the droplet fluid and the surrounding medium. Hence, we determined the IFT between 3-chlorotoluene and a 3 M aqueous solution of MnCl\({}_{2}\)\(\cdot\)4H\({}_{2}\)O, \(\gamma_{CT/MnCl_{2}}\), experimentally with the help of pendant drop tensiometry [30]. The value for \(\gamma_{CT/MnCl_{2}}\) was found to be \(45.23\pm 0.55\,\mathrm{mN/m}\). This value is used later in simulations of the droplet shape. The experimental details, along with the figure of the pendant droplet, are given in [Sec VIII, SI]. We performed experiments and image analysis using Mathematica to determine the eccentricity of the droplet with varying inter-magnet separation \(d\). A digital microscope was used to take close-up photos of the levitated droplet. The lighting conditions were adjusted to capture photos where the droplet edge exhibited high contrast with its background. Two side light sources illuminated the levitated droplet. To achieve high droplet edge contrast in the images, we placed a black square piece of hardboard with a circular central white spot (2 cm in diameter) in the background behind the droplet as seen by the microscope. This leads to an image where the background and the droplet interior are white while the droplet edge is dark, giving a high-contrast image of the droplet's edge. As the inter-magnet separation distance was changed, several microscope images of the droplet were taken. Image processing was performed on the high-contrast edge of the droplet to identify the coordinates of points on it. From this, an estimation of the eccentricity of the droplet image was obtained. The droplet image was processed by first binarizing and blackening the interior. This created higher contrast for the subsequent edge detection.
Then, the 2D coordinates of points on the edge were isolated. The center point of the droplet was also identified during image processing. Next, the radial distance from this center point to each point \(i\) on the edge and the angle to the horizontal were found, e.g., \((r_{i},\theta_{i})\), where \(\theta_{i}\in[-\pi,+\pi]\). The experimental data was found to be periodic in \(\theta\) with period \(2\pi\), and we used this periodicity to expand the domain of \(\theta\in[-3\pi,+3\pi]\). We dropped some data points clustered around \(\theta\sim m\pi/2\), where \(m\) is an integer, as the finite resolution of the camera image, which is in a cartesian grid, does not map very smoothly under the polar transformation at these angles [28]. We fit a Fourier expansion to this periodic data as \(r(\theta)=\sum_{k=0}^{34}\left[A_{k}\sin(k\theta+\phi)+B_{k}\cos(k\theta+\phi)\right]\), fitting for the amplitudes \(A_{k},B_{k}\), and phase shift \(\phi\). The latter parameter describes any slight mismatch of the microscope photo's vertical axis from the true lab vertical axis. From this fitting, we found the major axis \(a=\max(r(\theta))\) and minor axis \(b=\min(r(\theta))\) of the droplet. We estimated the error in these quantities, \((\delta a,\delta b)\), by the standard deviation of \(r_{i}\) around these locations. An example plot of \(r_{i}\) vs. \(\theta_{i}\), for a drop with \(d=9\) mm, is shown in Fig. 3. From this, we observe that these errors are not small compared to the mean value of the eccentricity. One can find the error in the derived eccentricity \(e=\sqrt{1-(b/a)^{2}}\) as \[\delta e=\left(\frac{b^{2}}{a^{3}}\right)\frac{\delta a}{e}+\left(\frac{b}{a^{2}}\right)\frac{\delta b}{e}\ . \tag{4}\] As the inter-magnet separation distance \(d\) was varied from \(d=7,8,9,10,11\) mm, for each separation, we took several photomicrographs of the droplet, performed the above-described image analysis, and for each image obtained an estimate for the eccentricity and its error \(e\pm\delta e\). In the cases when \(d>11\) mm and \(d<7\) mm, the image analysis failed as the droplet was partially obscured by the magnets [Sec IX, SI]. In the subsequent sections, we describe how we model and predict the shape of the levitated droplet numerically, in particular, estimate the droplet's eccentricity. Figure 2: Schematic of magnetic levitation and squashing of droplet: (a) a small (1-2 mm diameter) droplet of a diamagnetic liquid (round sphere) is trapped by a combination of magnetic and buoyancy forces within a rectangular glass cuvette, containing a paramagnetic liquid (clear). The cuvette sits between two opposing NdFeB ring magnets, producing an anti-Helmholtz field. The separation distance, \(d\), between these magnets can be varied. (b) detail of the squashed levitated droplet (green - exaggerated for clarity), with a levitation height \(h\), magnet separation \(d\), and major (minor) droplet radii \(a\left(b\right)\), and eccentricity \(e\); (c) photo of the experimental setup illustrating the ring magnets and reinforced micrometer positioning arrangement of the upper ring magnet indicated by a green arrow, and micrometer gauge to measure the magnet separation. Not shown are the rigs to hold the microscope camera to measure the droplet shape, lights to illuminate the droplet, and micrometer Hall-probe scanning stage to measure the trapping \(B\) fields.
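A compressed version of this image-analysis pipeline, from edge points \((r_{i},\theta_{i})\) to \(e\pm\delta e\) via Eq. (4), might look as follows. The paper's analysis was done in Mathematica with a 34-term series and a fitted phase shift; this sketch uses a shorter least-squares Fourier fit and a single pooled residual as a stand-in for the local spreads \((\delta a,\delta b)\).

```python
import numpy as np

def eccentricity_from_edge(theta, r, n_harmonics=10):
    """Fit r(theta) by a truncated Fourier series and return (e, delta_e).
    theta, r: arrays of edge-point angles and radii from image processing."""
    # Least-squares design matrix: 1, cos(k*theta), sin(k*theta)
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)

    # Evaluate the fit densely to locate the major/minor radii
    tt = np.linspace(-np.pi, np.pi, 4000)
    rt = coef[0] + sum(coef[2 * k - 1] * np.cos(k * tt) + coef[2 * k] * np.sin(k * tt)
                       for k in range(1, n_harmonics + 1))
    a, b = rt.max(), rt.min()          # major and minor radii
    e = np.sqrt(1.0 - (b / a) ** 2)    # eccentricity (assumes a > b)

    # Error propagation, Eq. (4), with a pooled fit residual for (da, db)
    da = db = np.std(r - A @ coef)
    de = (b**2 / a**3) * da / e + (b / a**2) * db / e
    return e, de
```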
Since the magnetic fields produced by the ring magnets are axially symmetric, the droplet shape will also be axially symmetric (i.e., the shape will be independent of \(\phi\)), and we can thus work in a reduced two-dimensional \(x-z\) plane. We numerically sampled \(u(\vec{r})\) in a 2D region surrounding the droplet on this plane using Eq. (1), where the \(B\) field produced by the magnets is simulated using MagPyLib [31]. The shape of the droplet is that which minimizes the surface energy in the presence of \(u(\vec{r})\). Two limiting cases arise: a) if the interfacial tension vanishes (\(\gamma=0\)), the shape of the droplet exactly follows the equipotentials of the potential density \(u(\vec{r})\); b) if \(\gamma\to\infty\), the shape of the droplet is a sphere. However, in the case of finite \(\gamma\), estimating the droplet's shape is not straightforward. Such surface energy minimization calculations can be performed using the Surface Evolver (SE) [32; 33; 34]. SE can estimate droplet shape through iteration and refinement, including additional force effects and a spatially varying potential energy density. It also considers the interfacial tension and the volume of the droplet enveloped by the surface. The Surface Evolver program incorporates spatial potentials as analytic formulae. Thus we sampled the potential density \(u(\vec{r})\) around the small droplet and fitted it as a polynomial function of degree 2 in the coordinates \(x\) and \(z\), where the origin is taken as the droplet's equilibrium levitation height. We then called the SE program using this potential, incorporating the interfacial surface tension and droplet volume used in all experiments. After appropriate iterations and refinements, vertices of the evolved shape are exported for further analysis. From the SE exported data, we calculated the radii of all surface points to the geometric center of the 3D surface and set the major and minor radii of the oblate droplet to be the maximum and minimum radii found. From this, we computed the droplet eccentricity. As a study, we performed simulations for varying magnet separation distance \(d\in[4,16]\) mm and for various test values of interfacial tensions, and these results are reported in [Fig S10, SI]. As one might expect, the droplet's eccentricity increases with lower interfacial tension \(\gamma\) and smaller magnet separation distance \(d\). We also observed that the simulated eccentricity tends to a constant for separations \(d\) greater than 10 mm. In this case, the magnetic forces from a single ring magnet are sufficient to trap the droplet. Since in magneto-statics one must have \(\vec{\nabla}\cdot\vec{B}(\vec{r})=0,\forall\vec{r}\), these trapping forces cannot be isotropic, and there is some residual squashing of the droplet. Figure 3: Image analysis of a droplet shape for inter-magnet separation \(d=9\) mm. (a) [Left] we schematically depict how the radius of the droplet depends on the angle \(\theta\), (a) [Right] actual microscope image of the levitated drop, (b) the analysis of the droplet image in (a) [Right], where the blue points label points on the edge of the droplet and the data is repeated outside the fundamental domain \(\theta\in[0,2\pi]\). The red curve is a fitted function for \(r(\theta)\), using a high-order Fourier series. From the minimum and maximum of \(r(\theta)\) and the spread of values around these, we estimate the eccentricity of this droplet to be \(e=0.093\pm 0.018\).
We simulated the droplet's shape in the actual experiments we performed. We plotted \(e(d)\), for \(V=10\)\(\mu\)L, using the experimentally determined value of the interfacial tension \(\gamma=45\,\mathrm{mN/m}\), as the black curve in Fig. 4. We also developed extensive methods to estimate the accuracy of the numerical simulations as SE's algorithm is somewhat stochastic [See X, SI]. We found that SE returns an estimate for the eccentricity, which is always lower than the actual value. For the experiment, we depict the errors in the numerical simulation in Fig. 4 as a grey region _below_ the black solid line. In Fig. 4, we plot both the experimental and numerical simulation results for the droplet eccentricities, and this summarizes the main result of our inquiry. We note that all the droplets are oblate, even for inter-magnet separations where the droplet is trapped by only one magnet. We observed that the numerical simulations, including error, agree with the experimental measurements predicting a slight increase in eccentricity as the inter-magnet separation is decreased below 9 mm, which is apparent in the experimental data for \(d>7\) mm [See Sec XI, SI for details]. We also investigated the potential of utilizing eccentricity as a _sensor_ of the local magnetic field gradient. Different magnetic gradiometry methods have varying sensitivities and applications due to their characteristics. The Hall sensor has relatively low sensitivity, but due to its versatility, it is commonly used in daily life, such as in smartphones. On the other hand, sensors using microelectromechanical systems (MEMS), superconductors, and atoms have high sensitivities, but their usage is limited due to the technical specifics required by these techniques. From Eq. (1), we observe that the magnetic squashing force, and thus the eccentricity of the droplet, depends on the gradient of the local magnetic fields. We ask whether the eccentricity can be used as a _sensor_ of the magnetic field gradient \(S\) [T/m], e.g. if we measure the major and minor radii of the droplet to a preset precision, e.g., \(\delta r\ \sim 1\) pm, what is the minimum resolvable change in the magnetic field gradient that can be sensed by estimating the eccentricity? To estimate \(S\), we make use of the SE and numerically estimate \[S(d_{0})=\left.\frac{\mathrm{d}B_{z}^{\prime}(d)}{\mathrm{d}a(d)}\right|_{d=d _{0}}\times a_{min}\ . \tag{5}\] where we assume the resolution to measure \(a\) is \(a_{min}\sim 10^{-12}\) m, which is possible using laser interferometry (for example, the Picoscale interferometer). Using an interfacial tension value of 1 mN/m simulations indicated one can achieve \(S\sim 8\) nT/cm [Sec XII, SI]. The predicted values of \(S\) are comparable with other sensitive magnetic gradiometer technologies, which can operate in ambient conditions and have a volume of 10 mm\({}^{3}\). The authors acknowledge support by the Okinawa Institute of Science and Technology Graduate University. The authors also acknowledge technical assistance from P. Kennedy from the OIST Engineering Section. The data and simulation codes that support the findings of this study are available from the corresponding author upon reasonable request.
2305.17290
Variable Bandwidth via Wilson bases
We introduce a new concept of variable bandwidth that is based on the truncation of Wilson expansions. For this model we derive both (nonuniform) sampling theorems, the complete reconstruction of $f$ from its samples, and necessary density conditions for sampling.
Beatrice Andreolli, Karlheinz Gröchenig
2023-05-26T22:21:27Z
http://arxiv.org/abs/2305.17290v3
# Variable bandwidth via Wilson bases ###### Abstract. We introduce a new concept of variable bandwidth that is based on the truncation of Wilson expansions. For this model we derive both (nonuniform) sampling theorems, the complete reconstruction of \(f\) from its samples, and necessary density conditions for sampling. Key words and phrases: Nonuniform sampling, irregular sampling, sampling, reconstruction, frame, reproducing kernel Hilbert space, variable bandwidth spaces, density conditions Beatrice Andreolli was supported by the Austrian Science Fund (FWF) projects P31887-N32 and P33217. Karlheinz Grochenig was supported by the Austrian Science Fund (FWF) project P31887-N32. A concept of variable bandwidth based on the short-time Fourier transform using a time-varying frequency cut-off was proposed by Aceska and Feichtinger [3, 4]. The resulting function spaces, however, coincide with the standard Sobolev spaces endowed with an equivalent norm. Unfortunately, these are not variable bandwidth spaces in our sense since they admit neither a sampling theorem nor a Nyquist density. A recent definition of variable bandwidth is based on the spectral theory of a Sturm-Liouville operator \(A_{p}f=-\frac{d}{dx}(p(x)\frac{d}{dx})f\) on \(L^{2}(\mathbb{R})\), where \(p>0\) is a strictly positive function [24]. Denote by \(c_{\Lambda}(A_{p})\) the spectral projection of \(A_{p}\) corresponding to \(\Lambda\subset\mathbb{R}^{+}\); then the range of \(c_{\Lambda}(A_{p})\) is the space of functions of variable bandwidth with spectral set in \(\Lambda\). For this space of variable bandwidth, a (nonuniform) sampling theorem and necessary density conditions were proved. The main insight is that, for a spectrum \(\Lambda=[0,\Omega]\), a function of variable bandwidth behaves like a bandlimited function with local bandwidth \((\Omega/p(x))^{1/2}\) in a neighborhood of \(x\in\mathbb{R}\). Kempf and his collaborators used a procedural idea of variable bandwidth [25, 28, 29]. A formal definition was then presented in [31]. Essentially, they generated a symmetric operator and a related reproducing kernel Hilbert space based on a sequence of sampling points. This space was designated as a variable bandwidth space. By parameterizing the self-adjoint extensions of the operator, they obtained several sampling theorems at the critical density. Another interesting interpretation of variable bandwidth in signal processing involves warping functions [10, 15, 27, 33, 35]. Given a homeomorphism \(\gamma:\mathbb{R}\to\mathbb{R}\), called a warping function, a function \(f\) possesses variable bandwidth with respect to \(\gamma\) if \(f=g\circ\gamma\) for a bandlimited function \(g\in L^{2}(\mathbb{R})\) with \(\operatorname{supp}(\hat{g})\subseteq[-\Omega,\Omega]\). The derivative \(1/\gamma^{\prime}(\gamma^{-1}(x))\) of the warping function is interpreted as the local bandwidth of \(f\) at \(x\). We propose a new approach to variable bandwidth by using Wilson bases. A Wilson basis is an orthonormal basis of \(L^{2}(\mathbb{R})\) of the form \[\psi_{k,l}(x)=\begin{cases}g(x-k),&l=0\\ \frac{1}{\sqrt{2}}(e^{2\pi ilx}+(-1)^{l+k}e^{-2\pi ilx})g(x-k/2),&l\neq 0. \end{cases}\] \[\text{for }k,l\in\mathbb{Z}\text{ and }l\geq 0. \tag{1.1}\] Daubechies, Jaffard and Journé [16] showed that such orthonormal bases exist and that the window function \(g\) may be chosen to be smooth.
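For concreteness, (1.1) translates directly into code. The sketch below evaluates \(\psi_{k,l}\) on a grid for an arbitrary callable window \(g\); whether \(g\) actually generates an orthonormal Wilson basis is a separate matter, settled by Theorem 2.1 below.

```python
import numpy as np

def wilson_basis(g, k, l, x):
    """Evaluate the Wilson basis function psi_{k,l} of (1.1) on a grid x.
    g is a callable window; k is an integer, l a nonnegative integer."""
    if l == 0:
        return g(x - k)
    # (1/sqrt(2)) (e^{2 pi i l x} + (-1)^{l+k} e^{-2 pi i l x}) g(x - k/2):
    # a cosine for l+k even and i*sine for l+k odd, as written in (1.1)
    phase = np.exp(2j * np.pi * l * x) + (-1) ** (l + k) * np.exp(-2j * np.pi * l * x)
    return phase / np.sqrt(2) * g(x - k / 2)
```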
These bases have proved to be very useful in time-frequency analysis, since they provide a way to overcome the barrier posed by the Balian-Low theorem. The theorem states that a function \(g\), which generates an orthonormal basis with time-frequency shifts, cannot be well localized in both time and frequency. In principle, Wilson bases are orthonormal bases that possess the desired time-frequency localization while keeping much of the structure of a Gabor system. Wilson bases are a popular tool in time-frequency analysis and signal processing to study function spaces, time-frequency localization, and non-linear approximation [12, 13, 20]. Perhaps the most important application is to the detection of gravitational waves in 2015 [1, 2, 14]. Gravitational waves are ripples in space-time caused by violent cosmic events like colliding black holes and supernovae. Albert Einstein predicted their existence in 1916 [17, 18], and Damour, Blanchet, and their colleagues calculated the analytic form of a gravitational wave from two merging neutron stars [8, 9, 11]. They obtained the formula \[s(t)=c|t-t_{0}|^{-\frac{1}{4}}\cos(\omega|t-t_{0}|^{\frac{5}{8}}+\varphi),\] where \(c\) is a constant, \(\omega\gg 1\), \(t_{0}\) is the time of coalescence, and the instantaneous frequency behaves like \(\sim|t-t_{0}|^{-\frac{3}{8}}\). In the language of signal processing such a function is called a _chirp_, and is considered a prototype of a signal with time-varying frequency. Essentially, the detection of gravitational waves amounts to the extraction of a chirp buried inside a noisy signal. In 2012, Necula, Klimenko, and Mitselmakher [32] proposed to use Wilson bases for gravitational wave detection. They developed an algorithm that maps time series data to the time-frequency plane using a Wilson basis. This algorithm was successfully applied to gravitational data to detect the merger of two black holes and to other astrophysical data. In this paper we introduce a new model to describe variable bandwidth that is in part motivated by the signal processing of gravitational waves. Our new idea is to use a discrete version of the space of Aceska and Feichtinger and to replace the (continuous) short-time Fourier transform with a frequency truncation of a Wilson expansion. In this spirit, let us first define our model of variable bandwidth. Let \(b:\mathbb{Z}\to\mathbb{N}\) be a bounded positive sequence such that \(b(k)<B<\infty\), for every \(k\in\mathbb{Z}\). Here \(b(k)\) stands for the local frequency truncation at \(k\). Next, let \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l\in\mathbb{N}}\) be an orthonormal Wilson basis as in (1.1). Then we define a Paley-Wiener-type space as follows: \[PW_{b}^{2}(g,\mathbb{R})=\Big{\{}f\in L^{2}(\mathbb{R}):f=\sum_{k\in\mathbb{Z} }\sum_{l=0}^{b(k)}c_{k,l}\psi_{k,l},\ c\in\ell^{2}(\mathbb{Z}\times\mathbb{N} )\Big{\}}. \tag{1.2}\] Assuming that \(\operatorname{supp}(g)\subseteq[-m,m]\) for some \(m>1\), the parameter \(b(k)\) can also be understood as the local bandwidth of \(f\) on an interval centered at \(k/2\). Considering the restriction \(f|_{[k/2-1/2,k/2+1/2]}\), this is approximately a trigonometric polynomial of degree \(b(k)\). As a consequence, it is expected that, on each interval \([k/2-1/2,k/2+1/2]\), at least \(b(k)+1\) samples are needed to recover \(f\) completely. This intuition is confirmed when the Wilson basis is replaced by the Gabor orthonormal basis \(\{\psi_{k,l}=e^{2\pi ilx}\chi_{[-1/2,1/2]}(x-k):k,l\in\mathbb{Z}\}\).
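As a toy illustration of (1.2), the following sketch (reusing the `wilson_basis` helper from the previous sketch) synthesizes a function whose local frequency content is throttled by \(b(k)\). The Gaussian window here is only a stand-in and does not satisfy the orthonormality conditions of Theorem 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: np.exp(-np.pi * x**2)   # stand-in window, NOT an admissible
                                      # Wilson window (see Theorem 2.1)
# Local frequency truncation b(k): small bandwidth near 0, larger outside
b = {k: 2 if abs(k) < 4 else 8 for k in range(-10, 11)}

x = np.linspace(-6.0, 6.0, 2048)
f = np.zeros_like(x, dtype=complex)
for k, bk in b.items():
    for l in range(bk + 1):
        f += rng.standard_normal() * wilson_basis(g, k, l, x)
# f is now a (toy) element of PW_b^2(g, R), sampled on the grid x
```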
The real challenge is then to understand the influence of the overlap of different translations of \(g\). We will see that the overlap of the translates requires an adaptation of this first intuition. We will investigate the basic sampling problems for the space \(PW_{b}^{2}(g,\mathbb{R})\). The first problem consists in finding necessary density conditions for sampling. In Section 3 we will study whether there exists a critical density that separates sampling (= complete reconstruction from samples) from interpolation (= approximation of given data by a function with prescribed properties). A set \(\Lambda\subset\mathbb{R}\) is called a set of (stable) sampling for \(PW_{b}^{2}(g,\mathbb{R})\) if there exist constants \(A,B>0\) such that for every \(f\in PW_{b}^{2}(g,\mathbb{R})\) \[A\|f\|_{2}^{2}\leq\sum_{\lambda\in\Lambda}|f(\lambda)|^{2}\leq B\|f\|_{2}^{2}. \tag{1.3}\] The sampling inequality (1.3) indicates that the entire information carried by the function is captured by the evaluation of the function at the samples. Our main theorem provides a necessary density condition for sampling in \(PW_{b}^{2}(g,\mathbb{R})\). **Theorem 1.1**.: _Let \(g\in\mathcal{C}(\mathbb{R})\), real-valued, even, and such that \(|g(x)|\leq C(1+|x|)^{-1-\epsilon}\) for some \(C>0\) and \(\epsilon>0\). Let \(\Lambda\subseteq\mathbb{R}\) be a set of sampling for \(PW_{b}^{2}(g,\mathbb{R})\) defined in (1.2) with \(b(k)\geq 1\) for every \(k\in\mathbb{Z}\). Then_ \[D^{-}(\Lambda)\coloneqq\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{\#( \Lambda\cap[x-r,x+r])}{2r}\geq 1+\overline{b}\] _where \(\overline{b}=\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{1}{2r}\sum_{ \frac{k}{2}\in[x-r,x+r]}b(k)\)._ In this result, \(\overline{b}\) represents a sort of average bandwidth. The theorem shows that whenever the window function \(g\) has some mild decay, the lower Beurling density of a sampling set must be at least \(1+\overline{b}\); on average, an interval \([x-r,x+r]\) must therefore contain at least \((1+\overline{b})\,2r\) points of \(\Lambda\). After showing that \(PW_{b}^{2}(g,\mathbb{R})\) is a reproducing kernel Hilbert space, the necessary density condition for sampling follows upon elaborating the conditions in [21]. The second problem involves identifying sufficient conditions for sampling: Under which conditions on a sampling set can a function in \(PW_{b}^{2}(g,\mathbb{R})\) be reconstructed completely from its samples? We will provide a sufficient condition for sampling for \(PW_{b}^{2}(g,\mathbb{R})\) that confirms the intuition that the parameter \(b(k)\) measures the local bandwidth of \(f\in PW_{b}^{2}(g,\mathbb{R})\). For the proof we resort to the adaptive weights method described in [19, 22]. With this constructive method, we will produce quantitative results that could serve as a basis for numerical experiments. This method relies on the properties of the window function \(g\) and, in particular, the requirement of \(g\) being compactly supported is essential. The theorem provides a condition involving the maximal gap between consecutive points of the sampling set \(\Lambda\). As expected, for every interval of the type \([k/2,k/2+1/2)\), the maximal gap (or the lowest sampling rate) depends on the maximal bandwidth of \(f\) near that interval. **Theorem 1.2** (Sufficient condition for sampling).: _Let \(g\in\mathcal{C}^{1}(\mathbb{R})\) be real-valued, even, and with \(\operatorname{supp}(g)\subseteq[-m,m]\).
Let \(PW_{b}^{2}(g,\mathbb{R})\) be the space of variable bandwidth defined in (1.2) and define \(\Lambda\) as_ \[\Lambda\coloneqq\Big{\{}x_{\frac{k}{2},j}=k/2+\eta_{\frac{k}{2},j}:k\in \mathbb{Z},\ \eta_{\frac{k}{2},j}\in[0,1/2)\text{ and }x_{\frac{k}{2},j+1}-x_{\frac{k}{2},j}\leq\delta_{\frac{k}{2}}\ \forall k\in\mathbb{Z}\Big{\}},\] _where \(\delta_{\frac{k}{2}}=\max_{j=1,\ldots,j_{\max}(\frac{k}{2})}(x_{\frac{k}{2},j+1}-x_{\frac{k}{2},j})\). Define_ \[D\coloneqq(4m)\cdot\max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\}\] _and_ \[\mu_{\frac{k}{2}}\coloneqq\max_{n\in(k-2m,k+2m+1)\cap\mathbb{Z}}\Big{(}b(n)+1 \Big{)}.\] _If for every \(k\in\mathbb{Z}\)_ \[\delta_{\frac{k}{2}}<\frac{\pi}{\mu_{\frac{k}{2}}D},\] _then \(f\in PW_{b}^{2}(g,\mathbb{R})\) can be reconstructed completely from the samples in \(\Lambda\)._ The main idea of the proof is to consider an approximation \(Af\) of \(f\) that uses only the samples of \(f\) and show that \(\|f-Af\|_{2}\leq\gamma\|f\|_{2}\) for \(f\in PW_{b}^{2}(g,\mathbb{R})\) and \(\gamma<1\). Then \(f\) can be recovered from \(Af\) via a Neumann series. The paper is organized as follows: Section 2 introduces the main tools required, namely the characterization of a Wilson basis, a version of the necessary density condition theorem for reproducing kernel Hilbert spaces [21] and the proof that \(PW_{b}^{2}(g,\mathbb{R})\) is a reproducing kernel Hilbert space, and several standard inequalities. Section 3 contains the proof of the necessary density condition (Theorem 1.1) for sampling for the space of variable bandwidth \(PW_{b}^{2}(g,\mathbb{R})\). In Section 4 we provide the proof of the sufficient condition for sampling in \(PW_{b}^{2}(g,\mathbb{R})\) (Theorem 1.2). ## 2. Preliminaries In this section we recall the basic properties of a Wilson basis and the tools we need to investigate sampling. By defining the usual translation and modulation operators as \(T_{k}f(t)=f(t-k)\) and \(M_{l}f(t)=f(t)e^{2\pi ilt}\), we can rewrite the entire collection of functions that forms a Wilson basis (1.1) as \[\psi_{k,l}=d_{l}(M_{l}+(-1)^{l+k}M_{-l})T_{\frac{k}{2}}g,\quad(k,l)\in \mathbb{Z}^{2},l\geq 0, \tag{2.1}\] where \(d_{0}=\frac{1}{2}\) and \(d_{l}=\frac{1}{\sqrt{2}}\), \(l\geq 1\), \(\psi_{2k,0}=T_{k}g\), \(\psi_{2k+1,0}=0\). Then every function \(f\in PW_{b}^{2}(g,\mathbb{R})\) is given by \[f(x)=\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}c_{k,l}\psi_{k,l}(x)=\sum_{k\in \mathbb{Z}}P_{k}(x)g(x-k/2),\] where \(P_{k}(x):=\sum_{l=0}^{b(k)}c_{k,l}d_{l}\big(e^{2\pi ilx}+(-1)^{l+k}e^{-2\pi ilx}\big)\) is a trigonometric polynomial of degree \(b(k)\) and period \(1\). The following result from [23, Corollary 8.5.4] gives a characterization of Wilson bases. **Theorem 2.1**.: _Assume \(g\in L^{2}(\mathbb{R})\) to be even and real-valued. Then the following are equivalent:_ 1. _the Wilson system_ \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l\in\mathbb{N}}\) _as defined by (_1.1_) is an orthonormal basis for_ \(L^{2}(\mathbb{R})\)_._ 2. \(\|g\|_{2}=1\) _and the Gabor system_ \(\{T_{k/2}M_{l}g:k,l\in\mathbb{Z}\}\) _is a tight frame for_ \(L^{2}(\mathbb{R})\)_, i.e., there exists a constant_ \(A>0\) _such that for all_ \(f\in L^{2}(\mathbb{R})\)__ \[\sum_{k\in\mathbb{Z}}\sum_{l\in\mathbb{Z}}|\langle f,T_{k/2}M_{l}g\rangle|^{2} =A\|f\|_{2}^{2}.\] 3. \(\sum_{k\in\mathbb{Z}}g(x-n-\frac{k}{2})\overline{g}(x-\frac{k}{2})=2\delta_{n0}\) _a.e._ Theorem 1.1 of [21] provides necessary density conditions for sampling in reproducing kernel Hilbert spaces.
We recall that \(\mathcal{K}\subseteq L^{2}(\mathbb{R})\) is a reproducing kernel Hilbert space if \(\mathcal{K}\) is a closed subspace of \(L^{2}(\mathbb{R})\) and if there exists \(M>0\) such that \(|f(x)|\leq M\|f\|_{2}\) for every \(f\in\mathcal{K}\) and \(x\in\mathbb{R}\). **Theorem 2.2** ([21]).: _Let \(\mathcal{K}\subseteq L^{2}(\mathbb{R})\) be a reproducing kernel Hilbert space with a reproducing kernel \(k(x,y)\) satisfying_ 1. \(\inf_{x\in\mathbb{R}}k(x,x)>0\) _and_ 2. _off-diagonal decay of the form_ \(|k(x,y)|\leq N(1+|x-y|)^{-1-\epsilon}\) _for all_ \(x,y\in\mathbb{R}\) _and some_ \(\epsilon>0\)_._ _If for \(\Lambda\subset\mathbb{R}\) there exist \(A,B>0\) such that for every \(f\in\mathcal{K}\)_ \[A\|f\|^{2}\leq\sum_{\lambda\in\Lambda}|f(\lambda)|^{2}\leq B\|f\|^{2}, \tag{2.2}\] _then_ \[D^{-}(\Lambda)\coloneqq\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{\#( \Lambda\cap[x-r,x+r])}{2r}\geq\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{ 1}{2r}\int_{x-r}^{x+r}k(y,y)dy.\] We recall that the sampling inequality (2.2) is equivalent to \(\{k_{\lambda}\}_{\lambda\in\Lambda}\) being a frame for \(\mathcal{K}\). The number \(D^{-}(\Lambda)\coloneqq\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{\#( \Lambda\cap[x-r,x+r])}{2r}\) is the lower Beurling density of \(\Lambda\). The next lemma shows that the variable bandwidth space defined using a Wilson basis is a reproducing kernel Hilbert space. **Lemma 2.3**.: _The space \(PW_{b}^{2}(g,\mathbb{R})\) of variable bandwidth defined in (1.2) is a reproducing kernel Hilbert space with reproducing kernel_ \[k(x,y)=\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}\overline{\psi_{k,l}(y)}\psi_{k,l }(x). \tag{2.3}\] Proof.: Recall from the definition (1.2) of \(PW_{b}^{2}(g,\mathbb{R})\) that \(b(k)<B<\infty\) for every \(k\in\mathbb{Z}\). Since \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l\in\mathbb{N}}\) is an orthonormal basis for \(L^{2}(\mathbb{R})\), then \(PW_{b}^{2}(g,\mathbb{R})=\operatorname{span}\{\psi_{k,l}:k\in\mathbb{Z},\ l=0,...,b(k)\}\) defines a closed subspace of \(L^{2}(\mathbb{R})\). We show that at each point in \(\mathbb{R}\) the linear functional \(f\mapsto f(x)\) is bounded, i.e., there exists \(M>0\) such that \(|f(x)|\leq M\|f\|_{2}\) for every \(f\in PW_{b}^{2}(g,\mathbb{R})\) and every \(x\in\mathbb{R}\). Let \(x\in\mathbb{R}\) and \(f\in PW_{b}^{2}(g,\mathbb{R})\); then applying the triangle inequality and Cauchy-Schwarz we obtain \[|f(x)| =\left|\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}c_{k,l}\psi_{k,l}(x) \right|\leq\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}|\langle f,\psi_{k,l}\rangle| \left|\psi_{k,l}(x)\right|\] \[\leq\left(\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}|\langle f,\psi_ {k,l}\rangle|^{2}\right)^{1/2}\left(\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}| \psi_{k,l}(x)|^{2}\right)^{1/2}. \tag{2.4}\] Since \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l\leq b(k)}\) is an orthonormal basis for \(PW_{b}^{2}(g,\mathbb{R})\), we have that \[\left(\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}\left|\langle f,\psi_{k,l}\rangle \right|^{2}\right)^{1/2}=\|f\|_{2}.\] To find a bound for the second term of (2.4), we apply the definition of a Wilson basis, and we obtain \[\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}\left|\psi_{k,l}(x)\right|^{2}=\sum_{k \in\mathbb{Z}}\left|\psi_{k,0}(x)\right|^{2}+\sum_{k\in\mathbb{Z}}\sum_{l=1}^{ b(k)}\left|\psi_{k,l}(x)\right|^{2}=\sum_{k\in\mathbb{Z}}\left|g(x-k)\right|^{2}+ \sum_{k\in\mathbb{Z}}\sum_{l=1}^{b(k)}\left|\psi_{k,l}(x)\right|^{2}.\] Recall equivalence (iii) from the characterization of Wilson bases in Theorem 2.1.
Then the first summand can be easily bounded as follows: \[\sum_{k\in\mathbb{Z}}\left|g(x-k)\right|^{2}\leq\sum_{k\in\mathbb{Z}}\left|g \left(x-\frac{k}{2}\right)\right|^{2}=2.\] The second summand can be bounded using the definition of the Wilson basis, the fact that \(b(k)\leq B\) for every \(k\in\mathbb{Z}\) and again equivalence (iii) of Theorem 2.1. Therefore, \[\sum_{k\in\mathbb{Z}}\sum_{l=1}^{b(k)}\left|\psi_{k,l}(x)\right|^{2}\leq 2 \sum_{k\in\mathbb{Z}}\sum_{l=1}^{b(k)}\left|g\left(x-\frac{k}{2}\right) \right|^{2}\leq 4B. \tag{2.5}\] Then \(|f(x)|\leq\sqrt{2(1+2B)}\|f\|_{2}\) for every \(x\in\mathbb{R}\) and for every \(f\in PW_{b}^{2}(g,\mathbb{R})\). Hence, we have shown that \(PW_{b}^{2}(g,\mathbb{R})\) is a reproducing kernel Hilbert space. Moreover, since \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l\leq b(k)}\) is an orthonormal basis for \(PW_{b}^{2}(g,\mathbb{R})\), the reproducing kernel is given by \[k(x,y)=\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}\overline{\psi_{k,l}(y)}\psi_{k, l}(x).\] The following versions of Wirtinger's inequality and Bernstein's inequality will be useful tools to prove the sufficient condition for sampling. **Lemma 2.4** (Wirtinger's inequality).: _If \(f,f^{\prime}\in L^{2}(a,b)\), \(a<c<b\), and \(f(c)=0\), then_ \[\int_{a}^{b}|f(x)|^{2}dx\leq\frac{4}{\pi^{2}}\max\{(b-c)^{2},(c-a)^{2}\}\int _{a}^{b}|f^{\prime}(x)|^{2}dx. \tag{2.6}\] Lemma 2.4 follows from [26, p.184] by applying a change of variables. Wirtinger's inequality and its variations are often used in the proofs of sampling theorems, see [6, 22, 34]. Bernstein's inequality provides a bound for the derivative of a trigonometric polynomial. **Lemma 2.5** (Bernstein's inequality).: _Let \(P\) be a trigonometric polynomial of degree \(n\), i.e., \(P(x)=\sum_{|k|\leq n}c_{k}e^{2\pi ikx}\). Then_ \[\|P^{\prime}\|_{2}\leq 2\pi n\|P\|_{2}. \tag{2.7}\] ## 3. Necessary density condition In this section, we will prove necessary density conditions for the spaces of variable bandwidth generated by a Wilson basis. We start by showing a simple result for Wilson bases with a compactly supported window \(g\). **Theorem 3.1**.: _Let \(g\in\mathcal{C}(\mathbb{R})\), real-valued, and even with \(\operatorname{supp}(g)\subseteq[-m,m]\). Let \(\Lambda\subseteq\mathbb{R}\) be a set of sampling for \(PW^{2}_{b}(g,\mathbb{R})\) defined in (1.2). Then \(\Lambda\cap(\alpha,\beta)\) contains at least \(\lceil\beta-\alpha-2m\rceil+\sum_{k/2\in[\alpha+m,\beta-m]}b(k)\) points._ To show the result we use an easy argument that involves counting dimensions as in [5]. Proof.: Let \(I=(\alpha,\beta)\) such that \(|I|\geq 2m\) and \[PW^{2}_{b}(g,I)=\operatorname{span}\Big(\{\psi_{k,0}:k\in[\alpha+m,\beta-m]\} \cup\{\psi_{k,l}:k/2\in[\alpha+m,\beta-m],\ l=1,...,b(k)\}\Big).\] We make the following three observations: 1. \(PW^{2}_{b}(g,I)\) is a subspace of \(PW^{2}_{b}(g,\mathbb{R})\) and thus the sampling inequalities hold for \(PW^{2}_{b}(g,I)\). 2. Since the Wilson basis functions are compactly supported, every \(f\in PW^{2}_{b}(g,I)\) has support in \((\alpha,\beta)\). 3. \(\dim(PW^{2}_{b}(g,I))=\#([\alpha+m,\beta-m]\cap\mathbb{Z})+\sum_{k/2\in[ \alpha+m,\beta-m]}b(k)\). Since for \(f\in PW^{2}_{b}(g,I)\), we have \[\sum_{\lambda\in\Lambda\cap(\alpha,\beta)}|f(\lambda)|^{2}=\sum_{\lambda\in \Lambda}|f(\lambda)|^{2}\geq A\|f\|_{2}^{2},\] the map \(PW^{2}_{b}(g,I)\ni f\mapsto\{f(\lambda)\}_{\lambda\in\Lambda\cap(\alpha,\beta)}\) is one-to-one.
It follows that \[\dim(PW^{2}_{b}(g,I))=\lceil\beta-\alpha-2m\rceil+\sum_{k/2\in[\alpha+m, \beta-m]}b(k)\leq\#(\Lambda\cap(\alpha,\beta)).\] We can now state a more general theorem for the case of a window \(g\) with a decay of the type \(|g(x)|\leq C(1+|x|)^{-1-\epsilon}\) for \(C>0\), \(\epsilon>0\). In our approach we apply the results on sampling in general reproducing kernel Hilbert spaces from [21] in the form of Theorem 2.2. We only need to verify the conditions on the reproducing kernel. **Theorem 3.2**.: _Let \(g\in\mathcal{C}(\mathbb{R})\), real-valued, even, and such that \(|g(x)|\leq C(1+|x|)^{-1-\epsilon}\) for some \(C>0\) and \(\epsilon>0\). Let \(\Lambda\subseteq\mathbb{R}\) be a set of sampling for \(PW^{2}_{b}(g,\mathbb{R})\) defined in (1.2) with \(b(k)\geq 1\) for every \(k\in\mathbb{Z}\). Then_ \[\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{\#(\Lambda\cap[x-r,x+r])}{2r} \geq 1+\overline{b}\] _where \(\overline{b}=\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{1}{2r}\sum_{\frac {k}{2}\in[x-r,x+r]}b(k)\)._ Proof.: By Lemma 2.3, \(PW_{b}^{2}(g,\mathbb{R})\) is a reproducing kernel Hilbert space. First of all, we need to show that the reproducing kernel (2.3) satisfies conditions (i) and (ii) of Theorem 2.2. Step 1. We have to prove that \(\inf_{x\in\mathbb{R}}k(x,x)>0\). Let \(x\in\mathbb{R}\) and \(k(x,x)\) be the diagonal of the reproducing kernel. After substituting the definition of a Wilson basis, we write \[k(x,x)=\sum_{k\in\mathbb{Z}}|g(x-k)|^{2}+\frac{1}{2}\sum_{k\in\mathbb{Z}}\sum_{ l=1}^{b(k)}|e^{2\pi ilx}+(-1)^{l+k}e^{-2\pi ilx}|^{2}\left|g\left(x-\frac{k}{2} \right)\right|^{2}. \tag{3.1}\] Since \(g\) has compact support, the series are locally finite. The continuity of \(g\) implies that \(k(x,x)\) is continuous, and (3.1) shows that \(k(x,x)\) is periodic with period \(1\). Consequently, to prove that \(\inf_{x\in\mathbb{R}}k(x,x)>0\), it suffices to show that \(k(x,x)>0\) on \([0,1)\). Recalling that \(b(k)\geq 1\) for every \(k\in\mathbb{Z}\), we obtain \[k(x,x)\geq\sum_{k\in\mathbb{Z}}|g(x-k)|^{2}+\frac{1}{2}\sum_{k\in\mathbb{Z}}| e^{2\pi ix}+(-1)^{1+k}e^{-2\pi ix}|^{2}\left|g\left(x-\frac{k}{2}\right) \right|^{2}.\] By dividing the second sum into odd and even parts and applying (iii) of Theorem 2.1, we get \[\begin{split} k(x,x)&\geq\sum_{k\in\mathbb{Z}}|g(x- k)|^{2}+2\sum_{k\in\mathbb{Z}}|\!\sin 2\pi x|^{2}|g(x-k)|^{2}+2\sum_{k\in \mathbb{Z}}\!\left|\cos 2\pi x\right|^{2}\!\left|g\Big{(}x-k-\frac{1}{2}\Big{)} \right|^{2}\\ &=\sum_{k\in\mathbb{Z}}|g(x-k)|^{2}(1+2|\sin 2\pi x|^{2})+2\sum_{k \in\mathbb{Z}}\!\left|\cos 2\pi x\right|^{2}\!\left|g\Big{(}x-k-\frac{1}{2} \Big{)}\right|^{2}\\ &\geq\min\{1+2|\sin 2\pi x|^{2},2|\!\cos 2\pi x|^{2}\}\sum_{k\in \mathbb{Z}}\!\left|g\Big{(}x-\frac{k}{2}\Big{)}\right|^{2}\\ &=2\min\{1+2|\sin 2\pi x|^{2},2|\!\cos 2\pi x|^{2}\}.\end{split} \tag{3.2}\] The only possible zeros of \(k(x,x)\) for \(x\in[0,1)\) are for \(\cos 2\pi x=0\), i.e., at \(x=\frac{1}{4}\) or \(x=\frac{3}{4}\). Consider now the sum \(k(\frac{1}{4},\frac{1}{4})+k(\frac{3}{4},\frac{3}{4})\). Since \(g\) is even, \(k(x,x)\) is even; then \(k(\frac{1}{4},\frac{1}{4})=k(-\frac{1}{4},-\frac{1}{4})=k(\frac{3}{4},\frac{3 }{4})\) by symmetry and periodicity, and \(k(\frac{1}{4},\frac{1}{4})+k(\frac{3}{4},\frac{3}{4})=k(\frac{1}{4},\frac{1}{ 4})+k(-\frac{1}{4},-\frac{1}{4})\).
Applying (iii) of Theorem 2.1 together with inequality (3.2), we obtain \[\begin{split} k\Big{(}\frac{1}{4},\frac{1}{4}\Big{)}+k\Big{(}- \frac{1}{4},-\frac{1}{4}\Big{)}&\geq 3\Big{(}\!\sum_{k\in\mathbb{Z}}\! \left|g\Big{(}\frac{1}{4}-k\Big{)}\right|^{2}+\sum_{k\in\mathbb{Z}}\!\left|g \Big{(}-\frac{1}{4}-k\Big{)}\right|^{2}\Big{)}\\ &=3\Big{(}\!\sum_{k\in\mathbb{Z}}\!\left|g\Big{(}\frac{1}{4}-k\Big{)} \right|^{2}+\sum_{k\in\mathbb{Z}}\!\left|g\Big{(}\frac{1}{4}-k-\frac{1}{2} \Big{)}\right|^{2}\Big{)}=6.\end{split}\] By symmetry, \(k(\frac{1}{4},\frac{1}{4})=k(\frac{3}{4},\frac{3}{4})\geq 3>0\). It follows that \(k(x,x)>0\) for all \(x\in[0,1)\) and thus \(\inf_{x\in\mathbb{R}}k(x,x)=\min_{x\in\mathbb{R}}k(x,x)>0\). Step 2. We need to show a decay condition of the type \(|k(x,y)|\leq N(1+|x-y|)^{-1-\epsilon}\), \(\forall x,y\in\mathbb{R}\) and some \(\epsilon>0\). Let \(x,y\in\mathbb{R}\). Recall that \(b(k)\leq B\) for every \(k\in\mathbb{Z}\) and \(|\psi_{k,l}(x)|\leq\sqrt{2}\,|g(x-k/2)|\) for \(l\neq 0\), then we have \[|k(x,y)|= \left|\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}\overline{\psi_{k,l}( y)}\psi_{k,l}(x)\right|\] \[= \left|\sum_{k\in\mathbb{Z}}\overline{\psi_{k,0}(y)}\psi_{k,0}(x) +\sum_{k\in\mathbb{Z}}\sum_{l=1}^{b(k)}\overline{\psi_{k,l}(y)}\psi_{k,l}(x)\right|\] \[\leq \sum_{k\in\mathbb{Z}}\{|g(y-k)g(x-k)|+2b(k)|g(y-k/2)g(x-k/2)|\} \tag{3.3}\] \[\leq \sum_{k\in\mathbb{Z}}\{|g(y-k)g(x-k)|+2B|g(y-k/2)g(x-k/2)|\}.\] Set \(w(x)=(1+|x|)^{-1-\epsilon}\), then \(w\) is even, and there exist constants \(C_{0}>0\) and \(C_{1}>0\) such that \[\frac{1}{C_{0}}w(x)\leq\min_{|y|\leq 1}w(x+y)\leq\max_{|y|\leq 1}w(x+y)\leq C_{0 }w(x) \tag{3.4}\] \[\text{and }\left.w\right|_{\mathbb{Z}}*\left.w\right|_{\mathbb{Z}}\leq C_{1} \left.w\right|_{\mathbb{Z}}. \tag{3.5}\] \(w\) is called subconvolutive [23, Chapter 11]. Consider the first term of (3.3), apply the assumption on the decay of \(g\) and the properties (3.4) and (3.5) of \(w\), then \[\sum_{k\in\mathbb{Z}}|g(y-k)g(x-k)| \leq C^{2}\sum_{k\in\mathbb{Z}}w(y-k)w(x-k)\] \[\leq C^{2}C_{0}^{2}\sum_{k\in\mathbb{Z}}w(\lfloor y\rfloor-k)w( \lfloor x\rfloor-k)\] \[=C^{2}C_{0}^{2}\sum_{k\in\mathbb{Z}}w(k)w(\lfloor x\rfloor- \lfloor y\rfloor-k)\] \[\leq C^{2}C_{0}^{2}C_{1}w(\lfloor x\rfloor-\lfloor y\rfloor)\] \[\leq C^{2}C_{0}^{3}C_{1}w(x-y).\] Similarly, the second term in (3.3) is bounded by \[\sum_{k\in\mathbb{Z}}|g(y-k/2)g(x-k/2)|\leq\widetilde{C}w(x-y),\] so that we obtain the desired off-diagonal decay of the kernel. Conditions (i) and (ii) of Theorem 2.2 are satisfied. Step 3. It remains to compute \(\int_{x-r}^{x+r}k(y,y)dy\). Fix \(x\in\mathbb{R}\), consider the interval \(B_{r}(x)=[x-r,x+r]\) and compute the averaged trace of the kernel \(\frac{1}{2r}\int_{B_{r}(x)}k(y,y)dy\). Fix \(\varepsilon>0\) and choose \(\rho>0\) such that \(\int_{B_{\rho}(0)}|\psi_{0,l}(x)|^{2}dx\geq 1-\varepsilon\) for \(l=0,...,B\). Then \[\int_{B_{\rho}(k/2)}|\psi_{k,l}(x)|^{2}dx\geq 1-\varepsilon\quad \text{for }l=1,...,B\quad\text{ and }\] \[\int_{B_{\rho}(k)}|\psi_{k,0}(x)|^{2}dx\geq 1-\varepsilon.\] By (2.5), we can apply dominated convergence and interchange summation and integration and obtain \[\frac{1}{2r}\int_{B_{r}(x)}k(y,y)dy=\frac{1}{2r}\int_{B_{r}(x)}\sum_{k\in \mathbb{Z}}\sum_{l=0}^{b(k)}|\psi_{k,l}(y)|^{2}dy=\frac{1}{2r}\sum_{k\in \mathbb{Z}}\sum_{l=0}^{b(k)}\int_{B_{r}(x)}|\psi_{k,l}(y)|^{2}dy.\] If \(\frac{k}{2}\in B_{r-\rho}(x)\), then \(B_{\rho}(\frac{k}{2})\subseteq B_{r}(x)\).
We estimate the average trace as follows: \[\frac{1}{2r}\int_{B_{r}(x)}k(y,y)dy \geq\frac{1}{2r}\sum_{k\in B_{r-\rho}(x)}\int_{B_{r}(x)}|\psi_{k, 0}(y)|^{2}dy+\frac{1}{2r}\sum_{\frac{k}{2}\in B_{r-\rho}(x)}\sum_{l=1}^{b(k)} \int_{B_{r}(x)}|\psi_{k,l}(y)|^{2}dy\] \[\geq\frac{1}{2r}(1-\varepsilon)\Big{[}\#(B_{r-\rho}(x)\cap \mathbb{Z})+\sum_{\frac{k}{2}\in B_{r-\rho}(x)}b(k)\Big{]}\] \[\geq\frac{1}{2r}(1-\varepsilon)\Big{[}2(r-\rho)+\sum_{\frac{k}{2 }\in B_{r-\rho}(x)}b(k)\Big{]}\] \[=(1-\varepsilon)\frac{r-\rho}{r}+\frac{1}{2r}(1-\varepsilon)\sum _{\frac{k}{2}\in B_{r-\rho}(x)}b(k).\] Letting \(r\to\infty\), we obtain \[\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{1}{2r}\int_{B_{r}(x)}k(y,y)dy \geq 1-\varepsilon+(1-\varepsilon)\liminf_{r\to\infty}\inf_{x\in\mathbb{R}} \frac{1}{2r}\sum_{\frac{k}{2}\in B_{r}(x)}b(k).\] Define \(\overline{b}\coloneqq\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{1}{2r} \sum_{\frac{k}{2}\in B_{r}(x)}b(k)\), the average bandwidth in \(B_{r}(x)\). We apply Theorem 2.2 and, since \(\varepsilon>0\) was arbitrary, we get \[D^{-}(\Lambda)\geq 1+\overline{b}.\] The properties of Wilson bases play a crucial role in satisfying the conditions of the theorem. In particular, the combination of property (iii) of the characterization Theorem 2.1 and the symmetry of the window function \(g\) serves as a key ingredient in showing that the trace of the reproducing kernel has a bounded infimum. _Remark_.: We would like to point out that for compactly supported windows \(g\), Theorem 3.1 directly implies Theorem 3.2. ## 4. Sufficient condition for sampling In this section we will investigate under which conditions on a sampling set a function in \(PW_{b}^{2}(g,\mathbb{R})\) can be reconstructed completely. We use a Wilson basis with compactly supported window \(g\). We will index the sampling set \(\Lambda\) as follows \[\Lambda:=\Big{\{}x_{\frac{k}{2},j}\in\mathbb{R}:x_{\frac{k}{2},j}=k/2+\eta_{ \frac{k}{2},j},\text{ where }k\in\mathbb{Z},\ \eta_{\frac{k}{2},j}\in[0,1/2)\text{ and }x_{\frac{k}{2},j+1}-x_{\frac{k}{2},j}\leq\delta_{\frac{k}{2}}\ \forall k\in\mathbb{Z}\Big{\}}, \tag{4.1}\] where the sampling points \(x_{\frac{k}{2},j}\) are consecutive points in the interval \([k/2,k/2+1/2)\) and the local sampling density is measured by the maximum gap between the samples, i.e. \[\delta_{\frac{k}{2}}=\max_{j=1,\ldots,j_{\max}(\frac{k}{2})}(x_{\frac{k}{2},j+1}-x_{\frac{k}{2},j}),\] where the number \(j_{\max}(\frac{k}{2})\) depends on the interval. Since, according to our model space \(PW_{b}^{2}(g,\mathbb{R})\), the local bandwidth on \([k/2,k/2+1/2)\) is roughly \(b(k)\), or possibly \(b(k+l)\) for \(|l|\leq m\) if we want to consider the influence of neighbouring intervals, we expect that the gap \(\delta_{k/2}\) on \([k/2,k/2+1/2)\) is related to \(b(k)\). The following theorem makes this expectation rigorous and provides a sufficient condition for sampling for \(PW_{b}^{2}(g,\mathbb{R})\). The proof is a modification of the method proposed in [22] and is called the adaptive weights method in [19]. **Theorem 4.1**.: _Let \(g\in\mathcal{C}^{1}(\mathbb{R})\) be real-valued, even, and with \(\operatorname{supp}(g)\subseteq[-m,m]\). Let \(PW_{b}^{2}(g,\mathbb{R})\) be the space of variable bandwidth defined in (1.2) and let \(\Lambda\subseteq\mathbb{R}\) be as in (4.1). Define_ \[D:=4m\ \max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\} \tag{4.2}\] _and_ \[\mu_{\frac{k}{2}}:=\max_{n\in(k-2m,k+2m+1)\cap\mathbb{Z}}\Big{(}b(n)+1\Big{)}.
_If for every \(k\in\mathbb{Z}\)_ \[\delta_{\frac{k}{2}}<\frac{\pi}{\mu_{\frac{k}{2}}D},\] _then \(f\in PW_{b}^{2}(g,\mathbb{R})\) can be completely reconstructed from the samples in \(\Lambda\)._

Proof. Let \(f\in PW_{b}^{2}(g,\mathbb{R})\). We define \(y_{\frac{k}{2},0}=k/2\) and \(y_{\frac{k}{2},j_{\max}(\frac{k}{2})}=k/2+1/2\), and let \(y_{\frac{k}{2},j}=\frac{1}{2}(x_{\frac{k}{2},j}+x_{\frac{k}{2},j+1})\) be the midpoints between the samples. Set \(\chi_{\frac{k}{2},j}=\chi_{[y_{\frac{k}{2},j-1},y_{\frac{k}{2},j})}\). Moreover, if the maximum gap is \(\delta_{\frac{k}{2}}\), then we have that \(y_{\frac{k}{2},j}-x_{\frac{k}{2},j}\leq\frac{1}{2}\delta_{\frac{k}{2}}\) and \(x_{\frac{k}{2},j}-y_{\frac{k}{2},j-1}\leq\frac{1}{2}\delta_{\frac{k}{2}}\). Let \(P\) be the orthogonal projection from \(L^{2}(\mathbb{R})\) onto \(PW_{b}^{2}(g,\mathbb{R})\). Motivated by [22], we approximate \(f\) by a step function and project back onto \(PW_{b}^{2}(g,\mathbb{R})\) with \(P\). The resulting approximation operator is defined by \[Af=P\left(\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}f\big{(}x_{\frac{k}{2},j}\big{)}\chi_{\frac{k}{2},j}\right).\] We emphasize that \(A\) uses only the samples \(f(x_{\frac{k}{2},j})\). The question is how well the operator \(A\) approximates the identity. Since \(f=Pf=P(\sum_{k,j}f\chi_{\frac{k}{2},j})\) and the characteristic functions \(\chi_{\frac{k}{2},j}\) have mutually disjoint support, we have \[\|f-Af\|_{2}^{2} =\Big{\|}P\Big{(}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}(f-f(x_{\frac{k}{2},j}))\chi_{\frac{k}{2},j}\Big{)}\Big{\|}_{2}^{2}\] \[\leq\Big{\|}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}(f-f(x_{\frac{k}{2},j}))\chi_{\frac{k}{2},j}\Big{\|}_{2}^{2}\] \[=\int_{\mathbb{R}}\Bigl{|}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}(f(x)-f(x_{\frac{k}{2},j}))\chi_{\frac{k}{2},j}(x)\Big{|}^{2}dx\] \[=\int_{\mathbb{R}}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\Big{|}(f(x)-f(x_{\frac{k}{2},j}))\chi_{\frac{k}{2},j}(x)\Big{|}^{2}dx.\] The sums converge absolutely; hence we can interchange summation and integration. Thus we can write \[\|f-Af\|_{2}^{2}\leq\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\int_{y_{\frac{k}{2},j-1}}^{y_{\frac{k}{2},j}}|f(x)-f(x_{\frac{k}{2},j})|^{2}dx.\] By applying Wirtinger's inequality (2.6), we obtain \[\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\int_{y_{\frac{k}{2},j-1}}^{y_{\frac{k}{2},j}}|f(x)-f(x_{\frac{k}{2},j})|^{2}dx\] \[\leq\frac{4}{\pi^{2}}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\max\Bigl{\{}(x_{\frac{k}{2},j}-y_{\frac{k}{2},j-1})^{2},(y_{\frac{k}{2},j}-x_{\frac{k}{2},j})^{2}\Bigr{\}}\int_{y_{\frac{k}{2},j-1}}^{y_{\frac{k}{2},j}}|f^{\prime}(x)|^{2}dx\] \[\leq\sum_{k\in\mathbb{Z}}\frac{1}{\pi^{2}}\delta_{\frac{k}{2}}^{2}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\int_{y_{\frac{k}{2},j-1}}^{y_{\frac{k}{2},j}}|f^{\prime}(x)|^{2}dx.\] By fixing \(k\in\mathbb{Z}\), we get for a single term \[\frac{1}{\pi^{2}}\delta_{\frac{k}{2}}^{2}\sum_{j=1}^{j_{\max}(\frac{k}{2})}\int_{y_{\frac{k}{2},j-1}}^{y_{\frac{k}{2},j}}|f^{\prime}(x)|^{2}dx=\frac{1}{\pi^{2}}\delta_{\frac{k}{2}}^{2}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}|f^{\prime}(x)|^{2}dx=\frac{1}{\pi^{2}}\delta_{\frac{k}{2}}^{2}\|f^{\prime}\chi_{[\frac{k}{2},\frac{k}{2}+\frac{1}{2})}\|_{2}^{2}.\] Step 1. We study \(\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}|f^{\prime}(x)|^{2}dx\).
Recall the equivalent definition of a Wilson basis in terms of the translation and modulation operators (2.1). Defining the trigonometric polynomial \(P_{n}\) by \[P_{n}(x):=\sum_{l=0}^{b(n)}c_{n,l}d_{l}(M_{l}+(-1)^{l+n}M_{-l}),\] we can write \(f\) as \[f(x)=\sum_{n\in\mathbb{Z}}\sum_{l=0}^{b(n)}c_{n,l}\psi_{n,l}(x)=\sum_{n\in \mathbb{Z}}P_{n}(x)g(x-n/2)\] and the derivative \(f^{\prime}\) as \[f^{\prime}(x)=\sum_{n\in\mathbb{Z}}\Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x) +P_{n}(x)T_{\frac{n}{2}}g^{\prime}(x)\Big{)}.\] Since \(\operatorname{supp}(g)\subseteq[-m,m]\), \(x\in\left[\frac{k}{2},\frac{k}{2}+\frac{1}{2}\right)\), and \(x-\frac{n}{2}\in(-m,m)\), the summation index \(n\) satisfies \(n\in(k-2m,k+1+2m)\cap\mathbb{Z}\). Define \[F_{k}\coloneqq(k-2m,k+1+2m)\cap\mathbb{Z}.\] Then we have \[\begin{split}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}|f^{ \prime}(x)|^{2}dx&=\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}} \Big{|}\sum_{n\in F_{k}}\Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)+P_{n}(x)T_ {\frac{n}{2}}g^{\prime}(x)\Big{)}\Big{|}^{2}dx\\ &=\sum_{n,j\in F_{k}}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}} \Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)\overline{P^{\prime}_{j}(x)T_{ \frac{j}{2}}g(x)}\\ &\qquad\qquad\qquad+2\operatorname{Re}\Big{(}P^{\prime}_{n}(x)T_{ \frac{n}{2}}g(x)\overline{P_{j}(x)T_{\frac{j}{2}}g^{\prime}(x)}\Big{)}\\ &\qquad\qquad\qquad+P_{n}(x)T_{\frac{n}{2}}g^{\prime}(x)\overline {P_{j}(x)T_{\frac{j}{2}}g^{\prime}(x)}\Big{)}dx.\end{split} \tag{4.4}\] To bound the first term, we apply Bernstein's inequality (2.7) to the polynomial \(P_{n}\) of degree \(b(n)\) and notice that \(\|T_{\frac{n}{2}}gT_{\frac{j}{2}}\overline{g}\|_{\infty}\leq\|g\|_{\infty}^{2}\). Therefore, \[\begin{split}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}\Big{(} P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)\overline{P^{\prime}_{j}(x)T_{\frac{j}{2}}g(x)} \Big{)}dx&\leq\|T_{\frac{n}{2}}gT_{\frac{j}{2}}\overline{g}\|_{ \infty}\|P^{\prime}_{n}\|_{2}\|P^{\prime}_{j}\|_{2}\\ &\leq\|g\|_{\infty}^{2}4\pi^{2}b(n)b(j)\|P_{n}\|_{2}\|P_{j}\|_{2}. 
\end{split}\] Taking the sum over \(n\) and \(j\), we obtain \[\sum_{n,j\in F_{k}}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}\Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)\overline{P^{\prime}_{j}(x)T_{\frac{j}{2}}g(x)}\Big{)}dx\] \[\qquad\leq 4\pi^{2}\|g\|_{\infty}^{2}\sum_{n,j\in F_{k}}b(n)b(j)\|P_{n}\|_{2}\|P_{j}\|_{2}\] \[\qquad=4\pi^{2}\|g\|_{\infty}^{2}\Big{(}\sum_{n\in F_{k}}b(n)\|P_{n}\|_{2}\Big{)}^{2}.\] For the middle terms of (4.4) we obtain in a similar way that \[2\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}\operatorname{Re}\Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)\overline{P_{j}(x)T_{\frac{j}{2}}g^{\prime}(x)}\Big{)}dx\] \[\qquad\leq 2\|T_{\frac{n}{2}}gT_{\frac{j}{2}}\overline{g^{\prime}}\|_{\infty}\|P^{\prime}_{n}\|_{2}\|P_{j}\|_{2}\] \[\qquad\leq 4\pi\|g\|_{\infty}\|g^{\prime}\|_{\infty}b(n)\|P_{n}\|_{2}\|P_{j}\|_{2}.\] After summing over \(n\) and \(j\), we have \[\sum_{n,j\in F_{k}}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}2\operatorname{Re}\Big{(}P^{\prime}_{n}(x)T_{\frac{n}{2}}g(x)\overline{P_{j}(x)T_{\frac{j}{2}}g^{\prime}(x)}\Big{)}dx\] \[\qquad\leq 4\pi\|g\|_{\infty}\|g^{\prime}\|_{\infty}\Big{(}\sum_{n\in F_{k}}b(n)\|P_{n}\|_{2}\Big{)}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}.\] The bound for the last term of (4.4) is \[\sum_{n,j\in F_{k}}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}\Big{(}P_{n}(x)T_{\frac{n}{2}}g^{\prime}(x)\overline{P_{j}(x)T_{\frac{j}{2}}g^{\prime}(x)}\Big{)}dx\] \[\qquad\leq\sum_{n,j\in F_{k}}\|T_{\frac{n}{2}}g^{\prime}T_{\frac{j}{2}}\overline{g^{\prime}}\|_{\infty}\|P_{n}\|_{2}\|P_{j}\|_{2}\] \[\qquad\leq\|g^{\prime}\|_{\infty}^{2}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}^{2}.\] Step 2. Adding all terms, the local estimate for the derivative is \[\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}|f^{\prime}(x)|^{2}dx \leq 4\pi^{2}\|g\|_{\infty}^{2}\Big{(}\sum_{n\in F_{k}}b(n)\|P_{n}\|_{2}\Big{)}^{2}+\|g^{\prime}\|_{\infty}^{2}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}^{2}\] \[\quad+4\pi\|g\|_{\infty}\|g^{\prime}\|_{\infty}\Big{(}\sum_{n\in F_{k}}b(n)\|P_{n}\|_{2}\Big{)}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}\] \[=\Big{(}2\pi\|g\|_{\infty}\sum_{n\in F_{k}}b(n)\|P_{n}\|_{2}+\|g^{\prime}\|_{\infty}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}^{2}\] \[\leq\max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\}^{2}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\big{(}b(n)+1\big{)}\Big{)}^{2}.\] Summing all the terms, we obtain \[\|f-Af\|_{2}^{2} \leq\sum_{k\in\mathbb{Z}}\frac{1}{\pi^{2}}\delta_{\frac{k}{2}}^{2}\int_{\frac{k}{2}}^{\frac{k}{2}+\frac{1}{2}}|f^{\prime}(x)|^{2}dx\] \[\leq\frac{1}{\pi^{2}}\max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\}^{2}\sum_{k\in\mathbb{Z}}\delta_{\frac{k}{2}}^{2}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\big{(}b(n)+1\big{)}\Big{)}^{2}.\] Defining \[\mu_{\frac{k}{2}}\coloneqq\max_{n\in(k-2m,k+2m+1)\cap\mathbb{Z}}\Big{(}b(n)+1\Big{)},\] we can bound in the following way \[\|f-Af\|_{2}^{2}\leq\frac{1}{\pi^{2}}\max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\}^{2}\sum_{k\in\mathbb{Z}}\delta_{\frac{k}{2}}^{2}\mu_{\frac{k}{2}}^{2}\Big{(}\sum_{n\in F_{k}}\|P_{n}\|_{2}\Big{)}^{2}.\] Step 3. We interpret \(\sum_{n\in F_{k}}\|P_{n}\|_{2}\) as a convolution.
Let \(\alpha:\mathbb{Z}\to\mathbb{R}\) be the sequence \(\alpha(n)=\|P_{n}\|_{2}\) and \(\chi_{(-2m-1,2m)}\) be the characteristic function \[\chi_{(-2m-1,2m)}(n)=\begin{cases}1,&n\in(-2m-1,2m).\\ 0,&\text{elsewhere}.\end{cases}\] Then \[\Big{(}\alpha*\chi_{(-2m-1,2m)}\Big{)}(k) =\sum_{n\in\mathbb{Z}}\alpha(n)\chi_{(-2m-1,2m)}(k-n)=\sum_{n\in \mathbb{Z}}\alpha(n)\chi_{(k-2m,k+2m+1)}(n)\] \[=\sum_{n\in F_{k}}\alpha(n)=\sum_{n\in F_{k}}\|P_{n}\|_{2}.\] We can rewrite the error as \[\|f-Af\|_{2}^{2}\leq\frac{1}{\pi^{2}}\max\{2\pi\|g\|_{\infty},\|g ^{\prime}\|_{\infty}\}^{2}\sum_{k\in\mathbb{Z}}\delta_{\frac{k}{2}}^{2}\mu_{ \frac{k}{2}}^{2}\Big{(}\alpha*\chi_{(-2m-1,2m)}(k)\Big{)}^{2}.\] The right-hand side can readily be estimated with Young's inequality as \[\sum_{k\in\mathbb{Z}}\delta_{\frac{k}{2}}^{2}\mu_{\frac{k}{2}}^{2} \Big{(}\alpha*\chi_{(-2m-1,2m)}(k)\Big{)}^{2} \leq\sup_{k\in\mathbb{Z}}\bigl{(}\delta_{\frac{k}{2}}^{2}\mu_{\frac {k}{2}}^{2}\bigr{)}\|\alpha*\chi_{(-2m-1,2m)}\|_{2}^{2}\] \[\leq\sup_{k\in\mathbb{Z}}\bigl{(}\delta_{\frac{k}{2}}^{2}\mu_{ \frac{k}{2}}^{2}\bigr{)}\|\chi_{(-2m-1,2m)}\|_{1}^{2}\|\alpha\|_{2}^{2}\] \[\leq\sup_{k\in\mathbb{Z}}\bigl{(}\delta_{\frac{k}{2}}^{2}\mu_{ \frac{k}{2}}^{2}\bigr{)}(4m)^{2}\sum_{k\in\mathbb{Z}}\|P_{k}\|_{2}^{2}\] since \(\|\chi_{(-2m-1,2m)}\|_{1}=4m\). Defining \(D\coloneqq 4m\)\(\max\{2\pi\|g\|_{\infty},\|g^{\prime}\|_{\infty}\}\), we rewrite \[\|f-Af\|_{2}^{2}\leq\frac{1}{\pi^{2}}D^{2}\sup_{k\in\mathbb{Z}}\bigl{(}\delta_ {\frac{k}{2}}^{2}\mu_{\frac{k}{2}}^{2}\bigr{)}\sum_{k\in\mathbb{Z}}\|P_{k}\|_ {2}^{2}.\] Let \(\gamma^{2}\coloneqq\frac{1}{\pi^{2}}D^{2}\sup_{k\in\mathbb{Z}}\bigl{(}\delta_ {\frac{k}{2}}^{2}\mu_{\frac{k}{2}}^{2}\bigr{)}\). Since for every \(k\in\mathbb{Z}\) we have \(\frac{1}{\pi^{2}}D^{2}\delta_{\frac{k}{2}}^{2}\mu_{\frac{k}{2}}^{2}<1\) by hypothesis, then \(\gamma<1\). Hence, \[\|f-Af\|_{2}^{2}\leq\gamma^{2}\sum_{k\in\mathbb{Z}}\|P_{k}\|_{2}^{2}=\gamma^{ 2}\sum_{k\in\mathbb{Z}}\sum_{l=0}^{b(k)}|\langle f,\psi_{k,l}\rangle|^{2}= \gamma^{2}\|f\|_{2}^{2}. \tag{4.5}\] Then the operator \(A\) is invertible on \(PW_{b}^{2}(g,\mathbb{R})\). The stability of the reconstruction follows from: \[\|f\|_{2}=\|A^{-1}Af\|_{2}\leq\|A^{-1}\|_{\mathrm{op}}\|P\|_{\mathrm{op}}\Bigl{\|} \sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{\max}(\frac{k}{2})}f\bigl{(}x_{\frac{k}{2},j}\bigr{)}\chi_{\frac{k}{2},j}\Bigr{\|}_{2}.\] Since \(\|P\|_{\mathrm{op}}\leq 1\) and \[\|A^{-1}\|_{\mathrm{op}}=\Bigl{\|}\sum_{n=0}^{\infty}(\mathrm{Id}-A)^{n} \Bigr{\|}_{\mathrm{op}}\leq\sum_{n=0}^{\infty}\|\mathrm{Id}-A\|_{\mathrm{op}}^ {n}=\frac{1}{1-\gamma},\] then \[\|f\|_{2}\leq\frac{1}{1-\gamma}\Bigl{\|}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{ \max}(\frac{k}{2})}f\bigl{(}x_{\frac{k}{2},j}\bigr{)}\chi_{\frac{k}{2},j} \Bigr{\|}_{2}=\frac{1}{1-\gamma}\Bigl{(}\sum_{k\in\mathbb{Z}}\sum_{j=1}^{j_{ \max}(\frac{k}{2})}|f(x_{\frac{k}{2},j})|^{2}w_{\frac{k}{2},j}\Bigr{)}^{1/2}\,, \tag{4.6}\] where \(w_{\frac{k}{2},j}=\int\chi_{\frac{k}{2},j}\). Hence, \(f\) is completely determined by \(f(x_{\frac{k}{2},j})\) for every \(k\in\mathbb{Z}\) and \(j=0,...,j_{\max}(\frac{k}{2})\). In addition, (4.6) is the required (weighted) sampling inequality. The upper sampling inequality is obtained similarly. The theorem presents a sufficient condition involving the maximal gap between consecutive points of the sampling set \(\Lambda\). 
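The reconstruction mechanism in the proof is simply operator inversion by a Neumann series: once \(\|\mathrm{Id}-A\|_{\mathrm{op}}\leq\gamma<1\), \(f\) is recovered from \(Af\), which is built from the samples alone. The following minimal numerical sketch illustrates this principle in finite dimensions; the matrix is a generic stand-in with the same contraction property, not the operator \(A\) of the proof.

```python
import numpy as np

# Illustrative finite-dimensional stand-in for the approximation
# operator A: any matrix with ||I - A||_2 <= gamma < 1 is invertible,
# and f = sum_{n>=0} (I - A)^n (A f) recovers f from Af alone.
rng = np.random.default_rng(0)
dim = 50
E = rng.standard_normal((dim, dim))
E *= 0.5 / np.linalg.norm(E, 2)          # force ||I - A||_2 = gamma = 0.5
A = np.eye(dim) - E

f = rng.standard_normal(dim)             # the "unknown" function
Af = A @ f                               # what the samples give us

recon = np.zeros(dim)
term = Af.copy()
for n in range(60):                      # Neumann series; error decays like gamma^n
    recon += term
    term = E @ term                      # next term (I - A)^{n+1} Af

print(np.linalg.norm(f - recon) / np.linalg.norm(f))  # ~ machine precision
```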
The constant \(D\) defined in (4.2) captures the dependence of the maximal gap on the features of the window, while \(\mu_{k/2}\) in (4.3) captures the maximal bandwidth that influences the interval \([k/2,k/2+1/2)\). The maximal bandwidth over each interval \([k/2,k/2+1/2)\) represents the degree of a trigonometric polynomial on that interval. This observation highlights that the value \(\mu_{k/2}\) provides a more accurate measure of local bandwidth than \(b(k)\). With this constructive method, we obtain quantitative results that could serve as a basis for numerical experiments. Note that separation of the points is not required for the proof: any local variation of the density can be addressed by the adaptive weights method. However, if the points are separated, a lower bound for the weights is available.

_Remark_. The same methodology can be employed when a Wilson orthonormal basis is replaced by a Wilson Riesz basis. Interestingly, the use of \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l=0,\ldots,b(k)}\) as an orthonormal basis is only crucial in the final equality of (4.5). If \(\{\psi_{k,l}\}_{k\in\mathbb{Z},l=0,\ldots,b(k)}\) is merely a Wilson Riesz basis, the sufficient condition has to be adjusted to account for the Riesz bounds.
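As a concrete illustration of how the sufficient condition of Theorem 4.1 could be evaluated numerically, the sketch below computes \(D\) and the maximal admissible gaps for a toy window and bandwidth profile. Both the raised-cosine window and the profile \(b(k)\) are illustrative assumptions, not data from this paper.

```python
import numpy as np

# Toy evaluation of the sampling condition of Theorem 4.1:
# delta_{k/2} < pi / (mu_{k/2} * D),  D = 4m * max(2*pi*||g||_inf, ||g'||_inf).
m = 1                                    # supp(g) in [-m, m]
x = np.linspace(-m, m, 200001)
g = np.cos(np.pi * x / 2) ** 2           # C^1 raised-cosine window (illustrative)
gp = np.gradient(g, x)                   # numerical derivative for ||g'||_inf

D = 4 * m * max(2 * np.pi * np.max(np.abs(g)), np.max(np.abs(gp)))

def b(n):                                # illustrative variable-bandwidth profile
    return 4 if abs(n) < 10 else 1

for k in range(-2, 3):
    # mu_{k/2}: largest b(n)+1 over the neighbouring indices n in (k-2m, k+2m+1)
    mu = max(b(n) + 1 for n in range(k - 2 * m + 1, k + 2 * m + 1))
    print(f"k={k:+d}: max allowed gap on [k/2, k/2+1/2) is {np.pi / (mu * D):.4f}")
```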
2307.07822
Design Analysis and Experimental Validation of Relaxation Oscillator-Based Circuit for R-C Sensors
Relaxation oscillator-based circuits are widely used for interfacing various resistive and capacitive sensors. The electrical equivalent of most resistive and capacitive sensors is represented using a parallel combination of resistor and capacitor. The relaxation oscillator-based circuits are not suitable for parallel R-C sensors. In this paper, we propose a modified circuit for parallel R-C sensors. The proposed relaxation oscillator-based circuit is based on a dual-slope and charge transfer technique to measure the resistance and capacitance of parallel R-C sensors separately. In addition, the paper provides a detailed analysis and design considerations for the oscillator design by taking into account the various sources of non-idealities. A method to reduce the error by using single-cycle averaging is also introduced. To verify the analyzed design criteria, the circuit is tested with multiple operational amplifiers with different non-idealities. Experimental results verify the performance of the proposed circuit. The circuit is tested for a range from 10 pF to 42 pF and 100 k$\Omega$ to 1 M$\Omega$ for parallel R-C sensors with an error of less than 1.5\%. The circuit is tested with a fabricated water-level sensor. The result confirms the efficacy of the proposed circuit.
Mohamad Idris Wani, Sadan Saquib Khan, Benish Jan, Meraj Ahmad, Maryam Shojaei Baghini, Laxmeesha Somappa, Shahid Malik
2023-07-15T15:03:09Z
http://arxiv.org/abs/2307.07822v1
# Design analysis and experimental validation of relaxation oscillator-based circuit for R-C sensors ###### Abstract. Relaxation oscillator-based circuits are widely used for interfacing various resistive and capacitive sensors. The electrical equivalent of most resistive and capacitive sensors is represented using a parallel combination of resistor and capacitor. The relaxation oscillator-based circuits are not suitable for parallel R-C sensors. In this paper, we propose a modified circuit for parallel R-C sensors. The proposed relaxation oscillator-based circuit is based on a dual-slope and charge transfer technique to measure the resistance and capacitance of parallel R-C sensors separately. In addition, the paper provides a detailed analysis and design considerations for the oscillator design by taking into account the various sources of non-idealities. A method to reduce the error by using single-cycle averaging is also introduced. To verify the analyzed design criteria, the circuit is tested with multiple operational amplifiers with different non-idealities. Experimental results verify the performance of the proposed circuit. The circuit is tested for a range from 10 pF to 42 pF and 100 k\(\Omega\) to 1 M\(\Omega\) for parallel R-C sensors with an error of less than 1.5%. The circuit is tested with a fabricated water-level sensor. The result confirms the efficacy of the proposed circuit. **Introduction** Resistive and capacitive sensors are very popular and widely used in the scientific, biomedical, and industrial applications for measurement of force, displacement, pressure, humidity, temperature, liquid level, and bio-medical implants [1, 2, 3, 4, 5]. The signal conditioning circuits for these sensors are generally based on bridge-based circuits, relaxation oscillator-based resistance/capacitance to frequency converters, and direct digital converters [6, 7, 8, 9]. Bridge circuits are the fundamental building block for resistance measurement. Wheatstone bridge-based circuits are preferred for measuring a small variation in sensor resistance. However, the single-element resistive sensors' output voltage is non-linear with respect to the change in the sensor resistance [10]. This is especially critical when the sensor resistance change is large compared to the baseline resistance. This affects the sensitivity for wide dynamic range measurement. On the other hand, the relaxation oscillator-based circuits provide a wide-dynamic range measurement of sensor resistance by linearly converting the resistance into frequency. Most of the signal conditioning circuits for resistive and capacitive sensors are based on the assumption that the sensor is ideal and can be represented by either a resistor in the case of the resistive sensors or a capacitor in the case of the capacitive sensors [6, 9, 11, 12, 13]. However, the assumption is not valid for many sensing applications. For instance, the resistive sensors are affected by a parasitic capacitance [2]. Similarly, most of the capacitive sensors are affected by a leakage resistance [14, 15]. Therefore, the electrical equivalent circuit for most of the resistive and capacitive sensors consists of a resistor and capacitor in parallel [2]. In such sensors, one component carries information about the measurand, while the other component is present because of the non-ideal nature of the sensor which is unpredictable. Therefore, its effect should be compensated at the circuit level, or it degrades the performance of the sensor system. 
Moreover, in some sensors, both sensing components are a function of the measurand and need simultaneous measurement [16, 17]. The signal conditioning circuits for parallel R-C sensors are mostly based on the phase-sensitive detection (PSD) technique. The PSD-based techniques require an in-phase and a quadrature reference signal to separately measure the resistance and capacitance of parallel R-C sensors. Auto-nulling and PSD-based signal conditioning circuits for parallel R-C sensors are reported in [18, 19, 20]. These circuits periodically provide the measurement of the R-C parameters of the sensor. However, they require analog multipliers and an auto-nulling loop, which increases complexity and power consumption. Another PSD-based signal conditioning circuit for parallel R-C sensors is reported in [21]. The circuit provides simultaneous measurement of the sensor capacitance and resistance. However, the output of the signal conditioning circuit is sensitive to many circuit parameters and component non-idealities, which affects the robustness of the sensor system. Moreover, the output voltage is sensitive to noise, affecting the measurement resolution.

Thanks to inherent properties such as the quasi-digital output and high noise immunity, relaxation oscillator-based circuits are preferred for lossless resistive and capacitive sensors [7, 8, 22, 23], as shown in Fig. 1.

Figure 1. Schematic diagram of the relaxation oscillator circuit for a resistive/capacitive sensor. In the case of an ideal capacitive sensor, \(C_{x}\) is the sensing variable, and \(R_{x}\) is the integrator resistance. In the case of an ideal resistive sensor, \(R_{x}\) is the sensing element, and \(C_{x}\) is the integrator capacitance as a reference element.

The oscillator-based circuits reported in [23, 24, 25] are designed for the measurement of large variations in sensor resistance in the presence of parasitic capacitance. However, the design considerations for the desired accuracy are not presented in [23, 24]. Another oscillator-based signal conditioning circuit for parallel R-C sensors is reported in [14]; however, the measurement range for the capacitance measurement is very small (\(<\)1 pF). This limits the range of applications of the circuit reported in [14]. The initial results of the relaxation-oscillator circuit for capacitance measurement are reported in [25]. In this paper, we present the extended analysis, design considerations, experimental validation of the performance parameters, and sensor testing with the relaxation oscillator-based signal conditioning circuits. The features of the paper are highlighted as follows.

* The proposed oscillator combines dual-slope and charge-transfer techniques to separately measure the resistance and capacitance of parallel R-C sensors.
* Detailed analysis, including various sources of non-idealities, is included in the paper. The effect of various component non-idealities on the measurement of sensor parameters is analyzed.
* A single-cycle averaging method is utilized, which significantly reduces the effects of component non-idealities on the measurement of sensor parameters.
* The design criteria for the selection of components for designing the proposed signal conditioning circuit are also included in the paper.
* The proposed circuit is tested for different sensing conditions, and the measurements are compared with the derived analytical solutions.
**Conventional Relaxation Oscillator for Resistive-Capacitive Sensors**

The schematic diagram of the relaxation oscillator-based signal conditioning circuit (with triangular and square-wave outputs) for resistive and capacitive sensors is shown in Fig. 1. The circuit consists of an integrator and a Schmitt trigger. The operational amplifier \(OA_{1}\), along with the resistance \(R_{x}\) and capacitor \(C_{x}\), forms the integrator of the oscillator. In the case of a capacitive sensor, \(C_{x}\) is replaced by the sensor element, and \(R_{x}\) is implemented using a known resistor. Similarly, in the case of resistive sensors, \(R_{x}\) is the sensor element (considered to be lossless in Fig. 1), and \(C_{x}\) is the reference element. Signal conditioning circuits based on the conventional relaxation oscillator for resistive and capacitive sensors are reported in [7, 8, 22]. The integrator output in Fig. 1 is compared with the threshold voltage of the Schmitt trigger (implemented using \(R_{1}\), \(R_{2}\), and Op-Amp \(OA_{2}\)). The expression for the voltage \(V_{R}(t)\) can then be derived as follows. \[V_{R}(t)=-\frac{1}{R_{x}C_{x}}\ \int_{0}^{t}V_{p}\ dt \tag{1}\] where \(V_{p}\) is the peak voltage at the output of the Schmitt trigger. The threshold voltage of the Schmitt trigger is given as \(\pm V_{p}\times(R_{1}/R_{2})\). The capacitor \(C_{x}\) is initially charged with voltage \(|V_{p}|\) at \(t=0\). At \(t=T/2\), the voltage \(V_{R}(t)\) can be written as follows. \[V_{R}\left(t=\frac{T}{2}\right)=-\frac{T\;V_{p}}{2\;R_{x}\;C_{x}}=-2\;V_{p}\;\frac{R_{1}}{R_{2}} \tag{2}\] The expression for the oscillation time period \(T\) of the square-wave output signal \(V_{X}(t)\) can be derived as follows. \[T=4\;R_{x}\;C_{x}\;\frac{R_{1}}{R_{2}} \tag{3}\] Expression (3) indicates that the period \(T\) of the square-wave output of the relaxation oscillator circuit is proportional to the sensor variable (either \(C_{x}\) or \(R_{x}\) at a time). However, the conventional relaxation oscillator circuit in Fig. 1 is not suitable for leaky capacitive and parallel R-C sensors. The accuracy of relaxation oscillator-based circuits for sensing applications is highly dependent on the output being symmetrical in both positive and negative polarity. This can be achieved either by using a network of Zener diodes or by using a rail-to-rail operational amplifier for the Schmitt trigger. Further, a current-limiting resistor can also be used to ensure safety [26]. In this paper, we assume that the output voltage of the Schmitt trigger is symmetrical.

**Modified Relaxation Oscillator for Impedance R-C Sensors**

The modified relaxation oscillator circuit with the parallel \(R_{x}\) and \(C_{x}\) (electrical equivalent of a resistive sensor) at the inverting terminal of the integrator is shown in Fig. 2.

Figure 2. Schematic diagram of the proposed modified relaxation oscillator for parallel R-C sensors.

The capacitor \(C_{x}\) is active only during the switching of the square-wave voltage \(V_{X}(t)\) from low to high and vice versa. Charge transfer occurs between the capacitors \(C_{x}\) and \(C_{i}\) during each of these transitions. For the rest of the cycle, the capacitor \(C_{i}\) charges/discharges through \(R_{x}\) with a linear slope. \(C_{p}\) is the parasitic capacitance of the op-amp.
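Before turning to the operating principle, a quick numeric reference for the conventional circuit of Eq. (3) may help fix orders of magnitude; the component values below are arbitrary examples, not the values used in this paper's experiments.

```python
# Oscillation period of the conventional relaxation oscillator, Eq. (3):
# T = 4 * R_x * C_x * (R1 / R2). Component values are arbitrary examples.
R_x = 330e3        # ohm
C_x = 33e-12       # farad
R1, R2 = 10e3, 20e3

T = 4 * R_x * C_x * (R1 / R2)
print(f"T = {T * 1e6:.2f} us, f = {1 / T / 1e3:.2f} kHz")  # ~21.78 us, ~45.9 kHz
```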
**Oscillator Operating Principle**

The expression for the voltage \(V_{R}(t)\) in Fig. 2 can be derived as follows. \[V_{R}(t)=\left(\frac{R_{1}}{R_{2}}-\frac{2\;C_{x}}{C_{i}}\right)V_{p}-\int_{0}^{t}\frac{V_{p}}{R_{x}\;C_{i}}dt \tag{4}\] The factor \(2C_{x}/C_{i}\) is due to the charge transfer between \(C_{x}\) and \(C_{i}\). The sudden charge transfer during the transition affects the initial voltage at the output \(V_{R}(t)\). At \(t=T/2\), \(V_{R}(t)\) can be simplified as follows. \[V_{R}(t)=\left(\frac{R_{1}}{R_{2}}-\frac{2\;C_{x}}{C_{i}}\right)V_{p}-\frac{T\;V_{p}}{2\;R_{x}\;C_{i}}=-V_{p}\frac{R_{1}}{R_{2}} \tag{5}\] Therefore, the oscillation period \(T\) at the output node \(V_{x}\) can be expressed as follows. \[T=4\;R_{x}\;C_{i}\;\left(\frac{R_{1}}{R_{2}}-\frac{C_{x}}{C_{i}}\right) \tag{6}\] The oscillation period \(T\) at the output node \(V_{x}\) of the circuit in Fig. 2 is proportional to \(R_{x}\) and \(C_{x}\). However, the expression for the time period \(T\) alone is not enough to separately estimate both the capacitance \(C_{x}\) and the resistance \(R_{x}\) of the sensor. Hence, additional information is necessary to measure both parameters separately. To measure both parameters \(R_{x}\) and \(C_{x}\), a zero-crossing detector (ZCD) (implemented using \(OA_{3}\)) and an XOR gate are added to separate the sensor components (\(R_{x}\) & \(C_{x}\)) into equivalent pulse widths. The ZCD separates the ramp signal into two parts: the first part is proportional to the charge transfer, and the second part is proportional to the sensor resistance, as represented by \(T_{p1}\) and \(T_{p2}\) in Fig. 3. The expressions for \(T_{p1}\) and \(T_{p2}\) can be derived as follows. \[T_{p1}=R_{x}\;C_{i}\left(\frac{R_{1}}{R_{2}}-\frac{2\;C_{x}}{C_{i}}\right) \tag{7}\] \[T_{p2}=R_{x}\;C_{i}\;\frac{R_{1}}{R_{2}} \tag{8}\] The unknown impedance \((R_{x},C_{x})\) can then be estimated from \(T_{p1}\) and \(T_{p2}\) as follows. \[C_{x}=\frac{R_{1}}{R_{2}}\frac{C_{i}}{2}\left(1-\frac{T_{p1}}{T_{p2}}\right) \tag{9}\] \[R_{x}=\frac{R_{2}}{C_{i}R_{1}}T_{p2} \tag{10}\]
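A minimal numerical sketch of this ideal recovery, Eqs. (7)-(10), is given below; the component values \(R_{1}\), \(R_{2}\), and \(C_{i}\) and the sensor values are arbitrary examples chosen only to satisfy \(R_{1}/R_{2}>2C_{x}/C_{i}\), not the values used in the experiments.

```python
# Ideal recovery of the parallel R-C sensor values from the two pulse
# widths, Eqs. (9) and (10). All component values are arbitrary examples.
R1, R2, C_i = 10e3, 20e3, 100e-12
alpha = R1 / R2                                   # Schmitt-trigger ratio

def estimate_rc(T_p1, T_p2):
    C_x = alpha * (C_i / 2) * (1 - T_p1 / T_p2)   # Eq. (9)
    R_x = (R2 / (R1 * C_i)) * T_p2                # Eq. (10), i.e. T_p2/(alpha*C_i)
    return R_x, C_x

# Forward model (Eqs. 7-8) for a known sensor, then invert:
R_x, C_x = 330e3, 20e-12
T_p1 = R_x * C_i * (alpha - 2 * C_x / C_i)        # Eq. (7)
T_p2 = R_x * C_i * alpha                          # Eq. (8)
print(estimate_rc(T_p1, T_p2))                    # -> approximately (330000.0, 2e-11)
```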
**Oscillator Performance Analysis with Component Non-idealities**

The output periods of the developed relaxation oscillator circuit are affected by many circuit parameters and must be analyzed to design the circuit for a desired target resolution. The following non-idealities are considered.

* Op-Amp finite GBW, \(A_{0}\omega_{0}\)
* Op-Amp input bias current, \(i_{b}\)
* Op-Amp slew rate, SR
* Schmitt trigger and Op-Amp offset voltage, \(\mathrm{V_{os}}\)
* ZCD comparator offset, \(\mathrm{V_{oz}}\)
* ZCD comparator response delay, \(\mathrm{\tau_{Z,LH}},\mathrm{\tau_{Z,HL}}\)
* Schmitt trigger response delay, \(\mathrm{\tau_{S,LH}},\mathrm{\tau_{S,HL}}\)

Figure 3. Timing diagram of the proposed oscillator shown in Fig. 2.

The propagation time of the XOR gate will also affect the periods. The parasitic capacitance of the XOR gate also contributes to the propagation time. Usually, the propagation time of the XOR gate is in the ns range, which is smaller than that of the zero-crossing detector and the Schmitt trigger used in the circuit. Therefore, the effect of the XOR gate can be considered negligible for the circuit. If needed, the propagation time of the XOR gate can be accounted for in the equations by adding its rise and fall times to those of the comparator output. The steps for analyzing the effect of component non-idealities on the measurement of sensor parameters are as follows.

1. First, we analyze the effect of the op-amp non-idealities on the slope and the offset of the integrator. The change in the integration slope and the effective integrator output offset is the primary concern for the oscillator's performance.
2. Next, we consider the effect on the oscillation period of the component non-idealities in the ZCD and the Schmitt trigger, with the non-ideal integrator output applied.
3. Finally, we combine all the non-idealities to derive the oscillation period at the output of the XOR gate.

The detailed analysis of the component non-idealities is as follows.

**Effect of the op-amp non-idealities on the slope and the offset of the integrator**

First, considering the finite GBW product, the integrator output voltage when driven by a step voltage \(V_{p}\) can be derived as follows. \[V_{R}(s)=\frac{-V_{p}A_{0}\omega_{0}}{s^{2}R_{x}(C_{x}+C_{p}+C_{i})}\Bigg{(}\frac{1+sC_{x}R_{x}}{s+\Big{(}\frac{A_{0}\omega_{0}R_{x}C_{i}+1}{R_{x}(C_{x}+C_{p}+C_{i})}\Big{)}}\Bigg{)} \tag{11}\] The time-domain voltage \(V_{R}(t)\) is obtained by transforming Eq. 11 into the time domain, taking the initial value of \(V_{R}(t)\) as \(V_{P}(\alpha-2X)\), where \(\alpha=R_{1}/R_{2}\) and \(X=C_{x}/C_{i}\): \[\begin{split} V_{R}(t)=& V_{p}(\alpha-2X)-\frac{V_{p}}{R_{x}C_{i}}\Bigg{\{}\overbrace{\frac{t}{1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}}}^{\text{Integration slope reduction due to finite GBW}}-\\ &\underbrace{\frac{1}{\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)}^{2}}\frac{1}{A_{0}\omega_{0}}\Big{[}1+\big{(}\frac{C_{x}+C_{p}}{C_{i}}\big{)}+\frac{C_{x}}{C_{i}}(1+A_{0}\omega_{0}R_{x}C_{i})\Big{]}e^{-\Big{(}\frac{A_{0}\omega_{0}R_{x}C_{i}+1}{R_{x}(C_{x}+C_{p}+C_{i})}\Big{)}t}}_{\text{Offset voltage due to finite GBW}}\Bigg{\}}\end{split} \tag{12}\] The integrator output voltage contains additional terms due to the Op-Amp non-idealities, as seen from Eq. 12.

* The first term indicates that the finite GBW product reduces the slope of the linear ramp voltage at the integrator output by a factor \(1+1/(\mathrm{A_{0}\omega_{0}R_{x}C_{i}})\).
* The second term offers insight into the offset voltage introduced at the integrator output due to the finite GBW product, and additionally due to the impedance/leaky-capacitive sensor. The effect of the exponential transient term is small enough to be neglected. We denote the final offset term introduced due to the finite GBW by \(\gamma\); it is given by \[\gamma=\Bigg{(}1+\frac{C_{x}+C_{p}}{C_{i}}+\frac{C_{x}}{C_{i}}(1+A_{0}\omega_{0}R_{x}C_{i})\Bigg{)}\frac{A_{0}\omega_{0}R_{x}C_{i}}{(1+A_{0}\omega_{0}R_{x}C_{i})^{2}} \tag{13}\]

Further, considering that the Op-Amp in the integrator has an offset voltage \(V_{os}\) and a bias current \(i_{b}\), the total offset voltage presented by the integrator is \(\mathrm{V_{os}^{\prime}=V_{os}+i_{b}R_{x}}\). A similar analysis of the integrator with a finite-GBW Op-Amp in the presence of this offset term shows that its effect is evaluated simply by shifting the input by the equivalent offset voltage. Further, the slope of the integrator output waveform (triangular) is limited by the slew rate (SR) of the Op-Amp used in the integrator. Hence, the slope (SL) of the integrator output is given by Eq. 14, where the positive sign corresponds to \(SL_{(-)}\) and the minus sign corresponds to \(SL_{(+)}\).
\[SL=min\Bigg{\{}\frac{V_{p}\pm(V_{os}^{\prime}+i_{b}R_{x})}{R_{x}C_{i}\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)}},SR\Bigg{\}} \tag{14}\] Referring to Fig. 3, we now obtain the individual expressions for \(T_{P1}\) and \(T_{P2}\), from which the measured \(R_{x}\) and \(C_{x}\) values follow. \(T_{P1}\) can be expressed as \[\begin{split} T_{P1}=max\Bigg{\{}&\Bigg{[}\frac{(\alpha-\gamma-2X-V_{oz}/V_{p})}{1+(V_{os}^{\prime}+i_{b}R_{x})/V_{p}}\Bigg{]}R_{x}C_{i}\\ &\cdot\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)},\frac{(\alpha-2X)V_{p}}{SR}\Bigg{\}}\end{split} \tag{15}\] Similarly, \(T_{P2}\) can be obtained as \[\begin{split} T_{P2}=max\Bigg{\{}&\Bigg{[}\frac{(\alpha+\gamma+V_{oz}/V_{p})}{1+(V_{os}^{\prime}+i_{b}R_{x})/V_{p}}\Bigg{]}R_{x}C_{i}\\ &\cdot\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)},\frac{(\alpha-2X)V_{p}}{SR}\Bigg{\}}\end{split} \tag{16}\]

**Timing non-idealities in the ZCD and the Schmitt trigger**

Let \(\tau_{S,HL}\) and \(\tau_{S,LH}\) be the high-to-low and low-to-high response delays of the Schmitt trigger, and \(\tau_{Z,HL}\) and \(\tau_{Z,LH}\) the high-to-low and low-to-high response delays of the ZCD. Accounting for these delays, the expressions for \(T_{P1}\) and \(T_{P2}\) in Eq. 15 and Eq. 16 are refined as Eq. 17 and Eq. 18, respectively. \[T_{P1}=max\Bigg{\{}R_{x}C_{i}(\alpha-2X)\Bigg{[}\frac{1-\frac{\gamma+V_{oz}/V_{p}}{\alpha-2X}}{(1+V_{os}^{\prime}/V_{p})}\Bigg{]}\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)},\frac{(\alpha-2X)V_{p}}{SR}\Bigg{\}}+\frac{\tau_{S,LH}+\tau_{Z,LH}}{2} \tag{17}\] \[T_{P2}=max\Bigg{\{}\alpha R_{x}C_{i}\Bigg{[}\frac{1+\frac{\gamma+V_{oz}/V_{p}}{\alpha}}{(1+V_{os}^{\prime}/V_{p})}\Bigg{]}\Big{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Big{)},\frac{(\alpha-2X)V_{p}}{SR}\Bigg{\}}+\frac{\tau_{S,HL}+\tau_{Z,HL}}{2} \tag{18}\]

**Combined effect of the non-idealities on the measurement**

From the combined non-idealities represented in Eq. 18, the expression for the sensor resistance \(R_{x}\) can be inferred as follows. \[R_{x}=\frac{T_{P2}}{\alpha C_{i}}\Bigg{[}\frac{1}{\beta_{1}}-\frac{\alpha}{A_{0}\omega_{0}}\Bigg{]} \tag{19}\] where \[\beta_{1}=\frac{1+(\gamma+V_{oz}/V_{p})/\alpha}{1+V_{os}^{\prime}/V_{p}} \tag{20}\] The expression for \(R_{x}\) shows that the measured sensor parameter is significantly affected by the non-idealities of the circuit components. Similarly, for capacitive and parallel R-C sensors, the measured capacitance \(C_{x}\) can be derived from Eq. 17 and Eq. 18 as follows. \[\begin{split} C_{x}=\frac{\alpha C_{i}}{2+\Bigg{(}\frac{A_{0}\omega_{0}R_{x}C_{i}}{1+A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)}}\Bigg{[}1-\Big{(}\frac{T_{P1}}{T_{P2}}\Big{)}\Bigg{(}\frac{1+V_{os}^{\prime}/V_{p}}{\frac{1}{\beta_{1}}-\frac{\alpha}{A_{0}\omega_{0}}}\Bigg{)}\\ \Big{(}\frac{1}{1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}}\Big{)}-\frac{V_{oz}}{\alpha V_{p}}\Bigg{]}\end{split} \tag{21}\] Eq. 19 and Eq. 21 provide the measured sensor resistance and capacitance values \(R_{x}\) and \(C_{x}\) with the component non-idealities. The ideal values of the measured sensor resistance and capacitance are \(R_{x}=\frac{T_{P2}}{\alpha C_{i}}\) and \(C_{x}=\frac{\alpha C_{i}}{2}(1-\frac{T_{P1}}{T_{P2}})\). Comparing the \(R_{x}\) and \(C_{x}\) values obtained from Eq. 19 and Eq. 21 with the ideal values, it is evident that a large error is introduced due to the component non-idealities. The analytical solutions for \(C_{x}\) and \(R_{x}\) derived in Eq. 21 and Eq. 19 are evaluated by including the non-idealities of commercial operational amplifiers.
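A sketch of such an evaluation is given below, using the OPA177 parameters from Table 1. The operating point (\(V_{p}\), \(V_{oz}\), \(C_{p}\), and the circuit values) is an assumption chosen only for illustration, so the resulting numbers indicate the order of magnitude of the effect rather than reproducing the worst-case figures quoted next.

```python
import numpy as np

# Sketch: size of the finite-GBW offset gamma (Eq. 13) and the correction
# factor beta_1 (Eq. 20) for an OPA177-class op-amp (GBW/(2*pi) ~ 0.6 MHz,
# V_os ~ 0.6 mV from Table 1; V_p, V_oz, C_p and circuit values are assumed).
A0w0 = 2 * np.pi * 0.6e6            # rad/s
V_p, V_os, V_oz = 5.0, 0.6e-3, 1e-3
R_x, C_i, C_x, C_p = 330e3, 100e-12, 33e-12, 5e-12
alpha = 0.5

x = A0w0 * R_x * C_i                # the product A0*w0*Rx*Ci
gamma = (1 + (C_x + C_p) / C_i + (C_x / C_i) * (1 + x)) * x / (1 + x) ** 2  # Eq. 13
beta1 = (1 + (gamma + V_oz / V_p) / alpha) / (1 + V_os / V_p)               # Eq. 20

# Relative deviation of T_P2 (Eq. 18, ignoring SR limit and delays) from
# its ideal value alpha*Rx*Ci (Eq. 8):
T_P2_ideal = alpha * R_x * C_i
T_P2 = alpha * R_x * C_i * beta1 * (1 + 1 / x)
print(f"gamma = {gamma:.4f}, beta1 = {beta1:.4f}")
print(f"T_P2 error if uncorrected: {100 * (T_P2 / T_P2_ideal - 1):.1f} %")
```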
The results show that even for a precision op-amp such as the OPA177FP from Analog Devices, the worst-case error in the measurement due to component non-idealities is as high as 30% for the \(R_{x}\) measurement and 60% for the \(C_{x}\) measurement.

**Accuracy Enhancement with Single Cycle Averaging**

The timing waveform of the proposed circuit with an offset voltage \(V_{os}\) due to component non-idealities is shown in Fig. 4. The ideal timing waveforms \(T_{p1}\), \(T_{p2}\), \(T_{p3}\), and \(T_{p4}\) are shown in Fig. 4 as the ideal \(V_{z}(t)\). However, as derived in Eq. 12, the zero-crossing point of \(V_{R}(t)\) is shifted by an offset voltage \(V_{os}\) due to component non-idealities such as the finite GBW and the offset voltage of the operational amplifiers. This affects the practical \(V_{z}(t)\), as shown in Fig. 4. Consider \(T_{of}\) to be the corresponding offset time between the ideal and the practical output waveform \(V_{z}(t)\) due to the finite offset voltage \(V_{os}\). From Fig. 4, if the transition that ends \(T_{p1}\) (from \(-V_{p}\) to \(+V_{p}\)) occurs \(T_{of}\) earlier than ideal, then the transition that ends \(T_{p3}\) (from \(+V_{p}\) to \(-V_{p}\)) occurs \(T_{of}\) later than the ideal transition. Therefore, the practical expressions for the time periods can be written as \(T^{\prime}_{p1}=T_{p1}-T_{of}\) and \(T^{\prime}_{p3}=T_{p3}+T_{of}\). Averaging \(T^{\prime}_{p1}\) and \(T^{\prime}_{p3}\) will eliminate the effect of the mismatch in the time period between \(T_{p1}\) and \(T_{p3}\). The same is true for \(T_{p2}\) and \(T_{p4}\). Therefore, intuitively, it is possible to reduce the effect of component non-idealities by single-cycle averaging. It can be seen from the waveform that the pattern repeats after \(T^{\prime}_{p4}\). Therefore, averaging more than one cycle does not further enhance the accuracy of the sensor parameter measurement with respect to component non-idealities. In order to analyze the effect of averaging, we have derived the expressions for the periods \(T_{p3}\) and \(T_{p4}\) in Eq. 22 and Eq. 23, respectively. The measurement of \(T_{p3}\) and \(T_{p4}\) is useful to understand the effect of non-idealities in the second half of one complete cycle of \(V_{x}(t)\).

Figure 4. The ideal and practical \(V_{z}(t)\) of the proposed relaxation oscillator-based circuit shown in Fig. 2. The offset due to the component non-idealities is represented as \(V_{os}\). This offset affects the periods \(T_{p1}\), \(T_{p2}\), \(T_{p3}\), and \(T_{p4}\). The practical periods including the offset are represented by \(T^{\prime}_{p1}\), \(T^{\prime}_{p2}\), \(T^{\prime}_{p3}\), and \(T^{\prime}_{p4}\). Averaging the alternate cycles of the practical \(V_{z}(t)\) significantly reduces the error.

\[T_{1}=\frac{T_{P1}+T_{P3}}{2}=max\Bigg{\{}R_{x}C_{i}(\alpha-2X)\Bigg{[}\frac{1+(\frac{V^{\prime}_{os}}{V_{p}})(\frac{\gamma+V_{oz}/V_{p}}{\alpha-2X})}{1-(V^{\prime}_{os}/V_{p})^{2}}\Bigg{]}\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)},\frac{2V_{p}(\alpha-X)}{SR}\Bigg{\}}+\frac{\tau_{S}+\tau_{Z}}{2} \tag{24}\] \[T_{2}=\frac{T_{P2}+T_{P4}}{2}=max\Bigg{\{}\alpha R_{x}C_{i}\Bigg{[}\frac{1-(\frac{V^{\prime}_{os}}{\alpha V_{p}})(\gamma+V_{oz}/V_{p})}{1-(V^{\prime}_{os}/V_{p})^{2}}\Bigg{]}\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)},\frac{2V_{p}(\alpha-X)}{SR}\Bigg{\}}+\frac{\tau_{S}+\tau_{Z}}{2} \tag{25}\] The time periods \(T_{\rm p1}\) and \(T_{\rm p3}\) are averaged to obtain \(T_{1}\), and the time periods \(T_{\rm p2}\) and \(T_{\rm p4}\) are averaged to obtain \(T_{2}\).
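The cancellation at the heart of the method can be seen in a few lines; the pulse widths and the offset time below are arbitrary example values, and the signs of the \(T_{of}\) shifts follow the description above (the opposite-sign assignment for \(T_{p2}\)/\(T_{p4}\) is an assumption by analogy).

```python
import math

# Single-cycle averaging: a fixed offset time T_of shifts T_p1 and T_p3
# in opposite directions, so their average is offset-free. Example values.
T_p1, T_p2 = 12.0e-6, 16.5e-6        # ideal pulse widths (s)
T_p3, T_p4 = T_p1, T_p2              # the ideal waveform repeats each half cycle
T_of = 0.8e-6                        # offset-induced timing shift

# Practical (measured) periods, as in Fig. 4:
T_p1m, T_p3m = T_p1 - T_of, T_p3 + T_of
T_p2m, T_p4m = T_p2 + T_of, T_p4 - T_of

T_1 = (T_p1m + T_p3m) / 2            # Eq. 24: the T_of contribution cancels
T_2 = (T_p2m + T_p4m) / 2            # Eq. 25
print(math.isclose(T_1, T_p1), math.isclose(T_2, T_p2))  # -> True True
```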
The expressions for the averaged periods \(T_{1}\) and \(T_{2}\) are derived in Eq. 24 and Eq. 25, respectively. The high-to-low and low-to-high response delays are assumed to be the same for the ZCD (\(\tau_{\rm Z}\)) and the Schmitt trigger (\(\tau_{\rm S}\)) when deriving Eq. 24 and Eq. 25. To observe and compare the enhancement in accuracy due to averaging, consider the case when all the offsets are zero, the SR is not limiting, and the only non-ideality is the limit on the GBW. For this case, the time expressions without and with averaging are derived in the subsequent subsections.

**Without Averaging: \(T_{p1}\) and \(T_{p2}\)**

Eq. 26 and Eq. 27 show that the periods \(T_{p1}\) and \(T_{p2}\) used for the measurement of the sensor parameters are greatly affected by the gain-bandwidth products of the operational amplifiers. For instance, for the OPA177, with an R\({}_{x}\) of 330 k\(\Omega\) and a C\({}_{x}\) of 33 pF, the errors in the periods \(T_{p1}\) and \(T_{p2}\) due to the GBW are 52.76% and 26.74%, respectively. \[T_{P1}=R_{x}C_{i}(\alpha-2X)\Bigg{(}1-\frac{\gamma}{\alpha-2X}\Bigg{)}\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)} \tag{26}\] \[T_{P2}=\alpha R_{x}C_{i}\Bigg{(}1+\frac{\gamma}{\alpha}\Bigg{)}\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)} \tag{27}\]

**With Single Cycle Averaging: \(T_{1}\) and \(T_{2}\)**

Considering only the GBW of the operational amplifiers, \(T_{1}\) and \(T_{2}\) from Eq. 24 and Eq. 25 can be simplified as follows. \[T_{1}=R_{x}C_{i}(\alpha-2X)\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)} \tag{28}\] \[T_{2}=\alpha R_{x}C_{i}\Bigg{(}1+\frac{1}{A_{0}\omega_{0}R_{x}C_{i}}\Bigg{)} \tag{29}\] The offset \(\gamma\) introduced by the finite GBW of the Op-Amp severely affects the accuracy of the time measurement, as observed in Eq. 26 and Eq. 27, along with the slope reduction factor of \(1+1/(\rm{A_{0}\omega_{0}R_{x}C_{i}})\). With averaging, the timing accuracy does not depend on the offset factor \(\gamma\) and depends only on the slope reduction factor, as observed in Eq. 28 and Eq. 29. Therefore, the single-cycle averaging significantly reduces the error due to the GBW of the Op-Amps. For example, in the case of the OPA177, with an R\({}_{x}\) of 330 k\(\Omega\) and a C\({}_{x}\) of 33 pF, the errors in the periods \(T_{1}\) and \(T_{2}\) due to the GBW are 1.5% and 0.72%, respectively, which is significantly smaller than for the measurement without averaging.

**Design Criteria**

The first design criterion is the selection of \(\alpha\), which decides the range of measurement. A higher value of \(\rm{C_{x}}\) results in a higher factor \(X\) (i.e., \(C_{x}/C_{i}\)), which might cause the voltage to cross beyond zero during the charge transfer. To avoid this, we get the following condition on \(\alpha\): \[\alpha>2\frac{C_{x,max}}{C_{i}} \tag{30}\] The error in the capacitance measurement, \(\epsilon\), represents the worst case compared to the error in the resistance measurement. The error in the capacitance measurement is the deviation of the time \(\rm{T_{1}}\) from its ideal value. \[\epsilon=\frac{\Delta C_{x}}{C_{x}}=\frac{\Delta T_{1}}{T_{1}} \tag{31}\] \(\Delta C_{x}\) represents the incremental value of the sensor capacitance, and \(\Delta T_{1}\) represents the corresponding change in the measurement time.
Using Equations 24, 13, and 31, the following conditions on the Op-Amp GBW, offset, and SR can be arrived at: \[\epsilon<\frac{100}{1-\left(\frac{V^{\prime}_{os}}{V_{p}}\right)^{2}}\Bigg{[}\frac{1}{GBW(R_{x,min}C_{i})}+\frac{V^{\prime}_{os}}{V_{p}}\Bigg{(}\frac{1}{\frac{\alpha C_{i}}{C_{x,max}}-2}+\frac{V^{\prime}_{os}}{V_{p}}\Bigg{)}\Bigg{]} \tag{32}\] \[SR>\frac{2V_{p}}{R_{x,min}C_{i}} \tag{33}\] For a given error tolerance in the measurement, an Op-Amp must be chosen whose GBW, offset, and SR satisfy Eq. 32 and Eq. 33. A comparator must also be selected with an offset value \(\rm{V_{oz}}\ll\rm{V_{p}}\) and a response time (\(\tau\)) satisfying the condition derived as follows. \[\tau<\frac{\epsilon\alpha R_{x,max}C_{i}}{100\Big{(}1-(V^{\prime}_{os}/V_{p})^{2}\Big{)}} \tag{34}\] Considering the propagation time of the XOR gate as \(\tau_{p}\), the XOR gate must be chosen based on the following equation. \[\tau_{p}<\frac{\epsilon\alpha R_{x,max}C_{i}}{100\Big{(}1-(V^{\prime}_{os}/V_{p})^{2}\Big{)}}+\tau \tag{35}\]

**Experimental Setup and Results**

Several prototypes of the proposed relaxation oscillator-based signal conditioning circuit shown in Fig. 2 were built to illustrate the circuit's capabilities for measuring sensor capacitance and resistance. Different operational amplifiers were employed to implement the integrator, Schmitt trigger, and ZCD of the proposed oscillator; they are tabulated in Table 1. Using these op-amps, the effect of component non-idealities on the measurement of sensor parameters has been studied. All the passive components were first measured using a commercial table-top LCR meter (Agilent E4980A). The periods of the readout signal are measured using the timer/counter module of a microcontroller. In this experiment, we have used an Arduino Uno microcontroller with a 16-bit timer. The data from the microcontroller are acquired in MATLAB, and the single-cycle averaging is performed. Multiple experiments were conducted to evaluate the performance of the proposed signal conditioning circuit. Initial experiments were conducted by emulating the sensor using discrete resistors and capacitors. The final experiment was conducted on a fringing-field-based capacitive water-level sensor. The sensor is emulated by placing known values of resistor \(R_{x}\) and capacitor \(C_{x}\) in parallel. The time periods \(T_{p1}\), \(T_{p2}\), \(T_{p3}\), and \(T_{p4}\) are measured to calculate the value of the sensor resistance.

**Resistance Measurement**

\begin{table} \begin{tabular}{|c c c c c c c|} \hline **Op-Amp** & **GBW/(\(2\pi\))** & **SR** & \(C_{p}\) & \(V_{of}\) & \(i_{b}\) & \(\tau\) \\ **IC** & **(MHz)** & **(\(V/\mu s\))** & **(pF)** & **(mV)** & **(nA)** & **(ns)** \\ \hline **AD741** & 1 & 0.5 & – & 5 & 500 & 300 \\ \hline **LT1360** & 60 & 800 & 4 & 0.3 & 250 & – \\ \hline **TL071** & 5.25 & 29 & 2 & 4 & 0.1 & 310 \\ \hline **OPA177** & 0.6 & 0.3 & – & 0.6 & 6 & – \\ \hline **LTC1049** & 0.8 & 0.8 & – & 0.01 & 0.05 & – \\ \hline \end{tabular} \end{table} Table 1. Operational amplifiers with performance parameters

Figure 5. The waveforms at different nodes of the oscillator developed using the op-amps LT1360 and LTC1049. The voltage \(V_{R}(t)\) is affected by the non-idealities of the op-amps, which affects the pulse width of \(V_{Y}(t)\) (ideally a 50% duty cycle). Due to this, the output \(V_{z}(t)\) has different positive and negative pulse widths, resulting in an error in the measurement of sensor parameters. (a) The error in the measurement is significant for the LT1360; (b) the error is reduced with the LTC1049 compared to the LT1360. The error is negligible after a single-cycle averaging operation.
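Before turning to the measurements, the design criteria above can be combined with the Table 1 parameters into a quick feasibility check. The sketch below does this for the LTC1049; the circuit values (\(\alpha\), \(C_{i}\), \(V_{p}\)) are assumptions, and the GBW in Eq. (32) is interpreted here as \(A_{0}\omega_{0}\) in rad/s.

```python
import numpy as np

# Feasibility check of the design criteria, Eqs. (30)-(33), for a given
# op-amp. LTC1049-style numbers from Table 1; circuit values are assumed.
GBW = 2 * np.pi * 0.8e6          # A0*w0 in rad/s (assumed interpretation)
SR = 0.8e6                       # V/s (0.8 V/us)
V_os, V_p = 10e-6, 5.0
alpha, C_i = 0.5, 220e-12
R_x_min, C_x_max = 100e3, 42e-12

# Eq. (30): threshold ratio must exceed twice the worst-case charge share.
ok_alpha = alpha > 2 * C_x_max / C_i

# Eq. (32): worst-case capacitance error bound in percent.
eps = 100 / (1 - (V_os / V_p) ** 2) * (
    1 / (GBW * R_x_min * C_i)
    + (V_os / V_p) * (1 / (alpha * C_i / C_x_max - 2) + V_os / V_p)
)

# Eq. (33): slew-rate requirement.
ok_sr = SR > 2 * V_p / (R_x_min * C_i)

print(f"alpha ok: {ok_alpha}, SR ok: {ok_sr}, error bound: {eps:.2f} %")
```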
The resistance (\(R_{x}\)) measurement is performed by measuring the periods \(T_{p2}\) and \(T_{2}\) (the average of \(T_{p2}\) and \(T_{p4}\)). In addition to the measurement of the sensor resistance, the experiment also demonstrates the effect of component non-idealities on the measurement. The experiment is conducted as follows.

1. A capacitor \(C_{x}\) is placed in parallel with the sensor resistance \(R_{x}\). The value of \(R_{x}\) is varied from 100 k\(\Omega\) to 1000 k\(\Omega\). The periods \(T_{p2}\) and \(T_{p4}\) are measured, and the percentage relative error is calculated with and without averaging.
2. The same process is repeated for multiple values of the capacitance \(C_{x}\).
3. The above process is repeated for different operational amplifiers, and the effect of their non-idealities on the measurement is evaluated.

The percentage relative error for different values of the sensor resistance at different \(C_{x}\) values for the operational amplifiers LT1360 and LTC1049 is shown in Fig. 6. The result shows that the single-cycle averaging significantly reduces the error in the measurement of the sensor resistance.

Figure 6. Measured and analytical error in time (\(T_{2}\) & \(T_{P2}\)) for different sensor capacitances \(C_{x}\) and resistances \(R_{x}\) for the two op-amps LT1360 and LTC1049. The analytical \(T_{p2}\) error is calculated from Eq. 18, which shows the error in the measurement due to component non-idealities. The analytical \(T_{2}\) error shows the effect of single-cycle averaging and is calculated from Eq. 25. The analytical and experimental errors reduce significantly with single-cycle averaging.

The worst-case percentage relative error with and without single-cycle averaging for the \(R_{x}\) measurement, for different operational amplifiers, is shown in Fig. 7. The chart in Fig. 7 illustrates how the choice of the operational amplifier and its non-idealities affect the measurement. The percentage relative error is high for the operational amplifier OPA177, which has a low gain-bandwidth product. Moreover, the percentage relative error is also high for the LT1360, which has a good gain-bandwidth product but a high input offset voltage. On the other hand, the error is small for the LTC1049, which has a moderate gain-bandwidth product and a small input offset voltage. Therefore, the oscillator should be designed considering all the non-idealities of the operational amplifiers. The operational amplifiers should be selected based on Eq. (33) and Eq. (35) from the design criteria for the given error.

**Capacitance Measurement**

In the case of the capacitance measurement, the value of the sensor capacitor \(C_{x}\) is varied, and the periods \(T_{p1}\), \(T_{p2}\), \(T_{p3}\), and \(T_{p4}\) are measured experimentally. The experiment is conducted for different values of \(R_{x}\). The experimental procedure for the \(C_{x}\) measurement of impedance R-C or leaky capacitive sensors is as follows.
1. First, \(R_{x}\) is fixed at one value, and \(C_{x}\) is varied from 9.6 pF to 37.6 pF.
2. Next, the same process is repeated for different \(R_{x}\) values. The experiment is conducted using different operational amplifiers.
3. The experimental results for different values of the sensor capacitor \(C_{x}\) for the operational amplifiers LT1360 and LTC1049 are shown in Fig. 8.

The error is calculated from the measured time periods with and without single-cycle averaging. The worst-case errors for different operational amplifiers are shown in Fig. 9. The single-cycle averaging dramatically reduces the percentage relative error in the measurement of the sensor capacitance. The experimental results showing the pattern of the error for different values of \(R_{x}\), obtained by varying \(C_{x}\) from 10 pF to 42 pF, are shown in Fig. 8. The experimental results shown in Fig. 8 are obtained for the Op-Amps LT1360 and LTC1049. The results show that the percentage relative error is high for the LT1360, which is also justified analytically. On the contrary, the error is small for the LTC1049. The experimental results follow the analytical model of the circuit.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Parameters**} & \multirow{2}{*}{**Expression**} & \multicolumn{2}{c|}{**Measurement**} \\ \cline{3-4} & & \(C_{x}\) & \(R_{x}\) \\ \hline **SD (\(\sigma\))** & \(\sqrt{\sum_{n=1}^{M}\frac{(S(n)-\overline{S})^{2}}{M-1}}\) & 10.2 fF & 8.1 \(\Omega\) \\ \hline **SNR** & \(10\log\frac{\sum_{n=1}^{M}(S(n))^{2}}{\sum_{n=1}^{M}(S(n)-\overline{S})^{2}}\) & 74.83 dB & 78.96 dB \\ \hline **Dynamic Range** & – & 10 pF–42 pF & 100 k\(\Omega\)–1 M\(\Omega\) \\ \hline **Worst Relative Error** & – & 1.6 \% & 0.62 \% \\ \hline \multicolumn{4}{|l|}{\(S(n)\) is the \(n^{th}\) measured value; \(\overline{S}\) is the average value of the measurements} \\ \hline \end{tabular} \end{table} Table 2. Performance Parameters of the Developed Circuit

Figure 7. Analytical and measured worst-case error in sensor resistance \(R_{x}\) with different Op-Amps. The analytical error without averaging is calculated from Eq. 18. The analytical error with averaging shows the effect of single-cycle averaging and is calculated from Eq. 25.

Figure 8. Measured and analytical error for different sensor capacitances \(C_{x}\) and resistances \(R_{x}\) for two different Op-Amps. The single-cycle averaging reduces the error to a great extent.

**Other Performance Parameters**

Other performance parameters of the proposed relaxation oscillator circuit for R-C sensors, such as the standard deviation and the SNR, are measured experimentally. The time periods \(T_{p1}\), \(T_{p2}\), \(T_{p3}\), and \(T_{p4}\) are experimentally recorded for \(R_{x}\) and \(C_{x}\) values of 330 k\(\Omega\) and 33 pF, respectively. The measured periods are averaged for the \(C_{x}\) and \(R_{x}\) measurements, and the resulting \(\Delta C/C_{x}\) and \(\Delta R/R_{x}\) are plotted in Fig. 10.

Figure 9. Analytical and measured worst-case error in sensor capacitance \(C_{x}\) with different Op-Amps. The analytical \(T_{p1}\) and \(T_{1}\) errors are calculated from Eq. 15 and Eq. 24, respectively. The analytical \(T_{p1}\) error shows the error due to component non-idealities. The analytical \(T_{1}\) error shows the effect of single-cycle averaging. The results are experimentally verified.
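The standard-deviation and SNR definitions listed in Table 2 are straightforward to apply; the sketch below runs them on synthetic repeated capacitance readings (illustrative data only, not the recorded measurements).

```python
import numpy as np

# Standard deviation and SNR as defined in Table 2, applied to a synthetic
# run of repeated capacitance readings (illustrative data only).
rng = np.random.default_rng(1)
S = 33e-12 + rng.normal(0, 10e-15, 1000)   # ~33 pF readings with ~10 fF noise

sd = np.sqrt(np.sum((S - S.mean()) ** 2) / (len(S) - 1))          # SD (sigma)
snr = 10 * np.log10(np.sum(S ** 2) / np.sum((S - S.mean()) ** 2)) # SNR in dB
print(f"SD = {sd * 1e15:.1f} fF, SNR = {snr:.1f} dB")             # ~10 fF, ~70 dB
```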
The standard deviation and the SNR are calculated from the obtained data set using the expressions given in Table 2. The obtained standard deviation and SNR for \(C_{x}\) and \(R_{x}\) are reported in Table 2. The circuit is tested for a capacitance as low as 10 pF. At the lower capacitance values, the input capacitance of the operational amplifier is comparable with the sensor capacitance. Therefore, the relative error will be high at the lower capacitance values. However, the measurement can still be performed by calibrating out the effect of the input capacitance of the operational amplifier.

**Testing with Water-Level Sensor**

The developed converter is utilized for the measurement of the sensor capacitance of a leaky capacitive water-level sensor. The water-level sensor is fabricated on a flexible substrate using the screen-printing technique. The sensor geometry is based on interdigitated electrodes. The principle of sensing is based on a change in the fringing field around the electrodes due to the water level, which in turn changes the capacitance of the sensor. The details of the sensor fabrication are reported in [27, 28].

Figure 11. (a) Experimental setup for the water-level leaky capacitive sensor; (b) experimental response of the variation in the capacitance measured using the proposed circuit at multiple water levels. The sensor shows a 99.9% linear response with respect to the water level.

The experimental setup for the system is shown in Fig. 11(a). The sensor is placed inside a beaker with known water levels. A controlled amount of water is poured into the beaker. The capacitance of the sensor is measured with the commercial LCR meter (Agilent E4980A) as well as with the proposed circuit. The percentage relative error in the measurement of the sensor capacitance for different water levels is shown in Fig. 11(b). The experimental test was conducted with the oscillator circuit built using the LTC1049 operational amplifier. In addition, the \(R^{2}\) value is estimated from the fitted experimental data and found to be 99.9%, which shows that the sensor has a linear response to the water level. Further, the error reported in Table 2 is with standard ceramic capacitors, while the error in the plot shown in Fig. 11 is with the sensor. The mismatch in the error values is primarily because of the parasitic capacitance of the sensor-circuit interconnection and the measurement error.

**Conclusion**

A modified relaxation oscillator-based signal conditioning circuit for parallel R-C sensors is presented in this paper. The proposed circuit is based on a dual-slope and charge-transfer technique embedded into a modified relaxation oscillator. A detailed analysis and design criteria for the proposed circuit are provided, considering various sources of error. A single-cycle averaging method is analytically derived, which significantly reduces the effect of component non-idealities. The efficacy of the proposed circuit is tested on the fabricated prototype. The measurement results show the effectiveness of the proposed circuit for parallel R-C sensors. The performance parameters of the developed prototype are calculated. The proposed circuit provides an SNR of around 75 dB and 79 dB for capacitive and resistive sensors, respectively.
Overall, the proposed circuit is well suited for various sensors such as leaky capacitive sensors, resistive sensors with a parasitic capacitor, and impedance R-C sensors.
2305.07083
Scalable Ray Tracing Using the Distributed FrameBuffer
Image- and data-parallel rendering across multiple nodes on high-performance computing systems is widely used in visualization to provide higher frame rates, support large data sets, and render data in situ. Specifically for in situ visualization, reducing bottlenecks incurred by the visualization and compositing is of key concern to reduce the overall simulation runtime. Moreover, prior algorithms have been designed to support either image- or data-parallel rendering and impose restrictions on the data distribution, requiring different implementations for each configuration. In this paper, we introduce the Distributed FrameBuffer, an asynchronous image-processing framework for multi-node rendering. We demonstrate that our approach achieves performance superior to the state of the art for common use cases, while providing the flexibility to support a wide range of parallel rendering algorithms and data distributions. By building on this framework, we extend the open-source ray tracing library OSPRay with a data-distributed API, enabling its use in data-distributed and in situ visualization applications.
Will Usher, Ingo Wald, Jefferson Amstutz, Johannes Günther, Carson Brownlee, Valerio Pascucci
2023-05-11T18:40:15Z
http://arxiv.org/abs/2305.07083v1
# Scalable Ray Tracing Using the Distributed FrameBuffer ###### Abstract Image- and data-parallel rendering across multiple nodes on high-performance computing systems is widely used in visualization to provide higher frame rates, support large data sets, and render data in situ. Specifically for in situ visualization, reducing bottlenecks incurred by the visualization and compositing is of key concern to reduce the overall simulation runtime. Moreover, prior algorithms have been designed to support either image- or data-parallel rendering and impose restrictions on the data distribution, requiring different implementations for each configuration. In this paper, we introduce the Distributed FrameBuffer, an asynchronous image-processing framework for multi-node rendering. We demonstrate that our approach achieves performance superior to the state of the art for common use cases, while providing the flexibility to support a wide range of parallel rendering algorithms and data distributions. By building on this framework, we extend the open-source ray tracing library OSPRay with a data-distributed API, enabling its use in data-distributed and in situ visualization applications.

CCS Concepts: Computing methodologies → Ray tracing

## 1 Introduction

The need for high-performance distributed parallel rendering is growing, spurred by trends in increasing data set sizes, the desire for higher fidelity and interactivity, and the need for in situ visualization. Meeting these demands poses new challenges to existing rendering methods, requiring scalability across a spectrum of memory and compute capacities on high-performance computing (HPC) resources. Whereas the growth in data set sizes demands a large amount of aggregate memory, the desire for more complex shading and interactivity demands additional compute power. A large number of application needs fall somewhere in between these extremes, requiring a combination of additional memory and compute. Finally, in situ visualization requires the renderer to scale with the simulation, while incurring little overhead. Rendering techniques that scale well for either compute power or aggregate memory capacity are well known, but applications falling between these extremes have not been well addressed.

In large-scale rendering workloads on distributed-memory clusters, the data is typically partitioned into subregions and distributed across multiple nodes to utilize the aggregate memory available. Each node is then responsible for rendering its assigned subregion of data. The partial images rendered by each node are then combined using a sort-last compositing algorithm, e.g., Parallel Direct Send [14], Binary Swap [2], Radix-k [15], or TOD-tree [16]. The IceT library [17] provides implementations of a number of sort-last compositing algorithms and is widely used in practice. However, such data-parallel renderers impose restrictions on how the data can be distributed, are susceptible to load imbalance, and are limited to local illumination effects.

At the other end of the spectrum, the master-worker architecture has been widely used to scale up compute capacity and provide interactivity for high-fidelity visualization of moderately sized data sets. Master-worker renderers distribute work image-parallel by assigning subregions of the image to be rendered by different nodes. This architecture has been used effectively in a number of ray tracers, e.g., Manta [1], OpenRT [22], and OSPRay [23].
While typically used for data which can be stored in memory on each node, this architecture can be used for large data sets by streaming data needed for the portion of the image from disk [22] or over the network [1, 1]; however, these systems can suffer from cache thrashing and are tied to specific data types or formats.

Applications falling somewhere in between the extrema of only compute or memory scaling, or those seeking to go beyond common use cases, can quickly run into issues with existing approaches. For example, whereas a master-worker setup is well suited to image-parallel ray tracing, if the renderer wants to perform additional post-processing operations (e.g., tone-mapping, progressive refinement), or handle multiple display destinations (e.g., display walls), the master rank quickly becomes a bottleneck. Similarly, whereas existing sort-last compositing algorithms are well suited to statically partitioned data-parallel rendering, extending them to support partially replicated or more dynamic data distributions for better load balancing is challenging. Standard sort-last compositing methods operate bulk-synchronously on the entire frame, and are less suited to tile-based ray tracers in which small tiles are rendered independently in parallel.

In this paper, we describe the algorithms and software architecture--the "Distributed FrameBuffer"--that we developed to support distributed parallel rendering, with the goal of addressing the above issues to provide an efficient and highly adaptable framework suitable for a range of applications. The Distributed FrameBuffer (DFB) is built on a tile-based work distribution of the image processing tasks required to produce the final image from a distributed renderer. These tasks are constructed per-tile at runtime by the renderer and are not necessarily tied to the host application's work or data distribution, providing the flexibility to implement a wide range of rendering algorithms and distribute compute-intensive image processing tasks. The DFB performs all communication and computation in parallel with the renderer using multiple threads to reduce compositing overhead. Although the DFB is flexible enough to support renderers across the spectrum of memory and compute scaling, it does not make a performance trade-off to do so. Our key contributions are:

* A flexible and scalable parallel framework to execute compositing and image processing tasks for distributed rendering;
* A set of parallel rendering algorithms built on this approach, covering both standard use cases and more complex configurations;
* An extension of OSPRay to implement a data-distributed API, allowing end users to leverage the above algorithms in practice on a wide range of different data types.

## 2 Previous Work

A large body of previous work has studied parallel rendering techniques for distributed-memory systems. These works can generally be classified as one of three techniques, first discussed in the context of rasterization by Molnar et al. [1]: sort-first, sort-middle, and sort-last. Sort-middle is tied to rasterization, thus we focus our discussion on sort-first and sort-last strategies. Sort-first is an image-parallel technique, where the workload is distributed across multiple ranks by subdividing the image. Sort-last is a data-parallel technique, where the workload is distributed by subdividing the 3D data, regardless of where it lies in the image. Hybrid approaches have also been proposed, which combine sort-first and sort-last techniques.
### Data-Parallel Rendering In sort-last, or data-parallel, rendering the geometric primitives and volumetric data are partitioned in 3D space, with each node assigned a subregion of the overall data to render. In early implementations, this subdivision was at the level of a single primitive [18]. Each node then renders its subregion of data to produce a partial image, which must then be combined with other nodes' images to create the final image. Combining these partial images typically requires depth compositing the overlapping partial images to produce a correct final image. It is this second step that becomes the bottleneck at large core counts and high-resolutions, and therefore has been the focus of a large body of work (e.g., [14, 15, 16, 17]). Most similar to our work in the context of data-parallel rendering is Grosset et al.'s [15] Dynamically Scheduled Region-Based compositing (DSRB). DSRB divides the image into strips and constructs a per-strip blending order, referred to as a chain, based on which node's data projects to each strip. Partial compositing for a chain can be done after receiving strips from successive nodes in the chain, overlapping compositing with rendering on other nodes. However, DSRB is restricted in the amount of rendering it can overlap with compositing, as each node renders its entire image before starting compositing; is only applicable to data-parallel rendering; and relies on a central scheduler to construct the chains. The IceT library [17] encompasses several different compositing strategies for sort-last rendering and has been widely deployed across popular parallel scientific visualization tools. Thus, we use IceT as a primary point of comparison when evaluating our method's performance. Although IceT was initially designed for rasterization, Brownlee et al. [1] used IceT's depth compositing with a ray tracer inside of multiple visualization tools, though were hindered by the data distribution chosen by the tools. Wu et al. [16] employed a similar approach to integrate OSPRay into VisIt, using OSPRay to render locally on each rank and IceT to composite the image, and encountered similar difficulties. ### Image-Parallel Rendering Image-parallel renderers assign subregions of the image to different ranks for rendering. To render large datasets, this approach is typically coupled with some form of data streaming or movement into a local cache, and is designed to exploit frame-to-frame coherence. The data movement work is amortized over multiple frames as the data rendered for a region of the image in one frame will likely be similar to that rendered in the next frame. Early rasterization-based techniques used a sort-middle algorithm, where the image was partitioned between nodes, and geometry sent to the node rendering the portion of the image it projected to [1]. Image-parallel rendering lends itself well to ray tracing, as ray tracers already use acceleration structures for ray traversal which can be readily adapted to streaming and caching portions of the scene as they are traversed. Wald et al. [21] used a commodity cluster for interactive ray tracing of large models, where a top-level \(k\)-d tree is replicated across the nodes and lower sub-trees fetched on demand from disk. DeMarle et al. [1] used an octree acceleration structure for rendering large volume data, where missing voxels would be fetched from other nodes using a distributed shared memory system. Ize et al. [1] extended this approach to geometric data using a distributed BVH. 
When rendering fully replicated data, their approach avoids data movement and compositing, and can achieve 100 FPS for primary visibility ray casting on 60 nodes. Biedert et al. [1] proposed an image-parallel remote streaming framework able to achieve over 80 FPS from a distributed cluster to a remote client, using hardware acceleration and adaptive tile-based streaming.

### Hybrid-Parallel Rendering

While image- and data-parallel rendering methods distribute work solely by partitioning the image or data, hybrid-parallel renderers combine both strategies, aiming to pick the best for the task at hand. Reinhard et al. [1] first proposed a hybrid scheduling algorithm for ray tracing distributed data, where the rays would be sent or the required data fetched depending on the coherence of the rays. Samanta et al. [14] proposed to combine sort-first and sort-last rendering in the context of a rasterizer, by partitioning both the image and data among the nodes. Each node then renders its local data and sends rendered pixels to other nodes that own the tiles its data touches. The tiles are then composited on each node and sent to the display node. This approach bears some resemblance to the Distributed FrameBuffer, although it lacks its extensibility and support for ray tracing specific rendering effects. Navratil et al. [14] proposed a scheduler that combines static image and data decompositions for ray tracing, roughly similar to sort-first and sort-last, respectively. However, a key difference of their approach when compared to a sort-last rasterizer is that rays will be sent between nodes, similar to Reinhard et al. [1], to compute reflections and shadows. The static image decomposition scheduler works similarly to the image-parallel algorithms discussed previously. Abram et al. [1] extended the domain decomposition model to an asynchronous, frameless renderer using a subtractive lighting model for progressive refinement. Park et al. [1] extended both the image and domain decompositions, by introducing ray speculation to improve system utilization and overall performance. By moving both rays or data as needed, these approaches are able to compute global illumination effects on the distributed data, providing high-quality images at additional cost. Biedert et al. [1] employed a task-based model of distributed rendering which is able to combine sort-first and sort-last rendering, by leveraging an existing runtime system to balance between these strategies. Although their work uses OSPRay for rendering, it is restricted to a single thread per-process and is non-interactive.

### OSPRay, Embree and ISPC

Although the Distributed FrameBuffer is applicable to any tile-based rendering algorithm, we evaluate it within the context of the OSPRay ray tracing framework [25]. OSPRay provides a range of built-in volume and geometric primitives used in scientific visualization, advanced shading effects, and achieves interactive rendering on typical workstations and laptops. To achieve interactive ray tracing performance on CPUs, OSPRay builds on top of Embree [24], the Intel SPMD Program Compiler (ISPC) [15], and Intel's Threading Building Blocks (TBB). Embree is a high-performance kernel framework for CPU ray tracing, and provides a set of low-level kernels for building and traversing ray tracing data structures which are highly optimized for modern CPU architectures. ISPC is a single program multiple data (SPMD) compiler, which vectorizes a scalar program by mapping different instances of the program to the CPU's vector lanes, thereby executing them in parallel. TBB provides a set of parallel programming primitives for writing high-performance multi-threaded code, similar to OpenMP.

Figure 2: An example of the Distributed FrameBuffer's tile processing pipeline in a data-parallel renderer. Dependencies are specified on the fly per-tile and can be extended by child tiles. To compute the highlighted tile owned by rank 0, the owner sends a background color tile for generation 0, which specifies that two additional tiles will arrive in generation 1, potentially from different ranks. After receiving the full dependency tree, the tile operation produces the finished tile, which is tone-mapped by a pixel operation and sent to the display rank.

## 3 The Distributed FrameBuffer

At its core, the Distributed FrameBuffer (DFB) is not a specific compositing algorithm per se, but a general framework for distributed rendering applications. A renderer using the DFB specifies a set of tasks to be executed on the rendered image and per-tile dependency trees for the tasks. The tasks are parallelized over the image by subdividing it into tiles, where each tile is owned by a unique rank--the tile owner--responsible for executing tasks for that tile. If task dependencies are produced on ranks other than the tile owner, the DFB will route them over the network to the owner. The tile dependency trees are specified per-tile and per-frame, allowing for view- and frame-dependent behavior. The tile processing pipeline involves three stages (Figure 2). First, the dependency tree is constructed by the tile operation as task dependencies are received from other ranks. Once the entire tree has been received, the finished tile is computed by the tile operation and passed on to any pixel operations. The final output tile is then converted to the display image format and sent to the display rank, if needed. The processing pipeline and messaging system run asynchronously on multiple threads, allowing users to overlap additional computation with that performed by the DFB. Although the focus of this paper is on using the DFB for rendering, the task input tiles are not required to be produced by a renderer.

### Tile Processing Pipeline

The DFB begins and ends processing synchronously, allowing applications processing multiple frames, i.e., a renderer, to ensure that tiles for different frames are processed in the right order. Before beginning a frame, the renderer specifies the tile operation to process the tiles it will produce. Each rank then renders some set of tiles based on the work distribution chosen by the renderer. As tiles are finished, they are handed to the DFB for processing by calling setTile. During the frame, the DFB will compute tile operations for the tiles owned by each rank in the background and send other tiles over the network to their owner. The frame is completed on each rank when the tiles it owns are finalized, and rendering is finished when all processes have completed the frame. As each tile is processed independently in parallel, it is possible for some tiles to be finalized while others have yet to receive their first inputs. To track the distributed tile ownership, the DFB instance on each rank stores a tile descriptor (Listing 1) for each tile in the image. When setTile is called, the DFB looks up the descriptor for the tile and sends it to the owner using an asynchronous messaging layer (Section 3.2). If the owner is the calling rank itself, the tile is instead scheduled for processing locally.
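The tile descriptor and routing logic just described (Listing 1 in the original is not reproduced in this extraction) can be sketched as follows; this is a minimal sketch, and names such as TileDesc, scheduleLocalProcessing, and sendToRank are illustrative rather than OSPRay's actual identifiers:

```cpp
#include <utility>
#include <vector>

// Sketch of the setTile routing described in Section 3.1: each image tile
// has a unique owner rank; tiles rendered on other ranks are forwarded to
// the owner through an asynchronous messaging layer.
struct TileDesc {
  int ownerRank;  // rank responsible for running tile operations on this tile
};

struct Tile {
  int tileID;
  // ... pixel data, generation index, children counts, etc.
};

class DistributedFrameBuffer {
 public:
  DistributedFrameBuffer(int myRank, std::vector<TileDesc> descs)
      : myRank_(myRank), descriptors_(std::move(descs)) {}

  void setTile(const Tile& tile) {
    const TileDesc& desc = descriptors_[tile.tileID];
    if (desc.ownerRank == myRank_)
      scheduleLocalProcessing(tile);    // run the tile operation here
    else
      sendToRank(desc.ownerRank, tile); // route over the network (async)
  }

 private:
  void scheduleLocalProcessing(const Tile&) { /* enqueue for the tile op */ }
  void sendToRank(int, const Tile&) { /* hand off to the messaging layer */ }

  int myRank_;
  std::vector<TileDesc> descriptors_;
};
```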
For each tile owned by the rank, the DFB stores a concrete tile operation instance in the array of descriptors. The base structure for tile operations (Listing 1) stores a pointer to the local DFB instance and a Tile buffer to write the finished tile data to, along with optional accumulation and variance buffer tiles. The finalPixels buffer is used as scratch space to write the final tile to, before sending it to the display rank. To implement the corresponding tile operation for a rendering algorithm (e.g., sort-last compositing) users extend the TileOperation, and specify their struct to be used by the DFB. Each time a tile is received by the DFB instance on the tile owner, the process function is called on the tile operation to execute the task. The newFrame function is called when a new frame begins, to reset any per-frame state. When all a tile's dependencies have been received the tile operation combines the inputs to produce a finished tile, which is then passed to the DFB. The local DFB instance runs any additional pixel operations on the finished tile and converts the final pixels to the display color format, outputting them to the finalPixels buffer. This buffer is then compressed and sent to the display rank. In addition to the RGBA8 and RGBAF32 display formats, the DFB also offers a NONE format, which is unique in that it indicates that the display rank should not receive or store the final pixels at all. We will discuss a useful application of the NONE format in Section 4.4. #### 3.1.1 Per-Tile Task Dependency Trees The Tile structure passed to setTile and routed over the network is shown in Listing 1. To construct the dependency tree, each rendered tile specifies itself as a member of some generation (a level in the tree), and as having some number of children in the following generation. The total number of tiles to expect in the next generation is the sum of all children specified in the previous one. Different ranks can contribute tiles with varying numbers of children for each generation, and can send child tiles for parents rendered by other ranks. There is no requirement that tiles are sent in order by generation, nor is a tile operation guaranteed to receive tiles in a fixed order. Tile operations with dependencies beyond a trivial single tile can be implemented by buffering received tiles in process to collect the complete dependency tree. The interpretation and processing order of the dependency tree is left entirely to the tile operation. For example, the dependency tree could be used to represent a compositing tree, input to some filtering, or simply a set of pixels to average together. The creation of the dependency trees by the renderer and their processing by the tile operation are tightly coupled, and thus the two are seen together as a single distributed rendering algorithm. The flexibility of the tile operation and dependency trees allows the DFB to be used in a wide range of rendering applications (Section 4). #### 3.1.2 Pixel Operations Additional post-processing, such as tone-mapping, can be performed by implementing a pixel operation (PixelOp). The pixel operation takes the single finished tile from the tile operation as input, and thus is not tied to the tile operation or renderer. The DFB runs the pixel operation on the tile owner after the tile operation is completed to distribute the work. In addition to image post-processing, pixel operations can be used, for example, to re-route tiles to a display wall (Section 4.4). 
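The base tile operation interface described above might look roughly as follows; this is a hedged sketch, since Listing 1's exact declarations are not reproduced in this extraction and OSPRay's real signatures may differ:

```cpp
// Rough sketch of the tile operation interface from Section 3.1. Member
// and method names follow the text (process, newFrame, finished tile and
// finalPixels buffers); the exact types are assumptions.
struct Tile;                  // rendered tile data, as in Listing 1
class DistributedFrameBuffer; // owning DFB instance

struct TileOperation {
  DistributedFrameBuffer* dfb = nullptr; // pointer to the local DFB
  Tile* finished = nullptr;              // buffer for the finished tile
  // Optional accumulation/variance tiles and the finalPixels scratch
  // buffer (converted to the display format) would live here as well.

  virtual ~TileOperation() = default;

  // Reset any per-frame state when a new frame begins.
  virtual void newFrame() = 0;

  // Called each time an input tile arrives for this image tile; buffers
  // dependencies until the full tree has been received, then combines
  // them into the finished tile and hands it back to the DFB.
  virtual void process(const Tile& input) = 0;
};
```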
### Asynchronous Messaging Layer To overlap communication between nodes with computation, we use an asynchronous point-to-point messaging layer built on top of MPI (Message Passing Interface). Objects that will send and receive messages register themselves with the messaging layer and specify a unique global identifier. Each registered object is seen as a global "distributed object", with an instance of the object on each rank which can be looked up by its global identifier. A message can be sent to the instance of an object on some rank by sending a message to the rank with the receiver set as the object's identifier. The messaging layer runs on two threads: a thread which manages sending and receiving messages with MPI, and an inbox thread which takes received messages and passes them to the receiving object. Messages are sent by pushing them on to an outbox, which is consumed by the MPI thread. To avoid deadlock between ranks, we use non-blocking MPI calls to send, receive, probe, and test for message completion. Received messages are pushed on to an inbox, which is consumed by the inbox thread. To hand a received message to the receiving object, the inbox thread looks up the receiver by its global ID in a hash table. Messages are compressed using Google's Snappy library [Goo] before enqueuing them to the outbox and decompressed on the inbox thread before being passed to the receiver. In our initial implementation we also used the messaging layer to gather the final tiles to the display rank. However, unless the rendering workload is highly imbalanced, this approach generates a large burst of messages to the display, with little remaining rendering work to overlap with. This burst of messages also appeared to trigger an MPI performance issue on some implementations. As an optimization, the final tiles are instead written to a buffer, which is compressed and gathered to the display with a single MPI_Gatherv at the end of the frame. ## 4 Rendering with the Distributed FrameBuffer A distributed rendering algorithm using the DFB consists of a renderer, responsible for rendering tiles of the image, coupled with a tile operation, which will combine the results of each ranks' renderer. In the following sections we discuss a few distributed rendering algorithms built on the DFB, covering standard image-parallel (Section 4.1) and data-parallel (Section 4.2) rendering, along with extensions to these methods enabled by the DFB, specifically, dynamic load balancing (Section 4.1.1) and mixed-parallel rendering (Section 4.3). Finally, we discuss how pixel operations can be used to implement a high-performance display wall system (Section 4.4). ### Image-Parallel Rendering An image-parallel renderer distributes the tile rendering work in some manner between the ranks such that each tile is rendered once. This distribution can be a simple linear assignment, round-robin, or based on some runtime load balancing. The corresponding tile operation expects a single rendered tile as input. The DFB allows for a straightforward and elegant implementation of this renderer (Listing 2). #### 4.1.1 Tile Ownership vs. Work Distribution The work distribution chosen by the renderer is not tied to the DFB tile ownership, allowing the renderer to distribute work as desired. Though it is preferable that the tile owners render the tiles they own to reduce network traffic, this is not a requirement. This flexibility in work distribution can be used, for example, to implement dynamic load balancing. 
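A minimal sketch of such an image-parallel renderer follows, standing in for Listing 2, which is not reproduced in this extraction; the round-robin split and helper names are illustrative:

```cpp
// Minimal sketch of an image-parallel renderer on the DFB. Each rank
// renders every numRanks-th tile (round-robin work distribution) and
// hands finished tiles to the DFB, which routes them to their owners.
// With one generation-0 tile and no children, each image tile's
// dependency tree is trivially a single tile.
struct Tile { int tileID; /* pixels, generation info, ... */ };

struct DFB {
  virtual void setTile(const Tile&) = 0;  // routes the tile to its owner
  virtual ~DFB() = default;
};

void renderFrameImageParallel(DFB& dfb, int myRank, int numRanks,
                              int numTiles) {
  for (int t = myRank; t < numTiles; t += numRanks) {
    Tile tile{t};
    // renderTile(tile); // trace rays for this tile (renderer-specific)
    dfb.setTile(tile);
  }
}
```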
We extend the ImageParallel renderer with such dynamic load balancing.

### Data-Parallel Rendering

Our data-parallel renderer takes advantage of the DFB's asynchronous tile routing and processing to execute the compositing in parallel with local rendering. The benefits of this approach are two-fold: the per-tile task dependencies allow us to minimize compositing and communication work per-tile, and overlapping compositing and rendering reduces the additional time spent compositing after rendering is finished. To compute a per-tile compositing dependency tree, each rank collects the bounds of the other ranks' data and projects them to the image (Figure 3). Only those ranks whose data projects to some tile will render inputs for it. Each rank is responsible for specifying the dependency information for the tiles it owns (highlighted in yellow, Figure 3). The tile owner will compute an additional "background" tile and set it as the sole member of generation 0. The background tile is filled with the background color or texture, and sets the number of ranks whose data project to the tile as the number of children. The renderer (Listing 3) begins by determining the set of candidate tiles that it must either send a background tile for or render data to. The candidate tiles that the rank's local data may project to are found using a conservative screen-space AABB test, which is subsequently refined. For each candidate tile, the renderer computes an exact list of the ranks whose data touches the tile by ray tracing the bounding boxes. The number of intersected boxes is the number of generation 1 tiles to expect as input to the tree. If the rank's local data was intersected, it renders its data and sends a generation 1 tile. To allow for ghost zones and voxels, camera rays are clipped to the local bounds of the rank's data. As with the outer candidate tile loop, the inner rendering loop is parallelized over the pixels in a tile. After receiving the entire dependency tree, the AlphaBlend tile operation (Listing 4) sorts the pixels by depth and blends them together to composite the tile. The tile fragment sorting is done per-pixel, in contrast to the per-rank sort used in standard approaches. Sorting per-pixel allows for rendering effects like depth of field, side-by-side stereo, and dome projections. As the tile processing is done in parallel, we do not find the sorting to be a bottleneck. In the case that a rank-order sort would produce a correct image, the dependency tree can be constructed as a list instead of a single-level tree, with tiles ordered back-to-front by generation. Finally, although we have discussed the data-parallel renderer with a single brick of data per-rank, it trivially supports multiple bricks per-rank, allowing for finer-grained work distributions.

### Rendering Hybrid Data Distributions

A data-parallel renderer that statically assigns each brick of data to a single rank is susceptible to load imbalance, coming from factors such as the data distribution, transfer function, or camera position. To better distribute the workload, we can assign the same brick of data to multiple ranks, with each rank potentially assigned multiple bricks. Each rank is responsible for rendering a subset of the tiles its bricks project to, thereby dividing the rendering workload for each brick among the ranks. Although this increases the memory requirements of the renderer, additional memory is often available given the number of compute nodes used to achieve an interactive frame rate.
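The AlphaBlend tile operation used by the data-parallel renderer above (and reused by the mixed-parallel extension below) reduces to a per-pixel sort and blend; a minimal sketch, assuming premultiplied-alpha fragments and illustrative names:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the per-pixel depth-sort-and-blend described for the
// AlphaBlend tile operation (Listing 4 in the original). Sorting
// per-pixel, rather than per-rank, permits effects such as depth of
// field, where blending order can vary within a tile.
struct Fragment {
  float depth;
  float rgba[4];  // premultiplied color + alpha
};

// Blend one pixel's fragments front-to-back after sorting by depth.
void compositePixel(std::vector<Fragment>& frags, float out[4]) {
  std::sort(frags.begin(), frags.end(),
            [](const Fragment& a, const Fragment& b) {
              return a.depth < b.depth;
            });
  for (int c = 0; c < 4; ++c) out[c] = 0.f;
  float transmittance = 1.f;
  for (const Fragment& f : frags) {
    for (int c = 0; c < 4; ++c) out[c] += transmittance * f.rgba[c];
    transmittance *= (1.f - f.rgba[3]);
    if (transmittance <= 0.f) break;  // fully opaque: early out
  }
}
```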
Figure 3: Tile ownership and dependency trees for a data-parallel renderer using the DFB. Each rank owns its highlighted tile, and receives input tiles from ranks whose data projects to the tile. Compositing runs in parallel to local rendering, reducing overhead.

Rendering such a configuration with a standard compositing approach is either difficult or not possible, as the compositing tree and blending order is set for the entire framebuffer by sorting the ranks [10]. However, the DFB's per-tile dependency trees allow renderers to change which ranks contribute tiles for each image tile. This enables a direct extension of the data-parallel renderer discussed previously into a mixed-parallel renderer, which balances image and data parallelism to achieve better load balance. To develop the mixed-parallel extension, we introduce the concept of a "tile-brick owner". Given a dataset partitioned into a set of bricks and distributed among the ranks with some level of replication, the renderer must select a unique rank among those sharing a brick to render it for each image tile. The rank chosen to render the brick for the tile is referred to as the "tile-brick owner". Thus we can take our data-parallel renderer and modify it so that a rank will render a brick for a tile if the brick projects to the tile and the rank is the tile-brick owner (Listing 5). The task dependency tree and tile operation are the same as in the data-parallel renderer; the only difference is which rank renders the generation 1 tile for a given brick and image tile. Our current renderer uses a round-robin assignment to select tile-brick ownership; however, this is not a requirement of the DFB. A more advanced renderer could assign tile-brick ownership based on some load-balancing strategy (e.g., [10]), or adapt the brick assignment based on load imbalance measured in the previous frame. The strategies discussed for image-parallel load balancing and work subdivision in Section 4.1.1 are also applicable to the mixed-parallel renderer. For example, two ranks sharing a brick could each compute half of the camera rays per-pixel, and average them together in the tile operation to produce a higher quality image. The mixed-parallel renderer supports the entire spectrum of image- and data-parallel rendering: given a single brick per-rank it is equivalent to the data-parallel renderer; given the same data on all ranks it is equivalent to the image-parallel renderer; given a partially replicated set of data, or a mix of fully replicated and distributed data, it falls in between.

### Display Walls

The DFB can also be used to implement a high-performance display wall rendering system by using a pixel operation to send tiles directly to the displays (Figure 4). Tiles will be sent in parallel as they are finished on the tile owner directly to the displays, achieving good utilization of a fully interconnected network. Moreover, when rendering with the NONE image format, the image will not be gathered to the master rank, avoiding a large amount of network communication and a common bottleneck. As pixel operations are not tied to the rendering algorithm or tile operation, this method can be used to drive a display wall with any of the presented renderers.

### Implementation

We implement the Distributed FrameBuffer and the presented rendering algorithms in OSPRay's MPI module, using Intel TBB for multi-threading and ISPC [14] for vectorization.
The underlying implementation of the MPIDevice provided by OSPRay [11] for image-parallel rendering has been significantly improved by this work, although it is exposed to users in the same manner as before. Users can continue to run existing OSPRay applications with mpirun and pass the --osp:mpi argument to the application, and OSPRay will replicate the scene data across a cluster and render it image-parallel using the rendering algorithms described in Sections 4.1 and 4.1.1.

## 5 A Data-Distributed API for OSPRay

The OSPRay API was originally designed for a single application process passing its data to OSPRay. Although OSPRay may offload the data in some way to other ranks, this is done without the application's awareness. This API works well for applications that do not need to specify the data distribution; however, it is not applicable to those that do, e.g., ParaView and VisIt. Maintaining an API that is familiar to users while extending it to a data-distributed scenario poses some challenges. Furthermore, we would like to seamlessly support existing OSPRay modules, which have added new geometries [11, 12, 13] and volumes [11, 12], in a data-distributed setting. We implement the data-distributed API through the addition of a new OSPRay API backend, the MPIDistributedDevice. As in single process rendering, each rank sets up its local geometries and volumes independently and places them into one or more OSPModel objects. However, instead of a single model per-scene, the application must create one model for each disjoint brick of data on the rank. Each brick may contain any combination of geometries and volumes, including ones provided by user modules. To allow applications to pass OSPRay information about the data distribution, the distributed device extends the OSPModel with two additional parameters: a required integer ID, and an optional bounding box. The ID is used to determine if two ranks have the same brick of data and can share the rendering work using the mixed-parallel renderer. A typical data-parallel application with a single model per-rank could simply use the MPI rank as the ID, while an application with a hybrid data distribution would have a list of models and assign a unique ID for each shared brick of data. An MPI-parallel application can even use the distributed API for image-parallel rendering by specifying the same data and ID on each rank. The bounding box parameter can be used to override the model's computed bounds, if the model contains additional ghost geometries or voxels that should be hidden from camera rays. An additional set of ghost models can also be passed to the renderer, containing data visible only to secondary rays. The bounding box parameter and ghost models allow applications to support local shading effects such as ambient occlusion, or compute shadows and reflections on the replicated data in the scene.

Figure 4: A prototype display wall system using DFB pixel operations to send tiles in parallel from an image-parallel path tracer.

## 6 Results

We evaluate the performance of the Distributed FrameBuffer on the rendering algorithms described in Section 4, using our implementations within OSPRay. The benchmarks are run on two HPC systems, the Texas Advanced Computing Center's _Stampede2_, and Argonne National Laboratory's _Theta_, on a range of typical image- and data-parallel rendering use cases (Figure 5).
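Before turning to the results, a minimal usage sketch of the data-distributed API described in Section 5: ospNewModel, ospAddVolume, ospSet1i, ospSetObject, and ospCommit are OSPRay 1.x C API calls, while the renderer name "mpi_raycast", the helper shown, and the bounding-box handling are assumptions for illustration:

```cpp
#include <ospray/ospray.h>

// Hedged sketch of per-rank scene setup with the data-distributed API:
// each rank creates one OSPModel per disjoint local brick and tags it
// with a unique integer ID (here simply the MPI rank, for the simple
// one-brick-per-rank case described in the text).
void setupLocalBrick(int mpiRank, OSPVolume localBrick) {
  OSPModel model = ospNewModel();
  ospAddVolume(model, localBrick);  // this rank's disjoint brick of data
  ospSet1i(model, "id", mpiRank);   // required brick ID (per the text)
  // An optional bounding box could be set here to hide ghost voxels from
  // camera rays; the exact parameter name is not reproduced in this
  // extraction, so it is omitted.
  ospCommit(model);

  OSPRenderer renderer = ospNewRenderer("mpi_raycast");  // assumed name
  ospSetObject(renderer, "model", model);
  ospCommit(renderer);
}
```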
We also perform a direct comparison of our sort-last compositing implementation using the DFB against IceT for a typical data-parallel use case. To measure performance as the rendering workload varies, the benchmarks are taken while rendering a rotation around the data set. Unless otherwise stated, we plot the median performance for the benchmarks, with the median absolute deviation shown as error bars. These measures are more robust to outliers, guarding against influence from other jobs on the system. All benchmarks are run with one MPI rank per-node, as OSPRay uses threads on a node for parallelism. _Stampede2_ and _Theta_ consist of 4200 and 4392 Intel(r) Xeon Phi(tm) KNL processors, respectively. _Stampede2_ uses the 7250 model, with 68 cores, while _Theta_ uses the 7230 model with 64 cores. _Stampede2_ contains an additional partition of 1736 dual-socket Intel(r) Xeon(r) Platinum 8160 SKX nodes. Although the KNL nodes of both machines are similar, the network interconnects differ significantly, which can affect the performance of communication in the DFB. _Stampede2_ employs an Intel Omni-Path network in a fat-tree topology, while _Theta_ uses a Cray Aries network with a three-level Dragonfly topology.

### Image-Parallel Rendering Performance

To study the scalability of the DFB and the image-parallel rendering algorithm described in Section 4.1, we perform a strong scaling benchmark using OSPRay's scientific visualization renderer. We use VTK to extract two isosurfaces from the Richtmyer-Meshkov volume, which are rendered with transparency and ambient occlusion (Figure 5(a)). We measure strong-scaling on _Stampede2_ SKX nodes at two image resolutions (Figure 6). Although the renderer begins to drop off from the ideal scaling trend as the local work per-node decreases, this could potentially be addressed by employing the work-subdivision and load-balancing strategies discussed in Section 4.1.1.

### Data-Parallel Rendering Performance

To study the scalability of the DFB when applied to the standard data-parallel rendering algorithm in Section 4.2, we run strong scaling benchmarks with two large-scale data sets on _Stampede2_ and _Theta_. On _Stampede2_ we render a combined visualization of the DNS with transparent isosurfaces (Figure 5(b)), and on _Theta_ we render the \(5^{3}\) Cosmic Web subset (Figure 5(c)). We find that our data-parallel renderer using the DFB is able to provide interactive frame rates for these challenging scenes, and to scale up performance with more compute.

Figure 5: The data sets used in our benchmarks. (a) Two transparent isosurfaces on the Richtmyer-Meshkov [CDD\({}^{\circ}\)02], 516M triangles total. (b) A combined visualization of the 451GB single-precision DNS [LM15] with two transparent isosurfaces, 5.43B triangles total. (c) A \(5^{3}\) subset of the \(8^{3}\) Cosmic Web [ISM\({}^{\circ}\)08], 7.08B particles rendered as transparent spheres. (d) The generated volume data set used in the compositing benchmarks, shown for 64 nodes. Each node has a single \(64^{3}\) brick of data.

Figure 6: Image-parallel strong-scaling on the R-M transparent isosurfaces data set on _Stampede2_ SKX nodes. The image-parallel renderer using the DFB scales to provide interactive rendering of expensive, high-resolution scenes.

On the Cosmic Web we observe good scaling from 32 to 64 nodes (Figure 7). Although performance begins to trail off the ideal trend beyond 128 nodes, absolute rendering performance remains interactive.
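The robust statistics used for these plots are straightforward to compute; a small sketch with illustrative names:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Robust statistics used in the benchmark plots: median frame time with
// the median absolute deviation (MAD) as the error measure, both of
// which resist outliers from other jobs sharing the system.
double median(std::vector<double> v) {
  std::sort(v.begin(), v.end());
  const size_t n = v.size();
  return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

double medianAbsoluteDeviation(const std::vector<double>& v) {
  const double m = median(v);
  std::vector<double> dev;
  dev.reserve(v.size());
  for (double x : v) dev.push_back(std::fabs(x - m));
  return median(dev);
}
```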
On the DNS we find near ideal scaling from 16 to 32 nodes (Figure 8(a)); however, we observe little change from 32 to 64 nodes, although we see improvement again at 64 to 128 nodes. To find the cause of the bottleneck at 64 nodes, we look at a breakdown of the time spent rendering the rank's local data and the compositing overhead incurred by the DFB (Figure 8(b)). Compositing overhead refers to the additional time the compositor takes to complete the image, after the slowest local rendering task has completed [1]. In this case we find that the bottleneck is caused by the local rendering task not scaling, which could be addressed by employing a hybrid data distribution or the work-splitting techniques discussed previously.

#### 6.2.1 Compositing Performance Comparison with IceT

To perform a direct comparison with IceT for data-parallel rendering, we use a synthetic data set (Figure 5(d)), and modify our data-parallel renderer to support using IceT for compositing. The IceT renderer follows the same code-path as our data-parallel renderer to render its assigned brick of data, then hands the framebuffer off to IceT for compositing. We found IceT's automatic compositing algorithm selection to give the best performance, and use this mode throughout the benchmarks. In terms of overall scalability and performance, our approach scales better than, or at least similarly to, IceT, while achieving better absolute rendering performance (Figures 9(a) to 9(c)). When comparing timing breakdowns (Figures 9(d) to 9(f)) we find that, as expected, local rendering times are similar, and the performance difference is due to the differing compositing overhead. It is important to note that some of the absolute difference in overhead is due to IceT's synchronous design, which makes it unable to overlap compositing with rendering. We can consider a hypothetical IceT implementation which does overlap compositing and rendering by subtracting the local rendering time from the compositing overhead, and find that the DFB still achieves similar or superior compositing performance. Furthermore, we observe that when comparing the scaling trends of the two approaches, the DFB scales similarly to, or better than, IceT. Although a rigorous comparison is difficult due to the different HPC systems used, the DFB follows similar scaling trends as Grosset et al.'s DSRB [1], while providing greater flexibility. Finally, we evaluate the portability of our approach by comparing the KNL runs on _Stampede2_ (Figures 9(b) and 9(e)) and _Theta_ (Figures 9(a) and 9(d)). The slightly different KNLs on each system will have a minor effect on performance; however, any significant differences are attributable to the differing network architectures and job placement strategies. On _Stampede2_ we observe a rather bumpy scaling trend where, depending on the image size, we see a temporary decrease in the compositing performance at certain node counts. On _Theta_ we observe a smoother trend, with better absolute compositing performance. We found that disabling message compression on _Theta_ gave better performance, while on _Stampede2_ we encountered MPI messaging performance issues at 16 nodes and up without it. Thus, we leave compression as an option for users, which is enabled by default at 16 nodes. In our benchmarks we disable compression on _Theta_, and enable it at 16 nodes and up on _Stampede2_. IceT uses a custom image compression method, which is not easily disabled.
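Measuring the compositing overhead defined above amounts to subtracting the slowest local rendering time from the total frame time; a minimal sketch (timing variables are illustrative):

```cpp
#include <mpi.h>

// Compositing overhead as defined in the text: the extra time the
// compositor takes to finish the frame after the slowest local
// rendering task has completed.
double compositingOverhead(double localRenderTime, double totalFrameTime) {
  double slowestRender = 0.0;
  // Find the slowest local rendering time across all ranks.
  MPI_Allreduce(&localRenderTime, &slowestRender, 1, MPI_DOUBLE, MPI_MAX,
                MPI_COMM_WORLD);
  // Whatever frame time remains beyond the slowest renderer is overhead
  // attributable to compositing and communication.
  return totalFrameTime - slowestRender;
}
```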
### Hybrid Data Distribution Rendering Performance

To measure the impact of partial data replication on load balance, we look at the per-frame overall time on the DNS with isosurfaces data set on _Stampede2_ (Figure 10). The volume is partitioned into as many bricks as there are ranks, with bricks redundantly assigned to ranks based on the available memory capacity. When using 64 KNLs there is enough memory to store two bricks per-rank; with 128 KNLs we can store up to four. The rendering work for each brick will be distributed among two or four ranks, respectively. The redundant bricks are distributed using a simple round-robin assignment. A brick distribution based on, e.g., some space filling curve or runtime tuning, could provide additional improvement.

Figure 7: Data-parallel strong-scaling on the Cosmic Web data set on _Theta_. We find close to ideal scaling at moderate image sizes and node counts, with somewhat poorer scaling at very high resolutions.

Figure 8: Data-parallel strong-scaling on the DNS with isosurfaces on _Stampede2_ KNLs. The lack of scaling from 32 to 64 nodes is attributable to a poor local work distribution (b), which can be partially addressed by using our mixed-parallel renderer.

In both the 64 and 128 node runs the two brick per-node configuration provides a consistent improvement over no replication. This improvement is more pronounced for camera positions with greater load imbalance. With four bricks per-node, there are larger fluctuations in rendering performance, though at times we do find improvement over the two brick configuration. These larger fluctuations could be due to increased memory traffic, which is alleviated as data is cached in the KNL MCDRAM. This theory is further supported by the sharp spikes in performance, when new data must be fetched from RAM.

Figure 9: Compositing benchmark performance comparison of the DFB and IceT on the synthetic data set. We find that our approach achieves better, or at least similar, scaling to IceT, while providing faster absolute rendering times. In the timing breakdowns (d-f), we observe this difference is due to the DFB achieving a significant reduction in compositing overhead.

Figure 10: Improving load-balancing on the DNS with isosurfaces with partial data-replication in the mixed-parallel renderer. Sharing rendering between two nodes (two bricks per-node) gives a consistent improvement; sharing between four tends to give further improvement.

## 7 Conclusion

We have presented the Distributed FrameBuffer, an asynchronous, distributed image processing and compositing framework primarily targeted at rendering applications. By breaking the image processing operations into a set of per-tile tasks with independent dependency trees, the DFB simplifies the implementation of complex distributed rendering algorithms. Moreover, the DFB does not trade performance for this flexibility, and we report performance competitive with specialized state-of-the-art algorithms. Our data-distributed API extension to OSPRay has already been used successfully in practice for in situ visualization [2]. We have merged our implementation of the DFB, the rendering algorithms presented, and the data-distributed API into OSPRay, and released them in version 1.8. While prior work integrated OSPRay into VisIt [2] by using OSPRay's single-node rendering API and IceT for compositing, this can now be done using the distributed API directly. Compared to results reported on prior versions of OSPRay [1], our work provides significant performance improvements. However, the DFB and rendering algorithms presented are not without limitations. The rendering algorithms presented support only local lighting effects computed with the data available on a rank. Although approaches to compute global illumination on distributed data by sending rays between nodes [1, 2] could be implemented in the DFB, it is unclear how well a naive implementation would perform, or if extensions to the DFB would be required. We leave this exciting avenue of research as future work. In our evaluation we observed large differences in MPI performance and network behavior between _Stampede2_ and _Theta_. Although we expose the use of compression as an option for users to tune as needed, it would be worthwhile to investigate self-tuning strategies for the DFB to automatically adapt to such architectural differences.

## Acknowledgments

We would like to thank Damon McDougall and Paul Navratil of the Texas Advanced Computing Center for assistance investigating MPI performance at TACC, and Mengjiao Han for help with the display wall example. The Cosmic Web and DNS datasets were made available by Paul Navratil; the Richtmyer-Meshkov is courtesy of Lawrence Livermore National Laboratory. This work is supported in part by the Intel Parallel Computing Centers Program, NSF: CGV Award: 1314896, NSF:IP Award: 1602127, NSF:ACI Award: 1649923, DOE/SciDAC DESC0007446, CCMSC DE-NA0002375 and NSF:OAC Award: 1842042. This work used resources of the Argonne Leadership Computing Facility, which is a U.S. Department of Energy Office of Science User Facility supported under Contract DE-AC02-06CH11357. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported in this paper.
2307.11098
Revise thermal winds of remnant neutron stars in gamma-ray bursts
It seems that the wealth of information revealed by the multi-messenger observations of the binary neutron star (NS) merger event, GW170817/GRB 170817A/kilonova AT2017gfo, places irreconcilable constraints on models of the prompt emission of this gamma-ray burst (GRB). The observed time delay between the merger of the two NSs and the trigger of the GRB and the thermal tail of the prompt emission can hardly be reproduced by these models simultaneously. We argue that the merger remnant should be an NS (lasting for, at least, a large fraction of 1 s), and that the difficulty can be alleviated by the delayed formation of the accretion disk due to the absorption of high-energy neutrinos emitted by the NS and the delayed emergence of effective viscosity in the disk. Further, we extend the consideration of the effect of the energy deposition of neutrinos emitted from the NS. If the NS is the central object of a GRB with a distance and duration similar to those of GRB 170817A, thermal emission of the thermal bubble inflated by the NS after the termination of accretion may be detectable. If our scenario is verified, it would be of interest to investigate the cooling of nascent NSs.
Shuang Du, Tingting Lin, Shujin Hou, Renxin Xu
2023-07-16T14:19:12Z
http://arxiv.org/abs/2307.11098v1
# Revise thermal winds of remnant neutron stars in gamma-ray bursts ###### Abstract It seems that the wealth of information revealed by the multi-messenger observations of the binary neutron star (NS) merger event, GW170817/GRB 170817A/kilonova AT2017gfo, places irreconcilable constraints on models of the prompt emission of this gamma-ray burst (GRB). The observed time delay between the merger of the two NSs and the trigger of the GRB and the thermal tail of the prompt emission can hardly be reproduced by these models simultaneously. We argue that the merger remnant should be an NS (lasting for, at least, a large fraction of \(1\,{\rm s}\)), and that the difficulty can be alleviated by the delayed formation of the accretion disk due to the absorption of high-energy neutrinos emitted by the NS and the delayed emergence of effective viscosity in the disk. Further, we extend the consideration of the effect of the energy deposition of neutrinos emitted from the NS. If the NS is the central object of a GRB with a distance and duration similar to those of GRB 170817A, thermal emission of the thermal bubble inflated by the NS after the termination of accretion may be detectable. If our scenario is verified, it would be of interest to investigate the cooling of nascent NSs.

Keywords: gamma-ray bursts -- stars: neutron

## 1 Introduction

The multi-messenger observations of the binary neutron star (NS) merger event, GW170817/GRB 170817A/kilonova AT2017gfo, provide abundant information for relevant topics (Abbott et al., 2017), for example, the origin of short gamma-ray bursts (sGRBs; e.g., Abbott et al. 2017; see Margutti & Chornock 2021 for a review). However, the central object left by the binary NS merger is still uncertain (Abbott et al., 2017, 2019). Some works have discussed the connection between the merger remnant and the corresponding kilonova (Margalit & Metzger, 2017; Yu et al., 2018) and the GRB afterglow (Piro et al., 2019; Hajela et al., 2022; Ren & Dai, 2022). Likewise, there may also be a certain connection between the remnant and the GRB prompt emission; for example, Tong et al. (2018) considered a magnetar collapse scenario for the prompt emission (but this model is not supported by the afterglow observation and has difficulty explaining the spectral evolution of the prompt emission). The burst triggered \(\sim 1.74\,{\rm s}\) after the merger of the two NSs (Abbott et al., 2017). _Fermi_-GBM observations of GRB 170817A (Goldstein et al., 2017) show that the burst has two episodes: a main pulse from \(t_{\rm t}-0.320\,{\rm s}\) to \(t_{\rm t}+0.256\,{\rm s}\) and a lower-significance tail from \(t_{\rm t}+0.832\,{\rm s}\) to \(t_{\rm t}+1.984\,{\rm s}\), where \(t_{\rm t}\) is the GRB trigger time. The spectrum of the main pulse shows a non-thermal feature that is best fitted by a Comptonized function. However, the spectrum of the weak tail indicates black-body radiation with temperature \(T_{\rm th}=10.3\pm 1.5\,{\rm keV}\). The abundant observations place many constraints on GRB models and expose faults in these models (see Margutti & Chornock, 2021 for a review). The basic difficulty is that, if the thermal tail of GRB 170817A is reliable, physical models can hardly reproduce the observed time delay of \(\sim 1.74\,{\rm s}\) and the black-body radiation with temperature \(T_{\rm th}=10.3\pm 1.5\,{\rm keV}\) simultaneously.
In Section 2, we clarify the conflicts between observations and models and further show a solution to the difficulty: if the merger remnant is an NS, the delay can partly result from the delayed disk formation due to the absorption of high-energy neutrinos emitted by the NS and the delayed emergence of effective viscosity in the accretion disk. In Section 3, we extend the consideration shown in Section 2 and discuss the electromagnetic signal of the thermal bubble inflated by the NS. Section 4 is the summary and discussion.

## 2 Conflicts and the Solution

### Conflicts

The prompt emission of GRB 170817A is different from that of previous GRBs with detected thermal components. There is no such evident interval and boundary between the power-law and thermal components in previous ones (as summarized in Li, 2019). As shown in Figure 6 of Goldstein et al. (2017), the flux of the soft tail is comparable to that of the main pulse in \(10-50\,{\rm keV}\). This means that if there were a similar thermal component in the main pulse, this thermal component should be visible too1. The Comptonized component followed by a thermal component challenges the classical framework of GRBs. (i) According to the widely accepted model that the prompt emission of a GRB is powered by the energy release in the GRB jet after the jet turns optically thin (e.g., Rees & Meszaros, 1994; see Zhang, 2018 for more details), if the jet is hot enough at the photosphere, the thermal component should be detected first, not following the Comptonized main pulse (see also Abbott et al., 2017); (ii) no matter how the jet is produced, the jet should be uniform in its composition along the line of sight (as opposed to a stratified jet; Cheng et al., 2021). The production of a jet in which a cold segment is followed by a hot segment is confusing. All these facts indicate that considering only the dissipation of jet energy (e.g., Rees & Meszaros, 1994; Zhang & Yan, 2011) cannot explain the time delay and thermal tail in a unified way. To explain these characteristics, some geometrical effects should be invoked.

Footnote 1: This is an important aspect that has been overlooked and indicates the separation between the radiation region of the main gamma-ray pulse and that of the soft thermal tail.

The absence of an evident thermal component in the main pulse implies that there should be a separation between the radiation regions (as well as the relevant parameters) of the two components. Meng et al. (2018) showed that an off-axis high-latitude-enhanced photospheric emission plus an on-axis photospheric emission of a structured jet can explain the observed spectrum. However, the Lorentz factor along the line of sight of the jet is too large, \(\sim 20\), which deviates from the constraint of the GRB afterglow (e.g., see Figure 7 in Margutti & Chornock, 2021), and a significant delay between the merger and jet launching is required: since the photosphere is too close to the central engine, the time delay of \(1.74\,{\rm s}\) cannot be totally attributed to the travel time of the jet. Gottlieb et al. (2018) showed another scenario in which the hard-to-soft spectral evolution can be induced by the transition (from a planar to a spherical phase) of the emission arising from a shock breakout of a cocoon from the merger's ejecta. Similarly, this scenario also requires a delayed jet injection.
Therefore, to explain the time delay between GRB 170817A and GW170817 and the hard-to-soft spectral evolution simultaneously, under both the structured-jet scenario and the jet-cocoon scenario (which should be the only two jet profiles allowed by the observed properties of GRB 170817A; Abbott et al., 2017; see Geng et al., 2019 for the simulation side), a delayed jet injection is required. Granot et al. (2017) suggested that the merger remnant is a short-lived (lifetime \(\leq 1\,{\rm s}\)) hypermassive NS, and the jet can only be launched after the hypermassive NS collapses into a black hole. However, this consideration is hard to support from either theory (see Liu et al. 2017 for a review; even if the remnant is an NS, the accretion is still ongoing) or observation (Jordana-Mitjans et al., 2022).

### A possible solution

Under the classical GRB framework (see Zhang 2018 for more details), the delayed jet injection indicates delayed disk formation, since the jet is launched by the central object-disk system. Simulations show that the disk forms within several dynamical timescales of the system after the merger (\(t_{\rm for}\sim 10\,{\rm ms}\)) regardless of the central object (Shibata & Hotokezaka, 2019). However, the type of the central object may affect the formation of the disk. According to the observations of SN1987A (Kunkel et al., 1987; Hirata et al., 1987; Bionta et al., 1987), the temperature of the nascent NS is several \({\rm MeV}\) and the NS can stay very hot for a few seconds after birth (see Arnett et al. 1989 for a review). If the central object of GRB 170817A is an NS, compared with the black-hole case, there will be an extra energy shedding from the central engine: a strong neutrino emission and a neutrino-driven wind can emerge from the nascent NS (Duncan et al., 1986; Qian & Woosley, 1996). The absorption of high-energy neutrinos may unbind the gravity-bound ejecta, so that the formation of the disk will be put off until some of the ejecta falls back due to, for example, viscosity and friction in the ejecta and the cooling of the NS. Therefore, the disk formation would be determined by the competition between the release of gravitational potential energy and the energy deposition in the gravity-bound ejecta. The time necessary for the gravity-bound ejecta to absorb enough energy to overcome gravity can be estimated as (Perego et al., 2014) \[t_{\rm fre}\sim\frac{e_{\rm gra}}{\dot{e}_{\rm neu}}=0.02{\rm s}\left(\frac{M_ {\rm ns}}{2.5{\rm M}_{\odot}}\right)\left(\frac{R_{\rm in}}{30{\rm km}}\right) \left(\frac{L_{\nu_{\rm e}}}{3\times 10^{52}{\rm erg~{}s}^{-1}}\right)^{-1} \left(\frac{\xi}{1.5}\right)^{-1}\left(\frac{E_{\nu}}{15{\rm MeV}}\right)^{-2}, \tag{1}\] where \(e_{\rm gra}\) is the specific gravitational energy, \(R_{\rm in}\) is the inner radius of the disk, which is assumed to be close to the innermost stable circular orbit of a Schwarzschild black hole with the same mass as the NS, \(\dot{e}_{\rm neu}\) is the specific heating rate provided by neutrino absorption at a radial distance \(R_{\rm in}\) from the centre, \(M_{\rm ns}\) is the NS mass, \(L_{\nu_{\rm e}}\) is the luminosity of electron neutrinos from the NS, \(\xi\) is the factor describing the aspect ratio of the disk, and \(E_{\nu}\) is the neutrino energy from the NS. Comparing \(t_{\rm for}\) and \(t_{\rm fre}\), one finds that they are of the same order; that is, it is possible to put off the formation of the disk through the absorption of high-energy neutrinos2.
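For concreteness, taking \(e_{\rm gra}\approx GM_{\rm ns}/R_{\rm in}\) (our reading of the specific gravitational energy entering Eq. (1); an assumption for illustration), the fiducial values give

\[e_{\rm gra}\approx\frac{GM_{\rm ns}}{R_{\rm in}}\approx\frac{6.67\times 10^{-8}\times\left(2.5\times 1.99\times 10^{33}\right)}{3\times 10^{6}}\,{\rm erg~g^{-1}}\approx 1.1\times 10^{20}\,{\rm erg~g^{-1}},\]

so the specific heating rate implied by \(t_{\rm fre}\sim 0.02\,{\rm s}\) is \(\dot{e}_{\rm neu}\approx e_{\rm gra}/t_{\rm fre}\approx 5\times 10^{21}\,{\rm erg~g^{-1}~s^{-1}}\).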
The upper limit of the delay of disk formation due to the absorption of neutrinos can be estimated as the cooling timescale of the NS, that is (Perego et al., 2014)

Footnote 2: If the value of \(t_{\rm fre}\) is larger than that of \(t_{\rm for}\) but smaller than the duration of the accretion, there will be a pause of the accretion, as well as of the jet launching, once the accretion has lasted for \(t_{\rm fre}\), and the energy released in the foregoing jet will then appear as a precursor.

\[t_{\rm cool,ns}\sim\frac{3R_{\rm ns}^{2}}{c\lambda_{\rm N\nu}}\approx 0.7{\rm s}\left(\frac{R_{\rm ns}}{15{\rm km}}\right)^{2}\left(\frac{\rho_{\rm ns}}{10^{14}{\rm g~{}cm}^{-3}}\right)\left(\frac{E_{\nu}}{15{\rm MeV}}\right)^{2}, \tag{2}\] where \(R_{\rm ns}\) is the radius of the NS and \(\lambda_{\rm N\nu}\) is the mean free path of neutrinos in the NS, which is given by \[\lambda_{\rm N\nu}\approx 7.44\times 10^{3}{\rm cm}\left(\frac{\rho_{\rm ns}}{10^{14}{\rm g~{}cm}^{-3}}\right)^{-1}\left(\frac{E_{\nu}}{15{\rm MeV}}\right)^{-2}, \tag{3}\] and \(\rho_{\rm ns}\) is the matter density of the NS.

Besides, to start the accretion and launch the jet (e.g., through neutrino pair annihilation; see Liu et al. 2017 for a review)3, the disk should transfer angular momentum and generate thermal energy. It is difficult to estimate when viscosity becomes effective. If only the main gamma-ray pulse of GRB 170817A is produced by the jet and the thermal tail originates from other causes (e.g., from the cocoon), then to match the duration of the main pulse \(t_{90,\rm m}\approx 0.58\rm s\) (Goldstein et al., 2017), the viscosity parameter (Shakura & Sunyaev, 1973) can be estimated as \[\alpha\sim t_{90,\rm m}^{-1}\left(\frac{H}{R_{\rm out}}\right)^{-2}\Omega_{\rm k}^{-1}\approx 0.02\left(\frac{t_{90,\rm m}}{0.6\rm s}\right)^{-1}\left(\frac{H/R}{1/3}\right)^{-2}\left(\frac{M_{\rm ns}}{2.5\rm M_{\odot}}\right)^{-1/2}\left(\frac{R_{\rm out}}{100\rm km}\right)^{3/2}, \tag{4}\] where \(H\) is the disk height, \(H/R\) is the aspect ratio of the disk, \(\Omega_{\rm k}\) is the Keplerian angular velocity, and \(R_{\rm out}\) is the outermost radius of the disk. Usually, without magnetic instabilities, the viscosity parameter cannot reach \(\sim 0.01-0.1\) (see Kato et al., 2008 for more details). Therefore, if an effective magnetic field cannot emerge in the disk in time, the jet launching may also be put off. As an example, if the seed of the magnetic field in the disk comes from the NS, only after the magnetic field of the NS has been generated (e.g., in \(\sim 1\rm s\) after the formation of the proto-NS; Thompson & Duncan, 1993) can the disk begin to amplify this magnetic seed. Note that a large-scale magnetic field in the disk should be essential: the magnetic field can prevent proton deposition in the jet core, so that excessive mass loading of the jet by the neutrino-driven wind (Perego et al., 2014) is avoided.

### An interim summary

Based on the assumptions that the thermal gamma-ray tail of GRB 170817A is real and that the structured jet and the jet-cocoon structure are the only two allowed jet profiles for explaining the thermal tail, we arrive at the following claims. (a) The incompatibility between observations and models of GRB 170817A indicates the requirement of a delayed jet injection.
(b) Although a quantitative estimate is absent, due to the unknown cooling and magnetization of nascent NSs, neutrino energy deposition and the delayed emergence of effective viscosity in the disk are physical mechanisms that may produce a fraction of the time delay of \(\sim 1.74\rm s\) (e.g., \(\sim 0.7\rm s\) after the merger, as required in Gottlieb et al., 2018).

The delay may not be the only effect of the energy deposition of neutrinos in GRBs. As long as the duration of a GRB is short enough and the central object of the GRB is an NS, the NS could still be hot enough when the accretion terminates, so that a cleaner fireball (compared with the neutrino-driven wind launched before the termination of the accretion) will be inflated through neutrino pair annihilation (Goodman et al., 1987) and even thermal radiation (Usov, 2001). Although GRBs should be powered by energy release in jets, it is meaningful to revisit the evolution of this fireball, since it is directly powered by the NS and may reveal information about the GRB central engine and test the above idea. In the next section, we discuss the possible electromagnetic counterpart of the fireball.

## 3 Possible connection between NSs and precursors

### Basic schematic diagram

After the onset of the binary NS merger, the dynamical mass ejection begins, and the proto-NS forms soon after, together with the neutrino-driven wind from the NS (Duncan et al., 1986; Qian & Woosley, 1996)4. Almost simultaneously (e.g., within \(\sim 1\rm s\) after the merger according to the above discussion), an accretion disk emerges. The disk and the NS make up the GRB central engine, and the relativistic jet begins to be launched. Together with the jet launching, a baryonic wind driven by neutrinos from the NS and disk (Ruffert et al., 1997; Dessart et al., 2009; Perego et al., 2014) is present. After the termination of the accretion, a fireball with less baryonic mass loading, powered entirely by the NS, is inflated. As the cooling of the NS is very fast (Burrows & Lattimer, 1986), this fireball should be less energetic than the fireball expected in Paczynski (1986, 1990). Hereafter, this less energetic and cleaner fireball is called the thermal bubble. Since the jet breaks through the ejecta, the interspace left by the jet will be continually filled by matter extending from the thermal bubble. That is, the thermal bubble mainly inflates along the direction of jet propagation, and the inflation along other directions is suppressed by this prior distribution of energy and the constraint of the surrounding matter. Correspondingly, two bunches will extend from the thermal bubble (see Fig. 1). As long as the power of the bunch is large enough, the bunch will ultimately rush out of the ejecta following the jet. Empirically, according to the isotropic luminosities of on-axis low-luminosity GRBs (e.g., GRB 060218, Campana et al. 2006; GRB 100316D, Starling et al. 2011)5, the critical isotropic luminosity that allows the bunch to break out from the surrounding matter should be

Footnote 5: The fluxes of the X-ray afterglows of these GRBs decay continuously, which differs from that of the off-axis GRB (GRB 170817A) and is consistent with the expectation for on-axis GRBs. Therefore, we believe these GRBs are viewed on-axis and cannot be explained via the jet-cocoon scenario.

\[L_{\rm c}\sim 10^{46}{\rm erg~{}s^{-1}}. \tag{5}\]
Although the cooling of the nascent NS is highly uncertain (Yakovlev & Pethick, 2004; Page et al., 2006), observations (Arnett et al., 1989) and simulations (Qian & Woosley, 1996) indicate that the temperature of the nascent NS should remain at \(10^{10}-10^{9}\)K for several seconds. Therefore, as long as the lifetime of the accretion disk is short enough, the total energy of the thermal bubble inflated in several seconds may be up to \(10^{48}-10^{49}{\rm erg}\), as expected in Goodman et al. (1987). The luminosity of the bunch can therefore exceed the critical value. As the NS cools, the luminosity of the thermal bunch will, at a certain time, exceed the power supplied by the NS. If the bunch gets out of the encirclement after this time, the total energy of the thermal bubble will be reduced, as will the energy density and pressure. Therefore, the escape route left by the jet will ultimately be closed by the squeezing of the surrounding matter, due to the decrease of the energy density in the thermal bubble, and the bunch will be cut off. After the breakout, the bunch will finally become transparent to the thermal radiation. Next, we discuss the detectability of this thermal radiation.

Figure 1: Schematic diagram for the evolution of the thermal bubble.

### The photosphere emission of the bunch

If the bunch can break out of the ejecta, the fundamental question is when or where the breakout occurs. We need to know the velocities of the ejecta and the bunch. The expansion time of the ejecta is \(\sim R_{\rm ns}/v_{\rm e}\), where \(R_{\rm ns}\) is the radius of the NS and \(v_{\rm e}\) is the velocity of the ejecta. According to the duration of sGRBs, the lifetime of the accretion disk is likely longer than \(R_{\rm ns}/v_{\rm e}\). So, after the termination of the accretion, the ejecta should be in the self-similar adiabatic expansion stage with a constant velocity (\(\sim 0.1c-0.3c\), where \(c\) is the speed of light). Similarly, the bunch also soon inflates to its maximum speed, since the faster jet has cleared the way for it6. The saturation radius of the bunch can be estimated as \(R_{\rm b,c}\sim\eta_{\rm b}R_{\rm ns}\) (Piran et al., 1993), where \(\eta_{\rm b}\) is the dimensionless entropy of the bunch (i.e., the maximum Lorentz factor of the bunch). The Lorentz factor of the jet is usually \(\eta_{\rm j}\sim 10^{2}-10^{3}\). Compared with the jet, the bunch is less energetic and cleaner. Therefore, a maximum Lorentz factor of order \(\eta_{\rm b}\sim 10^{1}\) is adopted for the bunch, and the saturation radius of the bunch is then \(R_{\rm b,c}\simeq\eta_{\rm b}R_{\rm ns}\sim 10^{7}{\rm cm}\). If the breakout radius of the thermal bunch, \(R_{\rm b,b}\), is large enough that \(R_{\rm b,b}\gg R_{\rm b,c}\), the velocity of the bunch is relativistic for most of the time before it breaks out of the ejecta. In this case, the breakout radius can be estimated as

Footnote 6: If the jet power is sufficiently larger than the critical value defined by equation (5), we can expect that the acceleration of the jet is almost unaffected by the ejecta. The acceleration of the trailing bunch is then also unaffected, since the jet should be faster.
\[R_{\rm b,b}\sim\frac{v_{\rm b}v_{\rm e}\Delta t}{v_{\rm b}-v_{\rm e}}=3.0\times 10^{9}{\rm cm}\left(\frac{v_{\rm e}}{0.1c}\right)\left(\frac{\Delta t}{1{\rm s}}\right)\left(\frac{v_{\rm b}/(v_{\rm b}-v_{\rm e})}{1}\right), \tag{6}\] where \(\Delta t\) is the time interval between the ejection of the ejecta and the inflation of the thermal bubble, and \(v_{\rm b}\approx c\) is the velocity of the bunch. Comparing \(R_{\rm b,c}\) and \(R_{\rm b,b}\), one finds that the estimation is self-consistent. To see the thermal radiation from the bunch, the bunch must reach its photosphere radius, \(R_{\rm ph,b}\), and there should be \(R_{\rm ph,b}>R_{\rm b,b}\). As discussed above, the saturation radius of the bunch is much smaller than its breakout radius. Therefore, only the case \(R_{\rm b,b}>R_{\rm b,c}\) remains; that is to say, when the bunch reaches the photosphere, it has already passed the saturation radius. In this case, the photosphere radius is given by (Meszaros & Rees, 2000; Zhang, 2018) \[R_{\rm ph,b}=\frac{\sigma_{\rm T}L_{\rm b}Y}{8\pi\eta_{\rm b}^{3}m_{\rm p}c^{3}}\simeq 2.2\times 10^{11}{\rm cm}\left(\frac{L_{\rm b}}{10^{49}{\rm erg\ s}^{-1}}\right)\left(\frac{\eta_{\rm b}}{30}\right)^{-3}Y, \tag{7}\] where \(\sigma_{\rm T}\) is the electron Thomson cross section, \(L_{\rm b}\) is the isotropic luminosity of the bunch, \(Y\) is the pair multiplicity parameter, and \(m_{\rm p}\) is the proton mass. As a comparison, the photosphere radius of the jet is \[R_{\rm ph,j}=\frac{\sigma_{\rm T}L_{\rm j}Y}{8\pi\eta_{\rm j}^{3}m_{\rm p}c^{3}}\simeq 2.2\times 10^{10}{\rm cm}\left(\frac{L_{\rm j}}{10^{51}{\rm erg\ s}^{-1}}\right)\left(\frac{\eta_{\rm j}}{300}\right)^{-3}Y, \tag{8}\] where \(L_{\rm j}\) is the jet power, and \(\eta_{j}\) is the dimensionless entropy of the jet. Thus, it is unnecessary to worry too much about the shielding of the bunch's thermal radiation by the jet. However, we should consider whether the jet's prompt emission covers the thermal radiation from the bunch. According to the normal GRB scenario, the prompt emission is emitted at the internal shock radius, which is given by (Rees & Meszaros, 1994) \[R_{\rm is}\simeq 2\eta_{\rm j}^{2}c\Delta t=6\times 10^{11}{\rm cm}\left(\frac{\eta_{\rm j}}{100}\right)^{2}\frac{\Delta t}{1{\rm ms}}, \tag{9}\] where \(\Delta t\) here is the temporal gap between the ejection of two shells of the jet. When the tail of the jet reaches the internal shock radius, the bunch has arrived at \[r\sim\beta_{\rm b}c\cdot\left(\frac{R_{\rm is}}{\beta_{\rm j}c}\right)=\frac{\beta_{\rm b}}{\beta_{\rm j}}R_{\rm is}, \tag{10}\] where \(\beta_{\rm b}=v_{\rm b}/c\approx 1-1/(2\eta_{\rm b}^{2})\), and \(\beta_{\rm j}\approx 1-1/(2\eta_{\rm j}^{2})\). Correspondingly, if the termination of the prompt emission is set as the tail of the jet passing the internal shock radius, we have the following three situations. 1. If \(r<R_{\rm ph,b}\), the thermal radiation from the bunch will never be covered by the prompt emission. 2. When \(r>R_{\rm ph,b}\), compared with the prompt emission, the thermal radiation from the bunch will be emitted earlier, at \(R_{\rm ph,b}\). At this time, the tail of the jet has arrived at \(\sim R_{\rm ph,b}\beta_{\rm j}/\beta_{\rm b}\). Therefore, there will be a catch-up problem. In the lab frame, the jet moves with \(\beta_{\rm j}\) and leaves \(R_{\rm ph,b}\) first.
After a time interval \(\delta t\), the thermal radiation leaves \(R_{\rm ph,b}\) and begins to pursue the jet (which has arrived at \(\sim R_{\rm ph,b}\beta_{\rm j}/\beta_{\rm b}\) at this time). If the thermal radiation from the bunch is observed after the termination of the prompt emission, the travel distance that allows the thermal radiation to catch up with the jet should satisfy (i.e., when the thermal photon arrives at the internal shock radius, it has not yet caught up with the jet) \[2\eta_{\rm j}^{2}c\delta t>R_{\rm is}-R_{\rm ph,b},\] (11) where \[\delta t\sim\left(\frac{R_{\rm ph,b}}{\beta_{\rm b}c}\beta_{\rm j}c-R_{\rm ph,b}\right)/(\beta_{\rm j}c)\approx\frac{R_{\rm ph,b}}{\beta_{\rm j}c}\left(\frac{\beta_{\rm j}}{\beta_{\rm b}}-1\right).\] (12) In this situation, a small value of \(R_{\rm is}\) is needed, for example, \(R_{\rm is}<\sim 100R_{\rm ph,b}\) by taking \(\eta_{\rm j}=100\) and \(\eta_{\rm b}=10\). 3. Alternatively, when \(r>R_{\rm ph,b}\), if the thermal radiation is detected in front of the prompt emission, there should be \[2\eta_{\rm j}^{2}c\delta t^{\prime}<R_{\rm is}-R_{\rm ph,b},\] (13) where \[\delta t^{\prime}\sim\left[R_{\rm ph,b}\left(\frac{\beta_{\rm j}}{\beta_{\rm b}}-1\right)+\beta_{\rm j}ct_{90}\right]/(\beta_{\rm j}c).\] (14) Similarly, we have \(R_{\rm is}>\sim 100R_{\rm ph,b}+2\times 10^{4}ct_{90}\) with the same parameters as in situation (II).

In any of the above three situations, black-body radiation with luminosity (Meszaros & Rees, 2000; Zhang, 2018) \[L_{\rm ph,b}=L_{\rm b}\left(\frac{R_{\rm ph,b}}{R_{\rm b,c}}\right)^{-2/3}=3.2\times 10^{46}{\rm erg\ s}^{-1}\left(\frac{L_{\rm b}}{10^{49}{\rm erg\ s}^{-1}}\right)^{7/12}\left(\frac{R_{\rm ns}}{1.2\times 10^{6}{\rm cm}}\right)^{-5/6}\left(\frac{\eta_{\rm b}}{30}\right)^{8/3}Y^{-2/3}, \tag{15}\] temperature \[T_{\rm obs}=\left(\frac{L_{\rm b}}{4\pi R_{\rm ns}^{2}g_{0}\sigma_{\rm B}}\right)^{1/4}\left(\frac{R_{\rm ph,b}}{R_{\rm b,c}}\right)^{-2/3}=31.3{\rm keV}\left(\frac{L_{\rm b}}{10^{49}{\rm erg\ s}^{-1}}\right)^{-5/12}\left(\frac{R_{\rm ns}}{1.2\times 10^{6}{\rm cm}}\right)^{-5/6}\left(\frac{\eta_{\rm b}}{30}\right)^{11/3}Y^{-2/3} \tag{16}\] and a duration of several seconds will be emitted, where \(g_{0}\) is the effective degree of freedom of the equipartition theorem, and \(\sigma_{\rm B}\) is the Stefan-Boltzmann constant. Note that in situations (I) and (II) the prompt emission should be followed by the thermal radiation, while in situation (III) the thermal radiation should be followed by the prompt emission. If an on-axis sGRB at a distance similar to that of GRB 170817A is detected in the future, the discussion in this section may be tested.
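As a numerical cross-check of the estimates in this section, the sketch below evaluates equations (6)-(8) and the photospheric-luminosity suppression of equation (15) directly from their defining expressions, using the fiducial parameters of the text and assuming \(Y=1\); it is illustrative only, and the small difference from the quoted breakout radius comes from evaluating \(v_{\rm b}/(v_{\rm b}-v_{\rm e})\) exactly.

```python
import math

# cgs constants
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_P = 1.673e-24       # proton mass [g]
C = 2.998e10          # speed of light [cm/s]

def breakout_radius(v_e=0.1 * C, dt=1.0, v_b=C):
    """Equation (6): breakout radius of the bunch [cm]."""
    return v_b * v_e * dt / (v_b - v_e)

def photosphere_radius(L, eta, Y=1.0):
    """Equations (7)/(8): photosphere radius of a relativistic outflow [cm]."""
    return SIGMA_T * L * Y / (8.0 * math.pi * eta**3 * M_P * C**3)

L_b, eta_b, R_ns = 1e49, 30.0, 1.2e6
R_sat = eta_b * R_ns                          # saturation radius R_b,c ~ eta_b * R_ns
R_ph = photosphere_radius(L_b, eta_b)         # ~2.2e11 cm, as in equation (7)
L_ph = L_b * (R_ph / R_sat) ** (-2.0 / 3.0)   # equation (15): ~3e46 erg/s

print(f"R_b,b  = {breakout_radius():.2e} cm")  # ~3.3e9 cm; 3.0e9 cm if v_b/(v_b - v_e) ~ 1
print(f"R_ph,b = {R_ph:.2e} cm")
print(f"R_ph,j = {photosphere_radius(1e51, 300.0):.2e} cm")  # equation (8)
print(f"L_ph,b = {L_ph:.2e} erg/s")
```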
## 4 Summary and Discussion

In this paper, we suggest a scenario to eliminate the conflicts between observations and models of GRB 170817A under the assumption that the remnant of the binary NS merger is an NS (lasting for, at least, a large fraction of \(1\)s). We argue that a fraction of the time delay of \(\sim 1.74\)s between GW170817 and GRB 170817A can result from the absorption of high-energy neutrinos emitted by the central NS and the delayed emergence of effective viscosity in the disk. If our scenario is true, then when the central object of an on-axis GRB with a distance and duration similar to those of GRB 170817A is an NS, a weak thermal pulse should be detected before or after the main gamma-ray pulse. Moreover, our suggestion can hopefully be tested when the merger remnant is a black hole: as long as the delayed jet injection is mainly induced by the delayed disk formation, there should be an evident decrease of the delay in that case. Once the idea presented here is tested, the time of the delayed jet injection will be meaningful for investigating the cooling of nascent NSs, as well as for constraining the equation of state of NSs.

It is worth noting that, although we consider a bunch with a cross-sectional area (CSA) similar to that of the jet, the CSAs of the two may be different. As long as the thermal bubble shrinks to satisfy equation (5), the bunch can break out from the ejecta; there is no reason to demand that the bunch is exactly shaped by the channel left by the jet. Therefore, the CSA of the bunch may be larger than the CSA of the jet. This could be another mechanism for forming the sheaths of sGRB jets (the jet-bunch scenario could then be a replacement for the jet-cocoon scenario in explaining the thermal tail of GRB 170817A).

## 5 Acknowledgement

We thank the anonymous referee for very useful comments that have allowed us to improve our paper. We would like to thank Drs. Liang Li and Chengjun Xia for useful discussions. This work is supported by the National SKA Program of China (2020SKA0120100), research projects of the Henan science and technology committee (212300410378) and the NSFC grants (U1938116).

## 6 Data Availability

No new data were generated.
2304.01903
Eco-evolutionary dynamics in finite network-structured populations with migration
We consider the effect of network structure on the evolution of a population. Models of this kind typically consider a population of fixed size and distribution. Here we consider eco-evolutionary dynamics where population size and distribution can change through birth, death and migration, all of which are separate processes. This allows complex interaction and migration behaviours that are dependent on competition. For migration, we assume that the response of individuals to competition is governed by tolerance to their group members, such that less tolerant individuals are more likely to move away due to competition. We looked at the success of a mutant in the rare mutation limit for the complete, cycle and star networks. Unlike models with fixed population size and distribution, the distribution of the individuals per site is explicitly modelled by considering the dynamics of the population. This in turn determines the mutant appearance distribution for each network. Where a mutant appears impacts its success as it determines the competition it faces. For low and high migration rates the complete and cycle networks have similar mutant appearance distributions resulting in similar success levels for an invading mutant. A higher migration rate in the star network is detrimental for mutant success because migration results in a crowded central site where a mutant is more likely to appear.
Karan Pattni, Wajid Ali, Mark Broom, Kieran J Sharkey
2023-04-04T15:56:26Z
http://arxiv.org/abs/2304.01903v1
# Eco-evolutionary dynamics in finite network-structured populations with migration

###### Abstract

We consider the effect of network structure on the evolution of a population. Models of this kind typically consider a population of fixed size and distribution. Here we consider eco-evolutionary dynamics where population size and distribution can change through birth, death and migration, all of which are separate processes. This allows complex interaction and migration behaviours that are dependent on competition. For migration, we assume that the response of individuals to competition is governed by tolerance to their group members, such that less tolerant individuals are more likely to move away due to competition. We looked at the success of a mutant in the rare mutation limit for the complete, cycle and star networks. Unlike models with fixed population size and distribution, the distribution of the individuals per site is explicitly modelled by considering the dynamics of the population. This in turn determines the mutant appearance distribution for each network. Where a mutant appears impacts its success as it determines the competition it faces. For low and high migration rates the complete and cycle networks have similar mutant appearance distributions resulting in similar success levels for an invading mutant. A higher migration rate in the star network is detrimental for mutant success because migration results in a crowded central site where a mutant is more likely to appear.

\({}^{1}\)Karan Pattni, \({}^{1}\)Wajid Ali, \({}^{2}\)Mark Broom, \({}^{1}\)Kieran J Sharkey \({}^{1}\)Department of Mathematical Sciences, University of Liverpool \({}^{2}\)Department of Mathematics, City, University of London

## 1 Introduction

Migration is one of the drivers of evolutionary processes. One of the ways in which the effect of migration can be captured is to consider a subdivided population. Each subdivision is a unit of space that can be occupied by one or many individuals. In ecology such models are used to study species in fragmented habitats (Hanski, 1998) such as the fritillary butterfly (Wahlberg et al., 2002). In evolutionary game theory this enables modelling interactions between subsets of individuals (Broom and Rychtar, 2012). Individuals can either migrate freely between these sites, as in the classical island model (Wright, 1943), or can be restricted to geographically adjacent sites, as in the stepping stone model (Kimura and Weiss, 1964). Evolutionary graph theory (EGT) (Lieberman et al., 2005) theoretically restricts movement using networks. This has led to interesting results whereby certain networks amplify the probability of a mutant type fixating in a population. Migration in EGT occurs through replacement events where birth, death and migration are all combined, such that an offspring replaces an individual in an adjacent site. Birth, death and migration can be combined to give different replacement dynamics (Shakarian et al., 2012). Such replacement dynamics necessitate a fixed population size, which is true for models based on the EGT replacement dynamics (Pattni et al., 2017; Yagoobi and Traulsen, 2021). Results derived using replacement dynamics are therefore subject to the population size being fixed. Ecologically relevant dynamics can be obtained by considering non-replacement dynamics where birth and death are decoupled, though migration may still be coupled with either death or birth, which in effect allows for variable population size. In Pattni et al.
(2021) it was shown that, in certain limiting cases, most replacement dynamics can be obtained from dynamics where death is separate but birth and migration are coupled. This shows that models with replacement dynamics can be derived from models with variable size, and therefore the results of replacement-dynamics models represent special cases of general models. This suggests that further insights might be obtained by looking at these results in the context of non-replacement dynamics. As a follow-up study to Pattni et al. (2021), we want to consider dynamics where birth, death and migration are all uncoupled, to investigate the effect of theoretically restricting movement using networks on the fixation probability.

In Pattni et al. (2021) network structure was incorporated into the individual-based model of Champagnat et al. (2006), where the time-scale of individual-level processes can be changed to consider different types of evolutionary models; for example, the evolution of RNA viruses (Grenfell et al., 2004), where evolutionary and ecological timescales overlap. In this model, individuals reproduce asexually such that migration was coupled with birth; this is akin to dispersal in plants (Fournier and Meleard, 2004) or the spread of infection (Rosenquist, 2010). Uncoupled migration, where individuals can freely move between sites, enables us to consider complex behaviours such as animal migration (Bauer and Klaassen, 2013). It also enables us to study social dilemmas (Broom et al., 2018) where assortment or grouping is required to achieve a social outcome (Fletcher and Doebeli, 2009).

Broom and Rychtar (2012) present a framework that allows complex movement behaviours to be considered in network-structured populations. The simplest case is where individuals migrate independently of their history, such that they move from one site to another with a fixed probability. An example of where migration is dependent on history is the Markov movement model (Pattni et al., 2017). In this case movement is a function of an individual's current group interaction and includes factors such as tolerance to group members and movement cost. When there is high tolerance to group members, movement is essentially independent of group interactions. On the other hand, low tolerance to group members means individuals are sensitive and more likely to move away from non-beneficial interactions. Markov movement is a way of capturing density-dependent movement that explains a wide variety of ecological aggregations (Liu et al., 2016). In this paper we implement a version of this Markov movement where tolerance to group members plays a key role in determining whether individuals move or stay.

We start by explaining the framework in section 2, where the rare mutation limit evolutionary scenario is described. In section 3 we provide an example of a birth-death-migration model derived from the framework where Markov movement is used. We consider the trivial case with one site, the low migration limit and a general migration rate. For the general migration rate we investigate the effect of migration rate and how this compares to the low migration limit.

## 2 Modelling Framework

We assume that the individuals in the population are spread over distinct but connected sites of a fixed network. The sites of the network can have no, one or many individuals at a given time. The population size and composition can change due to birth and death, whereas migration changes the distribution of individuals across the network of sites.
To mathematically describe such populations, we use the Champagnat et al. (2006) model with network structure as described in Pattni et al. (2021). Individuals can have \(l\) real-valued traits contained within the set \(\mathcal{U}\subset\mathbb{R}^{l}\). The sites that individuals can occupy are given by the set \(\mathcal{X}=\{1,\ldots,N\}\). The characteristics of an individual are given by \(i=(U_{i},X_{i})\), where \(U_{i}\in\mathcal{U}\) and \(X_{i}\in\mathcal{X}\). An individual with characteristics \(i\) is denoted by \(I_{i}\). The state of the population is given by a multiset \(\mathcal{S}\), which means that for each individual with characteristics \(i\) there is a copy of \(i\) in \(\mathcal{S}\). Formally we write this as \(\{i^{m(i)}:i\in\mathcal{I},\mathcal{I}\subseteq\mathcal{U}\times\mathcal{X}\}\) where \(m:\mathcal{I}\rightarrow\mathbb{Z}^{+}\) is the multiplicity (number of occurrences) of \(i\). Individuals in the same site are given by the set \(\mathcal{S}_{n}=\{i\in\mathcal{S}:X_{i}=n\}\). The connections between sites are given by a directed and weighted network represented by a matrix \(W\) with entries \(W_{m,n}\geq 0\). An individual can move from site \(m\) to \(n\) if site \(m\) is connected to site \(n\); that is, \(W_{m,n}>0\). In Pattni et al. (2021), birth and movement were coupled such that an offspring can be placed onto a connected site. Here we consider uncoupled birth and movement. Individuals are assumed to reproduce asexually such that they place their offspring on the same site. The notation is summarised in Table 1.

| Notation | Definition | Description |
| --- | --- | --- |
| \(N\) | \(\geq 1\) | Number of distinct sites. |
| \(W\) | \(W_{m,n}\geq 0\) | Weighted \(N\times N\) matrix representing the network of sites. |
| \(\mathcal{U}\) | \(\subset\mathbb{R}^{l}\) | \(l\) real-valued phenotypic traits of an individual. |
| \(\mathcal{X}\) | \(=\{1,\ldots,N\}\) | Set of sites an individual can occupy. |
| \(i\) | \(=(U_{i},X_{i})\) for \(U_{i}\in\mathcal{U}\) and \(X_{i}\in\mathcal{X}\) | The traits of an individual. |
| \(I_{i}\) | | An individual with traits \(i\). |
| \(\mathcal{S}\) | \(=\{i^{m(i)}:i\in\mathcal{I},\mathcal{I}\subseteq\mathcal{U}\times\mathcal{X}\}\) | Multiset that gives the state of the population, where \(m:\mathcal{I}\rightarrow\mathbb{Z}^{+}\) is the multiplicity (number of occurrences) of \(i\). |
| \(\mathcal{S}_{n}\) | \(=\{i\in\mathcal{S}:X_{i}=n\}\) | Individuals present in site \(n\); therefore \(\mathcal{S}_{n}\subseteq\mathcal{S}\). |
| \(d(i,\mathcal{S})\) | \(\geq 0\) | Death rate of \(I_{i}\) in state \(\mathcal{S}\). |
| \(b(i,\mathcal{S})\) | \(\geq 0\) | Birth rate of \(I_{i}\) in state \(\mathcal{S}\). |
| \(\mu(i)\) | \(\geq 0\) | Probability that an offspring of \(I_{i}\) carries a mutation. |
| \(M(u,v)\) | \(\geq 0\) | Probability that an offspring has mutated trait \(v\) when the parent has trait \(u\). |
| \(m(i,x,\mathcal{S})\) | \(\geq 0\) | Migration rate of \(I_{i}\) to site \(x\) in state \(\mathcal{S}\). |
| \(\phi\) | \(:\mathcal{S}\rightarrow\mathbb{R}\) | Real-valued bounded function that acts on the state of the system. |
| \(\mathcal{L}\) | | Markov process generator; it describes how the expected value of \(\phi\) changes over an infinitesimal time interval. |
| \(h_{\mathcal{A}}(\mathcal{S})\) | \(\in[0,1]\) | Probability of starting in state \(\mathcal{S}\) and hitting a state in set \(\mathcal{A}\). |

Table 1: Notations for the framework, with their definitions and descriptions.
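To make the bookkeeping concrete, here is a minimal sketch (illustrative, not from the paper) of how the state \(\mathcal{S}\) can be represented as a multiset of (trait, site) pairs with multiplicities, matching the notation in Table 1; the trait labels and counts are arbitrary examples.

```python
from collections import Counter

# State S as a multiset: keys are (trait, site) pairs, values are multiplicities m(i).
S = Counter({("R", 1): 3, ("R", 2): 1, ("M", 2): 2})

def S_n(S, n):
    """The sub-multiset S_n of individuals present in site n."""
    return Counter({i: m for i, m in S.items() if i[1] == n})

print(sum(S.values()))  # |S| = 6 individuals in total
print(S_n(S, 2))        # S_2: one resident and two mutants in site 2
```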
The rate at which individual \(I_{i}\) gives birth is given by \(b(i,\mathcal{S},W)\). If there is no mutation, the offspring of individual \(I_{i}\) has characteristics \(i=(U_{i},X_{i})\). With probability \(\mu(i)\), individual \(I_{i}\) gives birth to an offspring with a mutation. In this case, the probability that \(I_{i}\) gives birth to an offspring with trait \(u\) is \(M(U_{i},u)\), such that all mutations are contained within \(\mathcal{U}\); that is, \(M(U_{i},u)=0\) if \(u\notin\mathcal{U}\). The rate at which individual \(I_{i}\) dies is given by \(d(i,\mathcal{S},W)\). The rate at which individual \(I_{i}\) moves to site \(x\) is given by \(m(i,x,\mathcal{S},W)\). Since the network structure \(W\) is assumed to be fixed, we use \(b(i,\mathcal{S})\), \(d(i,\mathcal{S})\) and \(m(i,x,\mathcal{S})\) for the birth, death and movement rates respectively.

The evolution of the population is described by a continuous-time Markov process. The generator \(\mathcal{L}\), which acts on real bounded functions \(\phi(\mathcal{S})\) and describes the infinitesimal dynamics of the state of the population at time \(t\), is given by \[\begin{split}\mathcal{L}\phi(\mathcal{S})=&\sum_{i\in\mathcal{S}}[1-\mu(i)]b(i,\mathcal{S})[\phi(\mathcal{S}\cup\{i\})-\phi(\mathcal{S})]\\ &+\sum_{i\in\mathcal{S}}\mu(i)b(i,\mathcal{S})\int_{\mathbb{R}^{l}}[\phi(\mathcal{S}\cup\{(u,X_{i})\})-\phi(\mathcal{S})]M(U_{i},u)du\\ &+\sum_{i\in\mathcal{S}}d(i,\mathcal{S})[\phi(\mathcal{S}\backslash\{i\})-\phi(\mathcal{S})]\\ &+\sum_{i\in\mathcal{S}}\sum_{x\in\mathcal{X}}m(i,x,\mathcal{S})[\phi(\mathcal{S}\cup\{(U_{i},x)\}\backslash\{i\})-\phi(\mathcal{S})].\end{split} \tag{1}\] The first line describes birth without mutation, the second line describes birth with mutation, the third line describes death and the fourth line describes migration. For the Markov process described by the infinitesimal dynamics in equation (1), the key quantity we are interested in is the hitting probability. The probability \(h_{\mathcal{A}}(\mathcal{S})\) of starting in state \(\mathcal{S}\) and hitting a state in set \(\mathcal{A}\) is calculated as follows \[\mathcal{L}h_{\mathcal{A}}(\mathcal{S})=0 \tag{2}\] with boundary condition \(h_{\mathcal{A}}(\mathcal{S})=1\) for \(\mathcal{S}\in\mathcal{A}\) (see Pattni et al. (2021), Appendix A).

### Evolution in the Rare Mutation Limit

In the rare mutation limit we assume that \(\mu(i)=\mu\to 0\)\(\forall i\) so that the population evolves through adaptive sweeps (Gerrish & Lenski, 1998). This means that, prior to a mutation arising, the population is homogeneous, with all individuals having the same traits. This is because, when a mutation appears, either all individuals with the mutation (referred to as mutants and denoted \(M\)) die out or all individuals without the mutation (referred to as residents and denoted \(R\)) die out prior to another mutation arising. There can therefore be at most two types in the population, a type \(R\) and a type \(M\). Let \(\mathcal{U}=\{R,M\}\); then the set of states where all individuals are residents is given by \(\mathcal{R}=\{\mathcal{S}:U_{i}=R,\ \forall i\in\mathcal{S}\}\), and similarly the set of states where all individuals are mutants is given by \(\mathcal{M}=\{\mathcal{S}:U_{i}=M,\ \forall i\in\mathcal{S}\}\).
The dynamics of the system can therefore be described without the mutation step; that is, equation (1) simplifies to \[\begin{split}\mathcal{L}\phi(\mathcal{S})=&\sum_{i\in\mathcal{S}}b(i,\mathcal{S})[\phi(\mathcal{S}\cup\{i\})-\phi(\mathcal{S})]\\ &+\sum_{i\in\mathcal{S}}d(i,\mathcal{S})[\phi(\mathcal{S}\backslash\{i\})-\phi(\mathcal{S})]\\ &+\sum_{i\in\mathcal{S}}\sum_{x\in\mathcal{X}}m(i,x,\mathcal{S})[\phi(\mathcal{S}\cup\{(U_{i},x)\}\backslash\{i\})-\phi(\mathcal{S})].\end{split} \tag{3}\]

When the population is in a homogeneous state with all residents prior to a mutant arising, i.e. \(\mathcal{S}\in\mathcal{R}\), we are interested in determining the state in which a mutant appears. Let \(\pi(\mathcal{S})\) be the probability that the population is in state \(\mathcal{S}\). This can be calculated using equation (3) as follows \[\mathcal{L}\pi(\mathcal{S})=0,\quad\mathcal{S}\in\mathcal{R} \tag{4}\] with normalising condition \[1=\sum_{\mathcal{S}\in\mathcal{R}}\pi(\mathcal{S}). \tag{5}\] The probability \(p_{x,\mathcal{S}}\) that a mutant appears in site \(x\) in state \(\mathcal{S}\) is proportional to the number of individuals in site \(x\); that is, \[p_{x,\mathcal{S}}=\frac{|\mathcal{S}_{x}|}{|\mathcal{S}|}\pi(\mathcal{S}). \tag{6}\] Note that whether a unique solution to \(\pi(\mathcal{S})\) exists depends upon the definition of birth, death and movement.

Once a mutation arises, the type that remains is said to have fixated in the population, and we are interested in the probability of mutants fixating. This is calculated by solving the hitting probability using equation (3) as follows \[\mathcal{L}h_{\mathcal{M}}(\mathcal{S})=0 \tag{7}\] with boundary conditions \(h_{\mathcal{M}}(\mathcal{S})=1\) for \(\mathcal{S}\in\mathcal{M}\) and \(h_{\mathcal{M}}(\mathcal{S})=0\) for \(\mathcal{S}\in\mathcal{R}\). To be precise with terminology, we refer to the fixation probability as the probability of one initial mutant fixating. Since there are multiple states with one mutant, we calculate the average fixation probability as follows \[\rho=\sum_{\mathcal{S}\in\mathcal{R}}\sum_{x\in\mathcal{X}}p_{x,\mathcal{S}}h_{\mathcal{M}}(\mathcal{S}\cup\{(M,x)\}) \tag{8}\] where \(p_{x,\mathcal{S}}\) is the mutant appearance distribution, i.e. the probability that a mutant appears in site \(x\) when the population is in state \(\mathcal{S}\).

## 3 Birth-Death-Migration Model

To apply the modelling framework, we consider a birth-death-migration model that we can use to calculate the fixation probability. The birth rate is considered to be fixed and depends only on the type of individual: \[b(i,\mathcal{S})=\beta_{U_{i}}. \tag{9}\] The death rate is given by \[d(i,\mathcal{S})=\delta_{U_{i}}+\sum_{j\in\mathcal{S}_{X_{i}}\setminus\{i\}}\gamma_{U_{i},U_{j}} \tag{10}\] where \(\delta_{u}\) is the natural death rate of a type \(u\) individual and \(\gamma_{u,v}\) is the death rate of a type \(u\) individual when competing with a type \(v\) individual. We assume that individuals move with migration rate \(\lambda>0\). Where they move to will depend upon the structure of the network given by \(W\). We assume that \(W_{x,x}=0\) and \(\sum_{y\in\mathcal{X}}W_{x,y}=1\)\(\forall x\in\mathcal{X}\); that is, all diagonal elements of \(W\) are zero and \(W\) is right-stochastic. This means that \(W_{x,y}\) is the probability of migrating from site \(x\) to \(y\). In Pattni et al.
(2018) a density-dependent movement function was considered where the movement of individuals is determined by their staying propensity and tolerance to other group members. The staying propensity can capture the cost of movement, such that if the cost of movement is high, individuals are likely to have a higher staying propensity. Tolerance controls how sensitive individuals are to their group members, such that the higher the tolerance the more likely they are to stay. These aspects were captured using a sigmoid function in Pattni et al. (2018); we consider an adapted version of this as follows \[m(i,x,\mathcal{S})=\left(1-\frac{\alpha}{\alpha+(1-\alpha)\tau^{g(i,\mathcal{S}_{X_{i}})}}\right)\lambda W_{X_{i},x}. \tag{11}\] The staying propensity is controlled by \(\alpha\in[0,1]\), which is the probability that individual \(i\) stays in its current site. If \(\alpha=1\) then individuals will stay regardless of the interactions they have with other members of the population. The benefit of individual \(i\) being in group \(\mathcal{S}_{X_{i}}\) is given by \(g(i,\mathcal{S}_{X_{i}})\); if this is positive then being present in the group is beneficial. The effect of group benefit is determined by tolerance to other group members, which is given by \(\tau\in(0,1)\). The following two limiting cases for tolerance are of note. 1. **High tolerance to group members (\(\tau\to 1\)):** In this case individuals move independently of one another, as the benefit of being in a group has virtually no impact. The movement rate for high group tolerance (HGT) is given by \[m^{\text{HGT}}(i,x,\mathcal{S})=(1-\alpha)\lambda W_{X_{i},x}.\] (12) 2. **Low tolerance to group members (\(\tau\to 0\)):** In this case the focal individual will stay if its current group is beneficial, i.e. if \(g(i,\mathcal{S}_{X_{i}})>0\), but can move otherwise. The movement rate for low group tolerance (LGT) is given by \[m^{\text{LGT}}(i,x,\mathcal{S})=\begin{cases}\lambda W_{X_{i},x}&g(i,\mathcal{S}_{X_{i}})<0,\\ (1-\alpha)\lambda W_{X_{i},x}&g(i,\mathcal{S}_{X_{i}})=0,\\ 0&g(i,\mathcal{S}_{X_{i}})>0.\end{cases}\] (13)

### Example of birth-death-migration model

As an initial application of the birth-death-migration model, we consider a simple example that enables us to obtain analytical results in certain limiting cases. The simplifications used are described as follows. Different types of individuals differ in terms of their birth rate only and cannot die naturally. We set the birth rate of a resident to \(\beta_{R}=1\) and of a mutant to \(\beta_{M}=2\), unless specified otherwise. No natural death means that \(\delta_{u}=0\) for \(u\in\{M,R\}\). This means that extinction events are avoided. An alternative way of dealing with extinction events is to reseed the population; however, we avoid this technicality for now. Individuals therefore die due to competition, with an identical rate for all paired types, i.e. \(\gamma_{u,v}=\gamma,\ \forall u,v\). For density-dependent movement, we only consider the limiting cases of low and high group tolerance, and assume that the probability of staying is small (\(\alpha\ll 1\)) so that its impact is negligible.
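A direct transcription of the migration rate (11) and its two tolerance limits might look as follows; this is only a sketch, with illustrative function names and parameter values, and with \(W\) a right-stochastic matrix as assumed above (indexed from 0 rather than 1).

```python
def migration_rate(g, x_from, x_to, W, lam=1.0, alpha=0.05, tau=0.5):
    """Equation (11): rate at which a focal individual with group benefit g
    moves from site x_from to x_to. alpha is the staying propensity and
    tau the tolerance to group members."""
    stay = alpha / (alpha + (1.0 - alpha) * tau**g)
    return (1.0 - stay) * lam * W[x_from][x_to]

W = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]  # complete network on N = 3 sites, cf. equation (17)

# Tolerance limits for a beneficial group (g = +1):
print(migration_rate(+1, 0, 1, W, tau=1e-9))      # ~0: low tolerance, stays (eq. 13)
print(migration_rate(+1, 0, 1, W, tau=1 - 1e-9))  # ~(1-alpha)*lam*W: high tolerance (eq. 12)
```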
For the group benefit function it is assumed that the focal individual benefits when alone; that is, \[g(i,\mathcal{S}_{X_{i}})=\begin{cases}\quad 1&|\mathcal{S}_{X_{i}}|=1,\\ -1&|\mathcal{S}_{X_{i}}|>1.\end{cases} \tag{14}\] This binary setting, where the focal individual prefers being alone to any other group, suppresses nuanced group effects where, for example, group preference changes gradually with group size. However, it still impacts the migration of individuals, which is our main focus. For low group tolerance (LGT), equation (13) simplifies to \[m^{\text{LGT}}(i,x,\mathcal{S})=\begin{cases}\lambda W_{X_{i},x}&|\mathcal{S}_{X_{i}}|>1,\\ 0&|\mathcal{S}_{X_{i}}|=1\end{cases} \tag{15}\] which specifies that individuals will not migrate when they are alone. For high group tolerance (HGT), equation (12) simplifies to \[m^{\text{HGT}}(i,x,\mathcal{S})=\lambda W_{X_{i},x} \tag{16}\] which specifies that individuals will migrate regardless of whether they are in a group or not.

The complete (\(W^{\bullet}\)), cycle (\(W^{\circ}\)) and star (\(W^{\star}\)) networks will be considered; they are illustrated in figure 1. For each network, \(W^{\bullet}_{ii}=W^{\circ}_{ii}=W^{\star}_{ii}=0\ \forall i\in\mathcal{X}\) and the non-zero weights are as follows \[\left.\begin{array}{l}\text{Complete: }W^{\bullet}_{ij}=1/(N-1),\ i\neq j\text{ and }i,j\in\mathcal{X},\\ \text{Cycle: }W^{\circ}_{i,i+1}=W^{\circ}_{j,j-1}=W^{\circ}_{1,N}=W^{\circ}_{N,1}=\frac{1}{2},\ i=1,\ldots,N-1\text{ and }j=2,\ldots,N,\\ \text{Star: }W^{\star}_{1,i}=\frac{1}{N-1},\ W^{\star}_{i,1}=1,\ i=2,\ldots,N.\end{array}\right\} \tag{17}\] Due to the properties of the networks chosen, they will provide us with a base understanding of the birth-death-migration model without the need to run lengthy simulations across a wide range of networks. The complete network is the benchmark case. The complete and cycle networks are circulations (Lieberman et al., 2005), which means that the sums of the incoming and outgoing weights of each site are the same, i.e. \(\sum_{i}W_{i,j}=\sum_{i}W_{j,i}\ \forall j\in\mathcal{X}\). This property is used to derive the Circulation theorem (Lieberman et al., 2005), which states that all circulation networks have the same fixation probability. We may therefore be able to use this property to extend our results to other circulation networks. For the weights we have chosen, the star network can amplify selection (Lieberman et al., 2005; Pattni et al., 2021), which means that the fixation probability of a mutant is greater than in the complete network, so we can check whether this is still the case when there is uncoupled movement. Details regarding the simulation of the birth-death-migration model example are given in the appendix. The simulations were carried out using the HTCondor distributed computing system (Thain et al., 2005).

### Special case with single site

We first consider the case where there is one site. This will allow us to understand the intra-site dynamics. For the birth-death-migration model we can analytically calculate the stationary distribution of a homogeneous population (i.e. all residents or all mutants). Let \(\pi_{n}^{u}=\mathbb{P}(\mathcal{S}=\{(u,1)^{n}\})\) be the probability that there are \(n\) individuals of type \(u\) in the population. Recall that the population cannot go extinct because we have assumed that the natural death rate is zero in this example (death only occurs by competition).
The homogeneous population is therefore described by a reversible Markov process and we can obtain \(\pi_{n}^{u}\) using the detailed balance equations, which state that, in the stationary distribution, the probability flux between adjacent states balances. In particular, the stationary flux from state \(n\) to \(n-1\) due to a death event equals the flux from state \(n-1\) to \(n\) due to a birth event. In a state with \(n\) individuals each individual dies with rate \(\gamma(n-1)\), and in a state with \(n-1\) individuals each individual gives birth with rate \(\beta_{u}\), so for \(n\geq 2\) the detailed balance equations give \[n\gamma(n-1)\pi_{n}^{u}=(n-1)\beta_{u}\pi_{n-1}^{u}\] which simplifies to \[\pi_{n}^{u}=\frac{1}{n}\frac{\beta_{u}}{\gamma}\pi_{n-1}^{u}\] and through recursion we obtain \[\pi_{n}^{u}=\frac{(\beta_{u}/\gamma)^{n-1}}{n!}\pi_{1}^{u}. \tag{18}\] Using the fact that the stationary probabilities sum to \(1\) (i.e. \(1=\sum_{n=1}^{\infty}\pi_{n}\)) and, setting \(x=\beta_{u}/\gamma\) for brevity, we have that \[1=\sum_{n=1}^{\infty}\frac{x^{n-1}}{n!}\pi_{1}^{u}=\frac{\pi_{1}^{u}}{x}\sum_{n=1}^{\infty}\frac{x^{n}}{n!}=\frac{\pi_{1}^{u}}{x}\left(-1+\sum_{n=0}^{\infty}\frac{x^{n}}{n!}\right)=\frac{\pi_{1}^{u}}{x}\left(-1+e^{x}\right),\] which gives \[\pi_{1}^{u}=\frac{(\beta_{u}/\gamma)}{e^{\beta_{u}/\gamma}-1}.\]

Figure 1: Networks considered in this paper. Each node represents a site. \(N\) is the total number of sites. Each edge represents an incoming and outgoing weighted edge whose weights are given by \(W\). Edges represent the permitted migration routes of individuals. In the star network site \(1\) is called the centre and sites \(2\) to \(N\) are called leaf sites.

Substituting \(\pi_{1}^{u}\) into equation (18) gives us the stationary probability, \[\pi_{n}^{u}=\frac{(\beta_{u}/\gamma)^{n}}{n!}\frac{1}{e^{\beta_{u}/\gamma}-1}.\] Using the stationary probability we calculate the expected type \(u\) population size as follows \[\sum_{n=1}^{\infty}n\pi_{n}^{u}=\frac{1}{e^{\beta_{u}/\gamma}-1}\sum_{n=1}^{\infty}n\frac{(\beta_{u}/\gamma)^{n}}{n!}=\frac{(\beta_{u}/\gamma)e^{\beta_{u}/\gamma}}{e^{\beta_{u}/\gamma}-1}=\frac{\beta_{u}/\gamma}{1-e^{-\beta_{u}/\gamma}}.\] The appearance of a mutant is proportional to the number of resident individuals in a given state. The probability that an initial mutant appears in a state with \(n\) residents is therefore given by \[\mu_{n}^{\text{Init}}=\frac{n\pi_{n}^{R}}{\sum_{i=1}^{\infty}i\pi_{i}^{R}}=n\frac{(\beta_{R}/\gamma)^{n}}{n!}\frac{1}{e^{\beta_{R}/\gamma}-1}\left/\frac{(\beta_{R}/\gamma)e^{\beta_{R}/\gamma}}{e^{\beta_{R}/\gamma}-1}\right.=\frac{(\beta_{R}/\gamma)^{n-1}}{(n-1)!e^{\beta_{R}/\gamma}}.\] Using this probability, the average fixation probability of a mutant in a single site is then given by \[\rho^{\text{Single}}=\sum_{n=1}^{\infty}\mu_{n}^{\text{Init}}h_{\mathcal{M}}(\{(R,1)^{n},(M,1)\}). \tag{19}\] In this case, the hitting probability can be calculated by solving equation (7) when we limit the birth rate as follows \[b(i,\mathcal{S})=\begin{cases}\beta_{U_{i}}&|\mathcal{S}|<K,\\ 0&|\mathcal{S}|\geq K\end{cases} \tag{20}\] where \(K\) is chosen to be large enough that \(\mathbb{P}(|\mathcal{S}|\geq K)\) is negligible. This means that the maximum population size is \(K\) and the total number of states is \((K+1)^{2}\), since the numbers of mutants and residents each range from \(0\) to \(K\).
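The closed forms above are easy to sanity-check numerically. The sketch below evaluates the stationary probabilities of equation (18) with the normalisation derived above, the expected population size \((\beta/\gamma)/(1-e^{-\beta/\gamma})\), and the mutant appearance distribution \(\mu_{n}^{\text{Init}}\); the truncation at \(K\) is an assumption mirroring equation (20), and the parameter values are illustrative.

```python
import math

def pi_n(n, beta=1.0, gamma=0.2):
    """Stationary probability of n individuals of one type in a single site."""
    x = beta / gamma
    return x**n / math.factorial(n) / (math.exp(x) - 1.0)

def mu_init(n, beta=1.0, gamma=0.2):
    """Probability that the initial mutant appears alongside n residents."""
    x = beta / gamma
    return x ** (n - 1) / (math.factorial(n - 1) * math.exp(x))

beta, gamma, K = 1.0, 0.2, 60  # K large enough that P(|S| >= K) is negligible
ns = range(1, K + 1)
print(sum(pi_n(n, beta, gamma) for n in ns))                  # ~1.0 (normalisation)
mean = sum(n * pi_n(n, beta, gamma) for n in ns)
print(mean, (beta / gamma) / (1 - math.exp(-beta / gamma)))   # both ~5.034
print(sum(mu_init(n, beta, gamma) for n in ns))               # ~1.0
```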
Figure 2 shows the effect of the competition rate on the fixation probability of a mutant in a single site (\(\rho^{\text{Single}}\)), which is calculated by solving for \(h\) using equation (7). To understand the behaviour here, we look at a comparable model with fixed population size. We find that it resembles death-Birth (dB) EGT dynamics, where an individual is randomly chosen for death and is then replaced by an offspring of an individual who is selected for birth proportionally to their fitness (hence the uppercase B in dB indicates selection). The fixation probability for dB EGT dynamics (Kaveh et al. 2015, Hindersin & Traulsen 2015) is given by \[\rho^{\text{dB}}(N,r)=\frac{N-1}{N}\frac{1-\frac{1}{r}}{1-\frac{1}{r^{N-1}}}=\frac{N-1}{N}\rho^{\text{Moran}}(N-1,r) \tag{21}\] where \(N\) is the number of individuals, \(r\) is the relative fitness of a mutant to a resident and \(\rho^{\rm Moran}\) is the Moran probability (Moran, 1959). By specifying values for \(N\) and \(r\), we can use \(\rho^{\rm dB}\) to approximate \(\rho^{\rm Single}\). In Pattni et al. (2021) it was shown that fitness in dB EGT dynamics is proportional to the birth rate of individuals; we therefore set the relative fitness to \(r=\beta_{M}/\beta_{R}\). To set \(N\), we assume that with probability \(\mu_{n}^{\rm Init}\) a mutant arises in a population with \(n\) residents, so \(N=n+1\). The maximum resident population size is set to \(K\) such that the probability of having a population size \(\geq K\) tends to 0. Putting this together gives \[\rho_{M}^{\rm Single}\approx\sum_{n=1}^{K}\mu_{n}^{\rm Init}\rho^{\rm dB}(n+1,\beta_{M}/\beta_{R}). \tag{22}\]

In figure 2 we see that, even though there is some discrepancy, similar behaviour is observed with both dB dynamics and the birth-death-migration model. The discrepancy between the two is due to the fluctuating population size in the birth-death-migration model, which allows the population to be updated via a birth or a death. With dB dynamics, the population size is fixed and can only be updated via a death-Birth event. Note that the discrepancy increases as the population size increases (competition rate decreases). Further insight can be obtained by looking at the components of \(\rho^{\rm dB}\). The component \((N-1)/N\) in \(\rho^{\rm dB}\) is the probability that the initial mutant is not chosen to randomly die. This component dominates when the population size is small, since the chance of the initial mutant randomly dying is higher. The component \(\rho^{\rm Moran}(N-1,r)\) captures the probability that the initial mutant fixates provided it does not randomly die. This component dominates as the population size gets larger, since the probability of the initial mutant randomly dying decreases. This captures the behaviour observed for the single-site case as follows. As the competition rate increases, the population size decreases and, therefore, survival dictates the fixation probability of a mutant. For a high competition rate, \(\rho^{\rm Single}\) converges to \(\frac{1}{2}\) as the expected resident population size converges to 1. As the competition rate decreases, which increases the population size, the ability to reproduce (or fecundity) dictates the fixation probability of a mutant. The dip and recovery we see in \(\rho^{\rm Single}\) in figure 2, as the dominant factor switches between fecundity and survival with the changing competition rate, is due to the birth rate of mutants we have chosen (\(\beta_{M}=2\)). Changing the birth rate can alter this switching behaviour.

Figure 2: Fixation probability of a mutant in a single site. The exact fixation probability is given by the ‘Analytic’ plot, calculated using equation (19); the approximation using dB EGT dynamics is given by the ‘dB’ plot, calculated using equation (22).
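For reference, a minimal sketch of the approximation in equation (22), combining the dB fixation probability (21) with the appearance distribution \(\mu_{n}^{\rm Init}\); the parameter values and the truncation \(K\) are illustrative assumptions.

```python
import math

def rho_dB(N, r):
    """Equation (21): dB fixation probability of one mutant among N individuals."""
    if r == 1.0:
        return 1.0 / N  # neutral limit: (N-1)/N * 1/(N-1)
    return (N - 1) / N * (1 - 1 / r) / (1 - 1 / r ** (N - 1))

def rho_single_approx(beta_R=1.0, beta_M=2.0, gamma=0.2, K=60):
    """Equation (22): dB approximation to the single-site fixation probability."""
    x = beta_R / gamma
    total = 0.0
    for n in range(1, K + 1):
        mu = x ** (n - 1) / (math.factorial(n - 1) * math.exp(x))  # mu_n^Init
        total += mu * rho_dB(n + 1, beta_M / beta_R)
    return total

for gamma in (0.1, 0.5, 1.0, 2.0):
    print(f"gamma = {gamma}: rho ~ {rho_single_approx(gamma=gamma):.3f}")
```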
### The Low Migration Limit

We now return to the multiple-site case and consider the low migration rate limit (\(\lambda\to 0\)). In this case, an initial mutant that appears on a site will die out or fixate before a migration event happens, and therefore each site can be viewed as either a resident or a mutant site prior to another migration event. This approach has previously been used in the case of fixed population size within sites (Pattni et al., 2017; Yagoobi and Traulsen, 2021). Mutant fixation is then a two-step process. First, an initial mutant appears and fixates in a single site. Second, mutants then spread until they fixate in the population. The probability in the first step is given by \(\rho^{\text{Single}}\) for both low and high group tolerance. For low group tolerance we can obtain an analytic expression for the probability in the second step, but this is difficult for high group tolerance as we can have empty sites. We proceed by deriving an analytic expression for low group tolerance. The rate \(J_{x,y}^{u}\) at which a type \(u\) individual migrates from site \(x\) to \(y\) is proportional to the expected number of individuals in site \(x\) who can migrate, multiplied by the migration rate. In the case of low group tolerance (equation (15)), the rate \(J_{x,y}^{u}\) is given by \[J_{x,y}^{u}=\lambda W_{x,y}\sum_{n=2}^{\infty}n\pi_{n,x}^{u}=\lambda W_{x,y}\left(\frac{\beta_{u}/\gamma}{1-e^{-\beta_{u}/\gamma}}-\frac{\beta_{u}/\gamma}{e^{\beta_{u}/\gamma}-1}\right)=\lambda W_{x,y}\beta_{u}/\gamma. \tag{23}\] Note that \(\pi_{n,x}^{u}=\pi_{n}^{u}\), where the subscript for the site is included for clarity. To calculate the fixation probability of a type \(u\) immigrant arriving in site \(x\), we need to account for the number of individuals currently present in site \(x\). With probability \(\pi_{n,x}^{v}\) there are \(n\) type \(v\) individuals present in site \(x\) and, therefore, the average fixation probability of a type \(u\) immigrant in site \(x\) is \[\rho_{u,x}^{\text{Mig}}=\sum_{n=1}^{\infty}\pi_{n,x}^{v}h_{\mathcal{F}(u)}(\{(v,x)^{n},(u,x)\})=\sum_{n=1}^{\infty}\frac{(\beta_{v}/\gamma)^{n}}{n!}\frac{1}{e^{\beta_{v}/\gamma}-1}h_{\mathcal{F}(u)}(\{(v,x)^{n},(u,x)\}) \tag{24}\] where \(v\in\{M,R\}\setminus\{u\}\) and \(\mathcal{F}\) gives the state in which we fixate, that is, \(\mathcal{F}(M)=\mathcal{M}\) for mutant fixation, and \(\mathcal{F}(R)=\mathcal{R}\) for resident fixation. As when obtaining the solution of \(\rho^{\text{Single}}\), the hitting probability \(h\) in \(\rho^{\text{Mig}}\) can be solved using equation (7) by limiting the birth rate (equation (20)). Let \(s\subseteq\{1,\ldots,N\}=\mathcal{X}\) represent a state of the population such that a site \(x\in s\) is occupied by mutants and a site \(y\notin s\) is a resident site.
We can now define the probability that mutants fixate at the site level, for low group tolerance and in the low migration limit, as follows \[\rho^{\text{Low Mig}}_{s}=\sum_{s^{\prime}\subset\mathcal{X}}\frac{Q_{ss^{\prime}}}{q_{s}}\rho^{\text{Low Mig}}_{s^{\prime}} \tag{25}\] with boundary conditions \(\rho^{\text{Low Mig}}_{\emptyset}=0\) and \(\rho^{\text{Low Mig}}_{\mathcal{X}}=1\), where \(Q_{ss^{\prime}}\) is the transition rate from state \(s\) to \(s^{\prime}\), which is given by \[Q_{ss^{\prime}}=\begin{cases}\sum_{x\notin s}J^{R}_{x,y}\rho^{\text{Mig}}_{R,y}&\text{if $s^{\prime}=s\setminus\{y\}$ for $y\in s$},\\ \sum_{x\in s}J^{M}_{x,y}\rho^{\text{Mig}}_{M,y}&\text{if $s^{\prime}=s\cup\{y\}$ for $y\notin s$},\\ 0&\text{otherwise},\end{cases}\] and \(q_{s}\) is the rate of transitioning away from state \(s\), that is \[q_{s}=\sum_{s^{\prime}\subseteq\mathcal{X}}Q_{ss^{\prime}}.\] The average fixation probability of a mutant for low group tolerance and in the low migration limit is then given by \[\rho^{\text{LGT}}=\sum_{x\in\mathcal{X}}p_{x}\rho^{\text{Single}}_{M,x}\rho^{\text{Low Mig}}_{\{x\}} \tag{26}\] where \(\rho^{\text{Single}}_{M,x}=\rho^{\text{Single}}_{M}\), with the site index included for clarity, and \(p_{x}\) is the probability that a mutant appears in site \(x\), which is proportional to the expected number of individuals in a site, that is, \[p_{x}=\frac{\sum_{n=1}^{\infty}n\pi^{R}_{n,x}}{\sum_{y\in\mathcal{X}}\sum_{n=1}^{\infty}n\pi^{R}_{n,y}}=\frac{1}{N}.\] Note that the intra-site dynamics are homogeneous, i.e. \(\pi^{R}_{n,x}=\pi^{R}_{n,y}\)\(\forall x,y\), so the mutant appearance distribution is uniform.

Since we have homogeneous intra-site dynamics, we can use the circulation theorem (Lieberman et al., 2005) to calculate \(\rho^{\text{Low Mig}}\) in circulation networks. The theorem is derived from the property that circulation networks have a constant forward bias for all transitory states (states in which both residents and mutants exist; Lieberman et al., 2005; Pattni et al., 2015), and it therefore states that the fixation probability can be obtained from the Moran probability \[\rho^{\rm Moran}(N,f)=\frac{1-\frac{1}{f}}{1-\frac{1}{f^{N}}}, \tag{27}\] where \(N\) is the number of sites and \(f\) is the forward bias. For a transitory state \(s\) the forward bias \(f\) is given by the rate of mutants increasing divided by the rate of mutants decreasing; that is,
The inter-site dynamics are shown in figure 3 (c), where we see that \(\rho^{\rm Moran}\) is a sigmoid-shaped curve whose shape is explained by the forward bias (\(f\)) that is shown in figure 3 (d). For a decreasing competition rate we see that

\[\lim_{\gamma\to 0}f\approx\infty\Rightarrow\rho^{\rm Moran}\to 1\Rightarrow\lim_{\gamma\to 0}\rho^{\rm LGT\;Circ}\approx\lim_{\gamma\to 0}\rho^{\rm Single}. \tag{30}\]

This means that for a low competition rate a mutant fixating on one site is sufficient to guarantee that it goes on to fixate in the entire population. For an increasing competition rate we see that

\[\lim_{\gamma\to\infty}f\approx\lim_{\gamma\to\infty}\frac{\frac{(\beta_{M}/\gamma)^{2}}{2!}\frac{1}{\exp(\beta_{M}/\gamma)-1}\frac{(\beta_{R}/\gamma)^{1}}{1!}\frac{1}{\exp(\beta_{R}/\gamma)-1}\frac{1}{2}}{\frac{(\beta_{R}/\gamma)^{2}}{2!}\frac{1}{\exp(\beta_{R}/\gamma)-1}\frac{(\beta_{M}/\gamma)^{1}}{1!}\frac{1}{\exp(\beta_{M}/\gamma)-1}\frac{1}{2}}=\frac{\beta_{M}}{\beta_{R}}\Rightarrow\lim_{\gamma\to\infty}\rho^{\rm LGT\;Circ}\approx\frac{1}{2}\rho^{\rm Moran}(N,\beta_{M}/\beta_{R}) \tag{31}\]

where, when calculating \(f\), we have assumed that there are two individuals when a migration event happens and that an immigrant arrives at a site with only one individual and so fixates with probability \(\frac{1}{2}\). Note that in this case the forward bias converges to the relative birth rates of the individuals (\(\beta_{M}/\beta_{R}=2\)), which means that each site can be viewed as a single individual as in the case of EGT. The inter-site dynamics are therefore equivalent to Birth-death (Bd) dynamics where an individual is selected proportionally to its fitness to replace a randomly chosen individual, such that fitness is proportional to the birth rates of individuals (Pattni et al., 2021).

For low group tolerance in the star network, figure 3 (a) shows that it follows a similar pattern to the complete and cycle networks. However, the fixation probability in the star network is higher for high competition rates but converges as the competition rate decreases. Since \(\lambda\to 0\), mutants are likely to appear on leaf sites in a star network as the combined number of individuals on leaf sites is higher than in the centre site. Appearing on leaf sites is beneficial because the way in which \(W\) is defined for the star network (equation (17)) allows leaf sites to act as source sites, i.e. net exporters of individuals (Pattni et al., 2021). For a low competition rate, convergence occurs since the intra-site dynamics are identical for all networks and, as explained earlier, if a mutant fixates on one site, it is essentially guaranteed to fixate in the entire population. On the other hand, as the competition rate increases, which gives residents a better chance to prevent invasion, the divergence in the fixation probability between the star and circulation networks becomes more apparent.

For high group tolerance, figure 3 (b) shows a similar pattern to low group tolerance where the fixation probability decreases as the competition rate increases. High group tolerance allows empty sites; however, when the competition rate is low, the likelihood of empty sites decreases and the intra-site dynamics are similar to those of low group tolerance. The fixation probabilities are therefore identical to the low group tolerance case for a low competition rate. As the competition rate increases, the chance of having empty sites increases, changing the behaviour observed.
In particular, the population starts converging to a population size of 1 as individuals start dying off when they meet. This means that as the competition rate increases the fixation probability starts converging to \(\frac{1}{2}\) as the likelihood that a mutant appears in a population with one resident increases. Overall, the fixation probability first decreases and then increases again as the competition rate increases.

Figure 3: Plots for the low migration case. (a) Fixation probability of a mutant for low group tolerance (LGT). (b) Fixation probability of a mutant for high group tolerance (HGT). In (a–b), ‘Analytic (LGT)’ is analytically calculated by equation (29). In (a), ‘Analytic star (LGT)’ is analytically calculated by solving equation (26) using the formula of Broom & Rychtár (2008). (c) Inter-site fixation probability of mutants in circulation networks for low group tolerance, that is, probability of fixating in the entire population given that mutants have already fixated in one site. (d) The forward bias for mutants in circulation networks for the low group tolerance case.

### General Migration Rate

In this section we consider the case of a general migration rate (\(\lambda>0\)). The fixation probability in this case is calculated via simulation.

#### 3.4.1 Effect of increasing migration rate

Migration allows individuals to escape competition, as shown in figure 4 where the fixation probability increases with the migration rate. The way in which this plays out depends upon the competition rate, network structure and group tolerance. The effects of these are explained in the following. For low group tolerance, figure 4 (a-d) shows that as the migration rate increases the fixation probability starts increasing and plateaus earlier for low competition than high competition. However, as \(\lambda\rightarrow\infty\) there would be a larger overall increase in the fixation probability for high competition. In the initial growth and plateau phases of the fixation probability, the complete and cycle networks follow each other closely and are indistinguishable. As the growth in the fixation probability accelerates, there is higher acceleration in the complete network than the cycle network. The key factor here is local correlation between groups on neighbouring sites on the cycle. For low migration rates fixation probabilities are low and similar for both cycle and complete networks. New individuals are likely to be born into bigger groups, as there are more potential parents, but cannot move on, so face increased competition. As they hardly move, the network does not matter. For intermediate migration rates fixation probabilities are intermediate, but differ between the two networks. New individuals are born to bigger groups, but there is some dispersal so they face an intermediate level of competition. Here dispersal happens to some extent, and so the network does matter. For high migration rates fixation probabilities are high and similar for the two networks. New individuals are born in bigger groups but then there is rapid dispersal so they live in 'average' groups. As migration is so fast they mix well, so the network does not matter. To illustrate this point further, figure 5 shows the fixation probability in the case of a neutral mutant, i.e. \(\beta_{R}=\beta_{M}=1\). If there is no correlation between the sites, the fixation probability would be identical for the complete and cycle networks.
There is correlation as we see a difference in the fixation probabilities, which happens for intermediate migration rates. For the star network, as \(\lambda\) increases we see that there is an initial dip in the fixation probability before it starts increasing. This is because increasing the migration rate results in the number of individuals in the centre site becoming larger than in a leaf site. This increases the likelihood of a mutant appearing in the centre site, which is a sink, i.e. a net importer of individuals, which adversely affects the fixation probability (Pattni et al., 2021). This dip happens earlier for a lower competition rate and, after this dip, the fixation probability remains below that of the complete and cycle networks. For high group tolerance, figure 4 (e-h) shows that the behaviour observed is similar to that of low group tolerance when the competition rate is low but vastly different for a higher competition rate. For a low competition rate, the intra-site dynamics are similar in high and low group tolerance. In particular, for a low competition rate the likelihood of there being empty sites is low even as the migration rate increases. On the other hand, for a high competition rate the likelihood of empty sites increases. This means that a mutant arises in a population with fewer individuals than in the low group tolerance case. This is observed in figure 4 (g) and (h). In (g), the star network has a higher fixation probability for all migration rates than the complete and cycle networks. This is because individuals are more likely to meet in the centre site, resulting in death due to competition, which drives the population size down. This effect is substantial for a high competition rate as seen in (h). As the migration rate increases, the fixation probability in all networks swiftly converges to \(\frac{1}{2}\) since the population size is converging to 1. Figure 6 shows the effect of migration rate as the number of sites increases. Figure 6(a) considers the low migration limit for low group tolerance in circulation networks. We see that for a low competition rate (\(\gamma=0.1\)), the fixation probability remains the same as the number of sites increases. This was previously explained using equation (30), where fixating in one site was sufficient to guarantee fixation in all sites. This effect carries over for higher competition rates, but the number of sites required to guarantee fixation increases. We observe that the fixation probability initially starts to decrease as the number of sites increases, but once we reach the point where the number of sites guarantees fixation, the fixation probability will flatline. For example, we see that for a high competition rate (\(\gamma=100\)) the fixation probability flatlines after approximately 7 sites, that is, fixating in 7 sites guarantees fixation in the entire population. This effect is also evident in circulation networks for low group tolerance and high group tolerance with a relatively low migration rate of 1 as seen in figures 6 (c) and (e). However, when the migration rate is increased (\(\lambda=10\)), for both low and high group tolerance there is a slight dip as the number of sites increases, but it recovers to the one-site level as seen in figures 6 (d) and (f). This is because the population effectively behaves as one big unit when the migration rate is high, with this being more pronounced with a higher number of sites as there are more individuals.
In the star network (figure 6 (b)), for low competition the effect of increasing the number of sites is the same as for circulation networks, i.e. fixating in one site guarantees fixation in the entire population. For a higher competition rate, the fixation probability increases with the number of sites. This is because we are adding a leaf site each time the number of sites increases, which increases the likelihood of a mutant appearing on a leaf site. As explained earlier, leaf sites are source sites and therefore beneficial for a mutant. As the migration rate increases to 1, the fixation probability decreases in the star network with an increasing number of sites for low group tolerance (figure 6 (c)). This is because the higher migration rate results in an increased number of individuals in the centre site, which increases the likelihood of the initial mutant appearing in the centre site. As the centre site is a sink, it is less beneficial for mutants. This does not happen for high group tolerance (figure 6 (e)), as most leaf sites are likely to remain empty with most of the population being present in the centre site. The population therefore behaves as one large unit clustered in the centre site. As the migration rate is increased to 10, for both low and high group tolerance (figure 6 (d) and (f)), the fixation probability in the star network remains constant as the number of sites is increased. This is because individuals are mixing with each other much more, nullifying the effect of network structure in both cases.

Figure 4: Fixation probability of a mutant plotted against the migration rate of individuals for different competition rates. Each network has 7 sites and the birth rate of mutants is 2. In figures (a–d) there is low group tolerance (LGT); in figures (e–h) there is high group tolerance (HGT).

#### 3.4.2 Comparison to Low Migration Limit

For low group tolerance, figure 7 (a-c) shows that there is initially similar behaviour to the low migration limit, but this gradually breaks down as the migration rate keeps increasing. The fixation probabilities in the complete and cycle networks are higher than in the low migration limit as migration enables escaping competition. This difference is less apparent for high competition as it requires a much larger migration rate to make a significant difference. In the low migration limit, the star network has a higher fixation probability than the complete and cycle networks, and converges as the competition rate decreases. Here, the star network initially has a lower fixation probability for a low competition rate. As the competition rate increases this difference gradually diminishes and eventually the fixation probability surpasses that of the complete and cycle networks. This behaviour is explained by the decreasing likelihood of a mutant appearing in the centre site. When the competition rate is low, there are more individuals in the centre site than there are on a leaf site, thus there is an increased likelihood of a mutant appearing in the centre site. However, this likelihood starts decreasing as the competition rate increases, which reduces the number of individuals in the centre site. Since the centre site acts as a sink, i.e. it is a net importer of individuals (Broom & Rychtar, 2008; Pattni et al., 2021), it suppresses the fixation probability. For high group tolerance, figure 7 (d-f) shows that the fixation probability decreases then increases as the competition rate increases.
This behaviour is significantly different to that observed in the low migration limit. For a low competition rate (\(\gamma\leq 1\)) the behaviour is similar to that of low group tolerance (figure 7 (a-c)), which means that the intra-site dynamics are similar for both cases. That is, even though high group tolerance allows for empty sites in the intra-site dynamics, these are unlikely when the competition rate is low. As the competition increases (\(\gamma>1\)), empty sites are more likely as a death is more likely to occur whenever moving individuals come into contact with one another. This drives the population size down. We observe that as the competition rate increases, the fixation probability turns and starts to increase, eventually converging to \(0.5\). This implies that the entire resident population prior to a mutant arising is converging to \(1\). When comparing between networks, the fixation probability in the star network turns first and converges faster to \(0.5\). This is because individuals are more likely to meet in the centre site in a star network, which drives the population size down faster than in the complete and cycle networks.

Figure 5: Fixation probability in the neutral case (\(\beta_{R}=\beta_{M}=1\)). This plot was generated using \(10^{6}\) simulations.

Figure 6: Effect of increasing the number of sites on the fixation probability. Figures (a–d) are for low group tolerance (LGT) and figures (e–f) are for high group tolerance (HGT). Figure (a) is analytically calculated using equation (29) for circulation networks, and figure (b) is calculated using the analytical formula for the star network. Figures (c–f) are generated using \(10^{5}\) simulations. For the cycle network, the plot starts from 4 sites as fewer than 4 sites classifies as a complete network. For the star network, the plot starts from 3 sites as this is the minimum number of sites required to construct a star network.

Figure 7: Fixation probability in the cycle, complete and star networks plotted against the competition rate \(\gamma\) for different movement rates. Figures (a–c) have low group tolerance (LGT). The ‘Analytic (LGT)’ plot is calculated using equation (29), which represents the low migration limit. Figures (d–f) have high group tolerance (HGT).

## Discussion

In this paper we have proposed an evolutionary framework where the population is updated through individual birth, death and migration such that individuals migrate between a network of sites. This framework can be used to understand the effect of migration on an evolutionary process for different network topologies. In other frameworks used to study the effect of network topology on evolution, migration is coupled with birth and death to keep the population size constant as in evolutionary graph theory (EGT) (Lieberman et al., 2005), or combined with birth (Pattni et al., 2021), which is akin to dispersal in plants or the spread of infectious disease. Here, migration is a separate process and not coupled with birth or death, allowing migration behaviour based on ecological interactions to be considered. This is illustrated by applying the framework to construct a birth-death-migration model where migration is density dependent based on the Markov migration mechanism in Pattni et al. (2018). In this mechanism, individuals move in response to competition and the parameter used to control this is called group tolerance. Individuals who have high group tolerance are less sensitive to competition so are indifferent to staying or migrating.
Individuals with low group tolerance are highly sensitive to competition, so they prefer beneficial groups and will stay when in such a group, otherwise they migrate. The framework allows for overlapping evolutionary and ecological timescales, but in this paper we focus on the rare mutation limit. This means the population evolves in adaptive sweeps (Gerrish and Lenski, 1998) where a mutant either fixates or goes extinct prior to another mutant arising. This allows the effect of network structure on evolution to be measured in terms of the fixation probability of a mutant. Since this approach is also used in models based on the EGT framework, comparisons can therefore be made with a well-established set of literature. When moving away from the rare mutation limit, an alternative is the fixation probability in the presence of clonal interference (Pattni et al., 2021). As an initial example of a model using the framework described in this paper, we opted to use a binary density-dependent migration regimen. In the low group tolerance case, where individuals choose to stay when they are alone, every site is always occupied as it is assumed that individuals die through competition only. In the high group tolerance case, where individuals move even when they are alone, a state where all sites are occupied is approached as the competition rate decreases. This in turn enabled us to obtain analytical results in the low migration limit and allows us to make comparisons with models with fixed population size where all sites are always occupied (Lieberman et al., 2005; Yagoobi and Traulsen, 2021). Generally, density-dependent migration that changes gradually with group compositions can be considered, as done in Pattni et al. (2017) and Erovenko et al. (2019). In these models, evolutionary steps (where the composition of the population changes) happen at fixed time points to keep the population size fixed. By altering the competition rate we can change the size of the population. A high competition rate implies that individuals exist in solitude as any pair of individuals meeting would result in a death. For low group tolerance in this scenario, individuals therefore prefer being alone. In this setting, we achieved a scenario that is akin to EGT (Lieberman et al., 2005) where each site is occupied by a single individual. Note that this does not mean that the dynamics are identical to EGT; however, it enabled us to make comparisons to obtain further insights. In EGT, it is apparent that mutation dynamics (the process of a mutant appearing in the population) are not explicitly taken into account because they would break the fixed population size assumption (in a population of size \(n\), the population size has to temporarily be \(n+1\) to account for the appearance of a mutant). The effect of this is that the fixation probability with mutation dynamics is weighted by the probability of an initial mutant competing for a space within the population. When competition is high, space comes at a premium and individuals would essentially be competing for a single site. With lower competition, individuals would be competing for a share of space within a site. In EGT, the initial mutant does not compete with the resident on the site it is placed in, and is therefore guaranteed a space in the population. The fixation probability in EGT is therefore greater than or equal to that obtained in our model.
In EGT and its various extensions (Pattni et al., 2017; Yagoobi and Traulsen, 2021), a mutant is introduced into the population through a mutant appearance distribution specifying which resident will be replaced by a mutant. Two distributions commonly used are a uniform distribution or a temperature-weighted distribution (Allen and Tarnita, 2014). One way to apply this scheme in populations with variable size, where there are multiple initial states, is to fix the initial state and then replace one resident with a mutant. For example, the initial state can be fixed to one where each site is occupied by one resident (Pattni et al., 2021). This approach is sufficient for specific purposes, for example, understanding how EGT dynamics can be derived from a model with eco-evolutionary dynamics (Pattni et al., 2021). However, the statistically correct method is to consider the distribution of resident states in the rare mutation limit (\(\mu\to 0\)). This determines the mutant appearance distribution in proportion to the number of resident individuals on a network site. Since the mutant appearance distribution affects the average fixation probability (Allen and Tarnita, 2014; Tkadlec et al., 2019), it is advantageous to explicitly take into account the mutation dynamics so that results can be interpreted unambiguously. In terms of simulated computations, mutation dynamics can reduce the number of simulations that are run to fixation if the fixation probability is lower, saving computation time. For a low competition rate, individuals can coexist with one another and we can obtain a metapopulation model where there are multiple individuals per site. As a point of comparison for such a scenario we use Yagoobi & Traulsen (2021), which considers a metapopulation model with a fixed number of individuals per site. In Yagoobi & Traulsen (2021) it is observed that the circulation theorem (Lieberman et al., 2005) (circulation networks have the same fixation probability) holds for all migration rates, whereas in our case this is observed only for low and high migration rates. This is because, in our case, the number of individuals on a site is given by a distribution that is determined by the dynamics of the resident population prior to a mutation occurring. For intermediate migration rates, the distribution of individuals is largely dependent upon the dynamics between a local neighbourhood of sites, which would differ between networks due to their topology. This is not the case for low and high migration rates. For low migration rates, the number of individuals in a site would depend upon the dynamics within a site, so if the dynamics are the same within a site the number of individuals in a site would be the same regardless of the network. For high migration rates, the number of individuals in a site would depend upon dynamics across all sites as individuals are constantly interacting with one another, and therefore network structure has little effect. Yagoobi & Traulsen (2021) also observe that the star network amplifies selection, which we observe as well for low migration rates. A uniform mutant appearance distribution is used in Yagoobi & Traulsen (2021), so mutants are likely to appear on leaf sites regardless of the migration rate. For the birth-death-migration model we define, the distribution of individuals changes such that the number of residents present in the centre site increases.
Mutants are then more likely to appear in the centre, which suppresses selection. Yagoobi & Traulsen (2021) also considered different subpopulation sizes in a two-patch metapopulation, which they showed is a suppressor. We can implement sites with different population sizes by, for example, using site-specific competition rates or using networks with sink and source sites. The star network is an example of the latter; the centre site is a sink and leaf sites are sources, which means that more individuals are migrating from a leaf site to the centre site than vice versa. As the migration rate increases, the number of individuals in the centre site increases, which suppresses the fixation probability. In summary, we have presented a network-structured population evolution framework where birth, death and migration are uncoupled. We study the effect of network structure in the rare mutation limit and have shown how the mutant appearance distribution affects the success of an invading mutant. Future work will move away from the rare mutation limit so that overlapping evolutionary and ecological timescales can be considered in the context of network structure.

## Acknowledgements

KP and KS acknowledge funding from EPSRC project grant EP/T031727/1. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 955708.

## Appendix A Simulation Details

The evolutionary process is simulated using the Gillespie algorithm (Gillespie, 1976, 1977). Recall that the infinitesimal generator (equation (1)) describing the evolutionary process is as follows

\[\mathcal{L}\phi(\mathcal{S})= \sum_{i\in\mathcal{S}}[1-\mu(i)]b(i,\mathcal{S})[\phi(\mathcal{S}\cup\{i\})-\phi(\mathcal{S})]\]
\[+\sum_{i\in\mathcal{S}}\mu(i)b(i,\mathcal{S})\int_{\mathbb{R}^{l}}[\phi(\mathcal{S}\cup\{(u,X_{i})\})-\phi(\mathcal{S})]M(U_{i},u)du\]
\[+\sum_{i\in\mathcal{S}}d(i,\mathcal{S})[\phi(\mathcal{S}\backslash\{i\})-\phi(\mathcal{S})]\]
\[+\sum_{i\in\mathcal{S}}\sum_{x\in\mathcal{X}}m(i,x,\mathcal{S})[\phi(\mathcal{S}\cup\{(U_{i},x)\}\backslash\{i\})-\phi(\mathcal{S})].\]

For this process, let \(T(k)\) and \(S(k)\) respectively be the time and state after \(k\) events. The simulation proceeds as follows:

1. The time, \(T(k+1)\), when the next event happens is given by
\[T(k+1)=T(k)-\frac{\ln(\text{Unif}(0,1))}{\lambda_{k}}\]
where \(\text{Unif}(0,1)\) is a uniformly distributed random number in the range \((0,1)\) and
\[\lambda_{k}=\sum_{i\in S(k)}\left[b(i,S(k))+d(i,S(k))+\sum_{x\in\mathcal{X}}m(i,x,S(k))\right].\]
2. The next state, \(S(k+1)\), is determined by:
* Birth without mutation: The probability that \(I_{i}\) gives birth to an offspring with the same type is
\[[1-\mu(i)]\frac{b(i,S(k))}{\lambda_{k}}\]
then \(S(k+1)=S(k)\cup\{(U_{i},X_{i})\}\).
* Birth with mutation: The probability that \(I_{i}\) gives birth to an offspring with type \(w\) is
\[\mu(i)M(U_{i},w)\frac{b(i,S(k))}{\lambda_{k}},\]
then \(S(k+1)=S(k)\cup\{(w,X_{i})\}\).
* Death: The probability that \(I_{i}\) dies is
\[\frac{d(i,S(k))}{\lambda_{k}}\]
then \(S(k+1)=S(k)\setminus\{i\}\).
* Movement: The probability that \(I_{i}\) moves to site \(n\) is
\[\frac{m(i,n,S(k))}{\lambda_{k}}\]
then \(S(k+1)=S(k)\cup\{(U_{i},n)\}\setminus\{i\}\).
3. Repeat steps 1 and 2 as required.
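To make the scheme concrete, here is a condensed Python sketch of one invasion run of the birth-death-migration model. It assumes a complete network, low group tolerance (individuals migrate only when sharing a site), death through pairwise competition only, and no mutation during the invasion phase; all parameter values are illustrative, and this is a sketch of the algorithm rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_invasion(N=7, beta_R=1.0, beta_M=2.0, gamma=1.0, lam=0.1, n0=2):
    # counts[t, x]: type-t individuals on site x (t = 0 resident, 1 mutant).
    counts = np.zeros((2, N), dtype=int)
    counts[0, :] = n0                         # residents at carrying capacity
    counts[1, rng.integers(N)] = 1            # (1 + beta_R/gamma = 2) plus one mutant
    beta = np.array([beta_R, beta_M])
    while counts[0].sum() > 0 and counts[1].sum() > 0:
        n_site = counts.sum(axis=0)
        birth = beta[:, None] * counts                       # per-type birth rates
        death = gamma * counts * np.maximum(n_site - 1, 0)   # pairwise competition
        move = lam * counts * (n_site >= 2)                  # low group tolerance
        rates = np.stack([birth, death, move]).ravel()
        # Gillespie step 1 would draw an exponential waiting time; only the
        # embedded jump chain matters for fixation, so we pick the next event
        # proportionally to its rate (step 2).
        event = rng.choice(rates.size, p=rates / rates.sum())
        kind, t, x = np.unravel_index(event, (3, 2, N))
        if kind == 0:
            counts[t, x] += 1                 # birth on the same site
        elif kind == 1:
            counts[t, x] -= 1                 # death through competition
        else:
            y = (x + rng.integers(1, N)) % N  # complete network: any other site
            counts[t, x] -= 1
            counts[t, y] += 1
    return counts[1].sum() > 0                # True if the mutant type fixated

rho_hat = np.mean([simulate_invasion() for _ in range(500)])
print(f"estimated fixation probability: {rho_hat:.3f}")
```

Averaging the return value over many runs gives the Monte Carlo estimate \(N_{\mathrm{mut}}/N_{\mathrm{sim}}\) described below.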
For the birth-death-migration model, to simulate the hitting probability (equation (7)),

\[\mathcal{L}h_{\mathcal{M}}(\mathcal{S})=0\]

with boundary conditions \(h_{\mathcal{M}}(\mathcal{S})=1\) for \(\mathcal{S}\in\mathcal{M}\) and \(h_{\mathcal{M}}(\mathcal{S})=0\) for \(\mathcal{S}\in\mathcal{R}\), we first need to determine the initial state in the rare mutation limit. To do this, we set \(T(0)=0\), \(\mu(i)=10^{-4}\ \forall i\), \(M(R,M)=1\) and choose \(S(0)\in\mathcal{R}\) such that \(S(0)\) is at the carrying capacity in the deterministic system. This is an added step taken to ensure that the stochastic system is fluctuating around its carrying capacity prior to a mutant arising. The deterministic system is obtained by assuming that, rather than there being a discrete number of individuals, the number of individuals is continuous. Let \(e_{1}(t),\ldots,e_{N}(t)\) be the number of residents at time \(t\) in each site. For low group tolerance (equation (15)), we want the solution to the system (dropping \(t\) for brevity)

\[\frac{\mathrm{d}e_{x}}{\mathrm{d}t}=\beta_{R}e_{x}-\gamma e_{x}(e_{x}-1)+\sum_{y}\lambda(e_{y}-1)W_{y,x}-\lambda(e_{x}-1)W_{x,y}=0, \tag{32}\]

where the first term accounts for birth, the second for death, the third for immigration, and the fourth for emigration. Note that migration takes place only if the number of individuals in a site is \(>1\). Similarly, for high group tolerance (equation (16)) we want the solution to

\[\frac{\mathrm{d}e_{x}}{\mathrm{d}t}=\beta_{R}e_{x}-\gamma e_{x}(e_{x}-1)+\sum_{y}\lambda e_{y}W_{y,x}-\lambda e_{x}W_{x,y}=0, \tag{33}\]

where the terms are as in the low group tolerance case, but migration in this case happens when the number of individuals in a site is \(>0\). After obtaining \(e_{1}(t),\ldots,e_{N}(t)\), we round up to the nearest integer and set \(S(0)\) to this. We then repeat steps 1 and 2 as outlined above until a mutant appears. Once a mutant appears, we use this as the initial state to calculate the hitting probability. To continue the simulation, we set \(\mu(i)=0\)\(\forall i\) and continue the simulation until we hit a state in \(\mathcal{R}\) or \(\mathcal{M}\). This is one run of the simulation, which we repeat to generate multiple simulations. Note that initialising the population in this way takes into account the mutant appearance distribution (\(p_{x,\mathcal{S}}\)) since a mutant is more likely to appear in a site with more individuals. The average fixation probability of a mutant is then given by \(N_{\mathrm{mut}}/N_{\mathrm{sim}}\) where \(N_{\mathrm{sim}}\) and \(N_{\mathrm{mut}}\) are the total number of simulations and the number of simulations that hit \(\mathcal{M}\) respectively.
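The carrying-capacity initial state for the low group tolerance system (equation (32)) can be obtained numerically; a minimal sketch with scipy, assuming a complete network and illustrative rates (for a homogeneous network the root is simply \(e^{*}=1+\beta_{R}/\gamma\)):

```python
import numpy as np
from scipy.optimize import fsolve

def rhs(e, beta_R, gamma, lam, W):
    # Equation (32): birth - competition death + immigration - emigration,
    # with migration only out of sites holding more than one individual.
    imm = lam * (W.T @ (e - 1.0))
    emi = lam * (e - 1.0) * W.sum(axis=1)
    return beta_R * e - gamma * e * (e - 1.0) + imm - emi

N, beta_R, gamma, lam = 7, 1.0, 1.0, 0.1
W = (np.ones((N, N)) - np.eye(N)) / (N - 1)   # complete network weights
e_star = fsolve(rhs, np.full(N, 2.0), args=(beta_R, gamma, lam, W))
S0 = np.ceil(e_star).astype(int)              # rounded-up initial resident counts
print(e_star, S0)                             # ~[2, ..., 2] for these rates
```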
2303.14417
Analysis and Visualization of the Parameter Space of Matrix Factorization-based Recommender Systems
Recommender systems are the most successful commercial technology of the past decade. Technology giants such as Temu, TikTok and Amazon utilize the technology to generate enormous revenues each year. Although there is ample research literature on accuracy enhancement of the technology, explainable AI is still a new idea in the field. In 2022, the author of this paper provided a geometric interpretation of matrix factorization-based methods and used geometric approximation to solve the recommendation problem. We continue the research in this direction in this paper, and visualize the inner structure of the parameter space of matrix factorization technologies. We show that the parameters of matrix factorization methods are distributed within a hyper-ball. After further analysis, we prove that the distribution of the parameters is not multivariate normal.
Hao Wang
2023-03-25T09:58:19Z
http://arxiv.org/abs/2303.14417v1
# Analysis and Visualization of the Parameter Space of Matrix Factorization-based Recommender Systems

###### Abstract

Recommender systems are the most successful commercial technology of the past decade. Technology giants such as Temu, TikTok and Amazon utilize the technology to generate enormous revenues each year. Although there is ample research literature on accuracy enhancement of the technology, explainable AI is still a new idea in the field. In 2022, the author of this paper provided a geometric interpretation of matrix factorization-based methods and used geometric approximation to solve the recommendation problem. We continue the research in this direction in this paper, and visualize the inner structure of the parameter space of matrix factorization technologies. We show that the parameters of matrix factorization methods are distributed within a hyper-ball. After further analysis, we prove that the distribution of the parameters is not multivariate normal.

geometric analysis, recommender system, matrix factorization, hypothesis testing

Hao Wang* *Ratidar Technologies LLC, Beijing, China, 100011 *Corresponding author: [email protected]

## 1 Introduction

Before the invention of ChatGPT, recommender systems were considered a potent competitor to search engines. Major internet players such as Temu, TikTok, JD.com and Amazon.com spend lavishly on R&D of the technology. Unlike other technologies such as programming languages (Python / Go / Julia / Clojure) or databases, recommender systems can save billions of USD per year for the industry. By hiring a team of fewer than 1000 people, large corporations can increase their volume of traffic or sales (Toutiao.com / Amazon.com) by 30%-40% without spending marketing fees on Google Ads. The business is extremely profitable for internet firms, and even after decades of evolution, recommender systems have still not been swept into the dustbin of history. One of the most successful recommender system technologies is matrix factorization. Due to its versatile functionality, which can incorporate feature engineering and be easily adapted to online learning, matrix factorization is widely adopted in the industry. The most commonly used optimization technique for solving the matrix factorization problem is Stochastic Gradient Descent. With a small randomly sampled dataset, we can solve the matrix factorization problem with only several lines of Python code and extremely fast execution speed. Although scientists have shifted their main focus from shallow models such as matrix factorization to deep neural models such as Wide & Deep [1], we are still far from being able to claim that we fully understand the mechanism behind the matrix factorization framework. For example, although Probabilistic Matrix Factorization [2] was invented as early as 2007, it was not until 2021 and 2022 that true zeroshot learning algorithms [3][4][5][6] based on matrix factorization were proposed. The time gap is 14 years. In 2022, an explainable AI paper [7] was published, which was one of the first geometric interpretations of matrix factorization methods. The time gap is 15 years. The majority of scientists and engineers have spent so much time on accuracy improvement that they have neglected the theoretical foundations of recommender system technologies. In this paper, we provide an innovative analysis of matrix factorization-based recommender systems.
We choose 2 specific examples of modern-day matrix factorization approaches, namely KL-Mat [8] and ZeroMat [3], as test benchmarks, to analyze the geometry of the parameter space of the paradigm. We demonstrate in our experiments that the distribution of the parameters of the matrix factorization approaches is spherical in the space, but it is not a multivariate normal distribution.

## 2 Related Work

Recommender systems were invented in the late 1980s. Collaborative filtering was among the first batch of recommender system techniques, although its distributed versions were introduced much later. In 2007, a milestone paper introducing the probabilistic framework of the matrix factorization paradigm was published. The paper provides a theoretical foundation using MAP estimation for matrix factorization-based recommender system technology. Major innovations in the matrix factorization framework include SVDFeature [9], SVD++ [10], timeSVD [11], etc. The early inventions focus on increasing the accuracy performance of the algorithm and covering more use cases. Later inventions focus on the cold-start problem, such as ZeroMat [3], DotMat [4], PowerMat [5] and PoissonMat [6], as well as the fairness problem, such as Focused Learning [12], MatRec [13], Zipf Matrix Factorization [14], KL-Mat [8], and RankMat [15]. Matrix factorization algorithms can also be used to solve the Context-aware Recommendation problem (CARS). Linear models were the next revolutionary wave in the field. Netease [16] and Baidu [17][18] took advantage of linear and linear hybrid models to solve the recommendation and personalization problems. Unlike collaborative filtering and matrix factorization models, linear models assume linearity of the problem structures, and are much faster than nonlinear methodologies. With the rise of deep neural network models in 2012, recommender systems witnessed a revolution initiated by the technology. Influential models such as DeepFM [19], DLRM [20], and Wide & Deep algorithms [1] have become the de facto standards of industrial engineering in the field. Companies such as Google [21] and Netease [22] spend a lot of resources developing the technologies. There are also researchers who have started to work on the neural architecture search problem in the field [23][24][25]. Deep neural networks help internet firms increase their accuracy metrics, but at the same time have enormously increased infrastructure costs. Explainable AI is still a new idea in the field. One notable publication in this subfield is Wang [7], which interprets the matrix factorization-based recommender system problem as a geometric approximation problem.

## 3 Geometric Analysis

The formal definition of matrix factorization is the following formula:

\[L=\sum_{i=1}^{N}\sum_{j=1}^{M}\left(\frac{R_{i,j}}{R_{max}}-\frac{U_{i}^{T}\bullet V_{j}}{\|U_{i}^{T}\bullet V_{j}\|}\right)^{2}\]

By Zipf's Law, we know that the dot products \(U_{i}^{T}\bullet V_{j}\) are distributed in proportion to their values; roughly speaking, the number of vector pairs whose dot product value is N is proportional to N. This parameter space looks very complicated at first sight, and it is extremely hard to derive analytical formulas that depict it. We therefore visualize the parameter spaces of 2 specific examples of the matrix factorization paradigm to check the distribution of the parameters. The 2 examples we pick are KL-Mat, a fairness enhancement technique, and ZeroMat, a data-agnostic zeroshot learning algorithm.
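As the introduction notes, the matrix factorization objective can indeed be fitted in a few lines of Python. Below is a minimal SGD sketch for plain matrix factorization (not KL-Mat or ZeroMat specifically); for simplicity it optimizes \((R_{i,j}/R_{max}-U_{i}^{T}V_{j})^{2}\), i.e. without the normalization term in the formula above, and the learning rate and latent dimension are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_mf(ratings, r_max, n_users, n_items, k=10, lr=0.01, epochs=20):
    # ratings: iterable of (user, item, rating) triples.
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r / r_max - U[u] @ V[i]          # residual of the prediction
            # Simultaneous SGD update of both factor vectors.
            U[u], V[i] = U[u] + lr * err * V[i], V[i] + lr * err * U[u]
    return U, V   # the row vectors of U and V are the points we visualize

# Toy usage with MovieLens-style triples (illustrative only):
ratings = [(0, 1, 4.0), (1, 2, 5.0), (2, 0, 3.0)]
U, V = sgd_mf(ratings, r_max=5.0, n_users=3, n_items=3)
```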
We test our algorithms on the MovieLens 1 Million Dataset [26], which contains 6040 users and 3706 items. We visualize the results by t-SNE [27] in Fig. 1. Fig. 1 illustrates the distribution of user feature vectors and item feature vectors in KL-Mat at different Stochastic Gradient Descent steps. As can be observed from the figure, both the user feature vectors (Blue) and the item feature vectors (Green) are distributed within a hyper-ball. The radius of the user feature vector hyper-ball is slightly larger than that of the item feature vector hyper-ball. We use the Henze-Zirkler Test [28] to check whether the parameter space is multivariate normal, and the test fails at every Stochastic Gradient Descent learning step. The result is pretty shocking, because: 1. The geometry of the parameter space is so simple that it is hard for us to believe the result is correct; 2. Although the result looks simple on the surface, we are not yet ready to understand the mechanism of the distribution, because it is not normally distributed. Subplots of Fig. 1 demonstrate that although the vectors of the parameter spaces are spherically distributed, they do not seem to vary according to different gradient learning steps. The rim formed by the difference set of the blue and green points seems to be uniformly distributed, while the core where the two sets intersect seems to be denser. The implications of the distribution and a much more rigorous analysis can be found in the Discussion section. We now test the zeroshot learning algorithm ZeroMat on the same MovieLens 1 Million Dataset, and obtain the results in Fig. 2. Fig. 2 illustrates the distribution of user feature vectors and item feature vectors in ZeroMat at different Stochastic Gradient Descent steps. As can be observed from the figure, both the user feature vectors (Blue) and the item feature vectors (Green) are distributed within a hyper-ball. The radius of the user feature vector hyper-ball is slightly larger than that of the item feature vector hyper-ball. We use the Henze-Zirkler Test to check whether the parameter space is multivariate normal, and the test fails at every Stochastic Gradient Descent learning step. The analysis of the subplots of Fig. 2 is analogous to Fig. 1. We leave a formal analysis to the Discussion section.

Fig 1: Distribution of User Feature Vectors and Item Feature Vectors in KL-Mat by Different Stochastic Gradient Descent Learning Steps

## 4 Discussion

We take the specific example of KL-Mat and investigate the distribution of its user feature vector parameter space. We found that although the 2D histogram of the distribution does not tell us much, by observing the 1D histograms of the X-axis and Y-axis of the t-SNE visualization of the parameter space, we conclude that the distribution of each single dimension of the parameter space is triangular:

Fig 2. Distribution of User Feature Vectors and Item Feature Vectors in ZeroMat by Different Stochastic Gradient Descent Learning Steps

Fig 3 demonstrates that the X-axis of the user feature vectors follows a triangular distribution. We also checked the Y-axis of the user feature vectors and the item feature vector space; the distributions are consistently triangular. Fig. 4 demonstrates that the 3D histogram of the user feature vector density function of KL-Mat is a cone, while the item feature vector density function is similar.
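The normality check and visualization can be reproduced with off-the-shelf tools. Below is a hedged sketch using pingouin's implementation of the Henze-Zirkler test and scikit-learn's t-SNE; the placeholder arrays stand in for the learned factor matrices U and V from the SGD step above.

```python
import numpy as np
import pingouin as pg
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
U = rng.normal(size=(6040, 10))   # placeholders shaped like the learned
V = rng.normal(size=(3706, 10))   # user/item factors for MovieLens 1M

# Henze-Zirkler multivariate normality test on the user factors.
hz, pval, normal = pg.multivariate_normality(U, alpha=0.05)
print(f"HZ = {hz:.3f}, p = {pval:.3g}, normal = {normal}")

# 2D t-SNE embedding of users and items together (blue/green in the figures).
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
    np.vstack([U, V]).astype(np.float32))
```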
## 5 Conclusion

In this paper, we visualized the parameter spaces of KL-Mat and ZeroMat using t-SNE in Fig. 1 and Fig. 2 and showed that the parameter spaces fail the Henze-Zirkler multivariate normality test. After visualizing the histograms of the probability density of KL-Mat and ZeroMat, we observe that the probability density function of the parameter spaces of matrix factorization methods is a cone. We are surprised that the complexity of the parameter spaces leads to a rather simple geometry in appearance but a rather complicated underlying probabilistic mechanism. In future work, we would like to find an analytical form for the probability distribution underlying the spherical geometry and pin down the properties of the matrix factorization parameter space. We would also like to conduct rigorous hypothesis testing for the triangular property of the probability density function of the parameter spaces of matrix factorization methods.
2309.01727
Gas clumping in the outskirts of galaxy clusters, an assessment of the sensitivity of STAR-X
In the outskirts of galaxy clusters, entropy profiles measured from X-ray observations of the hot intracluster medium (ICM) drop off unexpectedly. One possible explanation for this effect is gas clumping, where pockets of cooler and denser structures within the ICM are present. Current observatories are unable to directly detect these hypothetical gas clumps. One of the science drivers of the proposed STAR-X observatory is to resolve these or similar structures. Its high spatial resolution, large effective area, and low instrumental background make STAR-X ideal for directly detecting and characterizing clumps and diffuse emission in cluster outskirts. The aim of this work is to simulate observations of clumping in clusters to determine how well STAR-X will be able to detect clumps, as well as what clumping properties reproduce observed entropy profiles. This is achieved by using yt, pyXSIM, SOXS, and other tools to inject ideally modeled clumps into three-dimensional models derived from actual clusters using their observed profiles from other X-ray missions. Radial temperature and surface brightness profiles are then extracted from mock observations using concentric annuli. We find that in simulated observations for STAR-X, a parameter space of clump properties exists where gas clumps can be successfully identified using wavdetect and masked, allowing the true cluster profiles to be recovered. This demonstrates that STAR-X could be capable of detecting substructure in the outskirts of nearby clusters and that the properties of both the outskirts and the clumps will be revealed.
Christian T. Norseth, Daniel R. Wik, John A. ZuHone, Eric D. Miller, Marshall W. Bautz, Michael McDonald
2023-09-04T17:19:34Z
http://arxiv.org/abs/2309.01727v2
# Gas clumping in the outskirts of galaxy clusters, an assessment of the sensitivity of _Star-X_

###### Abstract

In the outskirts of galaxy clusters, entropy profiles measured from X-ray observations of the hot intracluster medium (ICM) drop off unexpectedly. One possible explanation for this effect is gas clumping, where pockets of cooler and denser structures within the ICM are present in the outskirts. Current observatories are unable to directly detect these hypothetical gas clumps. One of the science drivers of the proposed _STAR-X_ observatory is to resolve these or similar structures. Its high spatial resolution, large effective area, and low instrumental background make _STAR-X_ ideal for directly detecting and characterizing clumps and diffuse emission in cluster outskirts. The aim of this work is to simulate observations of clumping in clusters to determine how well _STAR-X_ will be able to detect clumps, as well as what clumping properties reproduce observed entropy profiles. This is achieved by using yt, PyXSIM, SOXS, and other tools to inject ideally modelled clumps into 3D models derived from actual clusters using their observed profiles from other X-ray missions. Radial temperature and surface brightness profiles are then extracted from mock observations using concentric annuli. We find that in simulated observations for _STAR-X_, a parameter space of clump properties exists where gas clumps can be successfully identified using wavdetect and masked, allowing the true cluster profiles to be recovered. This demonstrates that _STAR-X_ could be capable of detecting substructure in the outskirts of nearby clusters and that the properties of both the outskirts and the clumps will be revealed.

instrumentation: high angular resolution - X-rays: galaxies: clusters - galaxies: clusters: intracluster medium.

## 1 Introduction

Galaxy clusters are collections of 100s to 1000s of galaxies and are the largest gravitationally bound structures in the universe. The intracluster medium (ICM), mainly containing a hot (\(\sim\)10\({}^{7}\) K) gas that permeates the space between galaxies in a cluster, is detectable in X-rays via its thermal bremsstrahlung radiation, as well as radiation from collisional ionization. The thermodynamical properties of the ICM in galaxy clusters reflect the history of their growth and allow cluster masses to be estimated, which can be used to estimate cosmological parameters (Allen et al., 2011). It is therefore important that the properties derived from the thermodynamic state of the gas in galaxy clusters are accurate. Using X-ray telescopes, radial temperature and surface brightness profiles of a cluster can be extracted from imaging and spectral analyses of an observation. The entropy of the gas, inferred from temperature and density estimates, is particularly essential to understanding the thermodynamical properties of a cluster, including its evolution (Ghirardini et al., 2021). However, observations have shown unexpected results in the derived entropy profiles for many galaxy clusters.
The entropy \(K\) is predicted to increase nearly proportionally with radius (\(K\propto r^{1.1}\)) in a relaxed cluster in hydrostatic equilibrium (Voit et al., 2005). However, in the outskirts of galaxy clusters, entropy profiles measured from X-ray observations of the ICM drop off unexpectedly. This trend is seen in _Suzaku_ observations of cluster outskirts, such as in the Perseus Cluster (Simionescu et al., 2011), Abell 1246 (Sato et al., 2014), Abell 2029 (Walker et al., 2012a), Abell 1795 (Bautz et al., 2009), Abell 1835 (Ichikawa et al., 2013), and many others (Walker et al., 2019). One possible explanation for this unexpected drop in entropy is gas clumping, where pockets of cooler and denser structures within the ICM are present in the outskirts. Entropy is typically estimated from deprojected temperature \(T(r)\) and density \(n(r)\) profiles, based on fits to projected temperature and surface brightness profiles, as

\[K(r)=\frac{T(r)}{n(r)^{2/3}}\,. \tag{1}\]

Pockets or clumps of cooler and denser gas more apparent at larger radii would be capable of biasing the projected temperature and especially surface brightness (\(\propto n^{2}\)) measurements to lower and higher values, respectively, driving the inferred \(K(r)\) below the true entropy of the ambient ICM. Since the properties of gas clumps are unknown, some combination of temperature and density bias drives the entropy down, whether mostly through a bias in the density profile or through a significant bias in the temperature profile as well. While the low and stable background of _Suzaku_ allowed thermodynamic profiles to be measured at large radii, _Suzaku_'s spatial resolution was insufficient to allow the clumps, if they exist, to be resolved and directly detected. This concept of gas clumping is supported by simulations, which have shown that the distribution of the density of the ICM in the outskirts tends to be non-uniform (Nagai & Lau, 2011). While interfering with the ability to accurately measure entropy, gas clumping in the outskirts is also of interest due to its likely role in the physics and evolution of galaxy clusters (Walker & Lau, 2022). It is therefore important for gas clumps to be directly identified, characterized, and ultimately masked out to recover the true entropy profile of the diffuse gas and properly test our current understanding of the evolution of the ICM in galaxy clusters. Currently, gas clumps in the outskirts of clusters have not been clearly detected by sufficiently sensitive observatories, such as the XIS (X-ray Imaging Spectrometer) instrument on the _Suzaku_ telescope. The lower background provided by _Suzaku_'s low-Earth orbit allowed it to explore the diffuse emission in cluster outskirts, an advantage that neither _Chandra_, _XMM-Newton_, nor _eROSITA_ shares. The superior spatial resolution of the _Chandra_ and _XMM-Newton_ observatories holds more promise in principle, but their higher and less stable background rates have complicated probes of cluster outskirts. It is also unlikely that _eROSITA_ would detect gas clumps as its on-axis PSF is comparable to _XMM-Newton_'s. Methods have been developed that correct for bias due to clumping in derived emissivity profiles (Eckert et al., 2015; Eckert et al., 2017). Using the azimuthal median of concentric annuli in the surface brightness of cluster emission, the expected entropy profile can be recovered to within 1\(\sigma\) (Eckert et al., 2015).
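For reference, equation (1) and the baseline power law are trivial to evaluate numerically; a short sketch follows, where the units are assumed to be keV and cm\(^{-3}\) and the 1.32 normalization at \(r_{200}\) is the value commonly quoted from Voit et al. (2005).

```python
import numpy as np

def entropy(T_keV, n_e):
    # Equation (1): K = T / n^(2/3), giving K in keV cm^2 for T in keV
    # and electron density n_e in cm^-3.
    return T_keV / n_e ** (2.0 / 3.0)

def baseline_entropy(r, r200, K200):
    # Self-similar expectation K(r) ~ 1.32 * K200 * (r/r200)^1.1
    # (Voit et al. 2005); the 1.32 normalization is assumed here.
    return 1.32 * K200 * (r / r200) ** 1.1
```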
Although clumps have not been directly detected, it is clear that correcting for inhomogeneities in the ICM is necessary to properly extract the density and therefore entropy profile of a cluster (Morandi & Cui, 2013). Although we can roughly account for gas clumping, directly detecting clumps would allow for a much more accurate derivation of cosmological parameters, as well as reinforcing our understanding of cluster evolution. In addition, the median method mentioned above is limited by spatial resolution. A potential solution to this limitation is the _Survey and Time-domain Astronomical Research eXplorer (STAR-X)_. _STAR-X_ was proposed to the Mid-Sized Explorer Class Mission (MIDEX) call by NASA at the end of 2021 and was selected for Phase A study. Like _Suzaku_, _STAR-X_ will have a low instrumental background due to its low-Earth orbit, where it will be protected from high-energy charged particles by the Earth's magnetosphere. This low background at harder (\(E>2\) keV) energies, coupled with a high soft effective area, will allow sensitive measurements of the outskirts of galaxy clusters. The design of _STAR-X_ provides a large (1\({}^{\circ}\) diameter) field of view (FOV), large effective area, and a 2 arcsec half-power diameter point-spread function (PSF) to minimize confusion with point sources and efficiently survey the outskirts of nearby clusters in single, \(\sim\)100 ks observations (Zhang, 2017). While the high, soft (\(E<2\) keV) effective area allows cooler gas clumps to be directly imaged and masked, the low particle background at higher energies permits accurate temperature measurements of the truly diffuse gas. These advances, possible with _STAR-X_, will provide a greater understanding of cluster evolution and clusters' connection to the cosmic web. In the following, we simulate _STAR-X_ observations of a handful of nearby galaxy clusters with measured drops or flattening in their entropy profiles. We assume the true diffuse gas follows the expected entropy relation and inject clumps with various properties that would have gone undetected by _Suzaku_ and reproduce the observed entropy profiles. The mock observational data are then realistically analysed to determine if _STAR-X_ would be able to detect gas clumping in galaxy cluster outskirts and recover the injected diffuse entropy profile. Throughout this work, we assume a flat \(\Lambda\)CDM cosmology with \(H_{0}=71\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.27\), \(\Omega_{\Lambda}=0.73\). Unless otherwise stated, uncertainty ranges are reported as 90 per cent confidence intervals.

## 2 Methods

### Simulating _STAR-X_ observations

In order to assess the impact of gas clumping and the sensitivity of _STAR-X_, a simulated observation or event list of a reconstructed galaxy cluster must first be produced. A summary of the simulation steps is given below:

1. Temperature and emissivity profiles for a given cluster are defined, based on literature fits to observations of that cluster.
2. The uniformly sized cells of a data cube are populated with values based on the profiles.
3. Gas clumps are randomly injected into the data cube based on their assumed properties.
4. A 3D photon list that represents the emission of each cell is produced.
5. The photon list is projected into a 2D event file.
6. Point sources consistent with the cosmic X-ray background (CXB) are added with random positions to the event file.
7. Photons representing the Galactic foreground and _STAR-X_'s estimated instrumental background are also added with random positions, presuming a uniform spatial distribution of both components.

We model the radial temperature and emissivity profiles with the following form, as presented in Vikhlinin et al. (2006):

\[T_{\rm 3D}(r)=T_{0}\,\frac{\left(r/r_{\rm cool}\right)^{a_{\rm cool}}+T_{\rm min}/T_{0}}{\left(r/r_{\rm cool}\right)^{a_{\rm cool}}+1}\;\frac{\left(r/r_{\rm t}\right)^{-a}}{\left[1+\left(r/r_{\rm t}\right)^{b}\right]^{c/b}}\,; \tag{2}\]

\[n_{p}n_{e}(r)=n_{0}^{2}\,\frac{\left(r/r_{c}\right)^{-\alpha}}{\left(1+r^{2}/r_{c}^{2}\right)^{3\beta-\alpha/2}}\,\frac{1}{\left[1+\left(r/r_{s}\right)^{\gamma}\right]^{\epsilon/\gamma}}+\frac{n_{02}^{2}}{\left(1+r^{2}/r_{c2}^{2}\right)^{3\beta_{2}}}. \tag{3}\]

The radial distance \(r\) determines how the temperature \(T_{\rm 3D}\) and emission measure (\(n_{p}n_{e}\)) vary inside the model cluster, assuming spherical symmetry, controlled by radial scale parameters \(r_{\rm cool}\), \(r_{\rm t}\), \(r_{c}\), \(r_{s}\), and \(r_{c2}\) and dimensionless parameters \(a_{\rm cool}\), \(a\), \(b\), \(c\), \(\alpha\), \(\beta\), \(\epsilon\), \(\gamma\), and \(\beta_{2}\). In equation (2), the profile is normalized by \(T_{0}\) and modulated inside the cool core by \(T_{\rm min}\), while the emissivity is normalized by \(n_{02}\) in the centre and \(n_{0}\) elsewhere in equation (3). The terms controlling the profiles in the outskirts are adjusted so that the entropy profile roughly follows the \(r^{1.1}\) relation; clumps are later added to match the observed entropy profile. See Section 2.3 for more details on the entropy correction. The parameter values for the emission measure and temperature functions are given in Tables 1 and 2, respectively. In order to produce mock observations, we first create discrete 3D grids, \(512^{3}\) cells in total size, that extend out to the virial radius of each cluster. Our three clusters, Abell 2029 (see Section 4.1 for more details), Abell 1246 (Section 4.2), and the Perseus cluster (Section 4.3), were recreated out to \(\sim\)2.3 Mpc in radius, corresponding to cell sizes of 8.9, 8.6, and 8.6 kpc, respectively. These data cubes are then populated with temperature and emissivity values from equations (2) and (3) to represent the truly diffuse component of the ICM. In Step 3, clumps are injected into the data grids. The number of clumps, their central temperature, central density, and radius are chosen from a predetermined matrix of values (see Section 3.1 for more details). Each clump is built from the centre outwards to create a spherical area in the data grid with a temperature and density profile \(\rho(r)\) with a Gaussian form defined by the radius \(R_{\rm cl}\) encompassing its full width at half-maximum (FWHM), such that

\[\rho(r)=\rho_{0}\,e^{-r^{2}/2\sigma^{2}}\,, \tag{4}\]

where the standard deviation \(\sigma=R_{\rm cl}/\sqrt{2\ln 2}\). The clumps are identical and randomly distributed in the 3D space of the grids. While the true clump distribution is unknown and may only exist in cluster outskirts, clumps that fall within \(R_{\rm 800}\) will contribute negligibly to the emission there; a uniform placement is thus the simplest, realistic distribution to choose. Clump properties for a range of assumed sizes \(R_{\rm cl}\) that reproduce observed entropy profiles are given in Table 3.
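To make the cube-population step concrete, the following is a minimal Python sketch of equations (2)-(4) as reconstructed above from Vikhlinin et al. (2006). Parameter names mirror Tables 1 and 2, and the values used to call these functions would come from those tables.

```python
import numpy as np

def temperature_3d(r, T0, Tmin, r_cool, a_cool, r_t, a, b, c):
    # Equation (2): cool-core modulation times an outer power-law decline.
    x = (r / r_cool) ** a_cool
    return (T0 * (x + Tmin / T0) / (x + 1.0)
            * (r / r_t) ** (-a) / (1.0 + (r / r_t) ** b) ** (c / b))

def emission_measure(r, n0, r_c, r_s, alpha, beta, eps, gamma, n02, r_c2, beta2):
    # Equation (3): modified double beta-model for n_p * n_e.
    term1 = (n0 ** 2 * (r / r_c) ** (-alpha)
             / (1.0 + r ** 2 / r_c ** 2) ** (3.0 * beta - alpha / 2.0)
             / (1.0 + (r / r_s) ** gamma) ** (eps / gamma))
    term2 = n02 ** 2 / (1.0 + r ** 2 / r_c2 ** 2) ** (3.0 * beta2)
    return term1 + term2

def clump_profile(r, rho0, R_cl):
    # Equation (4): Gaussian clump; sigma is chosen so that the FWHM region
    # has radius R_cl (i.e. FWHM = 2 * R_cl).
    sigma = R_cl / np.sqrt(2.0 * np.log(2.0))
    return rho0 * np.exp(-r ** 2 / (2.0 * sigma ** 2))
```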
The temperature and emissivity data cubes are then loaded into yt.1 Combined with a cubic bounding box whose dimensions equal the diameter of the cluster near its virial radius, this produces a yt data set that stores the temperature and density values within the cells of a 3D grid. The data set, combined with the redshift, position, collecting area, and exposure time for the mock observation, is used to generate a 3D photon list based on an apec thermal source model using the Python package pyXSIM.2 The 3D photon list is then projected along a chosen axis to produce a 2D event file. An absorption model (tbabs) is also included in this step to account for the hydrogen column \(N_{H}\) in the direction of the cluster. Footnote 1: [https://yt-project.org](https://yt-project.org) Footnote 2: [http://hea-www.cfa.harvard.edu/jzuhone/pyxsim/index.html](http://hea-www.cfa.harvard.edu/jzuhone/pyxsim/index.html) Lastly, the 2D event file is provided as input to SOXS,3 an instrument simulator. SOXS simulates the response and other properties of _STAR-X_, including the FOV, the chip and pixel sizes and arrangements, the focal length, the PSF, the effective area, and the instrument's particle background. Point sources are also added into the simulated observation by SOXS, based on the \(\log N\)-\(\log S\) flux distribution of CXB sources measured in Lehmer et al. (2012) from the _Chandra_ Deep Field South observations. Using SOXS, a mock observation of the reconstructed cluster is finally produced and can be analysed using standard tools and techniques. Footnote 3: [https://hea-www.cfa.harvard.edu/soxs/index.html](https://hea-www.cfa.harvard.edu/soxs/index.html) ### Mock observation analysis At this stage, the mock event file is analysed using existing tools designed for observatories that obtain event lists with spectro-imaging information, such as _Chandra_. Knowledge of the locations or properties of CXB sources (except where noted below) and clumps is not used at any point in the analysis. The analysis steps are as follows. (i) Point sources are identified using wavdetect,4 a Chandra Interactive Analysis of Observations (CIAO)5 tool, and are masked to produce a point-source-corrected image. A separate point-source-corrected image is produced using known point source positions obtained from a simulated image that does not include clumping, but does include the same, randomly generated point sources. This second point-source-corrected image is used to assess the accuracy of the initial point-source correction where clumps are included. wavdetect uses 'Mexican Hat' wavelet functions with different scale sizes to identify sources. A smaller scale size equivalent to the approximate size of a point source is provided to wavdetect in this step. Footnote 4: [https://cxc.cfa.harvard.edu/ciao/ahelp/wavdetect.html](https://cxc.cfa.harvard.edu/ciao/ahelp/wavdetect.html) (ii) Clumps are then identified using wavdetect and subtracted out to produce a point-source- and clump-corrected image. A scale size equivalent to the approximate size of a gas clump is provided to wavdetect in this step. (iii) Spectra are then extracted from concentric annuli centred at the X-ray emission peak, or the centre of the cluster. Annuli are chosen to have roughly the same number of counts per region. Background spectra are also extracted from a simulated, blank sky image. The blank sky image includes Galactic foreground, instrumental background, and point sources.
The point sources are simulated in the same positions as the cluster observation and masked out using the point source regions previously identified. (iv) The spectra are fitted in XSpec using a standard apec * tbabs model to extract projected temperature values from each annulus. A radial temperature profile for the cluster is then produced. (v) Using pyproffit,6 surface brightness measurements are carried out using images produced from the 2D event files. pyproffit generates a deprojected, radial density profile \(n(r)\) using the standard Onion-Peeling method. The annuli used in the spectral extraction are broken into smaller pieces in this step to allow for a greater resolution in the deprojected density data. Each annulus holds roughly the same number of counts. (vi) \(T_{\rm 3D}\) is fit to the projected temperature profile and a radial entropy profile can then be derived using the equation Footnote 6: [https://pyproffit.readthedocs.io/en/latest/#](https://pyproffit.readthedocs.io/en/latest/#) \[K(r)=\frac{T_{\rm 3D}(r)}{n(r)^{2/3}}\,. \tag{5}\] ### Entropy correction In order to evaluate the impact of individual gas clumping properties, a cluster recreation must first be entropy-corrected, with the goal of injecting gas clumps to drive the entropy profile down to match observations. Throughout this work, two central temperatures for gas clumps were evaluated: 3.0 and 0.7 keV. The former (3.0 keV) was chosen as it has a lesser impact on a cluster's temperature profile since the temperature in the outskirts for the chosen clusters is around 3.0 keV. This allowed for an analysis of a warmer central clump temperature and only required a modification to the cluster's density profile to correct for entropy. For clumps with a central temperature of 3.0 keV, the cluster's density profile was lowered in the outskirts to roughly line up with the normalized, expected entropy relation (\(r^{1.1}\)) found in Voit et al. (2005). \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline Cluster & \(R\) (kpc) & \(n_{0}\) (\(10^{-3}\) cm\({}^{-3}\)) & \(r_{c}\) (kpc) & \(r_{s}\) (kpc) & \(\alpha\) & \(\beta\) & \(\epsilon\) & \(n_{02}\) (\(10^{-1}\) cm\({}^{-3}\)) & \(r_{c2}\) (kpc) & \(\beta_{2}\) \\ \hline A2029\({}^{1}\) & 2300 & 15.721 & 84.2 & 908.9 & 1.164 & 0.545 & 1.669 & 3.510 & 5.00 & 1.0 \\ Entropy correction (3.0 keV) &... &... &... & 1309 &... &... & 6.907 &... &... &... \\ Entropy correction (0.7 keV) &... &... & 73.2 & 1309 &... & 0.5 & 6.907 &... &... &... \\ A1246\({}^{2}\) & 2200 & 0.300 & 5.0 & 2153 & 100.1 & 0.45 & 5.0 & 0.2 & 1.54 & 1.0 \\ Entropy correction (3.0 keV) &... &... &... & 1953 &... &... & 17 &... &... &... \\ Entropy correction (0.7 keV) &... &... &... & 1910 &... & 0.445 & 10 &... &... &... \\ Perseus\({}^{3}\) & 2200 & 3.170 & 150 & 370.0 & 2.8 & 0.18 & 3.1 & 0.762 & 33.84 & 5.0 \\ Entropy correction (3.0 keV) &... &... &... & 970.0 &... & 0.3 & 7.1 &... &... &... \\ Entropy correction (0.7 keV) &... &... &... & 970.0 &... & 0.348 & 5.0 &... &... &... \\ \hline \end{tabular} _Notes_. \({}^{1}\) Taken directly from Vikhlinin et al. (2006). \({}^{2}\) Adapted from Sato et al. (2014). \({}^{3}\) Adapted from Urban et al. (2013). \end{table} Table 1: Density profile reconstruction parameters. Modified profiles that correct for entropy are listed below the corresponding cluster. Profiles intended for gas clumps with a central temperature of 3.0 keV have only a modified density profile.
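To make equation (5) and the density-only correction concrete, the sketch below computes the entropy profile and lowers a model density profile so that the implied entropy tracks a \(K\propto r^{1.1}\) power law in the outskirts. The blending function, the matching radius, and the normalization are illustrative assumptions rather than the exact procedure used to derive Tables 1 and 2.

```python
import numpy as np

def entropy(T_keV, n_cm3):
    """Equation (5): K = T_3D / n^(2/3), in keV cm^2."""
    return T_keV / n_cm3 ** (2.0 / 3.0)

def entropy_corrected_density(r, T_keV, n_model, K500, r500,
                              r_match=0.5, width=0.1):
    """Lower n(r) in the outskirts so that K(r) follows K500 * (r/r500)^1.1.

    Inside r_match * r500 the original model density is kept; outside it,
    the density implied by the expected entropy, n = (T / K_exp)^(3/2),
    is blended in over a transition of width * r500 (an assumption).
    """
    K_exp = K500 * (r / r500) ** 1.1
    n_exp = (T_keV / K_exp) ** 1.5
    w = 0.5 * (1.0 + np.tanh((r - r_match * r500) / (width * r500)))
    return (1.0 - w) * n_model + w * np.minimum(n_model, n_exp)
```

Injecting detectable clumps into a recreation corrected this way then biases the extracted, deprojected density back up, so the measured entropy reproduces the observed flattening until the clumps are identified and masked.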
For 0.7 keV clumps, the complementary, modified temperature profile can be found in Table 2. \begin{table} \begin{tabular}{l c c c c c c c c} \hline Cluster & \(T_{0}\) (keV) & \(r_{t}\) (Mpc) & \(a\) & \(b\) & \(c\) & \(T_{\rm min}/T_{0}\) & \(r_{\rm cool}\) (kpc) & \(a_{\rm cool}\) \\ \hline A2029\({}^{1}\) & 16.19 & 3.04 & \(-\)0.3 & 1.57 & 5.9 & 0.10 & 93 & 0.48 \\ Entropy correction (0.7 keV) & 15.19 &... &... & 1.7 & 5.1 &... &... &... \\ A1246\({}^{2}\) & 8.0 & 3.1 & 0.05 & 2.3 & 10.0 & 0.21 & 4.5 & 0.2 \\ Entropy correction (0.7 keV) & 8.5 & 3.9 &... &... & 9.0 &... & 5.5 & 0.15 \\ Perseus\({}^{3}\) & 5.3 & 1.1 & 0.18 & 5.71 & 1.29 & 0.88 & 201 & 6.98 \\ Entropy correction (0.7 keV) & 5.6 &... & 0.14 & 7.71 & 0.9 &... &... &... \\ \hline \end{tabular} _Notes_. \({}^{1}\) Taken directly from Vikhlinin et al. (2006). \({}^{2}\) Adapted from Sato et al. (2014). \({}^{3}\) Adapted from Urban et al. (2013). \end{table} Table 2: Temperature profile reconstruction parameters. Modified profiles that correct for entropy are listed below the corresponding cluster, which are intended for gas clumps with a central temperature of 0.7 keV. Their complementary, modified density profiles can be found in Table 1. \begin{table} \begin{tabular}{l c c c c c c c} \hline & & \multicolumn{2}{c}{Abell 2029} & \multicolumn{2}{c}{Abell 1246} & \multicolumn{2}{c}{Perseus cluster} \\ Radius \(R_{\rm cl}\) (kpc) & \(T\) (keV) & \(\rho_{0}\) (\(10^{-27}\) g cm\({}^{-3}\)) & Number density (\(10^{-8}\) kpc\({}^{-3}\)) & \(\rho_{0}\) (\(10^{-27}\) g cm\({}^{-3}\)) & Number density (\(10^{-8}\) kpc\({}^{-3}\)) & \(\rho_{0}\) (\(10^{-27}\) g cm\({}^{-3}\)) & Number density (\(10^{-8}\) kpc\({}^{-3}\)) \\ \hline 5 & 3.0 & 14.5 & 6.67 & 35.8 & 2.18 & 5.20 & 97.0 \\ & 0.7 & 17.0 & 5.69 & 46.1 & 1.48 & 6.7 & 65.9 \\ 10 & 3.0 & 7.70 & 3.52 & 15.8 & 1.34 & 2.81 & 37.0 \\ & 0.7 & 9.2 & 2.94 & 19.7 & 0.942 & 3.6 & 25.3 \\ 15 & 3.0 & 4.60 & 2.94 & 8.95 & 1.30 & 1.75 & 21.5 \\ & 0.7 & 5.75 & 2.35 & 11.8 & 0.863 & 2.2 & 15.0 \\ 20 & 3.0 & 3.30 & 1.90 & 6.27 & 1.07 & 1.30 & 11.4 \\ & 0.7 & 4.0 & 1.57 & 8.2 & 0.716 & 1.63 & 7.94 \\ 30 & 3.0 & 2.05 & 1.61 & 4.00 & 0.667 & 0.90 & 5.38 \\ & 0.7 & 2.8 & 1.18 & 5.39 & 0.432 & 1.5 & 2.83 \\ 40 & 3.0 & 1.60 & 1.10 & 3.05 & 0.507 & 0.70 & 4.44 \\ & 0.7 & 2.0 & 0.883 & 3.81 & 0.353 & 1.24 & 2.19 \\ 50 & 3.0 & 1.54 & 0.785 & 2.30 & 0.391 & 0.46 & 2.98 \\ & 0.7 & 1.54 & 0.785 & 3.2 & 0.245 & 0.80 & 1.49 \\ \hline \end{tabular} \end{table} Table 3: Detectable limits for gas clumps with central temperatures of 0.7 and 3.0 keV based on angular size and density in relation to _STAR-X_'s instrumental background and the Galactic foreground. Densities listed are for clump photons that are at least 95 per cent detectable in an average of 10 samples. Number density refers to the number density of clumps within each cluster that are required to reproduce observed drops in entropy. The overall normalization is adjusted to match the published data for each cluster in the radial range 0–2300 kpc, which results in factors of 1, 0.8, and 0.7 for Abell 2029, Abell 1246, and the Perseus Cluster, respectively. For 0.7 keV central temperature clumps, both the temperature and density profiles of the cluster needed to be modified to correct for entropy, as the cooler clumps bias the temperature profile down.
This required a more complex correction, which relied on the results for gas clumping with a central temperature of 3.0 keV. To start, a clump size was chosen from the detectable density analysis (Section 3) for gas clumps with a central temperature of 0.7 keV. Using the results from the entropy-corrected analysis of clumps with a central temperature of 3.0 keV (see Section 4 for more details), a total mass estimate for injected clumps that bias the cluster's density profile to match observations was calculated. This mass estimate relied on the entropy-corrected density profile used in the 3.0 keV clump analysis, where only the density profile was modified to correct for entropy. Using this mass estimate as a starting point, a number density of clumps with a central temperature of 0.7 keV for a chosen central clump radius could be calculated using the previously determined central clump density in the 0.7 keV detectability analysis (Section 3). With the chosen clump size, predetermined detectable density, and calculated number density, clumps were injected into a cluster recreation to observe the impact on the cluster's temperature profile. The simulated observations were then analysed to extract temperature and density profiles. A modified temperature profile could then be constructed to account for the predicted temperature drop. To achieve this, the temperature in each radial bin was increased by the difference between the cluster profile fit to observations and the extracted temperature profile with 0.7 keV clumps injected. The modified temperature profile was then used to re-simulate the cluster and adjusted until the injected clumps lowered the temperature profile down to match observations. With the modified temperature profile, the original corrected density profile required modification to properly correct for entropy. The modified density profile was adjusted to account for the now raised temperature so that, when combined, the expected entropy was approximated. Clumps were then injected into the cluster with both profiles modified. The profiles and the number density of clumps were then adjusted until three conditions were met: the modified temperature and density profiles combined to give the expected entropy; the injected clumps biased the profiles to match observations; and, because the clump central density came from the detectability analysis, the clumps could be detected and masked out so that the modified profiles and the expected entropy could be recovered. Thus, a possible combination of an entropy-corrected temperature and matching density profile was produced that corresponds with the detectable clump properties identified in Section 3. ### Example: the _STAR-X_ simulation of Abell 2029 The galaxy cluster Abell 2029 was initially selected for reconstruction as it had been observed to have a significant drop in entropy in its outskirts (near the virial radius), and the entire cluster fits within _STAR-X_'s 1\({}^{\circ}\) FOV. The 3D reconstruction of the cluster used temperature and emissivity profiles that were modelled using best-fitting parameters taken from Vikhlinin et al. (2006), based on the analysis of _Chandra_ observations. Mock observations for _STAR-X_, as well as an approximation of the XIS instrument on the _Suzaku_ observatory, were generated and analysed to evaluate the accuracy of the simulation and analysis process.
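Steps 4-6 of Section 2.1 can be sketched with yt, pyXSIM, and SOXS roughly as follows, here for the Abell 2029 setup with the `temp` and `dens` cubes from the earlier sketch. The calls assume pyXSIM 4.x-style APIs; the SOXS instrument key for _STAR-X_, the sky coordinates, the collecting area, and the file names are illustrative placeholders rather than the exact configuration used here.

```python
import numpy as np
import yt
import pyxsim
import soxs

# Wrap the temperature/density cubes in a yt data set (kpc-sized bounding box).
bbox = np.array([[-2300.0, 2300.0]] * 3)
data = {
    ("gas", "temperature"): (temp * 1.16e7, "K"),            # keV -> K
    ("gas", "density"): (dens * 1.2 * 1.67e-24, "g/cm**3"),  # n_e -> rho (approx.)
}
ds = yt.load_uniform_grid(data, temp.shape, length_unit="kpc", bbox=bbox)
sphere = ds.sphere("c", (2300.0, "kpc"))

# Step 4: a 3D photon list from an apec thermal model.
model = pyxsim.CIESourceModel("apec", emin=0.1, emax=10.0, nbins=1000, Zmet=0.3)
pyxsim.make_photons("a2029_photons", sphere, redshift=0.0767,
                    area=(3000.0, "cm**2"), exp_time=(100.0, "ks"),
                    source_model=model)

# Step 5: project along z into a 2D event list, with tbabs absorption.
pyxsim.project_photons("a2029_photons", "a2029_events", "z",
                       sky_center=(227.73, 5.74),  # approximate A2029 position
                       absorb_model="tbabs", nH=0.033)

# Step 6 onward: fold through the instrument; SOXS adds CXB point sources
# and the instrumental/foreground backgrounds itself.
events = pyxsim.EventList("a2029_events.h5")
events.write_to_simput("a2029", overwrite=True)
soxs.instrument_simulator("a2029_simput.fits", "a2029_evt.fits", (100.0, "ks"),
                          "star-x", [227.73, 5.74], overwrite=True)
```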
The projected temperature, deprojected density, and calculated entropy profiles derived from a recreation of Abell 2029 are presented in Fig. 1. The data produced in the analysis of the simulated images for both _Suzaku_ and _STAR-X_ follow the temperature and emissivity profiles that were used to construct the cluster, as well as observational data. This establishes that the simulation process and subsequent analysis process are able to reproduce observational data. With a reliable simulation and analysis process established, the clusters' profiles can now be adjusted to correct for entropy, and cluster recreations can be injected with gas clumps to observe their impact and assess their detectability. The clump detectability analysis is described in detail in Section 3 and the individual cluster analysis can be found in Section 4. An example of an entropy-corrected, _STAR-X_ recreation of Abell 2029 with clumps injected, wavdetect regions overlaid, a point-source-masked and background-corrected image, and a spectrum extracted from an annulus of Abell 2029 is shown in Fig. 2. The corresponding profiles can be found in Fig. 4. ## 3 Clump properties Since the nature of clumps, if they even exist, is unknown, the range of properties they can have is fairly broad, with the main constraint being that they bias entropy profiles in cluster outskirts. Presuming a size range of individual clumps explored in simulations, we first determine how faint such clumps can be and still be separated from truly diffuse emission. The number density of clumps in the cluster volume is then set by the total amount of emission needed to produce drops in entropy comparable to those seen in actual observations. Individual clumps that are brighter (and fewer in total number) will be trivially detected by _STAR-X_, while fainter clumps would need to be more numerous to bias the entropy by the amount observed. ### Properties of detectable clumps Two uniform central clump temperatures were assessed: 3.0 and 0.7 keV. A central temperature of 3.0 keV was used in order to limit the number of variables involved when finding number density values for clumps required to bias entropy profiles. A clump temperature of 3.0 keV has a high differential distribution among simulated gas clumps in Vazza et al. (2012) and does not greatly impact the projected temperatures of the cluster, as the outskirts are around 3.0 keV. This results in an entropy profile greatly impacted by the density of the clumps. A central temperature of 0.7 keV was also assessed to examine cooler clumping scenarios, where the temperature profile of the cluster is also affected. Before injecting clumps into cluster recreations, a range of clump sizes in kpc was selected from Vazza et al. (2012), chosen to match clump sizes found in cosmological simulations. Each clump size is evaluated separately, where individual cluster simulations are populated with clumps of identical size. For each radial size \(R_{\rm cl}\), a lower limit for detectable density was identified based on _STAR-X_'s instrumental background and the Galactic foreground, at the redshift of each recreated cluster. The purpose of identifying these detectability limits is to test the limit of _STAR-X_'s resolving power. These detection limits allowed for an investigation into clumping properties that recreate observed drops in entropy, assuming that clumps are as faint as possible while still being at least 95 per cent detectable by _STAR-X_.
The ICM is not included in this detectability analysis in order to evaluate the clumps against a constant background. When they are injected into their respective clusters, the clumps can be identified and subtracted out to fully recover the unbiased entropy profile, which indicates that the clumps are fully detectable and lie along this detectability threshold. The process for identifying detectability thresholds based on density and radial size is fairly straightforward. Gas clumps are randomly injected into an empty, spherical volume with a radius equal to the virial radius of a chosen cluster. The size of the gas clumps for each trial is assumed to be the same, with sizes spanning the range \(5\,\mathrm{kpc}\leq R_{\mathrm{cl}}\leq 50\,\mathrm{kpc}\). Their corresponding angular size is determined by the redshift of each cluster. Using yt, pyXSIM, and SOXS, photons corresponding to each clump in the volume are generated. The photons are then projected, and a mock observation of the clumps as seen by _STAR-X_ is produced. Both an observation that includes _STAR-X_'s instrumental background and the Galactic foreground and one that is free of these backgrounds are produced. The observations are then analysed using wavdetect, where the approximate radius of the clump in pixels, calculated by converting the physical size of the clump to pixel coordinates given the redshift of the cluster, is provided to wavdetect. Clump-masked images are then produced and compared to determine the percentage of clump photons that were excluded in the observation with backgrounds relative to the observations without backgrounds included. The central gas density of the clumps is lowered incrementally to identify the threshold at which 95 per cent of the clump photons are detected. The density that satisfies this threshold is established as the baseline of detectability for the 'faintest' clumps. The same procedure is carried out to identify clumps where only 5 per cent of clump photons are detected in order to establish the properties clumps need to have to avoid detection by _STAR-X_. It is in this fashion that a lower limit for detectable clump density can be established over a range of constant, physical clump sizes for different clusters. The results from this initial density threshold analysis for Abell 2029, Abell 1246, and the Perseus Cluster can be seen in Table 3 and Fig. 3. With a baseline for faintest detectable clump densities over a range of clump sizes established, the number density of clumps that recreate observed drops in entropy can now be investigated. ### Recreating observed drops in entropy with clumps With detection limits identified for a range of clump sizes, clumps can be injected into 3D reconstructions of observed clusters. These simulations are then used to determine the number density of clumps at the detectability threshold needed to replicate the observed drops in entropy for each cluster. The clumps that are injected into the reconstructions use the central density detection threshold values for each clump size as reported in Table 3 for central clump temperatures of 3.0 and 0.7 keV. All clumps injected into each reconstruction are identical in size, temperature, and central density. As described in Section 2.3, the observed density profile for the chosen cluster is adjusted in the outskirts to yield the expected entropy profile (\(\propto r^{1.1}\)) without clumping.
Clumps are then injected into the 3D recreation and a mock observation is produced and analysed. For each clump size \(R_{\mathrm{cl}}\), an increasing number of clumps is randomly injected into the volume until the extracted density and entropy profiles (before clump detection and removal) reflect those of real observations. In this way, the number of clumps needed to reproduce observed entropy drops in these clusters is determined. Figure 1: Top: Projected temperatures for Abell 2029 recreations compared to deprojected temperature values from _Suzaku_ observations (Walker et al., 2012a). The dashed black line signifies the projected temperature profile. Middle: Deprojected hydrogen number density profiles. Bottom: Entropy profiles; the dashed grey line signifies the expected entropy (\(r^{1.1}\)) relation. _Suzaku_ observational data are shown in black, a simulated recreation of a _Suzaku_ observation is shown in blue, and a simulated observation from _STAR-X_ is seen in red. The solid purple lines signify best-fitting models used to reconstruct the cluster. ## 4 Individual cluster simulations ### Abell 2029 As mentioned in Section 2.4, the emissivity and temperature profiles used to recreate Abell 2029 were taken from Vikhlinin et al. (2006). See Tables 1 and 2 for the chosen density and temperature parameters initially used to reconstruct the cluster. Two central clump temperatures were assessed: 3.0 and 0.7 keV. For the following recreations, the temperature and density profiles were modified to correct for entropy. For gas clumps with a central temperature of 3.0 keV, only the density profile was lowered in the outskirts to yield the expected entropy profile. For gas clumps with a central temperature of 0.7 keV, both the temperature and density profiles were modified. For more details on the entropy-correction technique, see Section 2.3. The entropy-corrected profile parameters for both clump temperatures can be found in Tables 1 and 2. For each central clump temperature (3.0 and 0.7 keV), clumps with each identified radial size and lower density limit established in Section 3.1 were injected into the cluster, and a mock 100 ks _STAR-X_ observation of the cluster was produced and analysed to extract a temperature and density profile following the methods outlined in Section 2 and compared to observations. For gas clumps with a central temperature of 3.0 keV, the number of clumps was increased until the density profile matched observations and the observed drop in entropy was recreated. Thus, a clump number density that recreated observations was identified for each clump size and central gas density; these are listed in Table 3. The clumps were then identified and masked out to produce a clump-free image. See Fig. 4 for the projected temperature, deprojected density, and 3D entropy profiles constructed from the resulting surface brightness profiles and spectral fits. The results for the 3.0 keV clumps were then used in the entropy-correction process for 0.7 keV clumps and to determine number density values, as outlined in Section 2.3. Figure 2: Top left: Simulated \(1^{\circ}\times 1^{\circ}\) observation of Abell 2029 with clumps injected, identified point sources circled in red and identified clumps circled in white. Top right: Zoomed-in region indicated by the red box in the image on left. Bottom left: Smoothed, background-subtracted version of the image above with point sources and clumps masked out. Both the top and bottom left cluster images are scaled logarithmically and have the same FOV.
Bottom right: Fit of the spectrum extracted from Annulus 6, shown in white in the bottom left panel. The background-subtracted data are shown in black with the red curve representing the best-fitting model. The blue points indicate the background level. The lower panel shows the ratio of the data to the model, indicating the quality of the fit. The entropy profile was calculated using the 3D temperature model used to construct the cluster, instead of the extracted, projected temperatures. Iterative fitting of the projected surface brightness and temperature profiles, as done for actual observations, is more involved and not crucial for our aims as long as the measured projected temperatures match the projected temperatures produced by the 3D model, which is the case. ### Abell 1246 The same recreation and analysis process in Section 2 was carried out for Abell 1246, also for a 100 ks mock _STAR-X_ observation. Temperature and emissivity profiles were obtained by manually fitting equations (2) and (3) to observational data obtained from Sato et al. (2014). See Tables 1 and 2 for the density and temperature parameters used to initially recreate the cluster, which were verified to match observational data. The temperature and density profiles were then modified in the outskirts to yield the expected entropy profile; the resulting parameters can be found in Tables 1 and 2. The entropy-correction process was carried out for two central clump temperatures: 3.0 and 0.7 keV. For a detailed description of the correction process, see Section 2.3. Clumps with a central temperature of 3.0 keV were injected into their respective, corrected recreation to obtain number density values for a range of clump sizes and their respective central densities. The results for the 3.0 keV central clump temperature analysis were then used in the entropy-correction process to assess 0.7 keV clumps and to determine their matching number density values, as outlined in Section 2.3. The projected temperatures, deprojected densities, and derived entropy profiles can be found in Fig. 5. The identified number density values for the clumps can be found in Table 3. The methods for obtaining number density values for clumps and deriving the entropy profile are identical to those in the previous subsection for Abell 2029. ### Perseus cluster With its smaller redshift, the Perseus cluster would require multiple pointings. As an approximation of an observation from _STAR-X_, the FOV in the simulation was increased to cover the entirety of the Perseus Cluster. Thus, a true mosaic was not produced as would be the case in a real observation, although the outcome is equivalent since off-axis variations in PSF and effective area are not included in the simulations. The larger FOV mock observation was equivalent to \(\sim\)9 pointings, each with a _STAR-X_ exposure time of 100 ks. The Perseus cluster was recreated using observed profiles from Simionescu et al. (2011) and Urban et al. (2013). See Tables 1 and 2 for the density and temperature parameters used to initially recreate the cluster, which were verified to match observational data. The temperature and density profiles were adjusted in the outskirts to yield the expected entropy profile. This correction process was used to assess two gas clump temperatures: 3.0 and 0.7 keV. See Section 2.3 for details. The corrected profile parameters can be found in Tables 1 and 2.
Gas clumps with properties identified in Table 3 for a central temperature of 3.0 keV were then injected into the cluster recreation to obtain number density values that recreate the observed drop in entropy for the Perseus Cluster. The results from the 3.0 keV analysis were then used in the entropy-correction process intended for gas clumps with a central temperature of 0.7 keV and to determine number density values, as outlined in Section 2.3. Projected temperatures, deprojected densities, and derived entropy profiles are shown in Fig. 6. The identified number density values can be found in Table 3. The methods for obtaining number density values for clumps and deriving the entropy profile are identical to the methods used for Abell 2029. ## 5 Discussion The main assumption in this work has been that clumping occurs in the outskirts of galaxy clusters, which then causes observed entropy profiles to unexpectedly drop off in the outskirts. This assumption is supported by simulations (Walker et al., 2019) and the idea that a galaxy cluster would not stay in a state with a sustained lower temperature and higher density in the outskirts if it is in hydrostatic equilibrium (Biffi et al., 2016). Thus, a phenomenon like gas clumping is potentially necessary to explain the unexpected drop in entropy profiles that is consistently observed in the outskirts of galaxy clusters. See Section 1 for a more in-depth description and example of clusters with observed drops in entropy. Figure 3: Detectability ranges of gas clumps with central temperatures of 0.7 keV (points) and 3.0 keV (dashed lines) based on _STAR-X_'s instrumental background and the Galactic foreground. The green points and the dashed lines represent central clump densities that are approximately 95 per cent detectable, while the red points and lines represent central clump densities that are approximately 5 per cent detectable. The central density of the clumps is scaled by the density at \(r_{500}\) for each cluster. The green regions highlight the parameter space where clumps with a central temperature of 0.7 keV are extremely likely to be detected by _STAR-X_, greater than 95 per cent detectable. The red regions highlight the parameter space where 0.7 keV clumps are extremely unlikely to be detected, less than 5 per cent detectable. The goal of this work has been to assess how _STAR-X_ would behave if gas clumping occurs in the outskirts of galaxy clusters. It is proposed that previous missions either lacked the sensitivity or spatial resolution to directly detect the clumps, which resulted in a bias in the derived entropy profiles. This work attempted to assess the properties of gas clumps that would be necessary to recreate observed drops in entropy. The sizes of the clumps used across all of the simulations (\(R_{\rm clump}=5\)-\(50\) kpc) were chosen to match clump sizes found in cosmological simulations, taken from Vazza et al. (2012), along with the central clump temperature of 3.0 keV, which had a high differential distribution in simulations. A central clump temperature of 0.7 keV was also assessed. First, clumps that were bright enough to be detectable by _STAR-X_ were identified for a range of clump sizes by establishing a detectability threshold based on clump density and size at different cluster redshifts. Figure 4: Top: Projected temperatures for an Abell 2029 recreation with injected gas clumps with a central temperature of 3.0 keV (left) and 0.7 keV (right). Results of the injected clumps can be seen in red, while clump-corrected results can be seen in blue. The dashed lines signify projected temperature profiles, while the solid lines represent best-fitting 3D models.
Right panel includes an entropy-corrected temperature profile (blue), which was used to construct the cluster. Middle: Deprojected hydrogen number density profiles for the clumpy A2029 recreation (red) and the clump-corrected results (blue). The blue line represents the density profile necessary to yield the expected entropy and was used to construct the cluster. In the right panel, the entropy-corrected density (blue) is tied to the entropy-corrected temperature profile above (blue) to yield the expected entropy. Bottom: Derived entropy profiles for the clumpy A2029 recreation and accompanying clump-correction (blue). The dashed grey line signifies the expected entropy (\(r^{1.1}\)) relation. The model fit to observations (pink) is identical in left and right panels for each quantity measured. This was done to explore a range of clump properties that _STAR-X_ would be able to detect, which can be seen in Fig. 3. The central density \(\rho_{0}\) of the clumps, relative to the density of the ambient ICM at \(r_{500}\), is compared to the size of the clumps \(R_{\rm clump}\) in terms of _STAR-X_'s ability to successfully detect and mask out the clumps. In the green region of Fig. 3, clumps with these properties will be easily identifiable with _STAR-X_, with \(>\)95 per cent of clump emission being masked out; the points at the edge of this region indicate the clump properties used in our simulations. The red region indicates the range of properties that clumps would need to have in order to continue to avoid detection. It is worth noting that in actual observations, the sizes of the clumps would be unknown. In the detectability analysis, the known clump size is used to calculate a projected scale size, which is fed to wavdetect. It is in this fashion that only one clump size is searched for in each observation. Figure 5: Top: Projected temperatures for an Abell 1246 recreation with injected gas clumps with a central temperature of 3.0 keV (left) and 0.7 keV (right). Results of the injected clumps can be seen in red, while clump-corrected results can be seen in blue. The dashed lines signify projected temperature profiles, while the solid lines represent best-fitting 3D models. Right panel includes an entropy-corrected temperature profile (blue), which was used to construct the cluster. Middle: Deprojected hydrogen number density profiles for the clumpy A1246 recreation (red) and the clump-corrected results (blue). The blue line represents the density profile necessary to yield the expected entropy and was used to construct the cluster. In the right panel, the entropy-corrected density (blue) is tied to the entropy-corrected temperature profile above (blue) to yield the expected entropy. Bottom: Derived entropy profiles for the clumpy A1246 recreation and accompanying clump-correction (blue). The dashed grey line signifies the normalized, expected entropy (\(r^{1.1}\)) relation. The model fit to observations (pink) is identical in left and right panels for each quantity measured. The grey points in left panels are observational data used to model the cluster recreation (Sato et al., 2014). In the detectability analysis, we found that a small portion of a larger clump could be falsely identified when searching at a smaller scale, leaving the rest of the clump undetected.
It is for this reason that the detectability analysis in Section 3.1 uses a single scale size and the point source detection and clump detection are done separately in Section 4, to allow wavdetect to search for only one scale size at a time. It may be feasible to search an actual observation for different clump sizes individually to account for this discrepancy, rather than feeding wavdetect a range of scale sizes all at once. With an established range of detectable clump properties, observed drops in entropy were recreated by injecting clumps into entropy-corrected versions of observed clusters. Once sufficient drops were observed through mock observations, _STAR-X_'s ability to resolve gas clumps was assessed. For clumps with a central temperature of 0.7 keV, a slightly higher central density was necessary to produce clumps that were as detectable as clumps with a central temperature of 3.0 keV, as determined in Section 3 and seen in Fig. 3. Figure 6: Top: Projected temperatures for a Perseus Cluster recreation with injected gas clumps with a central temperature of 3.0 keV (left) and 0.7 keV (right). Results of the injected clumps can be seen in red, while clump-corrected results can be seen in blue. The dashed lines signify projected temperature profiles, while the solid lines represent best-fitting 3D models. Right panel includes an entropy-corrected temperature profile (blue), which was used to construct the cluster. Middle: Deprojected hydrogen number density profiles for the clumpy Perseus recreation (red) and the clump-corrected results (blue). The blue line represents the density profile necessary to yield the expected entropy and was used to construct the cluster. In the right panel, the entropy-corrected density (blue) is tied to the entropy-corrected temperature profile above (blue) to yield the expected entropy. Bottom: Derived entropy profiles for the clumpy Perseus recreation and accompanying clump-correction (blue). The dashed grey line signifies the normalized, expected entropy (\(r^{1.1}\)) relation. The model fit to observations (pink) is identical in left and right panels for each quantity measured. The grey points in left panels are observational data used to model the cluster recreation (Urban et al., 2013). A higher central density implied that a lower number density of clumps is necessary to bias entropy profiles, as seen in Table 3. The entropy correction for clumps with a central temperature of 0.7 keV yielded cluster temperature profiles that were hotter in the outskirts with corresponding density profiles that were less dense in the outskirts. Combining these profiles to calculate entropy (equation 1) resulted in an approximation of the expected entropy profile. The resulting density profiles were slightly denser than the entropy-corrected profiles intended for clumps with a central temperature of 3.0 keV. This makes sense because for the 3.0 keV clumps, the cluster temperature profiles were not modified. Therefore, for the lower clump temperature of 0.7 keV, which included a warmer cluster temperature profile, the density profile had to be slightly higher to achieve the same entropy profile as with the 3.0 keV clumps. Three relaxed clusters with observed drops in entropy in their outskirts were chosen for reconstruction: Abell 2029, Abell 1246, and the Perseus Cluster.
Abell 2029 and Abell 1246 were chosen because they fit entirely within _STAR-X_'s wide FOV and could each be observed in a single pointing. The Perseus cluster is an alternative case. Due to its low redshift, multiple pointings would be required for a complete observation. Because it is in closer proximity, fainter clumps could theoretically be detectable in the Perseus cluster. Abell 2029 was observed by the _Suzaku_ observatory to have a significant drop in entropy in the outskirts (Walker et al. 2012a). With an ample number of clumps randomly distributed within an approximation of an entropy-corrected Abell 2029, a similar drop in entropy can be recreated. This is seen in Fig. 4. Clumping scenarios that are detectable by _STAR-X_ and recreate an observed drop in entropy can be found in Table 3. The number density of clumps within the cluster is identified through the reconstructions and analysis. It is apparent that for Abell 2029, if gas clumping is a viable explanation for the apparent drop in entropy and clumps are within the bounds of detectability established in Section 3.1, _STAR-X_ would be sufficiently sensitive for the clumps to be identified and subtracted out in order to recover the expected entropy profile (the \(r^{1.1}\) relation identified in Voit et al. 2005). Abell 1246 was chosen as a cluster to reconstruct as it is another plausible candidate for _STAR-X_ that fully fits within the FOV and has had an observed drop in entropy in the outskirts (Sato et al. 2014). It has a slightly lower temperature than Abell 2029. The results for Abell 1246 follow those of Abell 2029. Once enough clumps are injected into the cluster, a drop in entropy equivalent to that seen in observations can be recreated. With sufficiently dense clumps, _STAR-X_ is able to identify and mask out clumps to recover the expected entropy profile. See Fig. 5 for the relevant temperature, density, and entropy profiles and Table 3 for the established clump properties. The Perseus cluster was selected to provide an example of a cluster with a lower redshift that would require multiple observations to fully encompass it. It was also chosen to examine scenarios in which fainter clumps could be detected due to the lower redshift of the cluster. Clumps that are 95 per cent detectable by _STAR-X_ for the Perseus Cluster therefore had much lower density values for a range of clump sizes, as demonstrated in Table 3. For clump sizes that are less than 20 kpc, the number density of detectable clumps that recreate an observed drop in entropy is so great that during the correction process, many of the cluster's counts in the outskirts are removed. It was therefore difficult to fully recover the expected entropy profile for larger number densities as the number of counts per annulus was so low. For clumps larger than 20 kpc, with a density on the detectability threshold for _STAR-X_ at the Perseus Cluster's redshift, clumps could be masked out without removing so much solid angle that the emission from the diffuse ICM could not be constrained. Every clump size was able to recreate the observed drop in this cluster. However, the sheer number of clumps required at a lower density and radial size yielded potentially unrealistic physical situations. Each of the three clusters chosen for recreation produced similar results.
With established brightness thresholds for gas clumps at each cluster's redshift, a number density of clumps that recreated an observed drop in entropy could be identified. In each case, _STAR-X_ was able to identify clumps sufficiently well to recover the expected entropy profile. For each cluster, there was an inner radius within which no clump photons were detected, as the cluster emission is much brighter in the centre. In cluster outskirts where the emission was much fainter, a radius was also present within each cluster beyond which clumps were 90(\(\pm\)5) per cent detectable. Beyond this radius clumps were considered to be fully detectable. For clusters corrected for gas clumps with a central temperature of 0.7 keV, clump photons were 0 per cent detectable within 7.2, 2.7, and 10 arcmin for Abell 2029, Abell 1246, and the Perseus Cluster, respectively, and 90(\(\pm\)5) per cent detectable beyond 16.7, 6.5, and 63 arcmin for Abell 2029, Abell 1246, and the Perseus Cluster, respectively. For clusters corrected for gas clumps with a central temperature of 3.0 keV, clump photons were 0 per cent detectable within 5.2, 2.5, and 9.1 arcmin for Abell 2029, Abell 1246, and the Perseus Cluster, respectively, and 90(\(\pm\)5) per cent detectable beyond 14.7, 6.4, and 59 arcmin for Abell 2029, Abell 1246, and the Perseus Cluster, respectively. For clumps that were at the lower end of the detectability range (around 5 per cent detectable), clump subtraction and entropy correction have little effect. Since they cannot be masked out, the clumps still impact the entropy profile heavily. At 5 per cent detectability, the number of clumps required to recreate an observed drop in entropy is roughly four times that of the 95 per cent detectable clumps. An overabundance of clumps is not necessarily realistic. At a certain point, if the ICM is saturated with pockets of denser and cooler structure, the 'clumps' become the ICM. In any case, the large number of clumps would create small-scale surface brightness fluctuations that could be detected statistically. Zhuravleva et al. (2015) relate gas clumps to large-scale density fluctuations in the ICM, which could imply that larger clumps are more probable. It is also possible that gas clumps do not exist in the outskirts of clusters and the drop in entropy that is observed is caused by something else entirely, such as large-scale structures projected along the line of sight, for example more distant background clusters or foreground groups. These objects can be thought of as large clumps, and given the trend in Fig. 3, they should be detectable even with apparent central densities comparable to the ICM outskirts. Alternatively, Walker et al. (2012b) propose that a drop in entropy is caused by a reduction in accretion shock strength, in agreement with Cavaliere et al. (2011) and Lapi et al. (2010). A lack of observed clumps in real _STAR-X_ data would be in support of this scenario. Since _STAR-X_'s FOV and other capabilities are similar to those of _Athena_'s Wide Field Imager (Barcons 2015), these results can generally be applied to _Athena_ observations of nearby clusters as well. ## 6 Conclusion The primary goal of this research was to determine whether or not _STAR-X_, an observatory proposed to the MIDEX call by NASA in 2021, would be sensitive enough to detect and mask out gas clumps in the outskirts of galaxy clusters in order to recover entropy profiles of the truly diffuse ICM.
We first determined what properties gas clumps needed to have in order to be detectable by _STAR-X_. A detectability threshold based on their central gas density was identified for a range of clump sizes at the specific cluster redshifts considered here. This established the properties of the faintest gas clumps that would be detectable by _STAR-X_ in these clusters. Then the goal was to identify how many such detectable clumps were necessary to recreate a drop in entropy in the outskirts of our reconstructed galaxy clusters equivalent to that observed in the real clusters. Mock observations included the Galactic foreground and instrumental background of _STAR-X_, as well as randomly generated point sources following the CXB. These mock observations were then analysed using wavdetect to subtract out point sources and gas clumps, and two radial entropy profiles, one before and one after clump removal, were extracted to assess the impact of the gas clumps and the success of clump identification. Abell 2029, Abell 1246, and the Perseus Cluster were chosen for reconstruction and, in each case, clump properties that recreated observed drops in entropy could be identified. For the majority of identified clump properties, _STAR-X_ was able to identify and mask out gas clumps in order to recover the expected entropy profiles. A parameter space for clumps that are both detectable by _STAR-X_ and which can recreate observed drops in entropy was subsequently identified through this process. If gas clumps are as prominent as they have been found to be in simulations (Vazza et al., 2012) and in this work, _STAR-X_ would allow for a much deeper understanding of cluster outskirts. ## Acknowledgements We thank the anonymous referees for their constructive feedback that helped to improve the paper. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2302.03242
Combating Online Misinformation Videos: Characterization, Detection, and Future Directions
With information consumption via online video streaming becoming increasingly popular, misinformation video poses a new threat to the health of the online information ecosystem. Though previous studies have made much progress in detecting misinformation in text and image formats, video-based misinformation brings new and unique challenges to automatic detection systems: 1) high information heterogeneity brought by various modalities, 2) blurred distinction between misleading video manipulation and nonmalicious artistic video editing, and 3) new patterns of misinformation propagation due to the dominant role of recommendation systems on online video platforms. To facilitate research on this challenging task, we conduct this survey to present advances in misinformation video detection. We first analyze and characterize the misinformation video from three levels including signals, semantics, and intents. Based on the characterization, we systematically review existing works for detection from features of various modalities to techniques for clue integration. We also introduce existing resources including representative datasets and useful tools. Besides summarizing existing studies, we discuss related areas and outline open issues and future directions to encourage and guide more research on misinformation video detection. The corresponding repository is at https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection.
Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, Jintao Li
2023-02-07T04:03:55Z
http://arxiv.org/abs/2302.03242v3
# Online Misinformation Video Detection: A Survey ###### Abstract With information consumption via online video streaming becoming increasingly popular, misinformation video poses a new threat to the health of the online information ecosystem. Though previous studies have made much progress in detecting misinformation in text and image formats, video-based misinformation brings new and unique challenges to automatic detection systems: 1) high information heterogeneity brought by various modalities, 2) blurred distinction between misleading video manipulation and ubiquitous artistic video editing, and 3) new patterns of misinformation propagation due to the dominant role of recommendation systems on online video platforms. To facilitate research on this challenging task, we conduct this survey to present advances in misinformation video detection research. We first analyze and characterize the misinformation video from three levels including signals, semantics, and intents. Based on the characterization, we systematically review existing works for detection from features of various modalities to techniques for clue integration. We also introduce existing resources including representative datasets and widely used tools. Besides summarizing existing studies, we discuss related areas and outline open issues and future directions to encourage and guide more research on misinformation video detection. Our corresponding public repository is available at [https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection](https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection). ## 1 Introduction With the prevalence of online video platforms, information consumption via video streaming is becoming increasingly prominent. Popular video distribution platforms like YouTube1 and TikTok2 have attracted billions of monthly active users [21]. Studies on news consumption show that about a third of U.S. adult users of YouTube and TikTok regularly get news from these platforms [16]. Footnote 1: [https://www.youtube.com/](https://www.youtube.com/) Footnote 2: [https://www.tiktok.com/](https://www.tiktok.com/) Unfortunately, the massive growth of video news consumption also boosts the rapid spread of misinformation videos, posing an increasingly serious challenge to the online information ecosystem. For instance, 124 TikTok misinformation videos about COVID-19 vaccines gathered over 20 million views and 300 thousand shares [14]. Fig. 1 is an example about the existence of microchips in COVID-19 vaccines. Compared with previously studied misinformation which is mostly in text and image format, video-based misinformation is more likely to mislead the audience and go viral. Research in the political domain shows that individuals are more likely to believe an event's occurrence when it is presented in video form [17]. Another experiment indicates that the video format makes news pieces perceived as more credible and more likely to be shared [21]. Such an effect makes misinformation videos dangerous and it may lead to further negative impacts for various stakeholders. For individuals, misinformation videos are more likely to mislead audiences to generate false memory and make misinformed decisions. For online platforms, the wide spread of misinformation videos may lead to reputation crises, users' inactivity, and regulatory checks. 
For watchdog bodies, misinformation videos may be weaponized to foment unrest and even undermine democratic institutions. Figure 1: A misinformation video post on TikTok, along with the attached social context information, indicating that COVID-19 vaccines contain microchips. For privacy concerns, we replace the user avatars and names with placeholders. Therefore, actions are urgently needed to tackle this problem. As a countermeasure, online platforms have made efforts to mitigate the spread of misinformation videos. For instance, TikTok introduced an enhanced in-app reporting feature to help curb the spread of COVID-19 misinformation [11]. However, the current solution relies heavily on human efforts from expert teams or active users, which is labor-intensive and time-consuming. It usually fails to perform real-time detection, and thus cannot react rapidly when a new event emerges or previously debunked misinformation recurs [13]. Moreover, this solution may introduce uncertain individual biases and errors [14]. To tackle the above problems, developing techniques and systems for automatic misinformation video detection becomes a promising option. Compared with text-based or text-image misinformation detection, automatic video-based misinformation detection presents several unique challenges. First, the proliferation of heterogeneous information from diverse modalities brings more uncertainty and even noise to the final prediction. Second, ubiquitous video-editing behaviors blur the distinction between forged and real videos. Third, the recommendation-dominated content distribution of online video platforms reshapes the misinformation propagation from explicit behaviors like forwarding to implicit behaviors like re-uploading. These challenges necessitate new technical solutions for detecting video-based misinformation and also highlight the importance of conducting a careful investigation specifically on this problem. Despite many valuable surveys conducted on misinformation detection, limited attention is given to video-based misinformation. Most of them regard the video as a kind of visual content like the image and discuss general multimodal techniques for misinformation detection [1, 1, 1]. However, the above-mentioned uniqueness of video-based misinformation is not sufficiently considered. Other related surveys focus on a specific type of misinformation video, such as forged videos [1, 13], which provide detailed reviews but lack comprehensiveness for the problem. Considering the potential harms of misinformation videos, conducting a comprehensive survey on the detection problem is urgently needed. To change the status quo and facilitate further exploration of this challenging problem by the research community, we present an overview of misinformation video detection in this survey. Our main contributions are as follows: * We present a comprehensive analysis of characteristics of the misinformation video from three different levels including signals, semantics, and intents; * We give a systematic overview of existing multimodal detection methods for misinformation in the video form with a principled way to group utilized clues and integration mechanisms; * We discuss several open issues in this area and provide concrete future directions for both research and real-world application scenarios. The rest of this survey is organized as follows: Sec. 2 characterizes the misinformation video. Sec.
3 formulates the detection task and introduces features and techniques utilized by existing works. Following that, Sec. 4 presents some representative datasets and widely-used tools. Then, several related areas are discussed in Sec. 5. Finally, we outline the open issues and provide concrete future directions in Sec. 6 and conclude the survey in Sec. 7. ## 2 Misinformation Video Characterization In this section, we characterize the misinformation video by giving the definition and analyzing it from three levels. Following [1], we define misinformation video as: **DEFINITION 1** (Misinformation Video).: _A video post that conveys false, inaccurate, or misleading information._ Note that a video post may include not only the video itself. On text-dominated social platforms like Facebook3 and Twitter4, there could be a text paragraph attached; and on video-dominated platforms mentioned before, a title or a short text description is generally included. Footnote 3: [https://www.facebook.com/](https://www.facebook.com/) Footnote 4: [https://twitter.com/](https://twitter.com/) To help find detection clues, we characterize the misinformation video according to how it is produced. The analysis is presented from three levels, including signal, semantic, and intent. ### Signal Level Misinformation videos often contain manipulated or generated video and audio content in which the forgery procedure often leads to traces in underlying digital signals. Forgery methods that produce such traces can be classified into two groups: Editing and Generation. Editing refers to visual alterations on existing data of video and audio modality. Typical editing actions include screen cropping, frame splicing, wave cutting, tempo changing, etc., which could be done using multimedia editing software [1]. Generation actions, by contrast, are done by neural networks which are trained to directly generate complete vivid videos. The generated videos may contain forged human faces or voices to mislead the audience. ### Semantic Level The falsehood is conveyed through incorrect semantic changes. For a misinformation video, such changes may occur in one specific modality or across multiple ones. In the former case, manipulated elements (e.g., an exaggerated claim in the text description) are reflected by a single modality. In the latter case, which is more common in multimodal situations, misinformation might be conveyed by wrong semantic associations among non-forged contents of different modalities. For instance, a creator may upload a real video of an event that happened before but add a real text description of a newly emerging event. ### Intent Level The creation of misinformation is often motivated by underlying intents, such as political influence, financial gain, and propaganda effects [20]. To achieve the underlying intent, misinformation videos generally pursue wide and fast spread. This leads to unique patterns of expression, propagation, and user feedback, which are different from real ones. For example, [21] find that compared with real news videos, fake news videos have more significant emotional preferences, involve more user engagement, and are more likely to be questioned in comments. ## 3 Misinformation Video Detection The misinformation video detection problem is generally formulated as a binary classification task: Let \(\mathcal{V}\) and \(\mathcal{S}\) denote a Video Post and the attached Social Context, respectively. 
\(\mathcal{V}\) consists of attributes such as the title/description \(t\), video \(v\), and audio \(a\). The Social Context \(\mathcal{S}\) consists of two major components: User Profile \(p\) and User Engagement \(e\). User Profile \(p\) includes a set of features describing the uploader account, such as the location indicated by the IP address, the self-written and verified introduction, and the follower count. User Engagement \(e\) includes comments as well as statistics such as the numbers of views, likes, and stars. Fig. 1 illustrates an example of \(\mathcal{V}\) from TikTok. The task of misinformation video detection is to predict whether the video post \(\mathcal{V}\) contains misinformation given all the accessible features \(\mathcal{E}=\{\mathcal{V},\mathcal{S}\}\), i.e., \(\mathcal{F}:\mathcal{E}\mapsto\{0,1\}\). Following the characterization in Sec. 2, we further introduce a series of relevant works for detecting misinformation videos, from clues at different levels to integration patterns. Fig. 2 gives an overview of this section.

### Signal Level

Since misinformation videos are often created using forgery techniques, detecting forgery traces provides a significant clue for identifying a misinformation video. Related works, often described as multimedia forensics, have received significant attention over the past decades. We introduce some commonly used techniques for the detection of editing traces and generation traces, respectively. Given that there have been surveys on multimedia forensic techniques, we will not go into much detail about the individual studies here.

#### 3.1.1 Editing Traces

As aforementioned, editing refers to alterations of the visual or audio content by multimedia editing software (mostly requiring manual operations). Existing detection methods for editing traces can be mainly categorized into two groups: active detection and passive detection. In active detection methods, digital watermarks or digital signatures are pre-embedded and later extracted to detect traces of secondary editing. For instance, [15] propose a blind and semi-fragile video watermarking scheme for detection. Combined watermarking (frame and audio watermarking) is used for detecting manipulation in both channels. Compared to passive detection, active detection methods provide a less time-consuming response. However, most videos in practice are not pre-embedded with such additional information. Passive detection methods have the advantage of using the characteristics of the digital video itself to detect tampering. For the visual content, the detection consists of inter-frame and intra-frame tampering detection: the former detects changes to the frame sequence, while the latter detects alterations of objects that can be observed within a single frame. For the audio content, statistical features inspired by observed properties are leveraged for forgery detection. Among them, the Electric Network Frequency is widely used for forensic needs, thanks to its properties of random fluctuation around the nominal value and intra-grid consistency [14].

#### 3.1.2 Generation Traces

Videos generated by neural networks (e.g., generative adversarial networks) are also known as "deepfake videos". Among them, deepfake videos containing vivid, generated human faces have been used for impersonating celebrities and have brought negative impacts. The past few years have witnessed significant progress in detecting visual deepfakes.
With reference to how deepfake videos are created, detection methods leverage clues generated at different steps, including camera fingerprints, biological signals, visual artifacts, and temporal consistency [16]. With recent advances in Text-To-Speech and Voice Conversion algorithms, generating fake audio that is indistinguishable to humans has become easier and has attracted increasing attention. The ASVspoof challenges [23] were organized to accelerate progress in deepfake audio detection. Most existing works for fake audio detection adopt handcrafted acoustic features like Mel-frequency cepstral coefficients (MFCC), linear frequency cepstral coefficients (LFCC), and constant-Q cepstral coefficients (CQCC) and apply classifiers such as the Gaussian mixture model and the light convolutional neural network to make predictions. Recent work also attempts to leverage pre-trained wav2vec-style models for speech representation extraction and to build an end-to-end framework [11].

Figure 2: Overview of misinformation video detection techniques, where we group utilized features in three levels and integration mechanisms in a principled way.

[Zhou and Lim, 2021] present a joint audiovisual deepfake detection task that, for the first time, handles the case in which either one (or both) of the visual and auditory modalities has been manipulated. Learned intrinsic synchronization between video and audio is shown to boost the performance of both video-based and audio-based deepfake detection as well as whole-sequence prediction. Considering that video compression is widely applied in the default upload settings of online video platforms, the forensics of compressed deepfake videos becomes an important issue. Compared with uncompressed videos, compressed videos contain compression artifacts, which add noise to the data and make detection difficult. Moreover, the application of quantization and inverse quantization in the compression and decompression process can also result in quantization noise and distortion. To tackle these problems, [Hu _et al._, 2021] propose a two-stream method that analyzes frame-level and temporality-level features of compressed deepfake videos for detection.

### Semantic Level

Semantic-level clues provide rich information related to misinformative content. Thus, many recent works focus on leveraging multimodal semantic clues from not only the video content but also descriptive textual information for detection. In this section, we discuss the features exploited by semantic-based methods from a multimodal perspective. **Textual Feature.** Video content is usually served with descriptive textual information such as the video description and title. Apart from these directly accessible texts, subtitles and transcriptions extracted from the video also present useful information. Early works mostly extract statistical features from these texts for classification. [Papadopoulou _et al._, 2017] first exploit linguistic features of the title, which include basic attributes like text length, as well as designed indicators such as whether a title contains specified characters (e.g., the question/exclamation mark and 1st/2nd/3rd person pronouns), the number of special words (e.g., slang words and sentiment-indicative words), and the readability score. [Palod _et al._, 2019; Li _et al._, 2022] also consider the existence of specific expressions like clickbait phrases and violent words for detection.
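To make the flavor of these handcrafted textual clues concrete, here is a minimal sketch (our illustration, not code from any cited system; the clickbait lexicon is a toy placeholder) that computes a few of the title indicators described above:

```python
import re

# Toy lexicon standing in for curated lists of clickbait/violent words.
CLICKBAIT_LEXICON = {"shocking", "unbelievable", "exposed", "secret"}
PRONOUNS = {"i", "you", "he", "she", "we", "they"}

def title_features(title: str) -> dict:
    """Handcrafted title features of the kind described above: length,
    special characters, personal pronouns, and lexicon hits."""
    tokens = re.findall(r"[a-z']+", title.lower())
    return {
        "length": len(title),
        "has_question_mark": "?" in title,
        "has_exclamation_mark": "!" in title,
        "has_personal_pronoun": any(t in PRONOUNS for t in tokens),
        "num_lexicon_words": sum(t in CLICKBAIT_LEXICON for t in tokens),
    }

print(title_features("SHOCKING: what they don't want you to know!"))
```

Such feature dictionaries are typically vectorized and fed to a standard classifier.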
Corpus-aware features, such as n-grams, TF-IDF, lexical richness, and LIWC lexicon overlap, are leveraged by [Hou _et al._, 2019; Serrano _et al._, 2020]. In addition to handcrafted features, continuous representations generated using deep learning have been increasingly adopted. [Jagtap _et al._, 2021] employ GloVe pre-trained on Wikipedia and Word2Vec pre-trained on Google News and Twitter to generate subtitle embeddings. [Shang _et al._, 2021; Choi and Ko, 2021] train bidirectional recurrent neural networks as text encoders to encode the semantics of textual information including the title, description, and transcription. Recent advances in pre-trained language models (e.g., BERT) also drive the latest multimodal detection models to obtain contextualized representations. **Visual Feature.** The visual content is usually represented at the frame level or clip level. The former presents static visual features while the latter additionally presents temporal features. [Shang _et al._, 2021] extract frames through uniform sampling and input the resized sampled frames into the object detection network Fast R-CNN to obtain visual features of object regions. The corresponding caption representation is used to guide the integration of object regions and help generate the frame-level visual representation. [Choi and Ko, 2021] extract frames according to their similarity to the thumbnail. A pre-trained VGG-19 is utilized to extract visual features from the video frames. [McCrae _et al._, 2022] break the video into 32-frame clips, with each clip beginning at a keyframe. The keyframes are detected through the FFmpeg scene detection filter. For each clip, features related to human faces, objects, and activities are extracted through pre-trained FaceNet, ResNet50, and S3D networks, respectively. [Wang _et al._, 2022a] break the video directly into clips of fixed duration and likewise use S3D to extract visual features. [Qi _et al._, 2023] represent visual content at both the frame level and the clip level. A pre-trained VGG19 model and a pre-trained C3D model are used to extract frame features and clip features, respectively. **Acoustic Feature.** As a modality unique to videos compared with text-image misinformation, the audio modality, including speech, environmental sound, and background music [Qi _et al._, 2023], plays an essential role in expressing information. For detection, in addition to the transcription mentioned above, current works search for useful clues in the acoustic characteristics. [Hou _et al._, 2019] first import emotional acoustic features into the detection model, exploiting predefined feature sets widely used for emotion recognition on raw speech. [Shang _et al._, 2021] design an acoustic-aware speech encoder by introducing MFCC features. [Qi _et al._, 2023] use the pre-trained VGGish model to extract audio features for classification. **Crossmodal Correlation.** Mismatches between modalities, such as video-text and video-audio, are often caused by repurposing a video with misleading intent, which leads to telling changes in crossmodal correlation. [McCrae _et al._, 2022] leverage an ensemble method based on textual analysis of the caption, automatic audio transcription, semantic video analysis, object detection, named entity consistency, and face verification for mismatch identification. [Wang _et al._, 2022a] propose two new methods based on contrastive learning and masked language modeling for joint representation learning to identify semantic inconsistencies.
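As a minimal illustration of turning crossmodal correlation into a detection signal, the sketch below (ours; random tensors stand in for frame and title embeddings that a real system would extract with pretrained encoders and project into a shared space) scores how poorly a description matches the visual content:

```python
import torch
import torch.nn.functional as F

def crossmodal_mismatch_score(frame_emb: torch.Tensor,  # (num_frames, d)
                              text_emb: torch.Tensor    # (d,)
                              ) -> float:
    # Cosine similarity between the text and every frame; if even the
    # best-matching frame is dissimilar, the description likely does not
    # fit the footage (a hint of repurposing).
    sims = F.cosine_similarity(frame_emb, text_emb.unsqueeze(0), dim=1)
    return 1.0 - sims.max().item()

frames = F.normalize(torch.randn(16, 512), dim=1)  # placeholder embeddings
title = F.normalize(torch.randn(512), dim=0)
print(crossmodal_mismatch_score(frames, title))
```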
In [Choi and Ko, 2021], topic distribution differences between modalities are utilized to robustify the detection model.

### Intent Level

Social contexts refer to user social engagements and profiles generated as information spreads on platforms with social media characteristics. As mentioned in Sec. 2.3, unique social contexts might reflect the spreading intent of misinformation creators and thus provide useful features [Qi _et al._, 2023]. Current works mostly make use of user comments and statistics on user engagement. The comments are usually exploited by extracting handcrafted features [Papadopoulou _et al._, 2017] or generating general representation vectors through deep models [Palod _et al._, 2019; Choi and Ko, 2021; Qi _et al._, 2023]. Some works go deeper in mining comments. For example, [Serrano _et al._, 2020] learn a comment-conspiracy feature and [21] pay attention to domain knowledge. User engagement statistics such as the numbers of likes, comments, and views are generally concatenated directly with other features before being fed into the classifier [14]. Some work also uses these statistics as importance weights to help generate embeddings. [21] generate video comment embeddings by calculating the weighted sum of the embeddings of each comment, using their numbers of likes as weights. The publisher profile provides auxiliary information about source credibility in post-level detection. [17, 18] leverage a series of features, such as the number of views of the publisher channel, the number of published videos, and the follower-following ratio. [19] also point out that user profiling features like geolocation information and whether the account is verified can be useful for detection, and exploit the textual publisher description in their model.

### Clue Integration

#### 3.4.1 Feature Fusion

Fusion mechanisms are often categorized into early fusion and late fusion, with the former known as feature-level fusion and the latter as decision-level fusion. In many cases, the early fusion of different modalities outperforms late fusion and requires less computation time [1]. Here we discuss several commonly used early-fusion architectures. **Concatenation-Based**: The majority of existing works on multimodal misinformation detection embed each modality into a representation vector and then concatenate them into a multimodal representation. The generated representation can be utilized for classification directly or input into a deep network (e.g., a convolutional neural network) for deeper fusion and classification [15]. Linear combination is another simple but effective way to combine feature vectors of different modalities [21]. The fusion process that combines features from different modalities can be done at the video level or at the frame/clip level [16]. **Attention-Based**: The attention mechanism is a more effective approach for utilizing embeddings of different modalities, as it jointly exploits the multimodal features by focusing on specific parts and allows dynamic fusion for sequential data. [20] use a co-attention module that simultaneously learns the pairwise relation between each pair of a video frame and a spoken word to fuse the visual and speech information. [22] model the joint distribution of video and text using a variant of masked language modeling: a transformer is trained to predict each text token given its text context and the video.
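The following sketch shows the skeleton of such attention-based fusion (our illustration, not the architecture of any cited work; the dimensions and the binary head are arbitrary): text tokens attend over frame features through crossmodal attention before classification.

```python
import torch
import torch.nn as nn

class CrossmodalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)  # real vs. misinformation

    def forward(self, text_tokens, frame_feats):
        # text_tokens: (B, T, dim); frame_feats: (B, F, dim).
        # Each text token queries the frame features.
        fused, _ = self.attn(query=text_tokens, key=frame_feats,
                             value=frame_feats)
        return self.classifier(fused.mean(dim=1))  # pool over tokens

model = CrossmodalFusion()
logits = model(torch.randn(2, 12, 256), torch.randn(2, 16, 256))
print(logits.shape)  # torch.Size([2, 2])
```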
[19, 20] utilize a crossmodal transformer to model the mutual interaction between different modalities. **Multitask-Based**: Another fusion architecture in use is based on multitask learning. Under this architecture, auxiliary networks are applied to better learn individual or multimodal representations, spaces, or parameters and to improve the classification performance [1]. For example, [21] use topic-adversarial classification to guide the model to learn topic-agnostic features for good generalization. [22] use contrastive learning to build the joint representation space of video and text.

#### 3.4.2 Pipeline-Based Integration

Because misinformation videos come in many varieties, it is difficult to tackle the issue in a single way; a multi-pronged approach is needed. However, most related works focus on misinformation video detection either at the signal level or at the semantic level. [1] propose the first two-pronged method for detecting misinformation in videos. Based on the detection principle of measuring the similarity between the video in question and the original video, reverse image search is first used to retrieve the original video. For detection, the method first computes the similarity between frames of the two videos to ensure that the video in question is not a deepfake, then analyzes the semantic similarity of the captions to determine whether the meaning and intent behind the two videos are the same. If a low semantic similarity score is returned, the method applies sentiment analysis to check whether the low score is merely due to a difference in opinion or due to the video in question being out of context. Based on the semantic and sentiment similarity of the non-deepfake videos, the model makes the final judgment of whether the video contains misinformation.

## 4 Resources

Resources are often the limiting factor for conducting research on misinformation video detection. Here we introduce the datasets and tools in this area and highlight their characteristic features and application contexts.

### Datasets

Due to the difficulty of video crawling (mostly based on carefully selected keywords) and human annotation, many existing datasets are small-scale and topic-specific. [1] provide the dataset VAVD, which contains 123 fake and 423 real videos from YouTube. [14] construct a dataset with 250 manually annotated prostate-cancer-related misinformation videos from YouTube. The dataset used in [15] consists of 113 fake and 67 real YouTube videos, with over 150 thousand comments attached. [20] present a dataset of 891 COVID-19-related short videos on TikTok. Here we detail three relatively large and publicly available datasets:

* **FVC5**. The initial version of FVC comprises videos from a variety of event categories (e.g., politics and sports), and consists of 200 fake and 180 real videos. Using the initially collected videos as seeds and searching on three video platforms (YouTube, Facebook, and Twitter), researchers extended FVC to a multi-lingual dataset containing 3,957 fake and 2,458 real videos, with textual news content and user comments attached. Footnote 5: [https://mklab.iti.gr/results/fake-video-corpus/](https://mklab.iti.gr/results/fake-video-corpus/)
* **YouTubeAudit6**. This dataset includes 2,943 YouTube videos published in 2020 that cover five popular misinformative topics [17]. Each sample is labeled as promoting, debunking, or neutral to misinformation.
YouTubeAudit provides rich information, including metadata (video URL, title, description, duration), social context (numbers of views, likes, dislikes, favorites, and comments), and user profiles (gender, age, geolocation, and watch history).
* **FakeSV7**. This dataset consists of 5,538 Chinese short videos (1,827 fake, 1,827 real, and 1,884 debunking videos) crawled from the short video platforms Douyin and Kuaishou. It covers 738 news events that happened between 2019 and 2022. Besides the video content, it also provides rich social contexts including user responses and publisher profiles. Footnote 7: [https://github.com/ICTMCG/FakeSV](https://github.com/ICTMCG/FakeSV)

### Tools

Tools for specific utilities are valuable for verifying suspicious videos because they provide important auxiliary information that can hardly be learned by a model trained only on data for this task. We list three representative publicly available tools for different application contexts:

* **DeepFake Detector**: An AI service that judges whether a given video contains deepfake-manipulated faces, developed within the WeVerify Project8. It takes the URL of a suspicious image or video as input and returns a deepfake probability score. There are also commercial substitutes like the Sensity API9. Footnote 8: [https://weverify.eu/tools/deepfake-detector/](https://weverify.eu/tools/deepfake-detector/) Footnote 9: [https://sensity.ai/deepfakes-detection/](https://sensity.ai/deepfakes-detection/)
* **Reverse Image Search**: Web search services that take an image as the query. By extracting a keyframe from a video and uploading it, the search engine can find web pages that contain similar images. This is useful for detecting misinformation that changes the context of an old, genuine video. Most search engine providers offer this service, including Google, Bing, Baidu, and Yandex.
* **Video Verification Plugin**: A free plugin that runs as a Google Chrome extension to verify videos, provided by the InVID European Project10. It provides a toolbox to obtain contextual information from YouTube or Facebook, extract keyframes for reverse search, show metadata, and perform forensic analysis. Footnote 10: [https://www.invid-project.eu/tools-and-services/](https://www.invid-project.eu/tools-and-services/)

## 5 Related Areas

To clarify both the similarities and differences between these areas and ours, we further discuss three areas related to misinformation video detection.

### Deception Detection

Deception detection aims at identifying the existence of deceptive behaviors, namely whether a person is lying to others, which is crucial for personal and public safety. In earlier research, both verbal and nonverbal cues play important roles in deception detection [1]. Verbal cues mainly refer to the linguistic characteristics of the statement, while nonverbal cues include neurological, visual, and vocal indicators. For video-based detection, [12] combine visual, audio, text, and micro-expression features to predict whether a courtroom trial is of a deceptive nature. Though video misinformation is not always conveyed by perceptible deceptive behaviors, the techniques used for detecting deception might help detect misinformative monologue videos in which a video creator narrates a false news story to deceive audiences.
### Harmful Content Detection

Harmful content generally manifests as doxing, identity attacks, identity misrepresentation, insults, sexual aggression, and threats of violence [1]. Detecting video-based harmful content often relies on capturing indicative features in multiple modalities, such as linguistic features of the audio transcription, video sentiment, and flagged harmful objects [1]. Compared with misinformation detection, which focuses more on credibility, harmful content detection focuses more on the possibility of causing mental harm. However, misinformation videos are often created to catch attention, which corresponds to the eye-catching characteristic of video-based harmful content, indicating that features might be shared across the two tasks.

### Clickbait Detection

Clickbait is a term commonly used to describe eye-catching teaser headlines (and thumbnails) in online media [2]. Clickbait video detection determines whether a video faithfully represents the event it refers to. Content-based methods focus on analyzing the semantic gaps between the initially presented information (i.e., the title and video thumbnail) and that expressed by the whole video, while others exploit creator profiles and audience feedback through comments and likes/dislikes [23]. Though not all misinformation videos contain exaggerated descriptions or eye-catching thumbnails, features for clickbait detection might be useful to model the intent of spreading misinformation in videos.

## 6 Open Issues and Future Directions

Though the works surveyed above have demonstrated significant advances in detecting misinformation videos, many issues still hinder their application to real-world systems. Here we present four open issues in this area and, for each, provide several concrete future directions to advance the landscape of practical detection systems.

### Transferability

Transferability reflects how well a detection system tackles data distribution shift, which is common and inevitable in real-world applications. Despite being a hot research topic, this issue remains largely underexplored in misinformation video detection and will be a crucial barrier for detection methods to be put into practice. Here we provide four transfer-related subproblems in different aspects:

#### 6.1.1 Multi-platform Detection

The differences in content and user groups among platforms shape different social contexts and may provide extra clues. Cross-platform differences like user preferences and susceptibility to fake news have been confirmed helpful in detection [23, 24]. However, the principle of tackling multi-platform distribution gaps remains unclear.

#### 6.1.2 Cross-lingual Detection

Datasets in high-resource languages (e.g., English) dominate existing research, which may lead to technical gaps among countries with different amounts of misinformation data. Cross-lingual detection aims to leverage high-resource language data to train a detection model for misinformation in a low-resource language.

#### 6.1.3 Multi-domain Detection

Misinformation texts in different news domains have different word use and propagation patterns, which leads to data shifts [25, 26]. Therefore, the investigation and mitigation of the domain gap for video misinformation is a promising direction.

#### 6.1.4 Temporal Generalization

Distribution shift over time is unavoidable on online video platforms. Features that are effective on past data might perform poorly in online tests [25].
How to find stably effective features and how to rapidly adapt to current video data require further exploration.

### Explainability

Most existing methods focus on improving accuracy and neglect the importance of providing explanations. Without explanations aligned with human expectations, human users can hardly learn the strengths and weaknesses of a detection system or decide when to trust it. This issue should be tackled in two aspects:

#### 6.2.1 Distinguishing Fine-grained Types of Misinformation

In addition to binary classification, the model should further predict a concrete type for detected misinformation samples (e.g., video misuse). This requires a new taxonomy and fine-grained annotation of datasets.

#### 6.2.2 Verifying the Factuality of Video Claims

Against external knowledge sources like online encyclopedias and reliable news outlets, a model can be trained to infer whether a video claim is entailed by known information. Studies mostly focus on verifying text and images [19, 2], while few consider claims in videos.

### Clue Integration & Reasoning

The diversity of modalities involved in a video post requires the detection model to have a higher clue integration and reasoning ability than models for text- and image-based detection. In most cases, the final judgment of misinformation depends on neither a single modality nor all modalities, and finding effective combinations is non-trivial. For example, for an old accident video that is repurposed as the scene of a new accident with background music added, what is crucial for the judgment is the mismatch between video and text, rather than that between video and audio. However, clue integration in this area is typically accomplished by directly aligning and fusing all the representation vectors obtained from different modalities, which makes it hard for models to learn to reason among modalities. We believe that enabling reasoning among modalities will be important for better clue integration and more flexible detection. Possible directions include:

#### 6.3.1 Inter-modality Relationship Modeling

Following tasks requiring reasoning ability, like visual question answering [23], one can build graphs to guide the interaction among modalities.

#### 6.3.2 Problem Decomposition

By transforming the detection problem into a procedure of solving several subproblems, one can use Chain-of-Thought [24] to prompt large language models (e.g., ChatGPT [1]) to reason.

### Recommendation-Detection Collaboration

The collaboration between recommendation-based video distribution and misinformation video detection is crucial for practical systems, whose ultimate goal is to keep recommending videos that are of interest to users while avoiding misinforming them. To achieve this, detection systems are expected to contain different models and strategies that exploit the rich side information from recommender systems as well as make recommendations more credible. Here, we provide three concrete collaboration scenarios:

#### 6.4.1 User-interest-aware Detection

The viewing history of recommended videos reflects not only users' interests but also how susceptible they are to specific topics (e.g., elections). Therefore, we could prioritize these recommended videos and detect misinformation with awareness of topics (a similar case for text fake news is [27]).

#### 6.4.2 User-feedback-aware Detection

Feedback from the crowd to the platform might be a valuable indicator of suspicious videos.
A recent example is to use users' reports of misinformation as weak supervision in text-based fake news detection [27]. Using more user feedback derived from recommender systems, like expressions of dislike due to factuality issues, will be a promising direction.

#### 6.4.3 Credibility-aware Recommendation

Considering information credibility in recommender systems can mitigate the exposure of misinformation videos and make recommendations more accountable. A possible solution is to include misinformation video detection as an auxiliary task or to use a well-trained detector as a critic that provides feedback.

## 7 Conclusion

In this paper, we surveyed the existing literature on misinformation video detection and provided an extensive review of advanced detection solutions, including clues at the signal, semantic, and intent levels and clue integration techniques. We also summarized available datasets and tools and discussed related areas to facilitate future research. Furthermore, we presented four critical open issues for real-world applications and provided concrete research directions. We also provide a corresponding public repository11 which will be updated to include future advances. We hope this survey can shed light on further research for defending against misinformation videos.

Footnote 11: [https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection](https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection)
2301.11316
Open Problems in Applied Deep Learning
This work formulates the machine learning mechanism as a bi-level optimization problem. The inner level optimization loop entails minimizing a properly chosen loss function evaluated on the training data. This is nothing but the well-studied training process in pursuit of optimal model parameters. The outer level optimization loop is less well-studied and involves maximizing a properly chosen performance metric evaluated on the validation data. This is what we call the "iteration process", pursuing optimal model hyper-parameters. Among many other degrees of freedom, this process entails model engineering (e.g., neural network architecture design) and management, experiment tracking, dataset versioning and augmentation. The iteration process could be automated via Automatic Machine Learning (AutoML) or left to the intuitions of machine learning students, engineers, and researchers. Regardless of the route we take, there is a need to reduce the computational cost of the iteration step and as a direct consequence reduce the carbon footprint of developing artificial intelligence algorithms. Despite the clean and unified mathematical formulation of the iteration step as a bi-level optimization problem, its solutions are case specific and complex. This work will consider such cases while increasing the level of complexity from supervised learning to semi-supervised, self-supervised, unsupervised, few-shot, federated, reinforcement, and physics-informed learning. As a consequence of this exercise, this proposal surfaces a plethora of open problems in the field, many of which can be addressed in parallel.
Maziar Raissi
2023-01-26T18:55:43Z
http://arxiv.org/abs/2301.11316v1
# Open Problems in Applied Deep Learning

###### Abstract

This work formulates the machine learning mechanism as a bi-level optimization problem. The inner level optimization loop entails minimizing a properly chosen loss function evaluated on the training data. This is nothing but the well-studied training process in pursuit of optimal model parameters. The outer level optimization loop is less well-studied and involves maximizing a properly chosen performance metric evaluated on the validation data. This is what we call the "iteration process", pursuing optimal model hyper-parameters. Among many other degrees of freedom, this process entails model engineering (e.g., neural network architecture design) and management, experiment tracking, dataset versioning and augmentation. The iteration process could be automated via Automatic Machine Learning (AutoML) or left to the intuitions of machine learning students, engineers, and researchers. Regardless of the route we take, there is a need to reduce the computational cost of the iteration step and as a direct consequence reduce the carbon footprint of developing artificial intelligence algorithms. Despite the clean and unified mathematical formulation of the iteration step as a bi-level optimization problem, its solutions are case specific and complex. This work will consider such cases while increasing the level of complexity from supervised learning to semi-supervised, self-supervised, unsupervised, few-shot, federated, reinforcement, and physics-informed learning. As a consequence of this exercise, this proposal surfaces a plethora of open problems in the field, many of which can be addressed in parallel.

keywords: supervised learning, semi-supervised learning, self-supervised learning, unsupervised learning, few-shot learning, federated learning, reinforcement learning, and physics-informed learning

## 1 Introduction

The general mechanism for building machine learning solutions is illustrated in Fig. 1 and outlined in the following. 1) Everything starts with data as the "source code" for machine learning. 2) We would then write a model to fit the data. 3) The model is then trained to maximize the likelihood of the training data, or to minimize a distance/divergence between the training data distribution and model predictions in the case of generative adversarial networks. The training process typically entails multiple steps of stochastic gradient descent (e.g., the Adam optimizer [1]). 4) The model is then evaluated using a properly chosen performance metric (e.g., accuracy, mean average precision, inception score, etc.) on the validation data. 5) Next is the iteration step, where the aforementioned steps 1-4 need to be repeated tens of thousands of times to find the most performant solution. This step entails model engineering (e.g., neural network architecture design) and management, experiment tracking, and dataset versioning/augmentation, in addition to seemingly "minor" details such as choosing learning rates and learning rate schedules, batch sizes, weight decay, and batch/weight/layer/group/spectral normalizations, just to name a few. The iteration step is a crucial piece of the machine learning pipeline and is usually the most time- and resource-consuming step while often being overlooked. This is what makes machine learning difficult. 6) Before putting the model into production, we test it one last time on some test data.
7) The final stage is serving the model in production to millions of customers/users while constantly monitoring its performance and re-training it if needed. Step 5 (i.e., the iteration step) is the topic of this work. The jury is still out on whether the iteration step should be automated (i.e., AutoML) or left to the intuitions of machine learning students, engineers, and researchers. Regardless of the route we take, there is a need to reduce the computational cost of the iteration step and, as a direct consequence, reduce the carbon footprint of developing artificial intelligence algorithms [2]. The computational bottleneck of the iteration step is the training of each model to convergence, only to measure its performance on the validation data, which is then used as the feedback signal to guide the iteration process.

Figure 1: The general mechanism for building machine learning solutions.

## 2 Background

In mathematical notation, we are trying to solve bi-level optimization problems [3; 4] of the following form:

\[\begin{split}\min_{\alpha}&\quad\mathcal{M}_{\text{val}}(w^{*}(\alpha),\alpha)\\ \text{s.t.}&\quad w^{*}(\alpha)=\arg\min_{w}\mathcal{L}_{\text{train}}(w,\alpha).\end{split} \tag{1}\]

Here, \(\alpha\) denotes the "hyper-parameters" of the model in an abstract sense, encapsulating concepts as generic as the learning rate (schedule), the depth and width of neural networks, discrete choices between CNNs, RNNs, or Transformers [5] and their variants [6], the presence or absence of different types of normalization layers [7], to pad or not to pad, convolutional kernel sizes, to use dropout or not, etc. Moreover, \(\mathcal{M}_{\text{val}}\) denotes a performance metric such as accuracy, mean average precision [8], Frechet Inception Distance [9], etc. The loss function is denoted by \(\mathcal{L}_{\text{train}}\) while the model parameters are represented by \(w\). Whereas Eq. 1 explains the iteration problem (see Fig. 1) in a clean and unified mathematical formulation, the solution to this problem is complex and case specific.
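To make the bi-level structure of Eq. 1 concrete, consider the following toy sketch (ours; a ridge regression problem where the "hyper-parameter" \(\alpha\) is a scalar penalty, the inner problem \(w^{*}(\alpha)\) has a closed form, and the outer loop is a plain random search over \(\alpha\) minimizing validation error):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=100)
X_tr, y_tr, X_val, y_val = X[:70], y[:70], X[70:], y[70:]

def inner_solve(alpha):
    # w*(alpha) = argmin_w ||X_tr w - y_tr||^2 + alpha ||w||^2
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)

# Outer loop: random search over alpha guided by the validation metric.
best_mse, best_alpha = min(
    (np.mean((X_val @ inner_solve(a) - y_val) ** 2), a)
    for a in 10 ** rng.uniform(-4, 2, size=50)
)
print(f"best val MSE {best_mse:.3f} at alpha = {best_alpha:.4f}")
```

Real iteration loops differ only in scale: the inner solve is a full training run, and \(\alpha\) ranges over architectures, schedules, and augmentations.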
The first solution that comes to mind is to rely on the intuition of students, researchers, and engineers obtained through tens of thousands of collective trial and errors and knowledge sharing over the Internet (e.g., stackoverflow.com), blog posts, and social media platforms. We can also build tools to facilitate such experimentations. A good example is "Weights & Biases", an MLOps platform to help researchers build better models faster with experiment tracking, dataset versioning, and model management. MLOps is an active area of research both in academia and industry. In the cases where \(\mathcal{M}_{\mathrm{val}}\) is a well-defined function, let it be discrete or non-differentiable, one could use grid search (albeit for low dimensional \(\alpha\)), random search [27], Bayesian optimization [28], reinforcement learning [29; 30; 31] or evolutionary algorithms [32; 33] to find \(\alpha^{*}\), even if \(\alpha\) is discrete. The shortcoming of such approaches for solving Eq. 1 is their extensive computational cost (e.g., 32,400-43,200 GPU-Hours, the equivalent of using 450 GPUs for 3-4 days [34; 35]). Such approaches have therefore a high carbon footprint [2]. The bottleneck is solving the inner optimization problem in Eq. 1 (i.e., the training loop) to completion, only to measure the model's performance on the validation data which is then used as the feedback signal to guide the iteration process (i.e., the outer optimization problem in Eq. 1). One solution to this problem is to trade off computation for more memory consumption using parameter sharing ideas [34]. This could result in up to 1000x faster solutions to Eq. 1 (e.g., 16 GPU-Hours) in some cases. The idea is to warm-start the training process using some shared parameters cached in memory, rather than starting from random parameters (i.e., cold-starting). This could speed up the convergence of the inner optimization problem in Eq. 1. Alternatively, one could approximate the validation function \(\mathcal{M}_{\mathrm{val}}\) with a differentiable function (e.g., the loss function on the validation data \(\mathcal{L}_{\mathrm{val}}\)[35]). Doing so will enable us to use plain-vanilla stochastic gradient descent algorithms (e.g., the Adam optimizer [1]) to optimize over both the hyperparameters \(\alpha\) and parameters \(w\); One could take a couple of gradient descent steps for \(w\) and one step for \(\alpha\) in an iterative fashion. There is no need to solve the inner optimization problem to completion. This will also result in significant speed-up by trading computation for more memory consumption. As a compromise between purely automated solutions to Eq. 1 (i.e., AutoML) and the expert-in-the-loop solutions (i.e., MLOps), we could also use a combination of the two [36; 37]. Moreover, as an additional layer of complication, the performance metric \(\mathcal{M}_{\mathrm{val}}(w^{*}(\alpha),\alpha)\) could pursue multiple competing objectives such as minimizing the error rates (i.e., maximizing accuracy) and reducing the computational and memory costs of the model, perhaps to make them suitable for smaller devices such as mobile phones, tablets, or IoT (Internet of Things) devices [38; 39]. Here, balancing the trade-off between optimizing one objective versus the other is an open problem. This will introduce additional hyper-parameters (lets call them "hyper-hyper-parameters") for the outer optimization problem in Eq. 1. 
Last but not least, data augmentation policies [27; 31] could also be part of the search space (i.e., \(\alpha\)). This should give us enough background on the current state of the art in AutoML and help us outline the open problems that need to be addressed collectively by the machine learning community.

## 3 Open Problems

As mentioned above, despite the clean and unified mathematical formulation of the iteration step as a bi-level optimization problem (see Eq. 1 and Fig. 1), its solutions are case specific and complex. In the following, we will consider such cases while increasing the level of complexity from supervised learning to semi-supervised, self-supervised, unsupervised, few-shot, federated, reinforcement, and physics-informed learning. As a consequence of this exercise, this work exposes a multitude of open challenges in the field, many of which can be addressed in parallel.

### Supervised Learning

Most of the progress made over the last few years, ever since the advent of AutoML, falls under the umbrella of supervised learning, in particular (image) classification [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. These works use accuracy as their performance metric or a differentiable approximation of it (e.g., the negative of the loss function or the log-likelihood). As low-hanging fruit, we could investigate whether such techniques would still work in the face of other performance metrics, such as precision, recall, F1 score, calibration [40], etc., for balanced and unbalanced datasets. Finding differentiable approximations of such metrics is particularly interesting because we can employ techniques similar to the ones used in [35] (i.e., using plain-vanilla stochastic gradient descent to optimize over both the hyper-parameters and parameters of our models). Such techniques are interesting not only because of their computational efficiency but also because they require very little "hyper-hyper-parameter" tuning. However, how to make such methods memory efficient is still an open question. Moreover, methods such as Bayesian optimization, reinforcement learning, evolutionary algorithms, and even parameter-sharing for solving Eq. 1 introduce additional hyper-parameters (i.e., "hyper-hyper-parameters"); in simple terms, we would like to avoid doing AutoAutoML, AutoAutoAutoML, etc., and fragmenting our datasets beyond training, validation, and testing. We could also study the effect of such approximations of different performance metrics and provide theoretical upper bounds on the loss of performance resulting from such approximations.

**Large Networks:** Recently, we are witnessing a trend in computer vision trying to replace convolutional neural networks with transformers [41; 6; 42] or even multi-layer perceptrons [43], inspired by their success in language [44; 45; 46]. The question is whether the currently available techniques (see the background section) for solving Eq. 1 could generalize to such architectures and improve their performance. It is worth noting that the techniques outlined in the background section are primarily designed for convolutional neural networks. Furthermore, when it comes to data-augmentation strategies, the current techniques leverage only a single image [31]; the question is whether novel data-augmentation strategies such as mixup [47] and CutMix [48] (leveraging pairs of images, as sketched below) can be discovered as part of solving Eq. 1. Answering these two questions would entail rethinking the design space (i.e., the space in which \(\alpha\) is assumed to live).
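For reference, mixup itself takes only a few lines; a minimal sketch (ours) of this pair-based augmentation, with the standard Beta-distributed mixing ratio, is shown below:

```python
import torch

def mixup(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 0.2):
    # Convex combination of each example with a randomly paired one,
    # applied to both the inputs and the (one-hot) labels.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return (lam * x + (1 - lam) * x[perm],
            lam * y_onehot + (1 - lam) * y_onehot[perm])

images = torch.rand(8, 3, 32, 32)
labels = torch.eye(10)[torch.randint(0, 10, (8,))]
x_mix, y_mix = mixup(images, labels)
print(x_mix.shape, y_mix.sum(dim=1))  # mixed labels still sum to 1
```

The open question above is whether the iteration process could discover such pair-based policies on its own, rather than having them hand-designed.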
Another interesting and fundamental question is investigating the possibility of automating the discovery of learning rate schedules such as the cosine learning rate schedule [49]. Here, the learning rate is a function of the current epoch rather than being a constant. This will significantly increase the complexity of the iteration problem (see Eq. 1). Methods similar to the ones presented in [35] seem to have a good chance at solving this problem because they don't rely on solving the inner optimization problem (i.e., training) in Eq. 1 to completion. This will allow us to modify the learning rate in tandem with the training process at each epoch.

**Small Networks:** There are times when we are looking not only for the most performant model but also for one that is as memory- and compute-efficient as possible. This is an important stepping stone towards democratizing artificial intelligence in anticipation of the future of the Internet of Things, where a lot of our devices (e.g., cellphones, cars, security cameras, refrigerators, air conditioners, etc.) will be intelligent. Such devices usually have smaller compute capabilities and memory capacity than our computers in data-centers or on the cloud. To make them intelligent we need to take their constraints into consideration. Mathematically speaking, \(\mathcal{M}_{\mathrm{val}}\) is a weighted combination of at least two objectives; one is the performance metric, while the other is about making the model more nimble and could take different forms such as FLOPs, MAC (memory access cost), the number of parameters, and the latency and memory consumption of the target devices. This last item necessitates a hardware-in-the-loop approach. The weights given to each objective function are what we called "hyper-hyper-parameters" earlier in this document. It is still an open question how to set such weights in order to balance the trade-off between optimizing one objective versus another. We are therefore dealing with a multi-objective bi-level optimization problem. Here, ideas such as the ones proposed in [50] for multi-task learning using uncertainty to weigh different objectives could be extended to solve our multi-objective bi-level optimization problem. A similar multi-objective optimization problem arises in physics-informed deep learning [17, 18, 19] where we need to balance the trade-off between fitting the data and respecting the laws of physics modeled using ordinary and partial differential equations. Alternatively, we could investigate the possibility of automating the process of making pre-trained models smaller. This is an after-the-fact approach where we would like to automate the discovery of methods such as knowledge distillation [51], model pruning and compression [52; 53; 54], etc. This approach will avoid the aforementioned multi-objective bi-level optimization problem and will instead break the problem into two or more stages; in the first stage we will be looking for the most performant model, regardless of its cost, while in the second stage we will look for the best model compression strategies. This will lead to a multi-stage (versus multi-objective) bi-level optimization problem.

**Robustness:** It is a well-known fact that deep neural networks are vulnerable to adversarial and backdoor attacks [55; 56; 57; 58; 59]. We could be investigating how much of this vulnerability can be attributed to the iteration step (see Fig. 1 and Eq. 1) and whether automating the process can alleviate or aggravate the problem.
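As a reminder of how cheap such attacks can be, here is a sketch of the standard fast gradient sign method (FGSM) against a toy classifier (our illustration; the model and \(\epsilon\) are placeholders):

```python
import torch

def fgsm(model, x, y, eps=0.03):
    # Perturb the input in the direction that increases the loss,
    # within an L-infinity budget of eps.
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays within eps
```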
**Explainable AI:** In the past few years, the machine learning community has made a lot of progress in the emerging field of explainable and trustworthy AI (see e.g., [40; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72] and the references therein) to the extent that deep neural networks are no longer considered black boxes but rather gray ones. However, there has been very little effort in the literature (if any) to explain the choices made as part of the iteration process (see Fig. 1). The question is what features of the data, or rather the meta-data, explain the choices we make for learning rate schedules, architecture designs, data-augmentation, etc. What is the effect of noise in the data on such choices? How important are the size and intrinsic dimensionality of the data? This is a place where the underlying structure of the data (i.e., the lower dimensional manifold on which the data lives) could help us shed some light on these fundamental questions. If successful, efforts in this direction could lead to a new field, namely "Explainable AutoML".

**Transfer Learning:** Moreover, in a parallel thrust, we could be studying the impact of the iteration process (see Fig. 1) on the generalizability and transferability of the learned features to downstream tasks. The field of deep learning is largely driven by the ideas of transfer learning to the extent that we rarely train our models from scratch. Along the same lines, an important question worthy of systematic study is the transferability (or lack thereof) of data augmentation policies (e.g., AutoAugment and RandAugment) from one dataset (e.g., ImageNet) to another (e.g., Pascal VOC).

**Semantic Segmentation:** When it comes to the task of semantic segmentation, we typically start with neural networks pre-trained on a related classification task and do transfer learning [73, 74, 75, 76, 77, 78, 79, 80, 81]. This is mainly because we usually have access to smaller datasets for this task, as labeling every single pixel in an image is more cumbersome than labeling an entire image with a single label. Referring back to Fig. 1, there is very little work on an iteration phase dedicated to the semantic segmentation task. Here, the performance metrics (e.g., pixel accuracy, mean accuracy, mean IU, frequency weighted IU, etc.) are more complex than the accuracy metric often used for the classification task. Finding differentiable approximations of such metrics so that we can employ techniques similar to the ones used in [35] is particularly interesting. Moreover, when it comes to the task of semantic segmentation, we need to capture not only the global information in an image (i.e., resolving the "what" of the image) but also the local information (i.e., resolving the "where" of the objects in the image). It would be interesting to investigate how the iteration stage (see Fig. 1) would impact the "what" and "where" components of the semantic segmentation task. In particular, would the iteration phase leverage tools such as atrous convolutions, short-cut connections, conditional random fields, multi-scale aggregation, deep supervision, deconvolutions, upsampling, attention mechanisms, etc.? If so, to what extent?

**Super-Resolution, Denoising, and Colorization:** When it comes to creative tasks such as super-resolution, denoising, colorization, and style transfer, where the output of a neural network is an image, it is very hard to judge the quality of the generated images in a quantitative fashion.
Available performance metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM) fall short of doing justice to the task. It is therefore very hard to measure progress in these fields [82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94] and, more importantly, to guide the iteration phase (see Fig. 1). In this regard, deep features extracted from deep neural networks (e.g., VGG) trained on the ImageNet classification task show unreasonable empirical effectiveness as perceptual metrics [93]. However, there aren't many works (if any) that guide the iteration phase using such deep features (e.g., the Learned Perceptual Image Patch Similarity (LPIPS) metric) in an automated fashion. On a related note, the tasks of super-resolution, denoising, colorization, and style transfer usually entail balancing the trade-off between multiple loss functions (e.g., the reconstruction L1/L2 loss versus the perceptual loss [83]). It is still an open problem how to strike such a balance in the absence of universally accepted performance metrics. Here, ideas such as the ones proposed in [50] for multi-task learning using uncertainty to weigh different objectives could be extended to solve this problem. Alternatively, we could investigate the possibility of guiding the iteration process (see Fig. 1) using human-in-the-loop Reinforcement Learning algorithms where the reward signal comes from the judgment of human beings; a human can easily take a look at an image and assign a quality score to it, perhaps from 1-10. This is feasible because with Reinforcement Learning we don't need to differentiate through the reward signals or the thought process of the human evaluator. Human-in-the-loop techniques are gaining traction these days (see e.g., [95]) because writing well-defined reward functions is very challenging, if not impossible, for many real-world applications of Reinforcement Learning beyond games and simulated environments.

**Pose Estimation:** A keyboard and a mouse are not the only means of interacting with a computer; a key topic in the field of human-computer interaction in particular, and the metaverse in general, is human pose estimation [96, 97, 98, 99, 100]. Two evaluation metrics that could guide the iteration phase (see Fig. 1) are the Percentage of Correct Parts (PCP) and the Percent of Detected Joints (PDJ). PCP measures the detection rate of limbs, where a limb is considered detected if the distance between the two predicted joint locations and the true limb joint locations is at most half of the limb length. As for PDJ, a joint is considered detected if the distance between the predicted and the true joint is within a certain fraction of the torso (e.g., left shoulder to right hip) diameter. A closely related metric is the Percentage of Correct Keypoints (PCK), which measures the percentage of detections that fall within a normalized distance of the ground truth. It would be interesting to investigate how the iteration stage (see Fig. 1) would leverage these metrics or their differentiable approximations to come up with novel architectures (e.g., stacked hourglass blocks, cascaded pyramid layers, part affinity fields, etc.) in an automated fashion. This is not a well-studied topic as of this writing.

**Optical Flow and Depth Estimation:** The world around us is 3D and evolving in time. On the one hand, depth estimation [101, 102, 103, 104, 105] allows us to add a third dimension to our 2D images and has applications for self-driving cars and robots.
On the other hand, optical flows [106, 107, 108] enable us to capture the evolution in time and the relationship between consecutive frames in a video. In addition to applications for self-driving vehicles and robotics, optical flows can be used as additional features for the task of action recognition in videos. For optical flows, the End Point Error (EPE) is typically used as the performance metric guiding the iteration phase (see Fig. 1). It is the Euclidean distance between the predicted flow vector and the ground truth, averaged over all pixels. Here, the training data is usually simulated because it is very hard to measure optical flows in the physical world [106]. Inevitably, we have to rely on domain adaptation techniques to close the reality gap (i.e., the gap between the real and simulated data distributions) to the extent possible. It is therefore an intriguing research question to study the effect of domain adaptation techniques on the iteration phase (see Fig. 1) and vice versa. The task of depth estimation also suffers from a lack of enough labeled training data. Fortunately, there are ways to leverage the underlying physics of the problem to perform unsupervised monocular depth estimation with, e.g., left-right consistency [103]. Here, the training loss would involve a weighted combination of multiple individual loss functions [103, 104]. Properly setting those weights in an automated fashion is an open question. Here, we could utilize performance metrics such as the absolute relative distance, the squared relative distance, or the root mean square error (RMSE) to guide the iteration stage (see Fig. 1).

**Object Detection:** Generally speaking, there are two major types of object detectors: multi-stage (typically two-stage) and one-stage detectors. With multi-stage detectors [109, 110, 111, 8, 112, 113], the primary objective is to find the most performant model (measured using mean average precision) while efficiency (measured using frames per second) is a secondary objective. With single-stage detectors [118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131], the primary objective is to find the most agile model (measured using frames per second) while performance (measured using mean average precision) is a secondary objective. Referring back to equation 1, \(\mathcal{M}_{\rm val}\) is pursuing multiple (i.e., at least two) competing objectives for the task of object detection. Balancing the trade-off between optimizing one objective versus the other is an open problem. It is worth mentioning that this problem is about what we called "hyper-hyper-parameters" in the iteration phase and that we would like to avoid fragmenting our datasets beyond training, validation and testing. As for the hyper-parameters \(\alpha\), the design space (the space in which \(\alpha\) lives) is a much more complex one for object detection compared to classification tasks. The input data could be in the form of images, image patches, image pyramids, etc. The backbone could be in the form of VGG, ResNet, ResNeXt, Darknet, Hourglass Network, Transformers, etc. The neck of the architecture could be in the form of FPN (Feature Pyramid Network), PANet, Bi-FPN (Bi-directional FPN), etc. The head of the object detection system could be a dense predictor (RPN, YOLO, SSD, RetinaNet, FCOS) or a sparse one (Faster R-CNN, R-FCN). As for data augmentation, we could use CutMix, MixUp, Mosaic, blurring, etc.
**Object Detection:** Generally speaking, there are two major types of object detectors: multi-stage (typically two-stage) and one-stage detectors. With multi-stage detectors [109, 110, 111, 8, 112, 113], the primary objective is to find the most performant model (measured using mean average precision) while efficiency (measured using frames per second) is a secondary objective. With single-stage detectors [118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131], the primary objective is to find the most agile model (measured using frames per second) while performance (measured using mean average precision) is a secondary objective. Referring back to equation 1, \(\mathcal{M}_{\rm val}\) is pursuing multiple (i.e., at least two) competing objectives for the task of object detection. Balancing the trade-off between optimizing one objective versus the other is an open problem. It is worth mentioning that this problem is about what we called "hyper-hyper-parameters" in the iteration phase and that we would like to avoid fragmenting our datasets beyond training, validation, and testing. As for the hyper-parameters \(\alpha\), the design space (the space in which \(\alpha\) lives) is much more complex for object detection than for classification tasks. The input data could be in the form of images, image patches, image pyramids, etc. The backbone could be in the form of VGG, ResNet, ResNeXt, Darknet, Hourglass Network, Transformers, etc. The neck of the architecture could be in the form of FPN (Feature Pyramid Network), PANet, Bi-FPN (Bi-directional FPN), etc. The head of the object detection system could be a dense predictor (RPN, YOLO, SSD, RetinaNet, FCOS) or a sparse one (Faster R-CNN, R-FCN). As for data augmentation, we could use CutMix, MixUp, Mosaic, blurring, etc. For the loss functions, we could use L1, L2, Smooth L1, or CIoU for the regression component of the total loss function and MSE, binary, or multi-class cross-entropy loss for the classification portion of the total loss. Therefore, exploring the design space in a systematic and automatic fashion as part of the iteration phase (see Fig. 1) is a challenging task. Last but not least, all metrics are wrong, but some are useful. This is especially true when it comes to the object detection task. There are heated debates in the literature about the appropriateness of mean average precision (mAP) or its COCO-style variants as valid performance metrics for the object detection task (see e.g., [124]).

**Face Recognition and Detection:** Face detection is a special case of the object detection and keypoint (pose) estimation topics that we covered earlier in this document. Furthermore, face recognition (verification and identification) can be viewed as a closed-set or an open-set problem, depending on the type of available data. Closed-set face recognition is nothing but a classification task that we covered earlier in this document. Open-set face recognition, on the other hand, is about metric learning, i.e., learning features that are capable of pulling similar images together while pushing dissimilar images apart. The literature on open-set face recognition spends a lot of time designing new loss functions such as the triplet loss [132, 133], the center loss [134], the angular softmax loss [135], the additive angular margin loss [136], etc. It is therefore natural to ask whether it is possible to automate the search for appropriate loss functions as part of the iteration phase (see Fig. 1). To guide the iteration process, we could leverage ROC (Receiver Operating Characteristic) curves relating the true positive rate to the false positive rate.
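To make the metric-learning objective concrete, here is a minimal sketch of the triplet loss; the margin value and the random toy embeddings are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull the anchor toward the positive embedding and push
    it away from the negative embedding by at least `margin`.
    anchor, positive, negative : (batch, dim) embedding arrays."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # same-identity distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # other-identity distance
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

# Toy usage with random stand-in embeddings:
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(4, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```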
**Video & 3D data:** As mentioned earlier in this document, the world around us is evolving in time and is 3D. When it comes to videos, we could think of at least two important applications, namely action recognition and object tracking. Action recognition [137, 138, 139, 140, 141, 142, 143, 144, 145, 146] is a classification task, albeit one whose input is a sequence of image frames from a video. Here the design space (the space in which \(\alpha\) lives, referring to equation 1) is more complex than the design space for images. It would therefore be interesting to see if ideas such as early/late/slow fusion, multi-streaming, using optical flows as additional input features, 3D convolutions, trajectory pooling, or slow-fast networks would survive the iteration phase (see Fig. 1). Furthermore, when it comes to object tracking [147, 148], there is very little work on automating the iteration phase (see Fig. 1) and studying its impact on the resulting algorithms. Here, we could use evaluation metrics such as the center location error and the bounding box overlap ratio to guide the iteration phase. In a parallel thrust, performing object recognition, detection, and segmentation on 3D point cloud data is more challenging than doing so on images [149, 150, 151, 152, 153]. These types of data (e.g., LIDAR data) appear naturally in self-driving vehicle and robotics applications, yet there is very little work on automating the iteration phase (see Fig. 1) for these tasks.

### Beyond Supervised Learning

It is now time to increase the level of complexity and move beyond supervised learning towards semi-supervised, self-supervised, unsupervised, few-shot, federated, reinforcement, and physics-informed learning. We will be approaching these topics from the perspective of the iteration phase (see Fig. 1).

**Natural Language Processing:** For applications such as word vector representations [154, 155, 156, 157, 158], text classification and sequence tagging [159, 160, 161, 162], translation [173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186], and language modeling [187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203], the iteration phase (see Fig. 1) is typically guided by intrinsic metrics such as perplexity or by performance on downstream tasks such as textual similarity, language comprehension, conversational response generation, etc. For instance, it is a well-known observation that the common practice of extracting sentence embeddings from the BERT language model, by average pooling the last layer output vectors or using the output of the first token (i.e., the [CLS] token), yields rather bad sentence embeddings [204], often worse than averaging the GloVe vectors. This is why researchers came up with the ideas of Sentence-BERT and Siamese BERT-Networks. Another observation is that BERT in its original form cannot perform translation. This is why researchers introduced BART, GPT, T5, etc. However, each one of these contributions focuses on a handful of downstream tasks to evaluate its language modeling capabilities. Some focus on the GLUE benchmark [211], a suite of downstream tasks; some focus on the BLEU score [212] for translation, etc. Perhaps a better strategy to guide the iteration process (see Fig. 1) is to approach language modeling from a multi-task learning perspective where \(\mathcal{M}_{\mathrm{val}}\) in equation 1 is a weighted combination of the performance metrics for a multitude of downstream tasks. Here, an open question is how to properly weigh one objective function versus the others; ideas such as the ones proposed in [50] for multi-task learning, using uncertainty to weigh different objectives, could be extended to solve our multi-objective bi-level optimization problem (a sketch follows below). In addition, we could include extra objectives in \(\mathcal{M}_{\mathrm{val}}\) to penalize the computational complexity and memory consumption of the resulting language models. The idea is to come up with the smallest language model that is good at solving a multitude of downstream tasks in an automated fashion. Of particular interest are techniques similar to the ones used in [35] (i.e., using plain-vanilla stochastic gradient descent to optimize over both the hyper-parameters and the parameters of our models). Last but not least, many of the ideas in natural language processing can be extended to graphs (e.g., social networks) [213, 214, 215, 216, 217, 218, 219, 220, 221, 222].
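As a concrete illustration of the uncertainty-based weighting of objectives referenced above (in the spirit of [50]), the sketch below learns one log-variance per task; the two-task setup and the made-up loss values are assumptions for illustration only.

```python
import torch

class UncertaintyWeighting(torch.nn.Module):
    """Weigh per-task losses with learned homoscedastic uncertainty:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)."""
    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Toy usage: two stand-in task losses; in practice these would come from,
# e.g., a GLUE task head and a translation head sharing one backbone.
weighter = UncertaintyWeighting(num_tasks=2)
loss = weighter([torch.tensor(0.7), torch.tensor(1.3)])
loss.backward()  # gradients flow into the learned log-variance parameters
```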
**Multimodal Learning:** With multimodal learning [223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234], we are taking baby steps towards human-level artificial intelligence (i.e., artificial general intelligence). For instance, if we look at an intelligent robot and say "pick that up and put it on the table" while pointing at a box sitting on the ground, then to execute the command correctly, the robot should not only process speech and language but also be able to use its vision system to understand what we mean when we say "that". Moreover, a common criticism of large language models is that even if they manage to generate seemingly coherent text, they have very little idea what they are actually talking about; for example, a language model trained only on textual data has never seen images of airplanes, cars, and ships; it has only read about them on the internet. The field of multimodal learning is therefore attracting the attention of a lot of great researchers both in academia and in industry. Two important applications are translating images (or videos) to text (e.g., image and video captioning) and vice versa (e.g., text-to-image synthesis). Another equally important application is visual question answering. Moreover, training large language models on both images and textual data is also showing some great promise. We will be approaching these topics from the perspective of the iteration phase (see Fig. 1). When it comes to translating images to text, we could use the BLEU score to guide the iteration phase (a toy sketch follows at the end of this section). However, there are heated debates in the literature about whether the BLEU score is the best performance metric both for image captioning and for translation. Coming up with better metrics is an open problem. Moreover, there is very little work (if any) on automating the iteration phase of translating images to text. For text-to-image synthesis tasks, like any other creative task (e.g., super-resolution, denoising, colorization, style transfer, and generative adversarial networks), it is very hard to judge the quality of the generated images in a quantitative fashion. Here, we could use metrics such as the Inception Score, the Frechet Inception Distance, or the Learned Perceptual Image Patch Similarity (LPIPS) metric to guide the iteration phase. As of this writing, there aren't many works (if any) that have done so. Furthermore, when it comes to training large language models on unlabeled textual data as well as images, the fundamental question is what performance metric(s) we should use to guide the iteration phase (see Fig. 1). In particular, the question is how existing performance metrics such as perplexity can be generalized to handle both text and image data. Alternatively, similar to language models trained only on text, we could define a set of downstream tasks involving both text and images as benchmarks to guide the iteration phase. The idea is then to approach multimodal modeling from a multi-task learning perspective.
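Returning to the BLEU score mentioned above for captioning and translation, a stripped-down version (modified n-gram precision with a brevity penalty) might look as follows; real evaluations use corpus-level statistics, smoothing, and multiple references, all of which this toy sketch omits.

```python
import math
from collections import Counter

def toy_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions times a brevity penalty. `candidate` and `reference` are
    lists of tokens; single reference only."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(sum(cand.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: 1 if the candidate is at least as long as the reference.
    bp = min(1.0, math.exp(1.0 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)

print(toy_bleu("a cat sits on the mat".split(), "the cat sat on the mat".split()))
```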
**Generative Networks:** Generative models are either trained to maximize the likelihood (or rather a lower bound on it) of the training data (e.g., Variational Auto-Encoders) or to minimize a distance/divergence between the training data distribution and the model's distribution (e.g., unconditional and conditional Generative Adversarial Networks). A central question here is how to measure the quality of the generated data in a quantitative fashion. For images, we could use the Inception Score or the Frechet Inception Distance to guide the iteration process, despite all their imperfections. However, it is not clear what performance metrics we should use for other types of data such as text, speech, graphs (e.g., social networks), etc. Furthermore, it is a well-known observation that training generative adversarial networks is an unstable process, and most of the contributions in this field are made towards stabilizing this process by using different loss functions (e.g., Feature Matching, Least Squares GANs, Wasserstein GANs, Hinge Loss, etc.), normalization and regularization schemes (e.g., Gradient Penalty, Spectral Normalization, Orthogonal Regularization, Adaptive Instance Normalization, Path Length Regularization, etc.), architectures (e.g., DCGANs, Self-Attention GANs, etc.), and training schedules (progressive growing, the two time-scale update rule, historical averaging of parameters, etc.). It would be interesting to study whether such techniques would survive an automated iteration process (see Fig. 1). Of particular interest is the progressive growing idea, because there the neural network architecture itself is a function of the current training epoch. This significantly increases the complexity of the iteration problem (see Eq. 1). Methods similar to the ones presented in [35] seem to have a good chance at solving this problem because they don't rely on solving the inner optimization problem (i.e., training) in Eq. 1 to completion. This allows us to modify the architecture in tandem with the training process at each epoch. Last but not least, when it comes to conditional GANs (e.g., image-to-image translation), we typically try to minimize a total loss function that is a weighted combination of multiple individual loss functions. It is still an open problem how to come up with those weights in the absence of validation data and appropriate performance metrics. Perhaps reformulating the problem as a multi-task learning problem and using ideas similar to the ones proposed in [50], which leverage uncertainty to weigh different objectives, could help us address this problem.
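To make the Frechet Inception Distance concrete, a minimal sketch computed from pre-extracted feature activations is given below; which feature extractor produces those activations (canonically Inception-v3) is left as an assumption.

```python
import numpy as np
from scipy import linalg

def frechet_distance(acts_real, acts_fake):
    """Frechet distance between two sets of feature activations:
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2}).
    acts_real, acts_fake : (num_samples, feature_dim) arrays."""
    mu_r, mu_f = acts_real.mean(axis=0), acts_fake.mean(axis=0)
    cov_r = np.cov(acts_real, rowvar=False)
    cov_f = np.cov(acts_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f).real  # drop tiny imaginary residue
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))

# Toy usage with random stand-in activations:
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(256, 8)), rng.normal(size=(256, 8))))
```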
**Domain Adaptation:** The concept of domain adaptation [235, 236, 237, 238, 11] relates to scenarios where we have a lot of labeled data (e.g., simulated data) from a source domain and zero (or very few) labeled data (e.g., real data) from a target domain. Such scenarios happen frequently in many engineering fields (sometimes called multi-fidelity modeling [239, 240, 241] in fluid and solid mechanics), including but not limited to self-driving cars and robotics. Domain adaptation can help us close the so-called reality gap between the simulated and real data distributions. We are going to approach domain adaptation from the perspective of the iteration phase (see Fig. 1). With domain adaptation, we would like to minimize the risk of making errors on the target data, not necessarily the source data. Now, the question is how we can measure the performance of our models on the target data in the absence of any target labels (e.g., unsupervised domain adaptation) or in the presence of very few of them (e.g., weakly-supervised domain adaptation). One idea is to use unsupervised hyper-parameter selection techniques [11]. We can split the source domain labeled data into a training set \(S_{\text{train}}\) and a validation set \(S_{\text{val}}\). Similarly, we can split the target domain unlabeled data into a training set \(T_{\text{train}}\) and a validation set \(T_{\text{val}}\). We can then use \(S_{\text{train}}\) and \(T_{\text{train}}\) to learn a model using domain adaptation techniques. The trained model can now be used to generate labels for the unlabeled data \(T_{\text{train}}\). We then remove the labels from \(S_{\text{train}}\) and train a reverse model using the pseudo-labeled \(T_{\text{train}}\) as the source domain and the now-unlabeled \(S_{\text{train}}\) as the target domain. The reverse model can then be evaluated on the validation set \(S_{\text{val}}\). We could use this reverse validation risk as a proxy for the true validation risk to guide the iteration process (a code sketch follows below).
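A schematic of the reverse-validation procedure just described; `train_da_model` and the `predict`/`evaluate` methods are hypothetical placeholders for whichever domain adaptation method and model API one actually uses.

```python
def reverse_validation_risk(S_train, S_val, T_train, train_da_model):
    """Proxy for target-domain risk in the absence of target labels.

    S_train : labeled source data, as (inputs, labels)
    S_val   : held-out labeled source data, as (inputs, labels)
    T_train : unlabeled target inputs
    train_da_model : hypothetical trainer, (labeled_src, unlabeled_tgt) -> model
    """
    src_inputs, src_labels = S_train

    # 1. Forward model: adapt from labeled source to unlabeled target.
    forward = train_da_model((src_inputs, src_labels), T_train)

    # 2. Pseudo-label the target training data with the forward model.
    pseudo_labels = forward.predict(T_train)

    # 3. Reverse model: treat pseudo-labeled target as the source and the
    #    (now unlabeled) source inputs as the target.
    reverse = train_da_model((T_train, pseudo_labels), src_inputs)

    # 4. The reverse model's error on held-out labeled source data serves
    #    as a proxy for the true (unobservable) target risk.
    return reverse.evaluate(*S_val)
```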
**Few-shot Learning:** When it comes to using a machine learning model to serve millions of users, perhaps over the internet, not only do we need to take care of the distributional shift between the training and test data (i.e., domain adaptation), but we also need to be able to handle new use cases and, more importantly, new labels associated with such use cases. As an example, in the context of a recommendation system [242, 243, 244, 245, 246, 247, 248, 249, 250, 251], we can think of new items (e.g., movies, products, etc.) to be recommended to new/existing users. This relates to the topic of few-shot learning [252, 253, 254], where our models need to be able to handle new labels given very few observations per label. For classification tasks, we can use \(N\)-way \(K\)-shot classification accuracy to guide the iteration phase (see Fig. 1). It would therefore be interesting to study the effect of an automated iteration stage on the resulting algorithms. More importantly, it is still unclear what performance metrics we should use for other applications of few-shot learning beyond classification, such as semantic segmentation, object detection, pose estimation, depth estimation, etc.

**Federated Learning:** In addition to the publicly available data on the internet, there is a wealth of data sitting on our privately-held devices (e.g., cellphones, tablets, laptops, etc.). Our devices are also getting more powerful in their compute and data collection capabilities (e.g., multiple camera lenses on the back of our cellphones). Federated Learning tries to leverage such privately-held data to train machine learning models while preserving the privacy of the data. The idea is to bring the models to the data (rather than bringing the data to the models on the cloud) and use the heterogeneous compute capabilities of user devices to train our models. What is communicated to the cloud is the parameters of our models, or rather their gradients. The field is still in its infancy, and there are many open technical and non-technical challenges (e.g., communication efficiency, the non-i.i.d. nature of the data, data pre-processing, training self-supervised models, preserving privacy, being robust to backdoor attacks, being able to train models on smaller devices, etc.) to be addressed before we can fully realize the potential of federated learning. In this work, we approach federated learning from the iteration perspective (see Fig. 1). Given the distributed nature of the data over millions of user devices, the question is how we can evaluate the performance of our models. One idea is to have the users of our models give star ratings (perhaps out of five) to our models. We can then aggregate these stars as a feedback signal to guide the iteration phase. Given that such a performance metric is discrete and non-differentiable, we can use methods based on Reinforcement Learning to perform hyper-parameter selection.
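For context, the core aggregation step in federated averaging (averaging the parameters that devices communicate back to the cloud) can be sketched as follows; the flat parameter vectors are a simplifying assumption, while weighting by per-client sample counts is the standard FedAvg choice.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """One FedAvg-style aggregation round: average client parameter vectors,
    weighted by the number of samples each client trained on.
    client_params : list of (dim,) arrays, one per participating device.
    client_sizes  : list of per-client sample counts."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)            # (num_clients, dim)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy round with three simulated devices:
global_params = federated_average(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    client_sizes=[10, 30, 60],
)
```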
**Semi-Supervised & Self-Supervised Learning:** Let us now move towards the cases where we have access to a lot of unlabeled data and very few labeled data, if any. We faced a similar situation in natural language processing. However, with language we are working with discrete tokens, which makes it easier to perform self-supervision by defining next-token prediction (e.g., GPT-style models) or masked-token prediction tasks (e.g., BERT-style models). Discrete tokens make it possible to use the softmax function as the last layer of a neural network (e.g., a Transformer architecture) and turn the self-supervision problem into a classification one. However, image and speech data are continuous signals. Normal, Laplace, or even mixture-of-Gaussians models for the distribution of continuous random variables are not as flexible as the softmax function is for modeling the distribution of discrete random variables. It is therefore necessary to rethink the semi-supervised and self-supervised learning paradigms when it comes to continuous signals (e.g., images and speech). In fact, this field has been growing at an exponential rate over the past two or three years. One common theme emerging in the literature is to take a single image and augment it into two different views. These two views should then give consistent representations once processed by the same neural network (or two similar ones). Here, a central challenge that needs to be overcome is avoiding trivial solutions, either implicitly or explicitly. A network being supervised by itself or by another similar network is prone to converging to a trivial solution (e.g., a constant function ignoring its inputs altogether). We face a similar problem with physics-informed neural networks (e.g., any constant function is a solution to the Navier-Stokes equations) [17, 18, 19]. We are going to approach this problem from the perspective of the iteration stage (see Fig. 1). We need to either explicitly include a term in our training objective function that encourages non-trivial solutions, or design our search space in such a way that it includes mechanisms that have shown empirical success in avoiding trivial solutions, such as stop-gradients, predictor heads, model averaging, contrastive losses, etc. (a minimal stop-gradient sketch follows below). Here, balancing the trade-off between the consistency loss and avoiding the trivial solution is very delicate. Fortunately, with semi-supervised learning [255, 256, 257, 258, 41] we have some labeled data and performance metrics that we can leverage to guide the iteration phase. However, with self-supervised learning [259, 260, 261, 13, 14], neither such labeled data exists nor are there any universally accepted performance metrics. One idea is to define a set of downstream tasks (e.g., object recognition, detection, and segmentation for images) to judge the transferability of the learned features. Here, how much importance we should give to each downstream task is an open problem.
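A minimal sketch of the stop-gradient mechanism for two augmented views (SimSiam-style); the stand-in encoder and predictor modules and the negative-cosine consistency loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 64))    # stand-in backbone
predictor = torch.nn.Sequential(torch.nn.Linear(64, 64))  # prediction head

def siamese_loss(view1, view2):
    """Symmetric negative-cosine loss with stop-gradient on the target branch,
    which empirically helps avoid the trivial constant solution."""
    z1, z2 = encoder(view1), encoder(view2)
    p1, p2 = predictor(z1), predictor(z2)
    # .detach() is the stop-gradient: the target branch gets no gradient.
    loss1 = -F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
    loss2 = -F.cosine_similarity(p2, z1.detach(), dim=-1).mean()
    return 0.5 * (loss1 + loss2)

# Toy usage: two made-up "augmented views" of the same batch.
x = torch.randn(8, 32)
loss = siamese_loss(x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x))
loss.backward()
```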
**Speech:** Similar to the language modeling paradigm for text, there is an emerging trend over the past few years to model speech [272, 273, 274, 275, 276]. The idea is that before we (as human beings) learn to read and write, we learn to listen and speak. This self-supervised learning paradigm for speech is sometimes also called learning by listening. Speech being a continuous signal inherits many of the challenges that we went over in the previous paragraph on self-supervised learning for images. In addition, as of this writing, there are only two well-defined downstream tasks, namely translating speech to text [277, 278, 279, 280, 281, 282, 283, 284, 285] and vice versa [286, 287, 288], to guide the iteration stage (see Fig. 1) using their respective performance metrics (i.e., label error rate for speech recognition and a subjective 5-scale mean opinion score for naturalness in speech synthesis). The performance metric for speech synthesis, however, requires human-in-the-loop evaluators and leaves Reinforcement Learning or evolutionary algorithms as the only options to guide the iteration phase. Fortunately, there are some researchers in both academia and industry who are trying to come up with more downstream tasks to judge the quality of speech models. Even in the presence of such downstream tasks, we will need to solve multi-objective bi-level optimization problems of a form that generalizes the one given in equation 1. This is still an open problem.

**Reinforcement Learning:** If we take a closer look at the literature, many of the success stories of Reinforcement Learning are for games (e.g., Atari, Chess, Shogi, Go, and StarCraft II) or in simulated environments (e.g., OpenAI Gym, MuJoCo, etc.). For such cases, we have well-defined reward functions and are able to interact with the environment as many times as we like to collect enough experiences (i.e., data). This is not the case in the real world, due to well-known physical constraints. What makes Reinforcement Learning difficult is 1) the need to collect plenty of experiences (i.e., inefficient use of data), 2) the lack of well-defined reward signals to guide not only the training process but also the iteration process (see Fig. 1), and 3) the sheer number of hyper-parameters (i.e., degrees of freedom). We will therefore approach Reinforcement Learning from the perspective of the iteration process (see Fig. 1). A central question here, whose answer can address (at least partially) all three of the aforementioned challenges, is how to properly balance the trade-off between exploration and exploitation in the absence of well-defined reward functions. Here, we will investigate the possibility of using human-in-the-loop reward signals [95] (see also ChatGPT); a human can easily look at the performance of a robot in the real world and give it feedback. It is worth noting that this is different from imitation learning. To deal with the data-inefficiency issue, we can reformulate Reinforcement Learning as a multi-task learning problem: one task is to make the human evaluator happy, and the other is to encourage exploration. Balancing the weights given to each objective is an open problem and has a direct impact on data efficiency.

**Physics-Informed Learning:** So far, we have been working on the brain of our artificial intelligent agents. If we take this brain, mount it on a robot (e.g., a drone), and ask it to operate in the real physical world (e.g., in a fluid), it will most definitely fail. This is because it has never learned to respect the laws of physics (e.g., conservation of mass, momentum, and energy, gravity, etc.). If anything, it has learned to find loopholes in the simulated environment and bypass such laws (e.g., go faster than the speed of light), simply because it is trained only to maximize a reward signal or fit the corresponding data. This motivated research on Physics-Informed Neural Networks (PINNs) [17]. The field has been growing at an exponential rate ever since its advent in 2019. PINNs can be used to solve a wide range of problems involving (partial) differential equations, namely forward problems, inverse problems, model discovery, surrogate modeling, and uncertainty quantification. However, PINNs have an Achilles heel: how to balance the trade-off between fitting the data and respecting the laws of physics in the absence of validation data? We are therefore dealing with a multi-objective optimization problem. Ideas such as the ones proposed in [50] for multi-task learning, using uncertainty to weigh different objectives, could be extended to solve this multi-objective optimization problem. Another challenge that we need to overcome is avoiding trivial solutions (e.g., any constant function is a solution to the Navier-Stokes equations). A feasible strategy is to explicitly include a term in our training objective function that encourages non-trivial solutions.
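As an illustration of the data-versus-physics trade-off, here is a minimal PINN-style composite loss for a toy 1D ODE \(u'(x)+u(x)=0\); the tiny network, the collocation points, and the fixed weight `lam` are assumptions, and choosing that weight automatically is exactly the open problem discussed above.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))  # stand-in for u_theta(x)

def pinn_loss(x_data, u_data, x_colloc, lam=1.0):
    """Weighted sum of a data-fitting term and a physics residual term
    for the toy ODE u'(x) + u(x) = 0."""
    # Data term: fit the (possibly sparse) observations.
    data_loss = torch.mean((net(x_data) - u_data) ** 2)
    # Physics term: autograd supplies u'(x) at the collocation points.
    x = x_colloc.clone().requires_grad_(True)
    u = net(x)
    du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)
    physics_loss = torch.mean((du_dx + u) ** 2)
    return data_loss + lam * physics_loss  # lam balances the two objectives

x_data = torch.tensor([[0.0]]); u_data = torch.tensor([[1.0]])  # u(0) = 1
loss = pinn_loss(x_data, u_data, x_colloc=torch.linspace(0, 1, 20).unsqueeze(1))
loss.backward()
```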
## 4 Concluding Remarks

Artificial intelligence (AI) evangelizes the idea of automation. On the surface, AI algorithms take the data, develop their own understanding of it, and generate valuable insights and predictions, all without human intervention. In truth, AI involves an enormous amount of repetitive manual operations, all hidden behind the scenes. This is what we call the "iteration process". Among many other degrees of freedom, this process entails model engineering (e.g., neural network architecture design) and management, experiment tracking, and dataset versioning and augmentation. The iteration process is typically carried out by data engineers, data scientists, machine learning engineers, and other highly-trained (and highly-paid) specialists. However, at least part of their work can be streamlined by AutoML. In recent years, AutoML has demonstrated some promise in solving simple supervised learning problems, in particular (image) classification. However, this does not mean that AutoML will be successful in the face of more complex problems beyond (image) classification. That remains to be seen and tested in practice.
2308.05668
Dynamic delegation in promotion contests
I study how organizations assign tasks to identify the best candidate to promote among a pool of workers. Task allocation and workers' motivation interact through the organization's promotion decisions. The organization designs the workers' careers to both screen and develop talent. When only non-routine tasks are informative about a worker's type and non-routine tasks are scarce, the organization's preferred promotion system is an index contest. Each worker is assigned a number that depends only on his own type. The principal delegates the non-routine task to the worker whose current index is the highest and promotes the first worker whose type exceeds a threshold. Each worker's threshold is independent of the other workers' types. Competition is mediated by the allocation of tasks: who gets the opportunity to prove themselves is a determining factor in promotions. Finally, features of the optimal promotion contest rationalize the prevalence of fast-track promotion, the role of seniority, and when a group of workers is systemically advantaged.
Théo Durandard
2023-08-10T16:13:10Z
http://arxiv.org/abs/2308.05668v1
# Dynamic Delegation in Promotion Contests

###### Abstract

I study how organizations assign tasks to identify the best candidate to promote among a pool of workers. Task allocation and workers' motivation interact through the organization's promotion decisions. The organization designs the workers' careers to both screen and develop talent. When only non-routine tasks are informative about a worker's type and non-routine tasks are scarce, the organization's preferred promotion system is an _index contest_. Each worker is assigned a number that depends only on his _own type_. The principal delegates the non-routine task to the worker whose current index is the highest and promotes the first worker whose type exceeds a threshold. Each worker's threshold is independent of the other workers' types. Competition is mediated by the allocation of tasks: who gets the opportunity to prove themselves is a determining factor in promotions. Finally, features of the optimal promotion contest rationalize the prevalence of fast-track promotion, the role of seniority, and when a group of workers is systemically advantaged.

## Introduction

Matching tasks to the right workers is crucial to an organization's success. First, productive efficiency requires that more talented workers perform more complex, non-routine tasks. Second, workers' success in their current tasks is informative: organizations also allocate tasks to learn about the workers and improve future matches. Third, a worker's assignment determines what he learns on the job. Assigning the right worker to the right task is thus especially important when the organization seeks to identify and develop talented workers.1 Non-routine tasks are opportunities for workers to prove themselves. Workers understand that their career trajectories in the organization depend on the opportunities to showcase their talents. Task allocation and workers' motivation then interact through the organization's promotion decisions. Designing a good promotion system is thus both challenging and essential for the organization's success.2

Footnote 1: Former Xerox CEO Anne Mulcahy insists in Mulcahy (2010) that it is crucial to identify candidates for promotion early and give them “developmental responsibilities” to develop strong workers and test their abilities.

Footnote 2: For example, Rosen (1982) insists on the importance of selecting the right person to lead an organization, as they set its course and their decisions are “magnified” many times. Rohman et al. (2018) note that when employees believe that promotion decisions are efficient, they are more likely to exert effort and follow the organization’s leaders’ directions and recommendations. The same authors also point out that “stock returns are nearly three times the market average, voluntary turnover is half that of industry peers, and metrics for innovation, productivity, and growth consistently outperform competitors” at companies that manage promotion effectively.

A sound promotion system must encompass both the task allocation process and the promotion rule. It must also balance exploitation (delegating non-routine tasks to a worker known to be good and eventually promoting him) and exploration (giving responsibilities to new workers). Ignoring exploration, one misses an essential part of the story: who gets the opportunity to prove themselves is a determining factor in promotions.
In this paper, I ask how the organization optimally designs the promotion system to motivate workers and tackle the exploration-exploitation trade-off. I also address the following questions: How does incentive provision affect the allocation of tasks and the promotion decision? Can the allocation of opportunities exacerbate initial differences and induce significant disparities over time? What characteristics of a worker increase his chance of being promoted?

I explore these questions in a _centralized dynamic contest model_. A principal (she) has one prize to award, the promotion. She decides how to allocate a non-routine task sequentially to \(N\) workers (he/they) and when to award the prize. Each worker has a type represented by a stochastic process, and the processes are independent. When the principal allocates the non-routine task to a worker and the worker exerts effort, the principal gets a reward that depends on the worker's type. The worker's type also evolves; the evolution of types could reflect the principal learning about the worker's type or the worker acquiring new skills on the job. The other workers are assigned uninformative routine tasks, and their types remain frozen. Finally, the principal can only use the promise of future promotion to motivate the workers. In particular, in the baseline model, I rule out transfers to focus on the interaction between the two classical and conflicting3 purposes of promotions: to "assign people to the roles where they can contribute the most to the organization's performance" and to provide "incentives and rewards" (Roberts and Milgrom (1992)).

Footnote 3: As famously illustrated by the Peter principle described in Peter et al. (1969).

My model builds on the canonical multi-armed bandit model but departs from it in one critical aspect: the arms are workers who exert effort. Hence, the arms are strategic. In the classic bandit model, when the decision maker pulls an arm, she gets a reward drawn stochastically from some fixed distribution. In some contexts, this assumption fits the behavior of the problem inputs: in clinical trials, it is natural to think that the patients will comply. However, in my problem, each arm corresponds to a worker whose incentives differ from the principal's. So, the arms are _strategic_. When the principal allocates the non-routine task, her reward and the flow of information are controlled by the workers' choices of effort, and the workers exert effort only when compensated for it by the promise of future promotion. To incentivize effort, the principal must eventually promote a worker, stopping exploration. In this setting, I characterize the principal-optimal promotion contest.

Solving the principal's problem is challenging for three reasons. As in bandit models, the first one is the problem's dimensionality: the set of feasible promotion contests is large. Second, the principal's promotion decision can depend on all workers' types and their effort histories, which introduces a degree of dependence among the workers that complicates the problem.4 Finally, the workers are strategic. Not all delegation and promotion rules incentivize effort, so the set of "implementable" promotion contests is complex.

Footnote 4: The stoppable bandit models studied in the literature were studied under the (exogenous) restriction that the decision to freeze an arm can only depend on the state of this arm. See, for example, Glazebrook (1979) or Fershtman and Pavan (2022b).

Nevertheless, the optimal contest is simple.
I prove that, as in the canonical model, _indexability_ holds: The optimal promotion contest takes the form of an _index contest_. Each worker is assigned a number (his _index_) that depends only on his type and the cost of incentive provision. The principal sequentially delegates the non-routine task to the worker whose _index_ is the highest. Eventually, she promotes the _first_ worker whose type exceeds a threshold. Both the worker's index and promotion threshold are _independent_ of the successes and failures of the other workers. Finally, I also show that it is optimal to promote one worker only when the principal can design the prize-sharing rule, i.e., decide to promote multiple workers. The optimal contest is a winner-take-all contest.

In the index contest, the delegation rule mediates the competition for promotion between the workers. To understand the determinants of promotions, it is crucial to consider the factors that affect the allocation and timing of opportunities. This has two significant consequences: (i) for the contractual arrangements between the principal and the workers and (ii) for the effect of initial differences on promotion decisions (especially when thinking about discrimination in promotion practices).

First, no mention of competition needs to appear in the contractual arrangement between each worker and the principal. One interpretation of the index contest is the following. The principal successively offers short-term individual trial contracts to one worker at a time. Each trial contract specifies a target and a (potentially stochastic) deadline. The worker gets the promotion if he achieves the target before the deadline. Otherwise, the manager offers a new trial contract to one of the other workers until a worker eventually succeeds. The trial contracts do not rely on relative performance measures: they are independent of what the other workers do.5 The principal uses contracts that incentivize workers separately: the promotion thresholds do not condition on indicators of relative performance (on the other workers' types). This may be surprising: why would the principal not use relative performance to compare the candidates and select the best of them? However, it should not be. Who gets the opportunity already summarizes the relevant ranking information. In the _index contest_, other workers' efforts and successes only affect the likelihood that the worker will be delegated the non-routine task and, hence, get the chance to prove himself. They are irrelevant to the promotion decision once the worker gets the opportunity.

Second, the principal delegates the non-routine task sequentially to the workers in the index contest. This generates significant path dependence in promotion decisions: a worker who is not given a chance initially may never get the opportunity to prove himself and, hence, will not be promoted. In particular, early successes have an outsized impact on the probability of promotion. They increase both the likelihood of being promoted before any of the other workers gets the opportunity to showcase their talent and the likelihood that the worker is allocated the non-routine task again later. One should therefore be careful where to look to identify discrimination in promotion practices. In particular, a firm may always promote the most qualified candidate and yet discriminate. That is because the principal may also discriminate through the allocation of opportunities.
In the context of my model, the index delegation rule may treat different groups very differently. For example, minor differences among workers in learning speed or the cost of effort may lead the principal to delegate to one first. In _reinforcing environments_,6 this dramatically affects their promotion chances and expected time to promotion. There, the early assignment of the non-routine task largely determines the promotion decision. If the principal's delegation rule is biased toward one group, workers from the other group will never get an opportunity to be promoted. Moreover, at the time of promotion, they will also appear less qualified than the promoted worker. Their type will be lower than that of the promoted worker, and they will not have worked on non-routine tasks as much. This is an instance of what has been described outside of economics as systemic discrimination: discrimination based on systemic group differences in observable characteristics or treatment (see Bohren et al. (2022) for a treatment of systemic discrimination within economics).

Footnote 6: This includes bad-news Poisson learning and realistic representations of on-the-job learning. See Definition 8 in Section 3.

I also obtain further predictions for organizational design from my characterization. A notable feature of the index contest is that the promotion thresholds decrease over time. So, a worker's type when promoted decreases with time. A first consequence is that fast tracks (i.e., that a quickly promoted worker often gets another promotion soon after; see Baker et al. (1994) and Ariga et al. (1999)) should then not be surprising. When a worker is promoted quickly, his type upon promotion is higher. Hence, he starts from a better place when entering a potential new promotion contest at the next level. So, his expected time to promotion decreases: the worker is expected to be promoted again soon. Second, the type of a worker at the time of promotion decreases the longer he stays in his current position. So, faster-promoted workers should perform better upon promotion than more slowly-promoted workers. Third, in the _index contest_, seniority is not explicit but still confers an advantage. It becomes easier for a worker to be promoted as time passes. The _index contest_ backloads incentives, implicitly putting weight on seniority.7

Footnote 7: Seniority has been widely used as an explicit promotion criterion (especially in public administrations), and, although it has fallen from favor since the 1980s, it is still seen as an important determinant of promotions; see Dobson (1988) or Phelan and Lin (2001).

Finally, I study multiple extensions in which I relax some of the assumptions made in the model. I show that the winner-take-all index contest is optimal when the principal can design the prize. I consider the possibility of transfers, and I study different information structures. In my setting, if transfers are unrestricted, the manager can incentivize effort at no cost, and the first best is achieved. However, if wages only depend on the workers' current types and the workers are protected by limited liability, the principal cannot freely punish a worker who decides to shirk. So, intertemporal distortions like those that arise absent transfers are reintroduced, and the _index contest_ (with adapted indices and promotion thresholds) is optimal. Secondly, in the baseline model, information is symmetric: all the players observe the types of all workers.
If only the workers observe their types and can credibly communicate them to the principal, the _index contest_ is still implementable (and optimal). In particular, workers do not need to observe who has been in charge of the project and how successful other workers were. Besides promotion decisions, my results apply to various problems in which a principal owns an asset and wants to allocate it to the best agent among a pool of candidates of unknown ability. This includes outsourcing and procurement decisions by firms, a venture capitalist's investment decision between multiple start-ups, or the CERN research board deciding which team of researchers can use the colliders and when an experiment should be abandoned, for example. When the principal earns rewards and learns by delegating the asset, the optimal mechanism is an _index contest_.

The rest of the article is organized as follows. The related literature is discussed in the remainder of the introduction. In Section 1, I formally describe the environment. In Section 2, I introduce the _index contest_ and present its properties. In Section 3, I study the implications of my findings for discrimination. In Section 4, I present an outline of the proof of my main result: the optimality of the _index contest_. In Section 5, I consider extensions. I conclude in Section 6 with a brief discussion of the results and lines of future research. All proofs not in the main text are in Appendices 1 and 2 and Online Appendix 3.

Literature: This paper studies a dynamic contest for experimentation and characterizes the optimal promotion contest when the principal can control the learning process. I then use this model to study an important yet under-studied topic in personnel economics: how the allocation of tasks affects promotions and worker careers. So, my paper builds on several streams of literature. First, as mentioned above, I build on the canonical bandit problem solved in Gittins and Jones (1974). Gittins et al. (2011) offer a textbook treatment. Bergemann and Valimaki (2006) is a good survey of bandit problems (in economics). Here, the authors solve for the optimal delegation rule when the arms are passive. Other papers in this literature consider learning about multiple alternatives before making an (irreversible) decision. Examples include Austen-Smith and Martinelli (2018), Fudenberg et al. (2018), Ke and Villas-Boas (2019), Ke et al. (2016), and Che and Mierendorff (2019). More closely related is the study of stoppable bandit models in Glazebrook (1979). Glazebrook considers a multi-armed bandit model in which the decision maker decides which arm to pull every period but can also choose to freeze an arm and play it forever. He gives conditions under which indexability is preserved. Again, all these papers are concerned with a decision problem: the arms or alternatives are passive, and they study the optimal way to allocate attention before making a decision, absent incentive considerations. This is fundamentally different from my model. I am interested in how organizations allocate tasks when the arms are _strategic_. To solve this problem, I identify a new condition under which indexability holds in bandit superprocess problems. I then use this condition to show that the problem's separability is preserved. Despite the strategic nature of the problem, the optimal delegation rule is an index rule.
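As background (this restatement is standard material, not part of this paper's contribution), the classical Gittins index of Gittins and Jones (1974) can be written in discounted continuous time as follows. For an arm \(i\) with state process \(X^{i}\), flow reward \(\pi^{i}\), and discount rate \(r>0\), the index in state \(x\) is

\[G^{i}(x)\coloneqq\sup_{\tau>0}\frac{\mathbb{E}\left[\int_{0}^{\tau}e^{-rs}\pi^{i}\left(X_{s}^{i}\right)ds\,\middle|\,X_{0}^{i}=x\right]}{\mathbb{E}\left[\int_{0}^{\tau}e^{-rs}ds\,\middle|\,X_{0}^{i}=x\right]},\]

where the supremum runs over stopping times of the arm's own filtration. The Gittins index policy always pulls an arm whose current index is highest; the indices characterized in this paper modify this benchmark to account for the cost of incentive provision.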
A few other papers also look at bandit problems with strategic arms. The key distinction between these papers and mine is that, in my paper, the principal is constrained in her ability to provide incentives. In particular, there are no transfers (or only limited transfers), and promotions are scarce. So the agents compete for the prize, which creates a strategic dependence between arms. This strategic dependence is largely absent in the other papers in this literature. In Bergemann and Valimaki (1996) and Felli and Harris (1996, 2018), a principal allocates an asset between two strategic agents over time. The values to the principal from allocating to each agent are initially unknown but can be learned over time. Their models, like mine, can be understood as bandit models with strategic arms. However, in Bergemann and Valimaki (1996) and Felli and Harris (1996, 2018), transfers are unrestricted and only affect the "cost of utilization" of each arm, not the information the players get. This implies that all (Markov Perfect) equilibria are efficient (or pairwise efficient in Felli and Harris (2018)). So, the allocation policy is undistorted in any (Markov Perfect) equilibrium.8 Since the classic Gittins index policy maximizes total surplus, the principal always chooses the agent whose associated Gittins index is the highest. The question is then how surplus is allocated between players. On the other hand, in my framework, the conflict of interest between the principal and the workers prevents allocative efficiency. This is also the case in Kakade et al. (2013), Pavan et al. (2014), Bardhi et al. (2020), and Fershtman and Pavan (2022a), who study strategic bandit models in which the experimentation outcomes are privately observed, and in Guo (2016) and McClellan (2017), who study versions of a \(1\frac{1}{2}\)-arm strategic bandit model in which the principal has limited instruments. For example, in Kakade et al. (2013), Pavan et al. (2014), Bardhi et al. (2020), and Fershtman and Pavan (2022a), to incentivize disclosure, the principal needs to pay rents to the agents. These rents create dissonance between the principal's and the agents' values for experimentation and, hence, change the relative value of pulling one arm rather than another. In these papers, the indices are therefore also distorted. Contrary to my paper, however, there is no strategic dependency between the arms in the above papers. The presence of transfers allows them to abstract from any linkage of incentive problems across employees. The allocation maximizes the total virtual value and therefore follows from the standard Gittins characterization applied to the "virtual value processes". In my setting, this linkage of incentives is central, as promotions, hence incentives, are scarce. The principal has to promise an eventual promotion for which the workers compete. Promise-keeping then distorts the future delegation process. In particular, the set of implementable delegation rules in the continuation game depends on the history. The classical approach to indexability therefore fails. Nevertheless, I show that indexability still holds. The indices reflect the strategic nature of the problem and the constraints it places on both learning and exploitation. I then focus on how incentive provision distorts the delegation and promotion rules.

Footnote 8: Although, in Felli and Harris (2018), the players' investment decisions may be distorted.

My paper is related to a last stream of works on multi-agent experimentation. See, for instance, Bolton and Harris (1999), Keller et al.
(2005), Bonatti and Horner (2011), and Halac et al. (2017). However, the fundamental trade-offs are different. In these papers, the agents experiment on a common bandit machine and therefore have incentives to free-ride on each other's costly experimentation. Free-riding is absent from my model, as each arm is a separate agent and the agents' types are independent. So, there is no positive externality across workers. The central trade-off in my paper is between retaining the option value of experimentation and motivating workers. Two other papers on multi-agent experimentation are related. De Clippel et al. (2021) and Deb et al. (2022) also study how to select the best agent to execute a task when the agents only care about being selected. They focus on different trade-offs than I do. Deb et al. (2022) look at the trade-off between retaining option value via competition and harnessing gains from collaboration, while De Clippel et al. (2021) are interested in mechanisms guaranteeing that the agents willingly disclose their private information, ensuring efficiency. My paper is complementary to theirs. It illustrates how the principal-optimal allocation rule responds to a different environment and trade-off. In particular, I show that the optimal allocation rule is an _index contest_. So, my paper also contributes to the growing literature on the design of dynamic contests pioneered by Taylor (1995) and extended by Halac et al. (2017), Benkert and Letina (2020), and Ely et al. (2021). The critical difference between my paper and the rest of the dynamic contest literature is that, in my model, the contest is centralized; i.e., the principal controls the assignment of tasks. Therefore, the set of participants at every point is endogenous and chosen by the principal. This is crucial for my application: organizations control the allocation of tasks. Therefore, the results I derive are qualitatively different from those in the rest of the dynamic contest literature, where the set of participants is exogenous. Moreover, the optimal contest in my model is a winner-take-all contest and not a prize-sharing contest as in Halac et al. (2017) or Ely et al. (2021), for example. Comparing my results to these papers can also help us understand when a more meritocratic (winner-take-all) system or a more equal (prize-sharing) system helps the principal. Finally, my paper contributes to the extensive literature on personnel economics that studies careers in organizations (see Prendergast (1999) for a survey). I consider an environment where learning about workers shapes their career trajectories and hence generates career-concern incentives (Harris and Holmstrom (1982); Holmstrom (1999)). MacDonald (1982a,b) and Gibbons and Waldman (1999) also emphasize the importance of learning and task assignments in shaping career dynamics, which Pastorino (2019) empirically documents. In these papers, tasks are equally informative, so the players' choices do not affect learning. Instead, I focus on a setting where tasks vary in the information they generate, as in Antonovics and Golan (2012), Canidio and Patrick (2019), or Madsen et al. (2022). These papers, however, focus on the distortionary effects of promotions and career concerns on risk-taking when the workers control their occupational choices. In contrast, in my paper, the principal controls the allocation of tasks. So my paper complements their findings.
In particular, I study a trade-off that arises in task allocation problems when the principal is primarily concerned with alleviating the time-inconsistency problem of promotions, as in Waldman (2003), which is absent from their papers. Since promotions both reward past effort and sort workers, a sound promotion system should do both. Moreover, the optimal way to incentivize effort may be suboptimal for selection. Indeed, as mentioned above, the optimal index contest vastly differs from the optimal dynamic contest to incentivize effort in Ely et al. (2021). I show how the principal-optimal task allocation balances incentive provision and selection. This trade-off is also absent from other papers that look at how firms assign tasks and learn, such as Pastorino et al. (2004) or Bar-Isaac and Levy (2022), in which the principal can incentivize each worker's effort separately. Finally, in all the previous papers that study learning through task allocation, the employer faces no constraints on learning: she can assign all workers to non-routine tasks simultaneously. On the contrary, I assume that non-routine tasks are scarce, reflecting, for example, that not all workers can simultaneously lead a team. So, my paper complements their work by studying how firms design careers to screen and develop workers when learning opportunities are limited. In particular, I show that the delegation process is sequential, meritocratic, and creates a significant path dependence in promotion decisions: the principal first delegates opportunities to the best workers as measured by their _index_ and immediately promotes them in case of success. So, my paper also relates to the analysis of turnover in a leadership position. This question has been studied by, among others, Mortensen and Pissarides (1994), Atkeson and Cole (2005), and Garrett and Pavan (2012). As in these papers, seniority matters for promotion decisions, and I extend this finding to a multi-agent setting. Here the main difference is my focus on the dynamic process of experimentation that leads to the promotion decision.

## 1 Model

Let \((\Omega,\bar{\mathcal{F}},\mathbb{P})\) be a probability space rich enough to accommodate all the objects defined below. A principal (she) and \(N\) workers (he/they) interact in an infinite-horizon continuous-time stochastic game. All players discount the future at a common discount rate \(r>0\).9 The principal has to decide how to delegate one non-routine task and many routine tasks among the workers to maximize her continuation value. When the non-routine task is delegated to one of the workers, the principal gets a flow reward that depends on the current type of the worker and whether the worker exerts effort. If he does, his type also evolves (stochastically). To motivate the workers to exert effort when delegated the non-routine task, the principal can allocate an indivisible prize that the workers value; i.e., she can decide to promote one of them.

### Actions

Heuristically, at each time \(t\), the principal and the workers play the "stage game" depicted in Figure 1. Within each period \([t,t+dt)\), the principal chooses whom to delegate the non-routine task to. The other workers are allocated routine tasks. When she delegates the non-routine task to worker \(i\in\{1,\ldots,N\}\), worker \(i\) then decides whether to exert effort to complete the task.
If worker \(i\) exerts effort, the principal learns about worker \(i\), gets a reward \(\pi^{i}\left(x^{i}\right)\) that depends on worker \(i\)'s current type \(x^{i}\), and worker \(i\)'s type evolves (stochastically). If he does not, the principal gets no reward and worker \(i\)'s type stays the same. The principal then decides whether to (i) continue to experiment before allocating the prize, (ii) promote one of the workers, or (iii) allocate the prize to an external worker (i.e., take her outside option \(W\)). If she chooses to continue to experiment, in the next "period", \([t+dt,t+2dt)\), the players play the same "stage game". If she chooses to allocate the prize, her only decision in the continuation game is who to delegate the non-routine task to. The workers then decide whether to exert effort to complete the task. I assume that the principal can commit at time zero to a history-contingent sequence of plays, while the workers cannot. Formally, at time \(t=0\), the principal commits to a history-dependent promotion contest comprising (i) a promotion time \(\tau\) specifying when the promotion is allocated; (ii) a promotion decision \(d\) specifying which of the workers is promoted; and (iii) a delegation rule \(\alpha\) that assigns at every instant the non-routine task to some worker. The promotion time, \(\tau\), is a \(\bar{\mathcal{F}}\)-measurable mapping from \(\Omega\) to \(\mathbb{R}\). The promotion decision is a (stochastic) process \(d=\left(d^{0}=\left\{d^{0}_{t}\right\}_{t\geq 0},\ldots,d^{N}=\left\{d^{N}_{t }\right\}_{t\geq 0}\right)\), which takes values in \(\{0,1\}^{N+1}\cap\Delta^{N+1}\), where \(\Delta^{N+1}\) is the \(N+1\)-dimensional simplex. If \(d^{i}_{\tau}=1\), worker \(i\) is promoted at time \(\tau\). \(d^{0}_{\tau}=1\) stands for the principal's decision to take her outside option. Finally, the delegation rule \(\alpha=\left(\alpha^{1}=\left\{\alpha^{1}_{t}\right\}_{t\geq 0},\ldots, \alpha^{N}=\left\{\alpha^{N}_{t}\right\}_{t\geq 0}\right)\) is a (stochastic) process which takes values in the \(N\)-dimensional simplex, \(\Delta^{N}\). \(\alpha^{i}_{t}\) is the share of the non-routine task worker \(i\) is responsible for at each instant \(t\geq 0\). The process \(t\to\alpha_{t}\) is (at least) Borel measurable \(\mathbb{P}\)-a.s..10 Footnote 10: The filtration to which \(\alpha\), \(\tau\), and \(d\) are adapted is not defined yet. This is deferred to Section 1.3, after the dynamics of the workers’ types and the information structure are introduced. Workers cannot commit. At each instant, they decide whether to exert effort when delegated (a positive share of) the non-routine task. \(a_{t}^{i}\in\{0,1\}\) denotes the effort decision of worker \(i\) at time \(t\geq 0\). The effort process generated by the decisions of worker \(i\) is \(a^{i}=\{a_{t}^{i}\}_{t\geq 0}\). \(t\to a_{t}^{i}\) is required to be Borel measurable \(\mathbb{P}\)-a.s..11 Footnote 11: As above, a more precise measurability requirement is postponed until Section 1.3. ### Workers' types Together, the choices of effort and the delegation rule determine the evolution of the workers' types. To describe the state dynamics, I follow the multi-parameter approach pioneered by Mandelbaum in discrete time in Mandelbaum (1986) and in continuous time in Mandelbaum (1987).
For all \(i\in\{1,\ldots,N\}\), let \(\mathcal{F}^{i}\coloneqq\left\{\mathcal{F}_{t}^{i}\right\}_{t\geq 0}\) be a filtration in \(\bar{\mathcal{F}}\) and \(X^{i}=\left\{X_{t}^{i}\right\}_{t\geq 0}\) be a \(\mathcal{F}^{i}\)-adapted process with values in the interval \(\mathcal{X}^{i}\subseteq\mathbb{R}\). For simplicity, assume that either \(X^{i}\) does not reach the boundary of the set \(\mathcal{X}^{i}\) or the boundary is absorbing. Define \[T^{i}(t)\coloneqq\int_{0}^{t}a_{s}^{i}\alpha_{s}^{i}ds,\,\forall\,0\leq t<\infty. \tag{1}\]

Figure 1: Heuristic “stage game”

\(T^{i}(t)\) is the amount of time worker \(i\) has worked on the non-routine task. At time \(t\), the type of worker \(i\) is \(X^{i}_{T^{i}(t)}\). So worker \(i\)'s type evolves (stochastically) only when he exerts effort. When he does not, his type is frozen. Intuitively, one can think of the evolution of the type as follows: Nature first draws a path \(X^{i}=\left\{X^{i}_{s}\right\}_{s\geq 0}\) for worker \(i\). The delegation rule and the worker's choices of effort then jointly control "the passage of time", \(T^{i}(t)\), i.e., the speed at which the worker's type moves along the path \(X^{i}\). Define the delegation process \(T=\left(T^{1}=\left\{T^{1}(t)\right\}_{t\geq 0},\ldots,T^{N}=\left\{T^{N}(t) \right\}_{t\geq 0}\right)\). The state of the game at time \(t\) is \[X_{T(t)}=\left(X^{1}_{T^{1}(t)},\ldots,X^{N}_{T^{N}(t)}\right).\] \(\left\{X_{T(t)}\right\}_{t\geq 0}\) is a multi-parameter process adapted to the multi-parameter filtration \[\mathcal{F}\coloneqq\left\{\mathcal{F}_{\bar{t}}\coloneqq\bigvee_{i=1}^{N} \mathcal{F}^{i}_{t^{i}},\quad\bar{t}=(t^{1},\ldots,t^{N})\in[0,\infty)^{N}\right\}\] defined on the orthant \([0,\infty)^{N}\). I make the following assumptions on the types' processes. **Assumption 1**.: _The filtrations \(\mathcal{F}^{i}\), \(i\in\left\{1,\ldots N\right\}\), are mutually independent and they satisfy the usual hypotheses of right-continuity and completeness.12_ Footnote 12: See, e.g., Protter (2005). Assumption 1 implies that the principal does not learn anything about the type of one worker by observing the type of another. **Assumption 2**.: _The processes \((X^{i},\mathcal{F}^{i})\), \(i=1,\ldots,N\), are Feller.13_ Footnote 13: Recall that any Feller process admits a cadlag modification. So I always assume that \(X^{i}\) is cadlag. Assumption 2 is made to guarantee that the type process has the strong Markov property: the distribution of future realizations only depends on the current value of the process. The Feller property is however stronger: it also guarantees that the expectation operator conditional on the value of the type process is continuous. This second property is not needed, but it simplifies the analysis. **Assumption 3**.: _For all \(i\in\left\{1,\ldots,N\right\}\), if \(X^{i}_{0}=x^{i}\geq\tilde{X}^{i}_{0}=\tilde{x}^{i}\), then, for all \(s\geq 0\), \(X^{i}_{s}\geq\tilde{X}^{i}_{s}\)\(\mathbb{P}\)-a.s.._ Assumption 3 states that if a worker's initial type increases, so does his type at any instant \(t\geq 0\). Because Feller processes are time-homogeneous, it also implies that, if a worker's type is higher at time \(t\) along one path than along another, so is his type at any instant \(s\geq t\).
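To fix ideas, the following discrete-time sketch (my own illustration, not part of the model's formal apparatus) simulates the time-change construction: nature draws each worker's path once, and the delegation shares and effort choices only control the speed at which each worker's clock \(T^{i}\) advances along his path. The Brownian type paths, the greedy delegation rule, and all parameter values are assumptions of the sketch.

```python
import numpy as np

# A minimal discrete-time sketch of the time-change construction:
# nature draws a full path X^i for each worker, and the delegation
# shares alpha and effort decisions a only control how fast each worker
# moves along his own path, via T^i(t) = int_0^t a^i_s alpha^i_s ds.
# The Brownian type paths and all names here are illustrative assumptions.

rng = np.random.default_rng(0)
dt, horizon, N = 0.01, 10.0, 2
steps = int(horizon / dt)

# Pre-drawn type paths (Brownian motions started at 0), one per worker.
paths = np.cumsum(np.sqrt(dt) * rng.standard_normal((N, steps)), axis=1)

def run_delegation(alpha_rule, effort_rule):
    """Advance each worker's clock T^i only when he is delegated and works."""
    T = np.zeros(N)                        # accumulated task time T^i(t)
    types = paths[:, 0].copy()             # current types X^i_{T^i(t)}
    for _ in range(steps):
        alpha = alpha_rule(types)          # delegation shares, sum to 1
        a = effort_rule(types)             # effort decisions in {0, 1}
        T += a * alpha * dt                # dT^i(t) = a^i_t alpha^i_t dt
        idx = np.minimum((T / dt).astype(int), steps - 1)
        types = paths[np.arange(N), idx]   # type read off the frozen path
    return T, types

# Example rules: full effort, and always delegate the whole task to the
# worker with the higher current type.
T, types = run_delegation(lambda x: np.eye(N)[int(np.argmax(x))],
                          lambda x: np.ones(N))
print("task time per worker:", T.round(2), "final types:", types.round(2))
```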
**Assumption 4**.: _For each \(i\in\{1,\ldots,N\}\), if \(t\to X_{t}^{i}\) is not continuous, then either (i) \(X_{t^{-}}^{i}-X_{t}^{i}>0\) at all discontinuity points \(t\in\mathbb{R}\); or (ii) \(X_{t^{-}}^{i}-X_{t}^{i}<0\) at all discontinuity points \(t\in\mathbb{R}\)._ Assumption 4 is a restriction on the jumps of the processes \(X^{i}\). The jumps must be "one-sided"; i.e., if the process \(X^{i}\) jumps up, it cannot jump down, and conversely. In particular, if \(X^{i}\) is a continuous process, Assumption 4 holds trivially. **Assumption 5**.: _For all \(i\in\{1,\ldots,N\}\) and for all \(x\in\mathcal{X}^{i}\), \(\mathbb{P}_{x}\left(\{\tau_{(x,\infty)}^{i}=0\}\right)=1\), where \(\tau_{(x,\infty)}^{i}\coloneqq\inf\left\{t\geq 0\,:\,X_{t}^{i}\in(x, \infty)\right\}\). Moreover, if \(X^{i}\) jumps down, for all \(\kappa,\epsilon>0\), there exists \(\delta>0\) such that, for all \(x\in\mathcal{X}^{i}\), \(\mathbb{E}_{x}\left[\tau_{(x-\delta,x+\delta)}\right]<\epsilon\)._ Assumption 5 states that any worker can always become more productive. It simplifies the arguments and guarantees the existence of a solution to the principal's problem. The second part of Assumption 5 strengthens the first part for the case in which \(X^{i}\) jumps down. In particular, it guarantees that the expected time \(X^{i}\) stays in any small interval is small. I relax this assumption in Section 5.1. In Appendix 1.1, I show that my framework accommodates all jump-diffusion processes that satisfy mild regularity and monotonicity conditions. In particular, it includes the commonly studied cases in which workers can be either good or bad and the principal learns whether the worker is good or bad according to a Brownian signal or a bad news Poisson signal. In these examples, worker \(i\)'s type at time \(t\), \(X_{T^{i}(t)}^{i}\), is the belief that worker \(i\) is good after he has worked for \(T^{i}(t)\) units of time on the project. ### Information and Strategies Information: The principal and the workers perfectly observe the delegation rule chosen by the principal, and the effort decisions and types of all the workers. Information is symmetric, but incomplete.14 Workers' strategies: It is well-known that perfect monitoring in continuous-time games can come with complications.15 To avoid the issue, I take a reduced-form approach. Footnote 15: Continuous time is not well-ordered, and, therefore, seemingly well-defined promotion contests and effort strategies can fail to uniquely determine the outcome of the game. For a more detailed discussion, see Simon and Stinchcombe (1989), Bergin and MacLeod (1993), or Park and Xiong (2020) for deterministic games, and Durandard (2022b) for stochastic games. **Definition 1**.: _A **dynamic delegation process** is a process_ \[T=\left\{T(t)=\left(T^{1}(t),\ldots,T^{N}(t)\right),\,t\geq 0\right\}\] _taking values in \([0,\infty)^{N}\) such that, for all \(i\in\{1,\ldots,N\}\),_ 1. \(\{T(t)\leq\bar{t}\}=\bigcap_{i=1}^{N}\left\{T^{i}(t)\leq t^{i}\right\}\in \mathcal{F}_{\bar{t}}\) _for all_ \(\bar{t}=(t^{1},\ldots,t^{N})\in[0,\infty)^{N},\,t\geq 0\)_,_ 2. \(T^{i}(\cdot)\) _is nondecreasing, right-continuous, with_ \(T^{i}(0)=0\)_,_ 3.
\(\sum_{i=1}^{N}\left(T^{i}(t)-T^{i}(u)\right)\leq t-u,\quad\forall t\geq u\geq 0\)_._ _Denote by \(\mathcal{D}\) the set of all dynamic delegation processes.16_ Footnote 16: In the theory of multi-parameter processes, \(T(t)\) is a stopping point in \([0,\infty)^{N}\) and a delegation rule \(T\) is called an optional increasing path (Walsh (1981)). It can be thought of as a multi-parameter time change. Condition 1 in Definition 1 ensures that delegation processes are adapted to the multi-parameter filtration \(\mathcal{F}\). So they are non-anticipative: they do not depend on future events. Given a dynamic delegation process \(T\in\mathcal{D}\), define the one-parameter filtration \(\mathcal{G}^{T}=\left\{\mathcal{G}^{T}_{t}\right\}_{t\geq 0}\) as follows. Let \(\nu:\Omega\to[0,\infty)^{N}\). \(\nu\) is a stopping point of \(\mathcal{F}\) if \(\{\nu\leq\bar{t}\}\in\mathcal{F}_{\bar{t}}\) for all \(\bar{t}\in[0,\infty)^{N}\). For any stopping point \(\nu\), define the sigma-field \[\mathcal{F}(\nu)\coloneqq\left\{A\in\bar{\mathcal{F}}\,:\,A\cap\{\nu\leq\bar {t}\}\in\mathcal{F}_{\bar{t}},\,\forall\bar{t}\in[0,\infty)^{N}\right\}.\] Then, for all \(0\leq t<\infty\), let \(\mathcal{G}^{T}_{t}\coloneqq\mathcal{F}(T(t))\). In the remainder of the paper, with a small abuse of notation, I will redefine promotion contests as: **Definition 2**.: _A promotion contest \((T,\tau,d)\) consists of a dynamic delegation process \(T\), a \(\mathcal{G}^{T}\)-stopping time \(\tau\), and a \(\mathcal{G}^{T}\)-optional promotion decision rule \(d\), such that \(\mathbb{P}\)-a.s._ \[T^{i}(t)\coloneqq\int_{0}^{t}a^{i}_{s}\alpha^{i}_{s}ds,\] _for all \(0\leq t\leq\tau\) and all \(i=1,\ldots,N\)._ _Denote by \(\mathcal{P}\) the set of all promotion contests._ Finally, a strategy profile is **admissible** if it uniquely defines a promotion contest after all histories \(\mathbb{P}\)-a.s.. I require that the space of strategies is such that (i) any strategy profile in which all workers change their effort decision at most once and the principal can adjust the contest upon observing such changes is included, and (ii) if a strategy profile belongs to the strategy space, then any \(h_{t}\)-"truncated" strategy profile does too. The \(h_{t}\)-"truncated" strategy profile is the strategy profile that coincides with the original profile for any history that does not contain \(h_{t}\) and such that all players play a Markov continuation strategy after history \(h_{t}\). Both conditions (i) and (ii) are richness conditions on the strategy space. They are satisfied, for example, by the space of semi-Markov strategies or the strategy space defined in Durandard (2022b). In particular, condition (i) guarantees that any promotion contest can be obtained as the outcome of an **admissible** strategy profile. By definition of admissibility, the set of continuation values at any instant \(t\geq 0\) that can be generated in the game coincides with the set of values generated by the set of promotion contests. ### Payoffs and objective At time \(t\geq 0\), when worker \(i\) is delegated a share \(\alpha_{t}^{i}\) of the project and exerts effort \(a_{t}^{i}\), the principal gets a flow reward \(\alpha_{t}^{i}a_{t}^{i}\pi^{i}\left(X_{T^{i}(t)}^{i}\right)\). Worker \(i\) incurs a flow cost \(\alpha_{t}^{i}a_{t}^{i}c^{i}\left(X_{T^{i}(t)}^{i}\right)\), proportional to the fraction of the task he is responsible for.
Upon promotion (at time \(\tau\)), worker \(i\) gets a payoff, \(g^{i}>0\), and is now compensated for working on the non-routine task: he gets a flow payoff \(\alpha_{t}^{i}a_{t}^{i}c^{i}\left(X_{T^{i}(t)}^{i}\right)\). When the principal takes the outside option, i.e., allocates the promotion to an external worker, she gets \(W>0\). I make the following assumption on the principal's flow rewards and the workers' flow costs. **Assumption 6**.: _(i) \(\pi^{i}:\mathcal{X}^{i}\rightarrow\mathbb{R}\) is upper semicontinuous, nondecreasing, nonnegative, and such that_ \[\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}\pi^{i}(X_{t}^{i})dt\mid X _{0}^{i}=x\right]<\infty\] _for all \(x\in\mathcal{X}\). (ii) \(c^{i}:\mathcal{X}^{i}\rightarrow\mathbb{R}\) is lower semicontinuous, nonincreasing, and nonnegative._ So, given a promotion contest \((T,\tau,d)\), the principal's expected payoff is \[\Pi^{M}\left(T,\tau,d;W\right)\coloneqq\mathbb{E}\left[\sum_{i=1}^{N}\int_{0}^{ \tau}e^{-rt}\pi^{i}(X_{T^{i}(t)}^{i})dT^{i}(t)+e^{-r\tau}\bar{\pi}\left(X_{T( \tau)},d_{\tau}\right)\right].\] The workers' expected payoffs are \[U^{i}\left(T,\tau,d\right)\coloneqq\mathbb{E}\left[e^{-r\tau}gd_{\tau}^{i}- \int_{0}^{\infty}e^{-rt}(1-d_{\tau}^{i}\mathbb{1}_{\{t\geq\tau\}})c^{i}\left(X_ {T^{i}(t)}^{i}\right)dT^{i}(t)\right].\] Define also the workers' continuation payoffs at time \(t\geq 0\) as \[U_{t}^{i}\left(T,\tau,d\right)\coloneqq\mathbb{E}\left[e^{-r(\tau-t)}g^{i}d_{ \tau}^{i}\mathbb{1}_{t\leq\tau}-\int_{t}^{\infty}e^{-r(s-t)}(1-d_{\tau}^{i} \mathbb{1}_{\{s\geq\tau\}})c^{i}\left(X_{T^{i}(s)}^{i}\right)dT^{i}(s)\mid \mathcal{G}_{t}^{T}\right].\] **Definition 3**.: _A promotion contest \((T,\tau,d)\) is **implementable** if there exists a promotion contest \((\alpha,\tau,d)\) such that (i) there exists a (weak) Perfect Bayesian Nash equilibrium with effort processes \(a\) in the game defined by \((\alpha,\tau,d)\) played by the workers, and (ii) such that, for all \(i\in\{1,\ldots,N\}\),_ \[T^{i}(t)=\int_{0}^{t}\alpha_{s}^{i}a_{s}^{i}ds,\quad 0\leq t\leq\tau,\,\mathbb{P} \text{-a.s..}\] _Denote by \(\mathcal{P}^{I}\) the set of all implementable promotion contests._ The principal designs the promotion contest to maximize her total expected payoff among all implementable promotion contests: \[\Pi^{M}\coloneqq\sup_{(T,\tau,d)\in\mathcal{P}^{I}}\mathbb{E}\left[\sum_{i=1}^ {N}\int_{0}^{\tau}e^{-rt}\pi^{i}(X_{T^{i}(t)}^{i})dT^{i}(t)+e^{-r\tau}\bar{\pi }\left(X_{T(\tau)},d_{\tau}\right)\right].\] (Obj) Finally, I make the following assumption. **Assumption 7**.: _For all \(i\in\{1,\ldots,N\}\), there exists \((T,\tau,d)\in\mathcal{P}^{I}\), with \(T^{i}(t)=t\) for all \(t\geq 0\), such that_ \[\mathbb{E}\left[\int_{0}^{\tau}e^{-rt}\pi^{i}\left(X_{t}^{i} \right)dt+e^{-r\tau}\left((1-d_{\tau}^{0})\int_{\tau}^{\infty}e^{-r(t-\tau)} \pi^{i}\left(X_{t}^{i}\right)dt+d_{\tau}^{0}W\right)\right]\] \[\qquad\qquad>\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}\pi^{i} \left(X_{t}^{i}\right)dt\right].\] Assumption 7 guarantees that the principal's problem when worker \(i\) is the only candidate is not trivial, i.e., she can do better than promote worker \(i\) immediately. It is not needed, but it simplifies some of the arguments by restricting the number of cases to consider. ### Discussion of the model Before moving to the analysis, I comment on several features of the model. A constrained multi-armed bandit model: As mentioned in the introduction, the model is a bandit problem with strategic arms.
At each instant, the principal chooses which arm to pull (which worker to delegate to) or takes her outside option. As in bandit models, the workers' types only evolve when they work on the project. For example, the principal learns about a worker's fixed but unknown potential. Implicit here is that the principal allocates her attention only to the worker delegated the non-routine task or that performing other jobs is not informative for the promotion. So learning is conditional on delegation. Another example corresponds to the acquisition of new skills and on-the-job learning: the workers' skills improve when responsible for the non-routine task. Moreover, I assume that the workers' types are independent. The performance of one of the workers when delegated the task is uninformative about the potential of another worker. In particular, the workers do not learn from one another. I also focus on environments in which the workers do not need to cooperate: in my model, there is no payoff externality. The workers' efforts are substitutes, and the reward the principal obtains only depends on who is in charge of the non-routine task (and not on the types of other workers). These assumptions are crucial. As in classic bandit models, very little can be said when the workers' types are correlated or when a worker's type evolves when the principal does not delegate the project to him.17 Footnote 17: One could relax the last assumption (the absence of payoff externalities), following the analysis in Nash (1980) or Eliaz et al. (2021). They prove that indexability (with Nash indices) holds in multi-armed bandit problems in which the reward from pulling an arm also depends on the states of the other arms. Nash (1980) considers the case when arms are complements, while Eliaz et al. (2021) consider both the cases when arms are substitutes and complements. Multi-parameter formulation: To describe the types' dynamics, I adopt the multi-parameter approach pioneered by Mandelbaum in Mandelbaum (1986) for the multi-armed bandit model. This is critical to guarantee that the types' processes can be defined on a fixed (exogenous) probability space. It also simplifies the analysis and allows capturing many dynamic contracting environments with one unified approach. The alternative method would be to assume that the type of each worker is defined as the solution of a stochastic differential equation with drift, diffusion, and jump coefficients equal to zero when the workers do not exert effort. However, such stochastic differential equations would be unlikely to admit strong solutions.18 By taking the multi-parameter approach, I do not need to work with multiple (endogenous) probability measures. Footnote 18: See Karatzas (1984), and the discussion in Mandelbaum (1987). The only value of promotions is strategic: Another assumption of the model is the absence of direct value in promoting someone for the principal. The flow payoff from the non-routine task is the same whether or not the worker completing it has been promoted. One can think that a given non-routine task is associated with an open position in the organization, for example, bringing a new product to the market. The principal allocates this same task whether or not she has already promoted a worker. So, the promotion has _no direct payoff effect_. It has, however, a strategic role. Workers value promotion. Hence, the principal uses the promise of a future promotion to motivate the workers.
In particular, upon promotion, the workers get an _exogenous_ prize and are compensated for their effort when working on the non-routine task.19 This is for simplicity. It reflects the idea that the organization designs the position so that the promoted worker willingly exerts effort and obtains a strictly positive rent. It guarantees that the model remains tractable and allows me to focus on the interaction between the allocation of tasks and the promotion decision. Footnote 19: In Section 5.2, I relax this assumption and allow the principal to design the prize. ## 2 Main Result Lemma 2.1 below characterizes the set of implementable promotion contests \(\mathcal{P}^{I}\). In particular, it shows that it is nonempty and, hence, that the value of the principal is finite (by Assumption 6 (i)). **Lemma 2.1**.: _A promotion contest is implementable if and only if the continuation value of each worker is nonnegative after any possible history: For all \(i\in\{1,\ldots,N\}\) and all \(t\geq 0\), \(U^{i}_{t}\geq 0\)._ Its proof is in Appendix 1.2. It follows from Lemma 2.1 that, in any implementable promotion contest, the non-routine task is allocated to the promoted worker forever once the promotion decision is made. The principal has already spent her incentive capital. The best thing she can do is then to delegate the task to the promoted worker. So, with a small abuse of notation, redefine the continuation value the principal obtains upon promotion as \[\bar{\pi}\left(X_{T(\tau)},d\right)\coloneqq d_{\tau}^{0}W+\sum_{i=1}^{N}d_{ \tau}^{i}\int_{\tau}^{\infty}e^{-rt}\pi^{i}\left(X_{T^{i}(\tau)+t}^{i}\right)dt.\] The principal's problem (Obj) is then equivalent to: \[\Pi^{M}\coloneqq\sup_{(T,\tau,d)\in\mathcal{P}}\mathbb{E}\left[\sum_{i=1}^{N} \int_{0}^{\tau}e^{-rt}\pi^{i}(X_{T^{i}(t)}^{i})dT^{i}(t)+e^{-r\tau}\bar{\pi} \left(X_{T(\tau)},d_{\tau}\right)\right],\] subject to the dynamic participation constraints: for all \(i\) and all possible histories \(h_{t}\) with \(t\leq\tau\), \[\mathbb{E}\left[e^{-r(\tau-t)}g^{i}d_{\tau}^{i}-\int_{0}^{\tau}e^{-r(s-t)}c^{ i}\left(X_{T^{i}(s)}^{i}\right)dT^{i}(s)\mid h_{t}\right]\geq 0.\] ### Benchmark A natural benchmark is when the principal does not need to incentivize the workers to exert effort (which corresponds to \(c^{i}(\cdot)=0\) for all \(i\)). The problem then reduces to the standard multi-armed bandit problem (with passive arms): \[\sup_{(T,\tau)\in\mathcal{D}\times\mathcal{T}}\mathbb{E}\left[\sum_{i=1}^{N} \int_{0}^{\tau}e^{-rt}\pi^{i}\left(X_{T^{i}(t)}^{i}\right)dT^{i}(t)+e^{-r\tau }W\right].\] (Bm) Hence, when \(c^{i}(\cdot)=0\), all promotion contests give a nonnegative continuation payoff to worker \(i\) after any possible history. Since the flow rewards the principal obtains when worker \(i\) performs the non-routine task are the same before and after promotion, promoting worker \(i\) has no direct value. It also has no strategic value when \(c^{i}(\cdot)=0\). However, it has a cost: it restricts the principal's future options. So the principal always wants to postpone the promotion. When the workers do not need to be incentivized, any rationale for promotion disappears, and it is never optimal to promote a worker. The solution of this problem is well-known. It is the index rule associated with the (classic) Gittins index. Both index rules and Gittins indices are defined now.
**Definition 4**.: _A delegation process \(T\) is called an **index rule** if, for all \(i\in\{1,\ldots,N\}\), there exists a \(\mathcal{F}^{i}\)-adapted process \(\Gamma^{i}\coloneqq\{\Gamma^{i}_{t}\}_{t\geq 0}\) such that \(T^{i}(t)\) is flat off the set_ \[\left\{t\geq 0\,:\,\underline{\Gamma}^{i}_{T^{i}(t)}=\bigvee_{j=1}^{N}\underline {\Gamma}^{j}_{T^{j}(t)}\right\}\,\mathbb{P}\text{-a.s.,}\] _where \(\underline{\Gamma}^{i}_{t}=\inf_{0\leq s\leq t}\Gamma^{i}_{s}\)._ _The process \(\Gamma^{i}\) is **worker \(i\)'s index**._ In continuous time, the existence of index rules is not obvious. It is proved (by construction) in Mandelbaum (1987), El Karoui and Karatzas (1994), or El Karoui and Karatzas (1997). For completeness, in Appendix 2.1, I reproduce the construction in El Karoui and Karatzas (1997) to obtain an index delegation rule associated with (arbitrary) indices \(\left(\Gamma^{1},\ldots,\Gamma^{N}\right)\), as I will need properties specific to this construction. **Definition 5**.: _The (classic) Gittins index \(\Gamma^{g,i}\coloneqq\left\{\Gamma^{g,i}_{t}\right\}_{t\geq 0}\) associated with worker (arm) \(i\) is defined by, for all \(t\geq 0\),_ \[r\Gamma^{g,i}_{t}\coloneqq\sup_{\tau>t}\frac{\mathbb{E}\left[\int_{t}^{\tau}e ^{-rs}\pi^{i}(X^{i}_{s})ds\mid\mathcal{F}^{i}_{t}\right]}{\mathbb{E}\left[\int _{t}^{\tau}e^{-rs}ds\mid\mathcal{F}^{i}_{t}\right]},\] (GI) _with the convention that \(\frac{0}{0}=-\infty\)._ \(\Gamma^{g,i}_{t}\) is the maximal constant price the principal is willing to pay to include worker \(i\) in the pool of candidates up to time \(t+\tau^{*}\), where \(\tau^{*}\) is the optimal stopping time in (GI). \(\Gamma^{g,i}_{t}\) captures both the payoff from exploiting arm \(i\) up to time \(t+\tau^{*}\) and the value of the information the principal obtains. **Proposition 2.2**.: _The index rule associated with the Gittins indices is optimal in the multi-armed bandit problem (with passive arms)._ Proposition 2.2 restates the well-known optimality of the Gittins index rule for the multi-armed bandit problem. It is obtained as a special case of the main Theorem 2.4 below. Its proof is in Appendix 1.4. Proposition 2.2 formally establishes that giving the prize to any of the candidates is never optimal when they do not need to be incentivized. Hence, it confirms that the value of the promotion is purely strategic in my model. When the workers do not need to be incentivized, the principal never promotes them. However, she still takes her outside option (i.e., hires externally) when she becomes too pessimistic that any of the workers is good. As a result, the principal only has to balance exploration and exploitation: delegating to a new worker to learn about him or to a worker known to be good to enjoy the higher reward obtained from the non-routine task. The Gittins index rule addresses this trade-off. To see this, suppose that every time the principal delegates to worker \(i\), she has to pay \(\underline{\Gamma}_{T^{i}(t)}^{i}\coloneqq\inf_{0\leq s\leq t}\Gamma_{T^{i}(s)} ^{i}\). By definition, it is the maximal flow price the principal would pay to delegate to worker \(i\) from time \(t\) to \(t+\tau^{*}\). So the principal is indifferent between allocating the task to worker \(i\) and stopping the game: her value from delegating to worker \(i\) is zero. Following the Gittins index rule guarantees that her continuation value at all times is zero. If she, however, were to choose a different strategy, her value would be negative.
So, given such prices, the index rule is optimal: it maximizes the profit collected by the bandit machine, as it moves up the use of the more costly arms and postpones the use of the less costly ones. Since the prices are set to be the greatest possible to ensure the principal's participation, they maximize the profit of the bandit machine among all possible prices. The index rule, therefore, maximizes total surplus and hence is optimal. This intuition was developed by Weber in his proof of indexability in Weber (1992). However, because the Gittins rule never promotes any of the workers, it is not implementable: the continuation value of each worker when delegated the non-routine task is strictly negative. Hence, the need to incentivize the workers to exert effort constrains the principal's ability to learn. So the Gittins "prices" are too high in the index contest: the principal would not delegate to the workers at these prices. In the next section, I solve the multi-armed bandit problem with strategic arms. ### The index contest The strategic index rule will play a crucial role. To define it formally, I first need to introduce the promotion thresholds, \(\bar{P}^{i}(\cdot)\)'s, and promotion times, \(\tau^{s,i}\)'s. Define \(\tau^{i}_{(\underline{x},\bar{x})}\coloneqq\inf\left\{t\geq 0\,:\,X^{i}_{t} \not\in(\underline{x},\bar{x})\right\}\). For all \(i\in\left\{1,\ldots,N\right\}\), for all \(\underline{x}\leq x\leq\bar{x}\in\mathcal{X}^{i}\), let \[U^{i}(x,\underline{x},\bar{x})\coloneqq\mathbb{E}\left[e^{-r\tau^{i}_{( \underline{x},\bar{x})}}g^{i}\mathbb{1}_{\left\{X^{i}_{\tau^{i}_{(\underline{x},\bar{x})}}\geq\bar{x}\right\}}- \int_{0}^{\tau^{i}_{(\underline{x},\bar{x})}}e^{-rt}c^{i}(X^{i}_{t})dt\mid X^{ i}_{0}=x\right].\] \(U^{i}(x,\underline{x},\bar{x})\) is \(i\)'s continuation value when his current type is \(x\) and he exerts effort until either his type exceeds \(\bar{x}\) and he is promoted, or his type falls below \(\underline{x}\) and he "quits" and gets payoff \(0\). Define then \(i\)'s promotion threshold as: \[\bar{P}^{i}(\underline{x})=\sup\Big{\{}\bar{x}\geq\underline{x}\,:\,\lim_{x \rightarrow\underline{x}}U^{i}\left(x,\underline{x},\bar{x}\right)\geq 0 \Big{\}}.\] \(\bar{P}^{i}(\underline{x})\) is the largest promotion threshold for which worker \(i\) is willing to stay in the game as long as his type does not fall below \(\underline{x}\). Moreover, \(\bar{P}^{i}(\cdot)\) is _increasing_. Define also the stopping time \(\tau^{s,i}\coloneqq\inf\left\{t\geq 0\,:\,X^{i}_{t}\geq\bar{P}^{i}\left( \underline{X}^{i}_{t}\right)\right\}\), where \(\underline{X}^{i}_{t}\coloneqq\inf\limits_{0\leq s\leq t}X^{i}_{s}\) is the running minimum of \(X^{i}\). Theorem 4.2 in Section 4.2 shows that \(\tau^{s,i}\) is the optimal promotion time when worker \(i\) is the only worker.
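To make the promotion threshold concrete, here is a crude Monte Carlo sketch (my own illustration; the Brownian type dynamics and every parameter value are assumptions): it estimates \(U^{i}(x,\underline{x},\bar{x})\) by simulation and bisects for the largest target \(\bar{x}\) that a worker starting just above \(\underline{x}\) still accepts.

```python
import numpy as np

# Monte Carlo sketch of the promotion threshold bar{P}(x_lo): the largest
# target x_hi such that a worker starting just above x_lo is still willing
# to exert effort until his type exits (x_lo, x_hi). The type follows a
# Brownian motion with drift; all parameter values are illustrative
# assumptions, not taken from the paper.

rng = np.random.default_rng(1)
r, c = 0.1, 0.5                  # discount rate, flow cost of effort
mu, sigma, dt = 0.1, 0.3, 0.02   # type dynamics and step size

def worker_value(x0, x_lo, x_hi, g, n_paths=1000, t_max=50.0):
    """Estimate U(x0, x_lo, x_hi): discounted prize minus effort cost."""
    total = 0.0
    for _ in range(n_paths):
        x, t, cost = x0, 0.0, 0.0
        while x_lo < x < x_hi and t < t_max:
            cost += np.exp(-r * t) * c * dt
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        total += (np.exp(-r * t) * g if x >= x_hi else 0.0) - cost
    return total / n_paths

def promotion_threshold(x_lo, g, eps=0.05, iters=10):
    """Bisect for the largest x_hi with worker_value(x_lo + eps, ...) >= 0."""
    lo, hi = x_lo + eps, x_lo + 5.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if worker_value(x_lo + eps, x_lo, mid, g) >= 0.0:
            lo = mid             # the worker still accepts target mid
        else:
            hi = mid
    return lo

print("bar{P}(0) with g = 10:", round(promotion_threshold(0.0, g=10.0), 2))
```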
Next, define the \(\mathcal{F}^{i}\)-adapted process \(h^{s,i}\) as \[h^{s,i}_{t}\coloneqq\pi^{i}\left(X^{i}_{t}\right)\mathbbm{1}_{\{t<\tau^{s,i} \}}+\bar{\pi}^{i}\left(X^{i}_{\tau^{s,i}}\right)\mathbbm{1}_{\{t\geq\tau^{s,i} \}},\quad t\geq 0,\] where \[\bar{\pi}^{i}\left(x\right)\coloneqq r\mathbb{E}\left[\int_{0}^{\infty}e^{-rt }\pi^{i}\left(X^{i}_{t}\right)dt\mid X^{i}_{0}=x\right].\] The **strategic index** of worker \(i\) is defined by \[\Gamma^{s,i}_{t}\coloneqq\inf\left\{W\geq 0\,:\,V^{i}(t;W)\leq W\right\},\] where \[V^{i}(t;W)\coloneqq\sup_{\tau\geq t}\mathbb{E}\left[\int_{t}^{\tau}e^{-r(s-t) }h^{s,i}_{s}ds+e^{-r(\tau-t)}W\mid\mathcal{F}^{i}_{t}\right].\] Worker \(i\)'s index is the "equitable surrender value", i.e., the smallest \(W\) such that the principal prefers to take the outside option immediately rather than to delegate to worker \(i\) for some time before making a decision (when worker \(i\) is the only worker). Moreover, observe that, by Assumption 2, the strategic index is a function of \(X^{i}_{t}\) and \(\underline{X}^{i}_{t}\) only: \(\Gamma^{s,i}_{t}=\Gamma^{s,i}\left(X^{i}_{t},\underline{X}^{i}_{t}\right)\). As in the classical bandit problem, it can be shown to be equal to \[r\Gamma^{s,i}_{t}=\sup_{\tau>t}\frac{\mathbb{E}\left[\int_{t}^{\tau}e^{-rs}h^{ s,i}_{s}ds\mid\mathcal{F}^{i}_{t}\right]}{\mathbb{E}\left[\int_{t}^{\tau}e^{- rs}ds\mid\mathcal{F}^{i}_{t}\right]}, \tag{2}\] with the convention that \(\frac{0}{0}=-\infty\). The strategic index coincides with the classical Gittins index for the modified payoff stream \(\{h^{s,i}_{s}\}_{s\geq 0}\). In particular, \(\Gamma^{s,i}_{t}\) is the maximum price the principal is willing to pay for the possibility of including the worker in the pool of candidates. Moreover, the second expression also makes it clear that the worker's _strategic index_ is similar to the classic Gittins index (GI). The difference resides in what information is optimally _acquired_. Here, the workers control both the rewards and the flow of information. Moreover, their incentives differ from the principal's. Worker \(i\)'s strategic index then takes into account the incentive provision problem. Since worker \(i\) only exerts effort if it increases his chance of promotion, the principal has to motivate him to work by promising he will eventually get the prize. However, upon promotion, collecting information has no value for the principal, as she cannot incentivize other workers to work anymore. That is captured in the definition of the process \(h^{s,i}\): after the promotion time, the flow reward is \(\mathbb{E}\left[\pi^{i}(X^{i}_{t})\mid\mathcal{F}^{i}_{\tau^{s,i}}\right]\). It is as if no new information were obtained. Contrary to the Gittins index, the strategic index ignores the information generated after the promotion decision when assessing the value of delegating to a worker. Interestingly, when the cost of providing incentives goes to zero (when \(c^{i}\to 0\)), the strategic index process converges to the Gittins index from below, pointwise \(\mathbb{P}\)-a.s.. The index delegation rule associated with the strategic index processes \(\left(\Gamma^{s,1},\ldots,\Gamma^{s,N}\right)\) is called the **strategic index rule**.
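The retirement characterization \(\Gamma^{s,i}_{t}=\inf\{W:V^{i}(t;W)\leq W\}\) suggests a direct way to compute both indices in simple discrete-time examples. The sketch below is my own toy: the two-state chain, its rewards, and the convention that promotion occurs the first time the chain hits the good state (standing in for \(\tau^{s,i}\)) are all illustrative assumptions, and effort costs are ignored.

```python
import numpy as np

# Toy computation contrasting the classical Gittins index with the
# strategic index, both via the retirement characterization
# Gamma = inf{W : V(W) <= W} used above. Two-state chain
# (0 = "good", 1 = "bad"); promotion the first time the chain hits
# state 0 stands in for tau^{s,i}. Chain, rewards, and the promotion
# rule are illustrative assumptions; effort costs are ignored.

beta = 0.95
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
reward = np.array([1.0, 0.2])

# Perpetual (never-retire) values E[sum_t beta^t reward | X_0 = x].
V_perp = np.linalg.solve(np.eye(2) - beta * P, reward)

def gittins_value(W, tol=1e-10):
    """Retirement problem on the raw stream; value in the good state."""
    V = np.full(2, W)
    while True:
        V_new = np.maximum(W, reward + beta * P @ V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new[0]
        V = V_new

def strategic_value(W):
    """Hitting the good state triggers promotion: learning stops and
    the stream is frozen at its expected perpetual value."""
    return max(W, V_perp[0])

def flow_index(value_fn, lo=0.0, hi=100.0, iters=60):
    """Bisect for inf{W : value_fn(W) <= W}; report as a flow."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if value_fn(mid) <= mid + 1e-12 else (mid, hi)
    return (1 - beta) * hi

print("Gittins index of the good state:  ", round(flow_index(gittins_value), 3))
print("strategic index of the good state:", round(flow_index(strategic_value), 3))
# Expect the strategic index to be strictly smaller here: it ignores
# the option value of the information generated after the promotion.
```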
**Definition 6**.: _The **index contest** (i) follows the **strategic index rule**, (ii) promotes the **first** worker \(i\) whose type reaches his **promotion threshold** \(\bar{P}^{i}(\underline{X}^{i}_{t})\), and (iii) takes the outside option at time \(\tau^{0}=\inf\left\{t\geq 0\,:\,\bigvee_{i=1}^{N}\underline{\Gamma}^{s,i}_{ T^{s,i}(t)}\leq W\right\}\) if no worker was promoted before._ Figure 2.2 illustrates the **index contest** with two workers. Each can be good or bad. Their types are the posterior beliefs that they are good, and the principal learns about them according to the Poisson arrival of bad news. Initially, worker 1 is better, so the principal first delegates to worker 1. However, too much bad news arrives. Therefore, she switches to worker 2. Worker 2 performs well and eventually gets the promotion. **Proposition 2.3**.: _The index contest can be **implemented** in a (weak) Perfect Bayesian equilibrium without commitment. The workers' strategies only depend on their own type \(X^{i}_{T^{i}(t)}\), the current running minimum of their type \(\underline{X}^{i}_{T^{i}(t)}\), and whether the principal promoted a worker. The principal's payoff is_ \[\mathbb{E}\left[\int_{0}^{\infty}re^{-rt}\bigvee_{i=1}^{N}\underline{\Gamma}^{s,i}_{T^{ s,i}(t)}dt\right].\] (\(\Pi^{M}\)) The proof of Proposition 2.3 is in Appendix 1.3. ### Optimality The main result is the optimality of the _index contest_: despite the agency frictions, indexability is preserved. When deciding who to delegate the non-routine task to and who to promote, the principal considers each worker separately. She delegates to and eventually promotes the best worker, as measured by the value of his associated strategic index. **Theorem 2.4**.: _The **index contest** maximizes the principal's payoff among all implementable promotion contests._ Theorem 2.4 is proved in Section 4. Its proof requires addressing two main challenges. First, implementable promotion contests have to balance the incentives of all workers. So, there is no reason a priori that the optimal contest treats them separately. For example, if the principal promotes worker 1 when his type exceeds that of worker 2 by \(\frac{1}{2}\), it creates a strategic dependence between the arms. The optimal delegation rule may not be an index rule, and the indices of workers 1 and 2 do not remain frozen when the other worker is in charge of the non-routine task. To overcome this problem, I show that the principal can treat the workers separately (when it comes to incentive provision), i.e., she chooses \(N\) different promotion rules, each incentivizing one worker to exert effort. To do so, I study a relaxed problem in which participation constraints only hold in expectation (conditional on the workers' types). The relaxed problem coincides with the setting where each worker can only see the evolution of his own type. It pulls together many information sets. In that relaxed problem, workers have to be informed when promoted to maximize the length of the experimentation phase. Otherwise, their continuation payoff would be strictly less than the value they associate with the promotion. Therefore, the principal could delegate the project a little longer before making her decision (which benefits the principal). So, the promotion time associated with each worker has to be measurable with respect to his own type.
As a result, it is without loss of optimality for the principal to choose a delegation rule and \(N\) individual promotion contracts (i.e., \(N\) individual promotion times and promotion decisions, each depending only on the type of the corresponding worker). The solution, however, need not be a solution to the original problem, as the delegation rule and individual promotion contracts may not be jointly implementable. Hence, the principal may be unable to keep the independent promises she made to distinct workers. I will come back to this after describing the relaxed problem more carefully. Second, even if each worker's promotion time and decision depend only on his own type process, the problem is still not a standard bandit problem. To use the techniques developed in the bandit literature, I rewrite the flow payoff the principal gets upon promotion as the expected payoff from delegating to the worker, conditional on the information available at the time of promotion. Each arm is then a superprocess: each arm comprises multiple possible reward processes, one for each individual promotion contract. So, the principal chooses both which arm to pull and which contract to offer. In particular, when the principal pulls an arm, the flow payoff and the information depend on the "promotion contract". There is no guarantee that indexability holds for superprocesses. However, in the Markovian setting, there exists a condition under which it does, sometimes known as the Whittle condition or condition W (Whittle (1980), Glazebrook (1982)). It says that the optimal action chosen in each state in the single-armed retirement problem is independent of the outside option \(W\). If, for some value of the outside option, it is optimal to choose an action rule, then it is also optimal to choose the same action rule for any other value of the outside option (as long as the arm is not retired). If the Whittle condition holds, the bandit problem with superprocesses is indexable. In my setting, I show that a version of condition W for general (non-Markovian) superprocesses holds in the single-worker promotion problem. At time 0, the optimal promotion contract in the single-worker problem is independent of the outside option (before the principal takes her outside option). That is, provided that the principal has not taken her outside option yet, if the worker is promoted after some history, he is also promoted after this same history when the value of the outside option is smaller. I then show that this condition is sufficient for indexability to hold. The index contest is the solution to this problem. To build some intuition for this result, consider the case of two workers. Suppose that the strategic index of worker 1 is initially higher than that of worker 2. Suppose also that the value of the principal's outside option equals worker 2's index. If worker 2 were the only worker, the principal would take the outside option immediately. The principal's problem then reduces to the problem in which she can only delegate to worker 1, promote worker 1, or take the outside option. The _index contest_ guarantees that the principal offers the optimal single-agent promotion mechanism to worker 1 (as Theorem 4.2 in Section 4.2 below shows). Eventually, either worker 1 is promoted, and the game ends, or his strategic index falls below the strategic index of worker 2 and, hence, below the value of the outside option. In that case, the principal should take her outside option.
Instead, imagine that, when it happens, the value of the outside option also falls to the level of worker 1's strategic index. In the new continuation problem, the principal never delegates to worker 1. However, she is willing to delegate to worker 2. In particular, she offers the single-player optimal promotion contract to worker 2. The _index contest_ repeatedly plays the single-player optimal promotion mechanism for the best current worker (as measured by the strategic indices) until one worker is promoted or the principal takes her outside option. Promotion happens the first time a worker's type reaches \(\bar{P}^{i}(\underline{X}^{i}_{t})\). The principal takes the outside option when there is no benefit from experimentation anymore, i.e., when \(\Gamma^{s,i}\leq W\) for all \(i\). So, at any point in time, when one worker is delegated the project, his promotion threshold is equal to the optimal threshold in the single-agent problem. As mentioned above, the index contest need not be implementable. Fortunately, it is, as Proposition 2.3 shows. Intuitively, when only one worker is allocated the task, the only promise-keeping constraint that matters is the one for the worker currently assigned the task. All other constraints are redundant and can be ignored. The above intuition suggests the following interpretation of the _index contest_. The principal approaches the workers successively. The indices' ranking determines the order in which the workers are approached. When the principal selects one worker, she offers him an _individual trial contract_. It consists of a target (the promotion threshold) and a (potentially stochastic) deadline. If the worker achieves the target before the deadline, the principal promotes him. He is then in charge of the non-routine task forever. Otherwise, the principal approaches another worker, until one succeeds or the principal becomes too pessimistic and takes her outside option. Interestingly, if the principal could reoptimize at the end of each short-term contract (when she starts delegating to a new worker in the index contest), she would choose the same continuation promotion contest. Every time a new worker gets an opportunity to prove himself, the continuation mechanism is optimal for the principal. The above interpretation of the index contest is reminiscent of promotion practices described in the strategic management literature. For example, Stumpf and London (1981) propose to evaluate the workers sequentially until the principal identifies a satisfactory one. More generally, the _index contest_ is also related to absolute merit-based promotion systems,20 in which the first worker who meets a minimum performance target gets the promotion. My results suggest that one should expect organizations to use absolute merit-based promotion systems when it is important to fill the position with the right worker. On the other hand, when motivating effort is more important, other promotion systems, such as the classic winner-take-all promotion contest of Lazear and Rosen (1981) or the cyclical egalitarian contest proposed by Ely et al. (2021), may be better and, hence, more common. Intuitively, these promotion systems are very good at incentivizing effort but less so at ensuring that the promoted worker's potential is high. The _index contest_ guarantees that the non-routine task runs smoothly after the promotion decision is made. It balances incentives provision and selection.
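The sequential trial-contract dynamics can be illustrated by simulating the two-worker bad-news environment described after Definition 6. In the sketch below (my own illustration; beliefs proxy for the strategic indices, and the threshold rule and all parameter values are assumptions), worker 1 starts with the higher belief and is tried first; worker 2 gets his trial only if worker 1's trial fails.

```python
import numpy as np

# Simulation sketch of the index contest with two workers and
# conclusive bad-news Poisson learning (the setting of Figure 2.2):
# a good worker never generates bad news, a bad one does at rate lam;
# absent news, the belief p that a worker is good drifts up,
# dp = lam * p * (1 - p) * dt, and conclusive bad news sends it to 0.
# Beliefs proxy for the strategic indices (both are monotone), and the
# threshold rule min(p_min + delta, 0.95) is an illustrative stand-in
# for bar{P}^i. All parameter values are assumptions of this sketch.

rng = np.random.default_rng(2)
lam, dt = 1.0, 0.001

def index_contest(p0, good, delta=0.25, t_max=20.0):
    p = np.array(p0, dtype=float)       # current beliefs (types)
    p_min = p.copy()                    # running minima
    t = 0.0
    while t < t_max:
        i = int(np.argmax(p))           # delegate to the best "index"
        if p[i] >= min(p_min[i] + delta, 0.95):
            return i + 1, t             # promote worker i (1-indexed)
        if not good[i] and rng.random() < lam * dt:
            p[i] = 0.0                  # conclusive bad news
        else:
            p[i] += lam * p[i] * (1 - p[i]) * dt
        p_min[i] = min(p_min[i], p[i])
        t += dt
    return None, t                      # in effect, the outside option

winner, when = index_contest(p0=[0.6, 0.5], good=[False, True])
print("promoted worker:", winner, "at time", round(when, 2))
```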
### Features of the index contest No commitment: Often, one may want to assume little commitment within an organization: most day-to-day activities are not governed by formal contracts, it is unlikely that the performance of a worker is verifiable, and so on. In my setting, the principal does not need any commitment power, as Proposition 2.3 shows. The _index contest_ is implementable even if the principal cannot commit to the delegation rule, delegation time, or promotion decision. Maybe even more interestingly, it does not require sophisticated coordinated punishments. It is implementable in a (weak) Perfect Bayesian equilibrium by "grim trigger" strategies. Moreover, each worker's strategy only depends on his current type, the running minimum of his type process, and whether the principal has promoted a worker yet. Fast track: In the **index contest**, the promotion thresholds are decreasing over time (as increasing functions of the running minimums of the workers' types). So a worker's potential upon promotion decreases with time: **Proposition 2.5** (Speed and accuracy).: _If \(\pi^{i}(\cdot)=\pi(\cdot)\) for all \(i\in\{1,\ldots,N\}\) and the processes \(X^{i}\)'s have the same law, then the promoted worker's type and the principal's continuation value upon promotion are nonincreasing over time \(\mathbb{P}\)-a.s.._ Proposition 2.5 follows from the fact that the promotion threshold is \(\mathbb{P}\)-a.s. nonincreasing over time. The proof is omitted. Pushing the interpretation beyond the model, the above proposition suggests that fast tracks21 should not be surprising. When a worker is promoted quickly, his type upon promotion is high. So, when entering a potential new contest for further promotion at the next level of the organization, he starts from a better position. In turn, it implies that his expected time to promotion is shorter and that the worker's chances to be promoted again soon are high. Footnote 21: I.e., that a quickly promoted worker often gets another promotion soon after. See Baker et al. (1994) and Ariga et al. (1999). Seniority: Finally, the decrease over time of the promotion thresholds also has an interesting implication for seniority. As time passes, it becomes easier for each worker to be promoted (conditional on his type). Proposition 2.6 formalizes this statement. **Proposition 2.6**.: _In the index contest, worker \(i\)'s promotion probability, \(\mathbb{E}\left[d^{i}\mid\mathcal{G}_{t}^{T^{s}}\right]\), and continuation value, \(U_{t}^{i}\), are nondecreasing over time conditional on \(X_{T^{i}(t)}=x\). His expected time to a promotion is nonincreasing in \(t\), conditional on \(X_{T^{i}(t)}^{i}=x\)._ Proposition 2.6 also follows immediately from the promotion threshold being nonincreasing over time \(\mathbb{P}\)-a.s.. The proof is omitted. Convex compensation structure: Learning is essential when the cost of promoting the wrong worker is high. Hence, the principal always benefits from a larger prize, as illustrated by the following proposition. **Proposition 2.7** (Value of the project).: _The principal's value increases with the value of the promotion \(g=\left(g^{1},\ldots,g^{N}\right)\)._ The proof of Proposition 2.7 is immediate: let \(\bar{g}\geq\underline{g}\); any promotion contest implementable for the prize vector \(\underline{g}\) is also implementable for \(\bar{g}\). The principal should, however, benefit from a larger prize especially when learning is paramount: a larger prize makes incentivizing experimentation easier and helps the principal make a better decision.
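The monotonicity behind this claim can be eyeballed numerically by re-running the promotion-threshold sketch given earlier with different prizes (again, purely illustrative, and noisy since it is Monte Carlo based): larger prizes support higher promotion thresholds and hence longer trials.

```python
# Reuses worker_value / promotion_threshold from the earlier sketch:
# a larger prize g supports a higher promotion threshold bar{P}(0),
# i.e., a longer experimentation phase. Values are illustrative.
for g in (5.0, 10.0, 20.0):
    print(f"g = {g:4.1f} -> bar P(0) approx {promotion_threshold(0.0, g):.2f}")
```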
Proposition 2.8 confirms this point and shows that the principal acquires more information about the promoted worker as \(g^{i}\) increases. **Proposition 2.8**.: _As \(g^{i}\) increases, the principal learns more about worker \(i\)._ Proof of Proposition 2.8.: Let \(\bar{g}^{i}\geq\underline{g}^{i}\). Observe first that the index of worker \(i\) is greater when the prize is \(\bar{g}^{i}\); therefore, the principal acquires information about worker \(i\) sooner. Moreover, in the index contest with reward \(g^{i}\in\left\{\bar{g}^{i},\underline{g}^{i}\right\}\), worker \(i\) is promoted after being responsible for the non-routine task for the time \(\tau^{i}(g^{i})=\inf\left\{t\geq 0\,:\,X_{t}^{i}\geq\bar{P}^{i}(\underline{X}_{t} ^{i};g^{i})\right\}\). Note that \(\bar{P}^{i}(\cdot;\bar{g}^{i})\geq\bar{P}^{i}(\cdot;\underline{g}^{i})\), and therefore \(\tau^{i}\left(\bar{g}^{i}\right)\geq\tau^{i}\left(\underline{g}^{i}\right)\). Putting these two observations together concludes the proof. Intuitively, when workers value the promotion more (i.e., the prize is bigger), they are willing to exert effort for an extended time. So the principal can acquire more information and make a better promotion decision. This can help explain why many organizations have a convex compensation structure (i.e., the bonuses paid upon promotion and the wage spread between positions increase when moving up in the hierarchy).22 At the top of the organization, the cost of promoting the wrong worker is potentially high. Extending the exploration phase is, therefore, valuable. A convex compensation structure achieves this. However, how to measure the value of information here is not obvious. I propose the following definition: **Definition 7**.: _The value of information in the promotion problem with \(\tilde{\pi}^{i}\left(\cdot\right)\) is higher than the value of information in the promotion problem with \(\pi^{i}\left(\cdot\right)\) if, for all \(t\geq 0\),_ \[\frac{\partial\Gamma_{t}^{s,i}(\tilde{\pi}^{i})}{\partial\tau^{s,i}}\geq\frac{ \partial\Gamma_{t}^{s,i}(\pi^{i})}{\partial\tau^{s,i}}\] \(\mathbb{P}\)_-a.s., where \(\Gamma_{t}^{s,i}(\pi)\) is the strategic index of worker \(i\) when the flow payoff the principal gets when worker \(i\) with type \(x^{i}\) leads the project is \(\pi\left(x^{i}\right)\)._ Intuitively, the above definition says that the benefit from waiting for one more instant before promoting worker \(i\) is larger for \(\tilde{\pi}^{i}\) than for \(\pi^{i}\), i.e., there is more to gain from acquiring information as the cost of mistakes increases. It captures the extent to which marginal information is actionable: whether it helps the principal make a better decision. **Proposition 2.9**.: _Let \(\bar{g}\geq\underline{g}\) and the value of information associated with \(\tilde{\pi}^{i}\) be higher than the value of information associated with \(\pi^{i}\), for all \(i\in\{1,\ldots,N\}\). Then_ \[\Pi^{M}(\bar{g},\tilde{\pi})-\Pi^{M}(\bar{g},\pi)\geq\Pi^{M}(\underline{g}, \tilde{\pi})-\Pi^{M}(\underline{g},\pi).
\tag{3}\] Proof of Proposition 2.9.: To prove (3), it is enough to show that, for all \(i\in\{1,\ldots,N\}\), \[\frac{\partial\Pi^{M}(g,\pi)}{\partial g^{i}}\text{ is increasing in }\pi\text{ for the order of Definition 7},\]
since \[\frac{\partial\underline{\Gamma}_{T^{s,i}(t)}^{s,i}}{\partial g^{i}}=\frac{ \partial\tau^{s,i}}{\partial g^{i}}\frac{\partial\underline{\Gamma}_{T^{s,i}(t)}^ {s,i}}{\partial\tau^{s,i}},\] \(\tau^{s,i}\) is independent of \(\pi^{i}\), and \(\frac{\partial\tau^{s,i}}{\partial g^{i}}\geq 0\) by Proposition 2.8. Traditionally, contest theory has suggested that the convexity of the compensation structure in organizations results from the higher return to effort at higher positions in the hierarchy. My results offer a complementary story: when the returns to selecting the right worker are high, larger bonuses let the principal experiment longer and promote a better worker. ## 3 Strategic amplification One of the initial questions I asked was whether the allocation of opportunities could exacerbate initial differences to produce significant disparities over time. Because, in the _index contest_, the principal delegates the project _sequentially_ and promotes the _first_ worker whose type reaches his promotion threshold, being delegated first is an advantage. This is especially true if, at every step of the index contest, i.e., during every trial contract, the probability that the worker leading the project reaches his target and hence gets the promotion is large. In this section, I define a class of environments that I call _reinforcing environments_, in which initial differences compound. In these environments, being delegated the project leads to a significant chance of promotion. This has two main implications. First, the timing of the first opportunity matters. A worker in charge of the non-routine task earlier is much more likely to be promoted. So, what determines the assignment of non-routine tasks early on is crucial to understanding who has a chance to be promoted. Second, initial differences lead to substantial differences during the exploration phase. To identify discrimination, conditioning on the potential of the workers upon promotion or their history of responsibilities in the organization may be a bad idea. Both depend on the endogenous delegation path. If discrimination occurs in the allocation of opportunities, it will remain undetected. The following example illustrates the logic. Two symmetric workers compete for the promotion. Their type processes \(X^{i}\) keep track of their instantaneous (nonnegative) productivity. When they work on the project, their productivity drifts up at a constant speed \(\mu\). This could reflect on-the-job learning. However, they can reach a dead end. Dead ends arrive according to a Poisson process with parameter \(\lambda\). When a dead end comes, the worker needs to devise a new strategy and restart from scratch: his type jumps to zero.
So, the type of each worker evolves according to the differential equation \(dX_{t}^{i}=\mu dt\) if he does not reach a dead end, and jumps down to zero if he does. The principal gets a flow payoff of \(X_{t}^{i}\) when he delegates the project to worker \(i\in\{1,2\}\). I assume that the workers' costs of effort are constant and equal to \(c>0\) and that both attach value \(g>0\) to the promotion. Finally, I assume the principal's outside option is small and, therefore, never taken.

Let \(\bar{t}\) be the unique solution of \[\lambda c\int_{0}^{\bar{t}}e^{-(r+\lambda)t}dt=g.\] The workers' promotion thresholds are given by \(\bar{P}^{i}(\underline{X}_{t}^{i})=\underline{X}_{t}^{i}+\mu\bar{t}\). The workers' indices can be taken to be the workers' types.23 Footnote 23: For \(i\in\{1,2\}\), \(\Gamma_{t}^{s,i}=\Gamma^{s}(X_{t}^{i},\underline{X}_{t}^{i})\) and the function \(\Gamma^{s}(\cdot,\cdot)\) is increasing in both variables. So the ranking of the indices at any instant is the same as the ranking of types when the principal plays the associated index delegation rule.

In the index contest, the first worker is promoted before the second worker even has a chance to lead the project with probability \(e^{-\lambda\bar{t}}\). Moreover, if the principal (lexicographically) prefers worker 1, i.e., she delegates to worker 1 when indifferent, then the probability that worker 2 is promoted in this environment is \((1-e^{-\lambda\bar{t}})e^{-\lambda\bar{t}}\). That is, worker 2 is promoted if and only if worker 1 does not succeed initially and worker 2 does not hit a dead end the first (and only) time he works on the non-routine task. So, when \(\bar{t}\) is either small or large, worker 2's promotion probability is close to zero. On the other hand, worker 1's promotion probability is close to one. A numerical sketch of these quantities is given after Proposition 3.1 below.

The above example is simple and clearly illustrates that the sequential nature of delegation exacerbates small differences in environments in which the workers' types (and, hence, their indices) tend to go up when they work. Under the condition below, the logic of the above example easily extends.

**Definition 8**.: _An environment \((X^{i},c^{i}(\cdot),g^{i},\pi^{i}(\cdot))_{i=1}^{N}\) is **reinforcing** if there exists \(\delta>0\)_ _such that, for all \(i\in\underset{j\in\{1,\ldots,N\}}{\operatorname{arg\,max}}\,\Gamma_{0}^{j,s}\),_ \[\mathbb{P}\left(\tau^{i}\leq\tau_{-}^{i}(X_{0}^{i})\right)>\delta,\] (RC) _where \(\tau^{i}=\inf\left\{t\geq 0\,:\,X_{t}^{i}\geq\bar{P}^{i}(X_{0}^{i})\right\}\) and \(\tau_{-}^{i}(X_{0}^{i})=\inf\left\{t\geq 0\,:\,X_{t}^{i}<\bar{P}^{i}(X_{0}^{i})\right\}\)._

**Proposition 3.1**.: _In a reinforcing environment, the probability that a worker \(i\not\in\underset{j\in\{1,\ldots,N\}}{\operatorname{arg\,max}}\,\Gamma_{0}^{j,s}\) is promoted is bounded above by_ \[(1-\delta)^{K},\] _where \(K\) is the cardinality of \(\underset{j\in\{1,\ldots,N\}}{\operatorname{arg\,max}}\,\Gamma_{0}^{j,s}\)._

Proof.: In the index contest, every worker \(k\in\underset{j\in\{1,\ldots,N\}}{\operatorname{arg\,max}}\,\Gamma_{0}^{j,s}\) will be delegated the project before worker \(i\). The probability that each such worker \(k\) succeeds upon being delegated the project is greater than \(\delta\). The result then follows.

A direct consequence of Proposition 3.1 is that if \(\delta\) is large, then the first worker gets the promotion with a considerable probability, and the other workers will not.
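To make the two-worker example concrete, the following minimal numerical sketch computes \(\bar{t}\) in closed form from the integral equation above and evaluates the promotion probabilities just derived. The parameter values are illustrative assumptions, not taken from the text.

```python
import math

# Illustrative (hypothetical) parameters; they must satisfy
# g < lambda * c / (r + lambda) for bar_t to be well defined.
r, lam, c, g = 0.05, 0.5, 1.0, 0.6

# lambda * c * (1 - exp(-(r + lam) * bar_t)) / (r + lam) = g
# => closed form for bar_t
bar_t = -math.log(1.0 - g * (r + lam) / (lam * c)) / (r + lam)

# Probability of completing one trial of length bar_t without a dead end
p_survive = math.exp(-lam * bar_t)

p1_before_2_leads = p_survive                 # worker 1 promoted before 2 ever leads
p2_promoted = (1.0 - p_survive) * p_survive   # worker 2's promotion probability

print(f"bar_t = {bar_t:.3f}")
print(f"P(worker 1 promoted before worker 2 leads) = {p1_before_2_leads:.3f}")
print(f"P(worker 2 promoted) = {p2_promoted:.3f}")
```

Varying \(g\) (and hence \(\bar{t}\)) confirms that worker 2's promotion probability \((1-e^{-\lambda\bar{t}})e^{-\lambda\bar{t}}\) vanishes when \(\bar{t}\) is either small or large, as claimed above.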
Moreover, in large promotion contests with two different groups, each composed of initially identical workers, workers from the disadvantaged group face long odds when it comes to promotions. When the pool of candidates for promotion is large, any slight initial disadvantage is disqualifying. The logic here is reminiscent of Cornell and Welch (1996). My findings can help understand some of the mechanisms behind the "promotion gaps" documented in the literature (see, for example, Bronson and Thoursie (2021), Benson et al. (2021) and Hospido et al. (2022)). This is especially important as wage growth is known to be closely related to job mobility, especially within firms (see Baker and Holmstrom (1995), Lazear and Shaw (2007), or Waldman (2013) and the references therein). The main point is that understanding and addressing the roots and causes of the different allocations of opportunities is crucial.

## 4 Proof of Theorem 2.4

The proof of Theorem 2.4 is divided into the five following steps.

* In Section 4.1, I relax the problem: in particular, each worker's dynamic participation constraint must only hold in expectation (conditional on the worker's own type), but not necessarily after all possible histories.
* Section 4.2 solves the problem with only one worker. Its solution is given in Theorem 4.2. The argument adapts the logic of the proof of Theorem 1 in McClellan (2017) to our setting.24 Footnote 24: See also Harris and Holmstrom (1982), Thomas and Worrall (1988), or Grochulski and Zhang (2011) for similar ideas.
* In Section 4.3, I show that it is without loss of optimality to focus on promotion contests such that at most one worker is promoted and such that the promotion time of worker \(i\) is a \(\mathcal{F}^{i}\)-stopping time.
* Next, in Section 4.4, I derive an upper bound on the payoff the principal can get in any promotion contest that gives a nonnegative continuation value to all workers at all times, using the results from the three previous steps. Proposition 4.5 establishes that the principal's payoff in any implementable promotion contest is at most \((\Pi^{M})\).
* Section 4.5 verifies that the **index contest** achieves the upper bound, hence proving Theorem 2.4. This follows from Proposition 2.3.

### The Relaxed Program

The principal solves the following optimization program: \[\Pi^{M}\coloneqq\sup_{(T,\tau,d)\in\mathcal{P}}\mathbb{E}\left[\sum_{i=1}^{N}\int_{0}^{\tau}e^{-rt}\pi^{i}(X^{i}_{T^{i}(t)})dT^{i}(t)+e^{-r\tau}\bar{\pi}\left(X_{T(\tau)},d_{\tau}\right)\right],\] (Obj) subject to the dynamic participation constraints: for all \(i\) and all possible histories \(h_{t}\) with \(t\leq\tau\), \[\mathbb{E}\left[e^{-r(\tau-t)}g^{i}d^{i}_{\tau}-\int_{t}^{\tau}e^{-r(s-t)}c^{i}\left(X^{i}_{T^{i}(s)}\right)dT^{i}(s)\mid h_{t}\right]\geq 0.\]

As a first step in the proof, I consider the relaxed problem in which the principal can randomize over possible stopping points. To introduce it formally, I need to define a number of new objects.
For a filtration \(\mathcal{H}=\left\{\mathcal{H}_{t}\right\}_{t\geq 0}\), define the set of \(\mathcal{H}\)-**randomized stopping times** as \[\mathcal{S}\left(\mathcal{H}\right)\coloneqq\left\{S\in\mathcal{N}_{0}^{\infty}(\mathcal{H})\,:\,dS\in\mathcal{M}_{+}^{\infty}(\mathcal{H}),\,S_{0^{-}}=0,\,S_{\infty}\leq 1\right\}.\] \(\mathcal{N}_{0}^{\infty}\left(\mathcal{H}\right)\) is the set of \(\mathcal{H}\)-adapted processes with values in \([0,\infty)\) such that \(n\in\mathcal{N}_{0}^{\infty}\left(\mathcal{H}\right)\) if \(n\) has nondecreasing paths \(\mathbb{P}\)-a.s.. \(\mathcal{M}_{+}^{\infty}\left(\mathcal{H}\right)\) is the set of \(\mathcal{H}\)-optional random measures. Observe that any randomized stopping time is equivalent to a \(\mathcal{F}_{t}\otimes\mathcal{B}([0,1])\)-stopping time defined on the enlarged filtered probability space \((\Omega\times[0,1],\mathcal{H}\times\mathcal{B}([0,1]),\{\mathcal{H}_{t}\times\mathcal{B}([0,1])\}_{t\geq 0},\mathbb{P}\otimes\lambda)\), where \(\lambda\) is the Lebesgue measure on \([0,1]\).25 Finally, let \(\mathcal{C}\) be the set of \(\bar{\mathcal{F}}\)-measurable promotion rules: Footnote 25: See, for example, Camboni and Durandard (2022). \[\mathcal{C}\coloneqq\left\{d\,:\,\text{for all $t\geq 0$, $d_{t}$ is $\bar{\mathcal{F}}$-measurable and }\sum_{i=0}^{N}d_{t}^{i}=1\,\,\mathbb{P}\text{-a.s.}\right\},\] and \(\mathcal{C}^{*}\) be the set of nondecreasing promotion rules: \[\mathcal{C}^{*}\coloneqq\left\{d\in\mathcal{C}\,:\,d^{i}\text{'s paths are cadlag and nondecreasing $\mathbb{P}$-a.s. for }i=1,\ldots,N\right\}.\]

The set of **randomized promotion contests** consists of all the promotion contests such that the promotion time \(\tau\) is a randomized stopping time, \(\tau\in\mathcal{S}(\mathcal{G}^{T})\), and the decision rule \(d\) belongs to \(\mathcal{C}^{*}\). It is denoted by \(\mathcal{P}^{r}\). Consider then the relaxed program: \[\Pi\coloneqq\sup_{(T,\tau,d)\in\mathcal{P}^{r}}\mathbb{E}\Bigg{[}\sum_{i=1}^{N}\int_{0}^{\tau}e^{-rt}\pi^{i}\left(X_{T^{i}(t)}^{i}\right)dT^{i}(t)+e^{-r\tau}\bar{\pi}\left(X_{T(\tau)},d\right)\Bigg{]}\] (RP) subject to, for all \(i\in\{1,\ldots,N\}\), for all \(t\geq 0\), \(\mathbb{P}\)-a.s., \[\mathbb{E}\left[e^{-r(\tau-\tau\wedge t)}g^{i}d_{\tau}^{i}-\int_{\tau\wedge t}^{\tau}e^{-r(s-\tau\wedge t)}c^{i}\left(X_{T^{i}(s)}^{i}\right)dT^{i}(s)\mid\mathcal{F}_{T^{i}(t)}^{i}\right]\geq 0.\] (DPC)

**Proposition 4.1**.: _The value of (Obj) is weakly lower than the value of (RP): \(\Pi^{M}\leq\Pi\)._

Proposition 4.1 shows that the value of program (RP) is an upper bound on the principal's payoff for any implementable promotion contest. If an implementable promotion contest achieves this upper bound, it is the principal's preferred one. (RP) relaxes (Obj) in three ways. First, it replaces the feasibility set \(\mathcal{P}^{I}\) by the set of all randomized promotion contests. This will allow us to prove compactness. Secondly, it only requires that the workers have nonnegative continuation payoffs at all times \(\mathbb{P}\)-a.s. (and not necessarily after all possible histories). Thirdly, it pulls together all the \(\mathcal{G}_{t}^{T}\) information sets that are not \(\mathcal{F}_{T^{i}(t)}^{i}\)-measurable, hence relaxing the constraints the principal faces. Its proof is in Appendix 1.5. The remainder of Section 4 is dedicated to the proof that the index contest achieves the optimum in (RP).
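Before proceeding, a minimal simulation sketch can make the notion of a randomized stopping time concrete (this is our own illustration under the definitions above, not from the paper): a randomized stopping time \(S\) is realized by stopping the first time the nondecreasing process \(S\) crosses an independent uniform draw, which is exactly the equivalence with a stopping time on the enlarged space \((\Omega\times[0,1],\ldots,\mathbb{P}\otimes\lambda)\) noted above.

```python
import random

def realized_stop(S_path, u):
    """Return the first index t with S_t >= u, or None if the path never stops.

    S_path: a nondecreasing sequence with values in [0, 1] (S_infty <= 1),
            sampled on a discrete time grid; u: an independent Uniform[0,1] draw.
    """
    for t, s in enumerate(S_path):
        if s >= u:
            return t
    return None  # happens with probability 1 - S_infty

# A randomized stopping time that stops at t=0 with probability 0.3,
# at t=2 with probability 0.4, and never stops with probability 0.3:
S = [0.3, 0.3, 0.7, 0.7]
draws = [realized_stop(S, random.random()) for _ in range(100000)]
print({t: draws.count(t) / len(draws) for t in (0, 2, None)})
```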
### The \(1^{\frac{1}{2}}\)-arm case

As in the classic bandit framework, the solution builds on the one-arm problem. When there is only one worker, say worker \(i\), the relaxed problem (RP) introduced above becomes \[\Pi^{i}\coloneqq\sup_{(\tau,d^{i})\in\mathcal{P}^{r,i}}\mathbb{E}\Bigg{[}\int_{0}^{\tau}e^{-rt}\pi^{i}\left(X_{t}^{i}\right)dt+e^{-r\tau}\left(d_{\tau}^{i}\int_{\tau}^{\infty}e^{-r(t-\tau)}\pi^{i}\left(X_{t}^{i}\right)dt+(1-d_{\tau}^{i})W\right)\Bigg{]}\] (RP\({}^{i}\)) subject to, for all \(t\geq 0\), \(\mathbb{P}\)-a.s., \[\mathbb{E}\left[e^{-r(\tau-\tau\wedge t)}g^{i}d_{\tau}^{i}-\int_{\tau\wedge t}^{\tau}e^{-r(s-\tau\wedge t)}c^{i}\left(X_{s}^{i}\right)ds\mid\mathcal{F}_{t}^{i}\right]\geq 0.\] (DPC\({}^{i}\)) \(\mathcal{P}^{r,i}\) is the set of all pairs \((\tau,d^{i})\) such that \(\tau\) is a (randomized) \(\mathcal{F}^{i}\)-stopping time and \(d^{i}\) is a \(\mathcal{F}^{i}\)-optional decision rule in \(\mathcal{C}^{*}\). Define also \(\mathcal{P}^{I,r,i}\): the set of all pairs \((\tau,d^{i})\in\mathcal{P}^{r,i}\) that satisfy the constraints (DPC\({}^{i}\)).

Recall that \[U^{i}(x,\underline{x},\bar{x})\coloneqq\mathbb{E}\left[e^{-r\tau}g^{i}d_{\tau}^{i}-\int_{0}^{\tau}e^{-rt}c^{i}\left(X_{t}^{i}\right)dt\mid x\right]\] is the continuation value of the worker with \(X_{0}=x\), \(\tau=\inf\left\{t\geq 0\,:\,X_{t}^{i}\not\in(\underline{x},\bar{x})\right\}\) and \(d_{\tau}^{i}=\mathbb{1}_{\{X_{\tau}^{i}\geq\bar{x}\}}\). Define then \[p^{i}(P)\coloneqq\inf\left\{x\in\mathcal{X}^{i}\,:\,\sup_{p\in\mathcal{X}^{i}}U^{i}(P,p,x)>0\right\}.\] \(p^{i}(P)\) is the smallest value of \(x\in\mathcal{X}^{i}\) at which the worker is willing to keep working if he is promoted only when his type exceeds \(P\). Recall also that worker \(i\)'s promotion threshold is given by the (nondecreasing) function \(\bar{P}^{i}\): \[\bar{P}^{i}(\underline{x})=\sup\Big{\{}\bar{x}\geq\underline{x}\,:\,\lim_{x\rightarrow\underline{x}}U^{i}\left(x,\underline{x},\bar{x}\right)\geq 0\Big{\}}.\] Finally, define \(\underline{p}^{i}(W)\) as the unique solution of \[\Gamma^{s,i}\left(\underline{p}^{i},\underline{p}^{i}\right)=W.\]

Theorem 4.2 characterizes the solution of the single worker promotion contest (RP\({}^{i}\)).

**Theorem 4.2**.: _The promotion contest_ \[\tau\coloneqq\inf\left\{t\geq 0\,:\,X_{t}^{i}\not\in\left[\underline{p}^{i}(W),\bar{P}^{i}\left(\underline{X}_{t}^{i}\right)\right)\right\}\text{ and }d_{\tau}^{i}\coloneqq\mathbb{1}_{\{X_{\tau}^{i}\geq\bar{P}^{i}\left(\underline{X}_{\tau}^{i}\right)\}}\] _is optimal in the single worker problem_ (RP\({}^{i}\)).

Theorem 4.2 states that it is optimal to delegate to worker \(i\) until his type either (i) reaches the promotion threshold \(\bar{P}^{i}(\underline{X}_{t}^{i})\), or (ii) the principal becomes too pessimistic about him. To understand why that is, recall that the flow reward (conditional on worker \(i\)'s type) the principal obtains when worker \(i\) operates the project is the same before and after promotion. So the principal always wants to postpone her decision, as she gets more information about the worker at no cost if she waits. Since the worker's type is strongly Markovian, a likely candidate for the promotion time is the first hitting time of a threshold as high as possible. In particular, if the cost of effort is zero, the principal promotes the worker when his type reaches the upper boundary of \(\mathcal{X}^{i}\). However, when effort is costly, this threshold is too high.
So, the principal chooses the highest threshold for which the worker is willing to exert effort instead. If the agent's type increases, the promotion threshold stays constant: the principal needs to keep her promises. On the other hand, when the worker's type decreases, the worker becomes more pessimistic about his promotion chances. The principal then has to lower the promotion threshold to motivate the worker. The logic is the same as in McClellan (2017): the promotion threshold becomes laxer when the participation constraint binds.26 Because of the monotonicity of the problem, this constraint binds precisely when the worker's type decreases. Footnote 26: See also Harris and Holmstrom (1982), Thomas and Worrall (1988), or Grochulski and Zhang (2011).

Formally, the proof of Theorem 4.2 is based on the idea of the proof of Theorem 1 in McClellan (2017). It follows from the five steps below:

* First, consider a relaxation of problem (\(\mathrm{RP}^{i}\)) for which the constraint (\(\mathrm{DPC}^{i}\)) only needs to hold on a finite set of (stopping) times.
* Lemma 1.5 derives the Lagrangian associated with the relaxed problem as an application of Theorem 1 in Balzer and Janssen (2002).
* In the third step, useful properties of the solution of the relaxed problem introduced in step 1 are established.
* The fourth step identifies a promotion contest that guarantees the principal a payoff of at least the value of the relaxed problem introduced in the first step. It is enough to focus on promotion contests that promote worker \(i\) after good performances (as \(X^{i}\) crosses an upper threshold from below) and take the outside option after bad outcomes (when \(X^{i}\) crosses a lower threshold from above).
* Putting everything together and letting the set of times at which (\(\mathrm{DPC}^{i}\)) holds grow dense yields Theorem 4.2.

Steps 1, 2, and 5 are essentially the same as in the proof of Theorem 1 in McClellan (2017). Steps 3 and 4 are new and specific to our setting. The details are in Appendix 1.6.1. Supporting Lemmas are in Appendix 1.6.2.

**Corollary 4.3**.: _Let \((\tau,d^{i})\) be feasible in the single worker problem (\(\mathrm{RP}^{i}\)). Then, for all \(\bar{W}\geq W\),_ \[\mathbb{E}\left[\int_{0}^{\tau}e^{-rt}\pi^{i}\left(X^{i}_{t}\right)dt+e^{-r\tau}\left(d^{i}_{\tau}\bar{\pi}^{i}\left(X^{i}_{\tau}\right)+(1-d^{i}_{\tau})\bar{W}\right)\right]\] \[\leq\mathbb{E}\left[\int_{0}^{\tau^{s,i}\wedge\tau^{i}(p^{i}(\bar{W}))}e^{-rt}\pi^{i}\left(X^{i}\right)dt+e^{-r\tau^{s,i}\wedge\tau^{i}(p^{i}(\bar{W}))}\left(\bar{\pi}^{i}\left(X^{i}_{\tau^{s,i}}\right)\mathbb{1}_{\{\tau^{s,i}<\tau^{i}(p^{i}(\bar{W}))\}}+\bar{W}\mathbb{1}_{\{\tau^{s,i}\geq\tau^{i}(p^{i}(\bar{W}))\}}\right)\right].\]

Proof.: Observe that the set \(\mathcal{P}^{I,r,i}\) is independent of \(\bar{W}\) and that Assumption 7 is satisfied for any \(\bar{W}\geq W\). The result follows from Theorem 4.2.

### Measurable stopping

The main result of this section shows that it is enough to focus on a subset of the implementable promotion contests such that the decision to promote worker \(i\) does not depend on the type of the other workers.

**Proposition 4.4**.: _The supremum in (RP) is achieved by a (randomized) promotion contest \((T,\tau,d)\).
Moreover, \(\tau=\left(\bigwedge_{i=1}^{N}\tau^{i}\right)\wedge\tau^{0}\), where \(\tau^{i}\) is a \(\mathcal{F}^{i}\)-stopping time, \(\tau^{0}\) is a \(\mathcal{G}^{T}\)-randomized stopping time, and \(d_{\tau}^{i}=1\) only if \(\tau^{i}\leq\tau=\left(\bigwedge_{i=1}^{N}\tau^{i}\right)\wedge\tau^{0}\)._

Proposition 4.4 has two parts. The first part states that the supremum in (RP) is achieved by a promotion contest. It follows from Theorem 1.17 in Appendix 1.7.1. The second part characterizes the promotion time \(\tau\). It is the minimum of \(N\) \(\mathcal{F}^{i}\)-stopping times, the \(\tau^{i}\)'s, and one \(\mathcal{G}^{T}\)-randomized stopping time \(\tau^{0}\). It follows from Corollary 1.24 in Appendix 1.7.3.

### An upper bound on the value of (RP)

Proposition 4.5 derives an upper bound on the principal's payoff in any implementable promotion contest.

**Proposition 4.5**.: _The value of (RP) is bounded above by_ \[\mathbb{E}\left[\int_{0}^{\infty}re^{-rt}\bigvee_{i=1}^{N}\underline{\Gamma}_{T^{s,i}(t)}^{s,i}dt\right].\] (\(\Pi^{M}\))

To build some intuition, it is useful to go back to the proof of indexability for superprocesses.27 Start with \(N\) independent payoff processes \(\tilde{\pi}_{t}^{i}\), one for each superprocess. To each of these payoff processes, associate the index process \(\tilde{\Gamma}_{t}^{i}\) defined as the "equitable surrender value", i.e. the smallest \(W\) such that Footnote 27: See Chapter 4 in Gittins et al. (2011) or Durandard (2022a), for example. \[W=\tilde{V}_{t}^{i}(W)\coloneqq\sup_{\tau\geq t}\mathbb{E}\left[\int_{t}^{\tau}e^{-r(s-t)}\tilde{\pi}_{s}^{i}ds+e^{-r(\tau-t)}W\mid\mathcal{F}_{t}^{i}\right].\] This index process has the desirable property that, for all \(W\),28 Footnote 28: See Proposition 3.2 in El Karoui and Karatzas (1994). \[\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}r\underline{\tilde{\Gamma}}_{t}^{i}\lor Wdt\right]=\sup_{\tau}\mathbb{E}\left[\int_{0}^{\tau}e^{-rt}\tilde{\pi}_{t}^{i}dt+e^{-r\tau}W\right],\] where \(\underline{\tilde{\Gamma}}_{t}^{i}\) is the lower envelope of \(\tilde{\Gamma}_{t}^{i}\). Whittle's condition guarantees that, within each superprocess, one of the payoff processes is such that its associated index process \(\tilde{\Gamma}^{i,*}\) dominates the index processes associated with all other payoff processes of this superprocess, i.e., for all \(W\), \[\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}r\tilde{\underline{\Gamma}}_{t}^{i,*}\lor Wdt\right]\geq\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}r\tilde{\underline{\Gamma}}_{t}^{i}\lor Wdt\right].\] It is then possible to show that the optimal policy picks the dominating process for each superprocess and pulls the arm whose index is the highest at each instant \(t\).

Here, start with an implementable promotion contest \((T,\tau,d)\in\mathcal{P}^{I}\) and find \(N\) single-arm implementable promotion contests \((\tau^{i},d^{i})\in\mathcal{P}^{I,i}\). These \(N\) single-arm implementable promotion contests generate \(N\) payoff processes for the principal: \[h_{t}^{i}\coloneqq\pi^{i}(X_{t}^{i})\mathds{1}_{\{t<\tau^{i}\}}+r\bar{\pi}^{i}\left(X_{\tau^{i}}^{i}\right)\mathds{1}_{\{t\geq\tau^{i}\}}.\] By the results of Section 4.3, each \(h^{i}\) can be chosen to be \(\mathcal{F}^{i}\)-adapted. As in the proof of indexability for superprocesses, one would like to associate to each of these payoff processes an _index process_ \(\Gamma_{t}^{i}\).
However, the index process cannot be the "equitable surrender value" in the retirement problem: \[\bar{V}_{t}^{i}(W)\coloneqq\sup_{\tau\geq t}\mathbb{E}\left[\int_{t}^{\tau}e^{-r(s-t)}h_{s}^{i}ds+e^{-r(\tau-t)}W\mid\mathcal{F}_{t}^{i}\right].\] Intuitively, this would allow the principal to break her promises and take the outside option while the worker's promised continuation utility is _strictly_ positive. Hence, the retirement problem above does not take into account the worker's participation constraint. To overcome this issue, consider instead the optimal retirement problem in which the principal can take the outside option only on a set of decision times at which the continuation value of the worker is zero: \[\tilde{V}^{i}\left(t,W;\tau^{i},d^{i}\right)\coloneqq\sup_{\rho\in\mathcal{T}^{s}(t;\tau^{i},d^{i})}\mathbb{E}\left[\int_{t}^{\rho}e^{-r(s-t)}h_{s}^{i}ds+e^{-r\rho}W\mid\mathcal{F}_{t}^{i}\right];\] where \[\mathcal{T}^{s}(t;\tau^{i},d^{i})=\left\{s\geq t\,:\,U_{s}^{i}(\tau^{i},d^{i})=0\right\},\] and \(U_{s}^{i}(\tau^{i},d^{i})\) is worker \(i\)'s continuation value at time \(s\) for the single-arm promotion contest \((\tau^{i},d^{i})\in\mathcal{P}^{I,i}\). The index process is then the "equitable surrender value" in this alternative retirement problem. One can therefore think of the problem as a multi-armed bandit problem in which the completion time of each task is the random duration between two times at which the worker's continuation value is zero. Finally, Corollary 4.3 guarantees that each index process is dominated (in the sense of Whittle) by the strategic index process. The conclusion then follows from the same arguments as in the nonstrategic case. The proof of Proposition 4.5 is in Appendix 1.8.1. The derivation of the index processes associated with our alternative retirement problem is in Appendix 1.8.2.

### Proof of Theorem 2.4

By Proposition 4.5, any implementable promotion contest gives a payoff weakly smaller than \[\mathbb{E}\left[\int_{0}^{\infty}re^{-rt}\bigvee_{i=1}^{N}\underline{\Gamma}_{T^{s,i}(t)}^{s,i}dt\right].\] By Proposition 2.3, the principal obtains an expected payoff of \[\mathbb{E}\left[\int_{0}^{\infty}re^{-rt}\bigvee_{i=1}^{N}\underline{\Gamma}_{T^{s,i}(t)}^{s,i}dt\right]\] in the index contest. Thus the index contest is optimal.

## 5 Extensions

In this section, I discuss multiple extensions.

### Relaxing Assumptions 5 and 7

Assumptions 5 and 7 simplify the analysis but rule out potentially interesting settings. In particular, Assumption 5 excludes Poisson learning with good news, a case that has received a lot of attention in the economic literature, while Assumption 7 excludes problems in which the principal has no outside option, i.e., in which the position has to be filled internally. However, both can be relaxed, as the Corollaries below establish. Interestingly, both Corollaries rely on the continuity of the principal's value. Corollary 5.1 uses the fact that the value is continuous in the payoff from the outside option, \(W\), while Corollary 5.2 uses the fact that the value is continuous in the process \(X^{i}\) (in the appropriate topology). Their proofs are in Online Appendix 3.1.

**Corollary 5.1**.: _The index contest is still optimal when Assumption 7 does not hold.
The principal never takes the outside option._

Corollary 5.2 replaces Assumption 5 with the following assumption:

**Assumption 8**.: _For all \(i\in\{1,\ldots,N\}\), there exists a sequence \(\left(X^{i,n}\right)_{n\in\mathbb{N}}\) such that (i) \(X^{i,n}\) satisfies Assumption 5, (ii) \(X^{i,n}-X^{i}\) is \(\mathcal{F}^{i}\)-adapted, and (iii) \(X^{i}=\underset{n\to\infty}{\lim}X^{i,n}\) uniformly on compact sets \(\mathbb{P}\)-a.s.._

Hence, any process satisfying Assumptions 2, 3, and 4, but not 5, can be approximated by a sequence of processes \(X^{i,n}\) that satisfy Assumption 5. Assumption 8 simply guarantees that this sequence is \(\mathcal{F}^{i}\)-adapted. In particular, if, for all \(i\), the probability space \(\left(\Omega,\bar{\mathcal{F}},\mathbb{P}\right)\) contains a \(\mathcal{F}^{i}\)-Brownian motion, Assumption 8 is satisfied. Define \[\tau^{0}\coloneqq\inf\left\{t\geq 0\,:\,\Gamma^{s,i}_{T^{i}(t)}\leq W\,\,\,\text{for all }i\right\},\] and \[\tau^{i}\coloneqq\inf\{t\geq 0\,:\,X^{i}_{T^{i}(t)}>\bar{P}^{i}\left(\underline{X}^{i}_{T^{i}(t)}\right)\}\wedge\tau^{p,i},\] where \(\tau^{p,i}\) is the first tick of a Poisson clock that runs only on \(\{X^{i}_{t}=\bar{P}^{i}(\underline{X}^{i}_{t})\}\) and whose intensity is chosen to leave \(i\) indifferent between exerting effort or not when promoted at time \(\tau^{i}\).

**Corollary 5.2**.: _Suppose that Assumption 5 is replaced with Assumption 8 and that the \(\pi^{i}\)'s are continuous. Then the index contest associated with the strategic indices \(\Gamma^{s,i}\) and the promotion time \(\tau^{*}=\tau^{0}\wedge\bigwedge_{i=1}^{N}\tau^{i}\) is optimal._

Interestingly, Corollary 5.2 shows that when \(\bar{P}^{i}(\underline{X}^{i}_{t})=\underline{X}^{i}_{t}=X^{i}_{t}\), the strategic index associated with worker \(i\) is equal to the expected value of promoting \(i\) immediately: information has no value. This is the case, for example, if a worker can be good or bad, the principal learns about worker \(i\) through the Poisson arrival of good news, and the probability that worker \(i\) is good at time \(t\) (hence his type \(X^{i}_{T^{i}(t)}\coloneqq\mathbb{P}\left(\{i\text{ is good }\}\mid\mathcal{F}^{i}_{T^{i}(t)}\right)\)) is too low.

### Prize design

The main result of this section establishes that when the principal can design the prize, the index contest is still optimal, i.e., the principal prefers to allocate the entire prize to one worker only. Moreover, there is no value in giving multiple "smaller" promotions to a worker. The model is identical to the one presented in Section 1, except for the following two differences: (i) the prize is divisible, and (ii) the principal chooses (potentially) multiple times at which to promote workers. Formally, at time \(t=0\), the principal commits to a history-dependent promotion contest comprising (i) a set of promotion times \(\left\{\tau_{k}\right\}_{k=1}^{K}\) (with \(K\in\mathbb{N}\cup\{\infty\}\)) specifying when a fraction of the prize is allocated; (ii) a promotion decision \(d\) specifying which of the workers is promoted; and (iii) a delegation rule \(\alpha\) that assigns at every instant the non-routine task to some worker. The promotion times, the \(\tau_{k}\)'s, are \(\mathcal{G}^{T}\)-stopping times such that \(\tau_{0}=0\) and \(\tau_{k}<\tau_{k+1}\) \(\mathbb{P}\)-a.s.. The promotion decision is a \(\mathcal{G}^{T}\)-adapted (stochastic) process \(d=\left(d^{0}=\left\{d^{0}_{t}\right\}_{t\geq 0},\ldots,d^{N}=\left\{d^{N}_{t}\right\}_{t\geq 0}\right)\in\mathcal{C}^{*}\).
Again, \(d^{0}\) stands for the principal's decision to take her outside option. Finally, the delegation rule \(T=\left(T^{1}=\left\{T^{1}(t)\right\}_{t\geq 0},\ldots,T^{N}=\left\{T^{N}(t)\right\}_{t\geq 0}\right)\in\mathcal{D}\) is a delegation process. The workers only decide whether to exert effort \(a^{i}_{t}\in\left\{0,1\right\}\) when they are delegated the non-routine task. Finally, the following additional assumption is maintained in this section.

**Assumption 9**.: _(i) For all \(i\in\left\{1,\ldots,N\right\}\), the process \(\left\{\pi^{i}\left(X^{i}_{s}\right)\right\}_{s\geq 0}\) is a submartingale._ _(ii) For all \(i\in\left\{1,\ldots,N\right\}\), the cost of effort is constant: \(c^{i}\left(\cdot\right)\coloneqq c^{i}\)._

Assumption 9 (i) guarantees that, upon promotion, there is no penalty for the principal from delegating the full project to the promoted worker. Assumption 9 (ii) simplifies the argument. So, given a promotion contest \(\left(T,\left\{\tau_{k}\right\}_{k=1}^{K},d\right)\), the principal's expected payoff is \[\Pi^{M}\left(T,\left\{\tau_{k}\right\}_{k=1}^{K},d;W\right)\coloneqq\mathbb{E}\left[\sum_{k=1}^{K}\left(\sum_{i=1}^{N}\int_{\tau_{k-1}}^{\tau_{k}}e^{-rt}\pi^{i}(X^{i}_{T^{i}(t)})dT^{i}(t)+e^{-r\tau_{k}}\bar{\pi}\left(X_{T(\tau_{k})},d_{\tau_{k}}\right)\right)\right],\] where \[\bar{\pi}\left(x,d\right)\coloneqq d^{0}W+\sum_{i=1}^{N}\mathbb{E}\left[\int_{0}^{\infty}e^{-rt}\pi^{i}\left(X^{i}_{d^{i}t}\right)d\left(d^{i}t\right)\mid X^{i}_{0}=x^{i}\right].\] The workers' expected payoffs are \[U^{i}\left(T,\left\{\tau_{k}\right\}_{k=1}^{K},d\right)\coloneqq\mathbb{E}\left[\sum_{k=1}^{K}e^{-r\tau_{k}}gd_{\tau_{k}}^{i}-\int_{0}^{\infty}e^{-rt}(1-\sum_{k=1}^{K}d_{\tau_{k}}^{i}\mathbbm{1}_{\left\{t\geq\tau_{k}\right\}})c^{i}dT^{i}(t)\right].\] The principal's objective is to design the promotion contest that maximizes her payoff among all implementable promotion contests. As above, this is equivalent to the maximization program: \[\Pi^{M}\coloneqq\sup_{(T,\left\{\tau_{k}\right\}_{k=1}^{K},d)\in\mathcal{P}}\mathbb{E}\left[\sum_{k=1}^{K}\left(\sum_{i=1}^{N}\int_{\tau_{k-1}}^{\tau_{k}}e^{-rt}\pi^{i}(X_{T^{i}(t)}^{i})dT^{i}(t)+e^{-r\tau_{k}}\bar{\pi}\left(X_{T(\tau_{k})},d_{\tau_{k}}\right)\right)\right],\] (Prize design) subject to the dynamic participation constraints: for all \(i\) and all possible histories \(h_{t}\), \[\mathbb{E}\left[\sum_{k=1}^{K}e^{-r(\tau_{k}-t)}gd_{\tau_{k}}^{i}\mathbbm{1}_{\left\{t\leq\tau_{k}\right\}}-\int_{t}^{\infty}e^{-r(s-t)}(1-\sum_{k=1}^{K}d_{\tau_{k}}^{i}\mathbbm{1}_{\left\{s\geq\tau_{k}\right\}})c^{i}dT^{i}(s)\mid h_{t}\right]\geq 0.\]

**Theorem 5.3**.: _Suppose that Assumption 9 holds. Then the **index contest** solves (Prize design)._

Theorem 5.3 shows that the optimal promotion contest grants the _entire prize to one worker at most_. The optimal contest is winner-take-all. This is reminiscent of the classic result in Moldovanu and Sela (2001) on the optimality of a single prize. In our dynamic setting, fully allocating the prize to only one worker is also optimal. The index contest is meritocratic: the worker who performs the best (upon getting the opportunity) is promoted. This contrasts with recent results in dynamic contest theory in which the optimal contest was shown to be more egalitarian (see Halac et al. (2017) and Ely et al. (2021), for example). In Online Appendix 3.2, I indicate how to modify the proof of Theorem 2.4 to obtain Theorem 5.3. In particular, it follows the same steps.
The only difference is in showing that one can focus on promotion contests in which the promotion times are measurable. Proposition 3.3 in Online Appendix 3.2 replaces Proposition 4.4. The rest of the proof is identical.

### Transfers

I ruled out transfers for three reasons in the main model. The first and most fundamental one was that I wanted to focus on the trade-off between the two classical promotion roles, i.e., incentives provision and sorting. The second reason for this restriction is empirical. In most organizations, compensation is promotion based.29 This is the case in public administrations, where the salary grid is fixed, for example. Finally, the analysis developed in this article becomes intractable for general wages (although the main trade-off seems to be preserved when workers are protected by limited liability). So a complete analysis of transfers is well beyond the scope of my paper. Nevertheless, in this section, I point out how my model can accommodate restricted forms of transfers. Footnote 29: Baker et al. (1988) find that "[m]ost of the average increases in an employee's compensation can be traced to promotions and not to continued service in a particular position." See also Gibbs (1995) and Bernhardt (1995). It is also consistent with the observed separation of roles: compensation and benefits managers within the human resource department have authority over the compensation structure, while the assignment of responsibilities and tasks is made within each department by managers who can closely monitor and supervise their team.

Suppose that the principal can only choose transfers that depend on the worker's current type and his effort decision (i.e., pay a flow wage \(w_{t}^{i}=w^{i}(a_{t}^{i},X_{t}^{i})\) to worker \(i\) at every instant \(t\geq 0\)) and that the workers are protected by limited liability (i.e., \(w_{t}^{i}\geq 0\)). Then the index contest is still optimal, under Assumption 4.(ii) (when the workers' types can only jump down), as long as \(\pi^{i}(\cdot)-w^{i}(1,\cdot)\) is nondecreasing. This can be seen from the proof of Theorem 2.4 directly. For example, if the wage paid to each of the workers is a constant fraction \(\beta^{i}\in[0,1]\) of the flow payoff the principal obtains (i.e., \(w_{t}^{i}=\beta^{i}\pi^{i}(X_{T^{i}(t)}^{i})dT^{i}(t)\)), then the index contest is optimal. The strategic indices are computed for the payoff process \((1-\beta^{i})\pi^{i}(X^{i}(t))\), effort costs \(c(X_{t}^{i})-\beta^{i}\pi^{i}(X^{i}(t))\), and value of promotion \(\tilde{g}^{i}\left(X_{t}^{i}\right)\coloneqq g^{i}+\beta^{i}\bar{\pi}^{i}(X_{t}^{i})\). One can then imagine that the principal engages in Nash bargaining with the workers (with threat points equal to their outside option) before the game starts to determine the \(\beta^{i}\)'s.

### Different information structures

Finally, the workers' types are assumed to be observable by all the players: by both the other workers and the principal. Interestingly, the index promotion contest remains optimal if each worker only observes his own type and the principal does not observe the evolution of the types, but the workers can credibly reveal their current type to the principal. Indeed, it is easily seen that it is weakly dominant for the workers to reveal their type to the principal when \(X_{t}^{i}=\underline{X}_{t}^{i}\) or \(X_{t}^{i}=\bar{P}^{i}\left(\underline{X}_{t}^{i}\right)\), which is the only information the principal needs to implement the index contest.
Verifiability is important here: the same result cannot be obtained with cheap talk communication only.

## 6 Conclusion

I study the design of centralized dynamic contests in a general environment. Workers are heterogeneous and strategic. They have to be incentivized to exert effort, and their types evolve (stochastically) when they work. I showed that, despite the richness of the model, the solution is simple and takes the form of an _index contest_. My analysis is limited to the specific extension of the multi-armed bandit model I consider, and I do not suggest that my findings would hold in different environments. Some of the assumptions, such as the independence of the type processes, appear crucial and very hard to relax in a significant manner (although one could consider a particular form of conditional independence for multi-parameter processes, known as condition F4; see El Karoui and Karatzas (1997) or Walsh (1981), for example). However, the intuition behind the result is valid in other environments. For example, when the information about the project's success is private and cannot be credibly communicated but the uncertainty is small, results from the multi-armed bandit literature suggest that the _index contest_ would still perform well. This can be seen directly by inspecting the principal's payoff in the index contest (\(\Pi^{M}\)). When the uncertainty is small, the lower envelopes of the index processes associated with the case in which the principal observes the workers' types directly or observes a signal are close. Still, characterizing the specific form of the optimal mechanism when the workers have private information about the outcome of the delegation process would be interesting. More generally, the idea that the endogenous allocation of opportunities or the endogenous acquisition of information affects the final decision when allocating an asset or promoting a worker is very natural and deserves more attention in future research.
2310.08106
Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models
Foundation models like CLIP allow zero-shot transfer on various tasks without additional training data. Yet, the zero-shot performance is less competitive than a fully supervised one. Thus, to enhance the performance, fine-tuning and ensembling are also commonly adopted to better fit the downstream tasks. However, we argue that such prior work has overlooked the inherent biases in foundation models. Due to the highly imbalanced Web-scale training set, these foundation models are inevitably skewed toward frequent semantics, and thus the subsequent fine-tuning or ensembling is still biased. In this study, we systematically examine the biases in foundation models and demonstrate the efficacy of our proposed Generalized Logit Adjustment (GLA) method. Note that bias estimation in foundation models is challenging, as most pre-train data cannot be explicitly accessed like in traditional long-tailed classification tasks. To this end, GLA has an optimization-based bias estimation approach for debiasing foundation models. As our work resolves a fundamental flaw in the pre-training, the proposed GLA demonstrates significant improvements across a diverse range of tasks: it achieves 1.5 pp accuracy gains on ImageNet, a large average improvement (1.4-4.6 pp) on 11 few-shot datasets, and 2.4 pp gains on long-tailed classification. Codes are in \url{https://github.com/BeierZhu/GLA}.
Beier Zhu, Kaihua Tang, Qianru Sun, Hanwang Zhang
2023-10-12T08:01:11Z
http://arxiv.org/abs/2310.08106v3
# Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models

###### Abstract

Foundation models like CLIP allow zero-shot transfer on various tasks without additional training data. Yet, the zero-shot performance is less competitive than a fully supervised one. Thus, to enhance the performance, fine-tuning and ensembling are also commonly adopted to better fit the downstream tasks. However, we argue that such prior work has overlooked the inherent biases in foundation models. Due to the highly imbalanced Web-scale training set, these foundation models are inevitably skewed toward frequent semantics, and thus the subsequent fine-tuning or ensembling is still biased. In this study, we systematically examine the biases in foundation models and demonstrate the efficacy of our proposed Generalized Logit Adjustment (GLA) method. Note that bias estimation in foundation models is challenging, as most pre-train data cannot be explicitly accessed like in traditional long-tailed classification tasks. To this end, GLA has an optimization-based bias estimation approach for debiasing foundation models. As our work resolves a fundamental flaw in the pre-training, the proposed GLA demonstrates significant improvements across a diverse range of tasks: it achieves 1.5 pp accuracy gains on ImageNet, a large average improvement (1.4-4.6 pp) on 11 few-shot datasets, and 2.4 pp gains on long-tailed classification. Codes are in [https://github.com/BeierZhu/GLA](https://github.com/BeierZhu/GLA).

## 1 Introduction

Thanks to the Web-scale data and self-supervised strategies, foundation models like CLIP [40] empower zero-shot transfer to a wide variety of domains [7; 1; 52]. However, the zero-shot performance is still weak on several domain-specific tasks such as differentiating models of cars, species of flowers, and variants of aircraft [40; 7]. Therefore, it is a common practice to improve the downstream performance via supervised fine-tuning on labeled data, _e.g._, linear probing, prompt tuning [54; 55], and end-to-end fine-tuning. However, fine-tuned models are easily biased: they are adept at exploiting spurious correlations that only hold on the downstream distribution [40; 39; 48; 55]. To improve the robustness, several studies [48; 55; 56] propose to combine fine-tuned models with zero-shot models. For example, WiSE-FT [48] ensembles the fine-tuned and zero-shot models in weight space, and ProGrad [55] uses zero-shot predictions to regularize the fine-tuning gradient. The underlying assumption is that the zero-shot models are robust to distribution shifts [40], and their predictions are complementary to those of fine-tuned models [48].

Despite these methods exhibiting performance gains on both in-distribution and out-of-distribution evaluations, they all overlook the inherent bias originating from the foundation models. Specifically, the Web-scale data for pre-training foundation models exhibit a highly skewed distribution due to Zipf's law of nature [42]. The resulting foundation models develop a biased decision boundary that leads to a poor zero-shot performance on rare classes. As evidenced in Figure 1(a) and (b), the purple line encounters a dramatic drop, and the zero-shot performance of tail classes is significantly lower than that of head classes (\(57.2\%\) _vs._ \(78.0\%\)).
Existing ensemble methods like WiSE-FT [48] overlook the label bias, resulting in an improvement in top-1 (\(+0.5\%\)) and head accuracy (\(+1.7\%\)) but a noticeable degradation on the tail performance (\(-0.9\%\)) in Figure 1(b). Further evidence is that the orange line (WiSE-FT) is below the blue line (fine-tuned models) for rare classes in Figure 1(a).

We propose Generalized Logit Adjustment (GLA), a simple post-hoc method consisting of two steps: **1**) removing the label bias of the zero-shot model via estimating the label distribution in the pre-training dataset; **2**) ensembling the fine-tuned and debiased zero-shot models. As illustrated in Figure 1(b), our GLA achieves consistent improvement across all three subgroups, particularly showing a significant gain on tail classes (\(+1.5\%\)). Despite its simplicity, our GLA has a firm statistical grounding: it is the Bayes optimal classifier given the fine-tuned and zero-shot models, thus consistent for minimizing the error on a class-balanced target distribution (Section 4.2). It is worth noting that removing the bias of foundation models is challenging since the label distribution is often inaccessible due to privacy or copyright concerns. In this work, we only use the downstream labeled data and the zero-shot model to estimate the foundation label bias. Specifically, we formulate the problem by adjusting the margin of the zero-shot models such that the lowest error is achieved on the downstream dataset. This grounding translates into strong empirical performance on real-world datasets, covering few-shot, many-shot, and long-tail learning (Section 5).

The contributions and novelties of this work are summarized as follows:

* We point out the overlooked label bias in foundation models, which originates from the skewness of the pre-training distribution and affects the performance of downstream tasks.
* We formalize the estimation of the label bias as a constrained optimization problem (Section 3.2) with theoretical justification (Section 4.3). The entire process does not require access to the pre-training dataset, making it practical for fine-tuning scenarios. We present the Generalized Logit Adjustment (GLA) method, which ensembles the debiased zero-shot and fine-tuned models, and demonstrate its superiority over conventional fine-tuning and ensembling by proving it is a Bayes optimal classifier (Section 4.2).
* We build a comprehensive benchmark for evaluation, which considers three real-world settings and three fine-tuning paradigms. The settings are: 1) many-shot learning with abundant data; 2) few-shot learning; and 3) long-tail classification, representing a more challenging scenario that combines many-shot and few-shot data (Section 5). The three fine-tuning paradigms include: 1) end-to-end fine-tuning; 2) linear probing; and 3) prompt tuning (Section 3.1).
* We demonstrate the efficacy of our proposed GLA by conducting extensive experiments across various settings and fine-tuning paradigms. We observe 1 to 1.5 pp accuracy gains on ImageNet, a large average improvement (1.4 to 4.6 pp) on 11 few-shot datasets, and 2.4 pp average accuracy gains on long-tail datasets (Section 5).

Figure 1: (a) Per-class accuracy of CLIP-ViT/B16 on ImageNet. Class indices are sorted using the estimated pre-training label prior. Curves are smoothed for better visualization. (b) Break-down performance of different models on ImageNet. We equally divide the ImageNet classes into three subgroups, according to the class index.
Existing ensemble methods like WiSE-FT [48] exhibit a clear performance loss on tail classes, while our GLA stands out for all three subgroups.

## 2 Related Work

**Image-text foundation models.** Foundation models pre-trained by contrastive objectives have set impressive milestones for image and text representation learning, with CLIP [40], ALIGN [23], CoCa [52] and Flamingo [1] being the exemplars. Such models exhibit impressive prompt-based zero-shot performance on various image recognition downstream tasks. Our method aims to reduce foundation model biases to boost performance in downstream tasks. While [2] also addresses word frequency bias, we differ in two key areas: Firstly, we debias zero-shot models using fixed prompts, whereas [2] refines the prompting process. Secondly, our GLA doesn't require access to a subset of the pre-training data.

**Ensembles.** Ensemble methods aim to boost performances by combining multiple networks, which can be implemented either by aggregating model outputs [12; 4; 28; 14; 27; 51], weight-space ensembling [48; 22], or ensemble distillation [19; 29]. For the adaptation of foundation models, several works propose to ensemble the fine-tuned and zero-shot models for better performance: Wortsman et al. [48] ensembles them in weight space; ProGrad and ProReg [55; 56] propose to fuse them via knowledge distillation. Our GLA is orthogonal to these approaches, as it concentrates on mitigating the biases in foundation models that are detrimental to ensemble models.

**Logit adjustment.** Logit adjustment [35; 45; 24; 49; 20] is a post-hoc technique to adjust the biased output of classification networks. Kang _et al._ [24] proposes an element-wise scaling adjustment for the classifier weight. Tang _et al._ [45] removes the projection of features on a global biased direction. Menon [35] derives the theoretically optimal adjustment from the training distribution. Unlike those approaches, which rely on transparent training data or class distributions, our GLA can eliminate the class bias without access to the pre-training statistics.

## 3 Methods

### Setup

**Task.** Consider a classification problem with instances \(\mathbf{x}\in\mathcal{X}\) and labels \(y\in\mathcal{Y}=[K]=\{1,...,K\}\). We have a zero-shot model \(f_{\text{zs}}\) (given below), a downstream dataset \(\mathcal{D}_{s}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N}\) drawn from a source distribution \(P_{s}\), and a fine-tuned model \(f_{\text{ft}}\) (given below) trained on the dataset \(\mathcal{D}_{s}\). Given a model \(f:\mathcal{X}\rightarrow\mathbb{R}^{K}\) that outputs prediction scores, we define the risk of \(f\) on the target distribution \(P_{t}\) as the mis-classification rate: \(\mathcal{R}_{t}(f)=\mathbb{E}_{\mathbf{x},y\sim P_{t}}[y\neq\operatorname{argmax}_{i}f(\mathbf{x})_{i}]\). Our goal is to learn a model \(f_{\text{gla}}\) that best leverages \(f_{\text{zs}}\) and \(f_{\text{ft}}\) to minimize the risk \(\mathcal{R}_{t}\).

**Zero-shot models.** We primarily explore CLIP [40] for zero-shot models. CLIP consists of a visual encoder \(\Phi_{\text{v}}(\mathbf{x})\) and a text encoder \(\Phi_{\text{t}}(\mathbf{t})\), producing \(l_{2}\)-normalized features from an image \(\mathbf{x}\) and a text \(\mathbf{t}\) respectively.
Zero-shot model \(f_{\text{zs}}\) for \(K\) classes is enabled by matching image features \(\mathbf{v}=\Phi_{\text{v}}(\mathbf{x})\) with classification weights \(\mathbf{w}_{k}=\Phi_{\text{t}}(\mathbf{t}_{k})\), where \(\mathbf{t}_{k}\) is obtained by extending the class name \(\{c_{k}\}\) to a pre-defined prompt, _e.g._, "a photo of a \(\{c_{k}\}\).". Additional details are provided in Appendix B.2. The probability of \(\mathbf{x}\) being classified as \(y\) is defined as: \[P(y|\mathbf{x})=\operatorname{softmax}(f_{\text{zs}}(\mathbf{x}))_{y}=\frac{\exp(\mathbf{v}^{T}\mathbf{w}_{y})}{\sum_{k=1}^{K}\exp(\mathbf{v}^{T}\mathbf{w}_{k})}. \tag{1}\]

**Fine-tuned models.** Standard fine-tuning initializes the model \(f_{\text{ft}}\) with the pre-trained parameters and then solves \(f_{\text{ft}}=\operatorname{argmin}_{f}\mathcal{R}_{s}(f)\) to minimize the risk on the downstream dataset. We consider three common variants of fine-tuning: (1) end-to-end, where all parameters of \(\Phi_{\text{v}}\) and \(\mathbf{w}_{k}\) are updated; (2) linear probing, where only \(\mathbf{w}_{k}\) is modified while \(\Phi_{\text{v}}\) is fixed; (3) prompt tuning, where the text input \(\mathbf{t}_{k}\) is learned, while keeping \(\Phi_{\text{v}}\) and \(\Phi_{\text{t}}\) frozen. See Appendix B.3 for details on fine-tuning methods.

**Notation.** Let \(P_{p}(y)\), \(P_{s}(y)\) and \(P_{t}(y)\) be the marginal probability of class \(y\in[K]\) for the pre-training, source (training) and target (test) distributions, respectively. Let \(\pi_{p}\) and \(\pi_{s}\) denote the log probabilities of class for the pre-training and training distributions, _i.e._, \(\pi_{p}(y)=\log P_{p}(y)\) and \(\pi_{s}(y)=\log P_{s}(y)\).

### Generalized Logit Adjustment Framework

Fine-tuned models often yield significant gains compared to zero-shot models, and ensembling them can further improve performance. This leads to a natural question: how should we best leverage the zero-shot and fine-tuned models for the prediction tasks? We attempt to answer this by proposing generalized logit adjustment in Definition 1.

**Definition 1**.: _(GLA) The Generalized Logit Adjustment (GLA) model \(f_{\mathsf{gla}}\) is defined as follows:_ \[f_{\mathsf{gla}}(\mathbf{x})=f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p}. \tag{2}\]

In Section 4.2, we prove that given the zero-shot and fine-tuned models, our GLA model is the _Bayes_ optimal classifier and no other combination of the two models can outperform it. One important question remains: how can we obtain \(\pi_{p}\) when we have no access to the pre-training statistics? We provide the estimation process of \(\pi_{p}\) in Eq. (4) and postpone the justification to Section 4.3. The entire GLA algorithm consists of two steps, which are given as follows:

**Step 1: Estimation of \(\pi_{p}\).** Let \(\mathbf{q}\) be an arbitrary _probability simplex_ over \(K\) classes. Given the validation data from \(P_{t}\) or the balanced training data, we can estimate \(\hat{\pi}_{p}=\log\mathbf{q}^{*}\) via the constrained optimization problem (proof in Section 4.3): \[\mathbf{q}^{*}=\operatorname*{argmin}_{\mathbf{q}}\mathcal{R}_{t}(f_{\mathsf{zs}}-\log\mathbf{q})\] \[\mathrm{s.t.}\ \mathbf{q}_{i}\geq 0,\mathrm{for}\ i\in[K],\] \[\sum_{i\in[K]}\mathbf{q}_{i}=1. \tag{3}\] We constrain the sum of \(\mathbf{q}\) to be 1 and ensure that each element is non-negative, guaranteeing that it forms a valid probability distribution.
We solve the following Lagrangian problem to find the optimal \(\mathbf{q}^{*}\): \[\min_{\mathbf{q}}\max_{\lambda_{i}\geq 0,v}\mathcal{R}_{t}(f_{\mathsf{zs}}-\log\mathbf{q})-\sum_{i}\lambda_{i}\mathbf{q}_{i}+v(1-\sum_{i\in[K]}\mathbf{q}_{i}) \tag{4}\]

**Step 2: GLA ensembling.** Given the estimated \(\hat{\pi}_{p}\) and the known downstream training \(\pi_{s}\), we ensemble the zero-shot model \(f_{\mathsf{zs}}\) and the fine-tuned model \(f_{\mathsf{ft}}\) to get our GLA model \(f_{\mathsf{gla}}\) via Eq. (2). We can regard \(f_{\mathsf{zs}}-\hat{\pi}_{p}\) and \(f_{\mathsf{ft}}-\pi_{s}\) as the debiased zero-shot model (Figure 2(c)) and the debiased fine-tuned model, respectively. Our GLA is actually ensembling two debiased models. Note that, different from [55; 48], we _do not_ require a hyper-parameter to adjust the contribution of the two models; the optimal solution is to combine them equally (see Section 4.2 for justification). A code sketch of both steps is given in Section 4.3.

## 4 Theoretical Analysis

In this section, we explain why our GLA model best ensembles the zero-shot and fine-tuned models (Section 4.2) and justify the estimation process of the pre-training label distribution \(\pi_{p}\) (Section 4.3). We start with some preliminaries on the _Bayes_ optimal classifier.

### Preliminaries

Suppose we have a pair \((X,Y)\sim P\) that takes values in \(\mathcal{X}\times\mathcal{Y}\), where \(Y\) is the class label of input \(X\).

**Definition 2**.: _The 0-1 error (risk) of a classifier \(\hat{y}:\mathcal{X}\rightarrow\mathcal{Y}\) on distribution \(P\) is given by:_ \[\mathcal{R}(\hat{y})=P(Y\neq\hat{y}(X)) \tag{5}\]

Figure 2: Illustration of the debiasing process on the ImageNet validation set. (a) The original distribution of zero-shot outputs; (b) the estimated pre-train distribution \(\mathbf{q}\) based on our algorithm; (c) the distribution of debiased zero-shot outputs using the estimated \(\mathbf{q}\).

However, the 0-1 error is non-smooth, so one typically minimizes a surrogate loss \(\ell\), _e.g._, cross-entropy: \(\ell(f(\mathbf{x}),y)=\log[\sum_{i\in[K]}\exp(f(\mathbf{x})_{i}-f(\mathbf{x})_{y})]\), where \(\hat{y}(\mathbf{x})=\operatorname*{argmax}_{i}f(\mathbf{x})_{i}\). It is known that the cross-entropy loss is _Bayes consistent_ [53], _i.e._, a nearly optimal minimizer of the cross-entropy loss \(\mathbb{E}_{\mathbf{x},y\sim P}[\ell(f(\mathbf{x}),y)]\) is also a nearly optimal minimizer of the mis-classification error \(\mathbb{E}_{\mathbf{x},y\sim P}[y\neq\hat{y}(\mathbf{x})]\).

**Definition 3**.: _The Bayes optimal classifier \(y^{*}\) for \(P\) given input \(\mathbf{x}\) is defined as:_ \[y^{*}(\mathbf{x})=\operatorname*{argmax}_{y\in\mathcal{Y}}P(y|\mathbf{x}) \tag{6}\]

It is called the _Bayes_ optimal classifier because, on average, _no_ other classifier using the same hypothesis and prior knowledge can outperform it.

**Lemma 1**.: _The Bayes optimal classifier \(y^{*}\) for \(P\) has lower risk than all classifiers \(\hat{y}:\mathcal{X}\rightarrow\mathcal{Y}\):_ \[\mathcal{R}(y^{*})\leq\mathcal{R}(\hat{y}) \tag{7}\]

### Generalized Logit Adjustment Leads to Better Ensembling

**Zero-shot and fine-tuned models are complementary.** We revisit an empirical phenomenon observed in Section 5.1 of [48]: after exploring a series of measures of diversity, covering predictions and features, they find that zero-shot and fine-tuned models have diverse predictions, despite sharing the same backbone.
As the two data distributions \(P_{s}\) and \(P_{p}\) are known to be different, the resulting models leverage different cues to predict: fine-tuned models risk exploiting _spurious correlations and in-domain patterns_ which only hold for the downstream dataset [46, 3]; on the other hand, zero-shot CLIP models capture _stable correlations_ across diverse domains and exhibit much higher robustness [40]. For instance, zero-shot models rely on robust features for their decisions and can achieve high performance on sketch and adversarial samples, while fine-tuned models trained on real images typically fail on these samples, as they rely on spurious correlations that only hold on real images. We formulate this phenomenon in the following assumption.

**Assumption 1**.: _Zero-shot and fine-tuned models have diverse predictions:_ \[(f_{\mathsf{ft}}(\mathbf{x})\perp f_{\mathsf{zs}}(\mathbf{x}))|y. \tag{8}\]

We derive the conditional probability \(P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))\) w.r.t. the outputs of \(f_{\mathsf{zs}}(\mathbf{x})\) and \(f_{\mathsf{ft}}(\mathbf{x})\):

**Lemma 2**.: _For a balanced target distribution1, where \(P_{t}(y)=1/K\) for all \(y\in[K]\), we have:_ Footnote 1: This lemma can be easily extended to imbalanced target distributions (proof in Appendix A). Yet, as most test sets are class-balanced, we focus on the balanced case for brevity. \[P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))=\operatorname{softmax}(\underbrace{f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p}}_{f_{\mathsf{gla}}(\mathbf{x})})_{y} \tag{9}\]

Intuitively, since the zero-shot and fine-tuned models provide diverse predictions, conditioning on the two predictions is equivalent to adding the logits in log space. Additionally, as the target distribution is class-balanced, we need to remove the class bias of the two models by subtracting \(\pi_{s}\) and \(\pi_{p}\). The formal proof is given in Appendix A.1. Note that the RHS of Eq. (9) is exactly the softmax output of our GLA model by Definition 1, which exhibits the following property:

**Proposition 1**.: _Let \(g:\mathbb{R}^{K}\times\mathbb{R}^{K}\rightarrow\mathbb{R}^{K}\) be an arbitrary function that ensembles the outputs of \(f_{\mathsf{zs}}\) and \(f_{\mathsf{ft}}\). Our GLA classifier \(f_{\mathsf{gla}}\) has lower risk than any function \(f_{g}(\mathbf{x})=g(f_{\mathsf{zs}}(\mathbf{x}),f_{\mathsf{ft}}(\mathbf{x}))\), i.e._ \[\mathcal{R}_{t}(f_{\mathsf{gla}})\leq\mathcal{R}_{t}(f_{g}). \tag{10}\]

Proof.: From Lemma 2 and Definition 1, we have: \[\operatorname*{argmax}_{y\in\mathcal{Y}}f_{\mathsf{gla}}(\mathbf{x})_{y}=\operatorname*{argmax}_{y\in\mathcal{Y}}\operatorname*{softmax}(f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p})_{y}=\operatorname*{argmax}_{y\in\mathcal{Y}}P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x})), \tag{11}\] which means \(f_{\mathsf{gla}}\) is the Bayes optimal classifier (see Definition 3) given \(f_{\mathsf{ft}}(\mathbf{x})\) and \(f_{\mathsf{zs}}(\mathbf{x})\). According to Lemma 1, any other classifier \(g(f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))\) must have higher risk, _i.e._, \(\mathcal{R}_{t}(f_{\mathsf{gla}})\leq\mathcal{R}_{t}(f_{g})\).

Proposition 1 demonstrates that our \(f_{\mathsf{gla}}\) model is the _best_ model, as it has the lowest risk on the target distribution.
Proposition 1 further explains the superiority of \(f_{\mathsf{gla}}\) over the fine-tuned model \(f_{\mathsf{ft}}\) and the naive ensemble \(f_{\mathsf{ens}}(\mathbf{x})=f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})\): **Corollary 1**.: \(f_{\mathsf{gla}}\) _performs better than the fine-tuned model \(f_{\mathsf{ft}}\) and naive ensembling \(f_{\mathsf{ens}}\):_ \[\mathcal{R}_{t}(f_{\mathsf{gla}})\leq\mathcal{R}_{t}(f_{\mathsf{ft}}),\;\mathcal{R}_{t}(f_{\mathsf{gla}})\leq\mathcal{R}_{t}(f_{\mathsf{ens}}) \tag{12}\] **Discussion: when do the GLA models degenerate?** Note that there are two equality signs in Eq. (12), indicating that the performance of the GLA model can degenerate to be equivalent to that of the fine-tuned model and naive ensembling in the following two cases. **Case 1**: For the first equality, if the zero-shot model \(f_{\mathsf{zs}}(\mathbf{x})\) provides no further information about \(y\) given \(f_{\mathsf{ft}}(\mathbf{x})\), _i.e._, \((y\perp f_{\mathsf{zs}}(\mathbf{x}))|f_{\mathsf{ft}}(\mathbf{x})\), then \(P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))\) degenerates to \(P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}))\) and the first equality applies. However, in practice, as the fine-tuned and zero-shot models provide diverse predictions, we usually encounter strict inequality, _i.e._, \(\mathcal{R}_{t}(f_{\mathsf{gla}})<\mathcal{R}_{t}(f_{\mathsf{ft}})\). **Case 2**: The second equality applies when the pre-training and downstream training distributions are both class-balanced. In fact, the pre-training datasets for foundation models are known to be highly skewed. Therefore, in most cases, we have \(\mathcal{R}_{t}(f_{\mathsf{gla}})<\mathcal{R}_{t}(f_{\mathsf{ens}})\). In summary, the above two equalities are usually unattainable, which means that theoretically, our GLA model performs better than both the fine-tuned and the naive ensemble models. ### Estimate the label bias of the pre-training dataset However, \(\pi_{p}\) is usually unknown as we have no access to the pre-training dataset. In this work, we seek to estimate \(\pi_{p}\) using the zero-shot models and the downstream data. Similar to Proposition 1, the following proposition says that \(f_{\mathsf{zs}}-\pi_{p}\) has lower error on the target distribution than any other classifier that uses \(f_{\mathsf{zs}}\); see Appendix A.2 for the full proof. **Proposition 2**.: _Let \(h:\mathbb{R}^{K}\rightarrow\mathbb{R}^{K}\) be an arbitrary function that predicts labels using the outputs of the zero-shot model \(f_{\mathsf{zs}}(\mathbf{x})\). Let the derived classifier be denoted as \(f_{h}(\mathbf{x})=h(f_{\mathsf{zs}}(\mathbf{x}))\). The classifier \(f_{\mathsf{zs}}-\pi_{p}\) is better than any \(f_{h}(\mathbf{x})\): \(\mathcal{R}_{t}(f_{\mathsf{zs}}-\pi_{p})\leq\mathcal{R}_{t}(f_{h}(\mathbf{x}))\)._ Let \(\mathbf{q}\) be an arbitrary point on the probability simplex over \(K\) classes; then we have \(\mathcal{R}_{t}(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p})\leq\mathcal{R}_{t}(f_{\mathsf{zs}}(\mathbf{x})-\log\mathbf{q})\). Therefore, we choose to optimize a point \(\mathbf{q}\) on the _probability simplex_ over \(K\) classes such that the model \(f_{\mathsf{zs}}-\log\mathbf{q}\) achieves the minimal empirical risk, as formulated in Eq. (3) (Step 1 of the GLA algorithm). Once we obtain the estimated class prior \(\hat{\pi}_{p}=\log\mathbf{q}\), we can easily implement the GLA model by ensembling \(f_{\mathsf{gla}}(\mathbf{x})=f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\hat{\pi}_{p}\) (Step 2 of the GLA algorithm). 
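A minimal PyTorch sketch of the two steps, assuming matrices of pre-computed logits: Step 1 handles the simplex constraint by a softmax reparameterization rather than the explicit multipliers \(\lambda_{i},v\) of Eq. (4) (both target the same optimum), and uses cross-entropy as the Bayes-consistent surrogate for the 0-1 risk; the step count and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

def estimate_pretrain_bias(zs_logits, labels, steps=2000, lr=0.1):
    """Step 1: search the probability simplex for the q that minimizes the
    empirical risk of the debiased zero-shot model f_zs - log q."""
    theta = torch.zeros(zs_logits.shape[1], requires_grad=True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(steps):
        log_q = F.log_softmax(theta, dim=0)        # log of a simplex point
        loss = F.cross_entropy(zs_logits - log_q, labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return F.log_softmax(theta, dim=0).detach()    # estimated hat{pi}_p

def gla_logits(ft_logits, zs_logits, pi_s, pi_p_hat):
    """Step 2: equal-weight ensemble of the two debiased models (Eq. 2)."""
    return (ft_logits - pi_s) + (zs_logits - pi_p_hat)
```

Note that no mixing hyper-parameter appears in `gla_logits`; per Section 4.2, equal weighting of the two debiased models is already optimal.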
**Toy experiment.** We conducted an experiment to show that the estimated label distribution closely approximates the true one. Specifically, we trained a model with a ResNet32 backbone on the imbalanced CIFAR-10-LT [10] dataset with an imbalanced ratio of 10. Subsequently, we used only the test set combined with our proposed method to estimate the label distribution. This procedure simulates scenarios where only downstream data is available and the pre-training data is inaccessible. Figure 3 reveals a strong alignment between the estimated (orange line) and the actual distributions (blue line), which is further emphasized by a small KL-divergence value of 0.00062. The toy experiment validates the effectiveness of our debiasing method. Figure 3: Estimating label bias of CIFAR-10-LT-IB-10. **Discussion.** The \(\log\mathbf{q}\) we estimated is not the marginal log-probability of the entire pre-training distribution but the label bias that matches the downstream distribution. In the above toy experiment, although training and test sets show different label distributions, their conditional distribution \(P(\mathbf{x}|y)\) remains invariant. In this case, our estimate will converge to the actual training label bias. For CLIP models, with diverse pre-training data, some of it might not align with the downstream domain, potentially compromising the accuracy of the estimation of the entire pre-training distribution. However, we'd like to point out that removing the label bias of the entire pre-training distribution may not be optimal for downstream tasks. As a thought experiment, consider a pre-training dataset with "sketch" and "photo" styles for "dog" and "cat" samples. Suppose the sample size of "dog" and "cat" is equal but there are more "sketch dogs" than "sketch cats". This means that even if the overall distribution is balanced, each style isn't, resulting in biased zero-shot predictions. If we aim to deploy models for the "sketch dogs and cats" domain, adjusting the overall label bias is insufficient. Instead, the optimal label bias should be estimated on the "sketch" distribution. We also provide experiments using the LAION-400M dataset in Appendix C.2, illustrating the situation when the downstream data diverges from the pre-training set. ## 5 Experiments We evaluate our GLA on three real-world scenarios: many-shot (Section 5.1), few-shot (Section 5.2) and long-tail learning (Section 5.3). We show that our GLA boosts performance in all three settings. ### Many-shot learning **Datasets.** We use ImageNet [11] and CIFAR100 [26] for generic object classification, Stanford-Cars [25] for fine-grained classification, and SUN397 [50] for scene recognition. See Appendix B.1 for details. **Baselines.** We compare GLA against four methods: (1) Zero-shot model, (2) Linear Probing (LP), (3) End-to-End fine-tuning (E2E), and (4) the weight ensembling method WiSE-FT [48]. **Implementation details.** We consider two models: CLIP ViT-B/32 and ViT-B/16. For learning-based models, we fine-tune with AdamW using a cosine annealing learning rate scheduler. We fine-tune for 10 epochs on ImageNet and 20 epochs on other datasets. See Appendix B.3 for further details. **Main results.** Table 1 compares our GLA with various baselines. We observe that our GLA can increase the performance of end-to-end fine-tuned models: it achieves \(1.5\%\) gains on ImageNet. Compared to WiSE-FT, GLA gains a \(1.1\%\) top-1 accuracy boost on ImageNet. 
Beyond generic object recognition, our method also improves accuracy on the fine-grained dataset (Stanford Cars) and the scene recognition dataset (SUN397), by \(0.4\%\) and \(0.5\%\), respectively. **Breakdown performance analysis.** To analyze the impact of pre-training label bias on fine-tuned and ensemble models, we present the breakdown results on ImageNet using CLIP ViT-B/16, as shown in Table 2. Specifically, we sort the class index using the estimated \(\pi_{p}\), and assign the top third of the classes as the head classes, the last third as the tail classes, and the remaining classes as the medium classes. Due to the label bias, the zero-shot tail performance is significantly lower than the head one (\(57.2\%\)_vs_. \(78.0\%\)). The resulting E2E models are also affected by the bias, with the tail performance being \(6.3\%\) lower than the head. The existing ensemble method WiSE-FT overlooks the bias, exhibiting noticeable degradation on the tail performance (\(-0.9\%\)) compared to the E2E model, while our GLA stands out for all three subgroups. **Estimated \(\pi_{p}\) is transferable across different zero-shot models.** The estimated \(\pi_{p}\) should be transferable across different zero-shot models if they are trained on the same pre-training dataset. To verify this, we employed a CLIP ViT-B/32 based zero-shot model to estimate \(\pi_{p}\), which is subsequently used to debias zero-shot models based on CLIP ViT-B/16 and ViT-L/14. As shown in Table 3, our debiased models outperform the original zero-shot versions by a clear margin. \begin{table} \end{table} Table 1: Accuracy of various methods using CLIP ViT-B/32 and ViT-B/16. LP: linear probe; E2E: end-to-end fine-tuning. Results were obtained using the official implementation from WiSE-FT [48]. **Ensembling with mixing coefficient.** In Section 4.2, we prove that the optimal solution is to combine the debiased zero-shot and fine-tuned models equally. We now examine the claim by introducing a mixture coefficient \(\alpha\in[0,1]\). The ensemble predictions are given by: \(f_{\text{gla}}(\mathbf{x},\alpha)=(1-\alpha)\cdot(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p})+\alpha\cdot(f_{\mathsf{ft}}(\mathbf{x})-\pi_{s})\). We compare the GLA and the naive ensembling with mixture \(\alpha\) in Figure 4, where GLA attains its optimal performance at \(\alpha=0.5\), which is in line with our theoretical analysis. We also observe that the debiased zero-shot model increases accuracy by \(2.3\%\) and our GLA consistently outperforms naive ensembling for various \(\alpha\). ### Few-shot learning For few-shot scenarios, we primarily choose prompt tuning for fine-tuning, since it is empirically more effective than end-to-end fine-tuning and linear probing [54; 55; 8]. **Datasets.** We follow CoOp [54] to use 15 datasets: ImageNet [11], Caltech101 [13], OxfordPets [37], StanfordCars [25], Flowers102 [36], Food101 [6], FGVCAircraft [34], EuroSAT [16], UCF101 [44], DTD [9], SUN397 [50], and ImageNet-{V2 [41], Sketch [47], A [18], R [17]}. We randomly select {1, 2, 4, 8, 16} shots for training and use the original test set for evaluation. See Appendix B.1 for details. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Head & Med. & Tail & All \\ \hline Zero-shot & 78.0 & 69.8 & 57.2 & 68.3 \\ E2E & 83.6 & 83.0 & 77.3 & 81.3 \\ WiSE-FT & **85.3** & 83.7 & 76.4 & 81.7 \\ GLA & **85.2** & **84.3** & **78.8** & **82.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Breakdown results on ImageNet. Figure 5: Accuracy (%) of few-shot learning on 11 datasets. 
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & Source & \multicolumn{2}{c}{Target} \\ \cline{2-4} & ViT-B/32 & ViT-B/16 & ViT-L/14 \\ \hline \(f_{\mathsf{zs}}(\mathbf{x})\) & 63.4 & 68.8 & 75.6 \\ \(f_{\mathsf{zs}}(\mathbf{x})-\hat{\pi}_{p}\) & **65.4** & **69.3** & **76.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Estimated \(\hat{\pi}_{p}\) is transferable across different backbones. \(\hat{\pi}_{p}\) is estimated using CLIP ViT-B/32. **Baselines.** We compare with three prompt tuning methods: (1) CoOp [54] optimizes prompts via empirical risk minimization; (2) ProGrad [55] prevents forgetting the general knowledge using zero-shot predictions; (3) PLOT [8] applies optimal transport to match the vision and text modalities. **Implementation details.** We implement our proposed GLA method using CLIP-ResNet-50 as the foundation model and adopt class-specific prompt tuning from CoOp. Results are averaged over three seeds, with training configurations aligned with CoOp. See Appendix B.4 for further information. **Main results.** Figure 5 summarizes the few-shot results on 11 datasets. The detailed accuracy and standard deviation are in Appendix C.1. Overall, our GLA clearly outperforms the baselines on average performance by a large margin, _e.g._, our GLA gains \(4.6\%\), \(4.8\%\), \(3.1\%\), \(2.6\%\) and \(1.6\%\) performance boosts over CoOp at \(1,2,4,8,16\) shots. In particular, on the ImageNet dataset, we observe a large improvement, _e.g._, \(3.9\%\), \(2.9\%\), \(1.9\%\), \(2.0\%\) and \(2.2\%\) over ProGrad at \(1,2,4,8,16\) shots. Furthermore, on the OxfordPets, Food101 and SUN397 datasets, our method's performance remains stable and consistently improves with increasing sample sizes, while those of the baseline methods fluctuate significantly. In particular, on Food101, baseline models even underperform zero-shot models by a large margin, while our GLA shows clearly better performance than zero-shot models. **Robustness to distribution shifts.** Following CoOp, we use ImageNet at 16 training shots as the source domain and assess robustness on the ImageNet-{V2, Sketch, A, R} datasets. Table 4 summarizes the results, where the prompt tuning baselines perform worse on distribution-shifted datasets compared to the zero-shot model, as fine-tuning on limited data misleads the model to learn in-distribution correlations. In comparison, our GLA approach makes the best of both the fine-tuned and zero-shot models, and thus consistently outperforms other methods on both source and target domains. **GLA improves accuracy over naive ensemble.** Figure 6 compares the results between our GLA and naive ensembling. We rank the absolute improvements over the fine-tuning baseline at 16 training shots. In summary, our GLA demonstrates superior accuracy gains. It is worth noting that naive ensembling does not always lead to improvements, _e.g._, on EuroSAT, SUN, Caltech, UCF and Aircraft, naive ensembling even underperforms the fine-tuning baseline. **Debiased zero-shot models perform better.** We estimate \(\pi_{p}\) at 16 shots and compare the original zero-shot models with the debiased zero-shot models in Table 5. It is clear that the debiasing leads to improvement on all 11 datasets, showcasing an average accuracy gain of \(1.6\%\). 
### Long-tail learning **Datasets and metrics.** We evaluate our method on two standard benchmarks: Places365-LT and ImageNet-LT [31]. In addition to top-1 accuracy, we report the accuracy on three test subsets according to the number of samples per class: many-shot (\(>100\) samples), medium-shot (\(20\sim 100\) samples), and few-shot (\(<20\) samples). Detailed information is provided in Appendix B.1. **Fine-tuning and long-tail learning baselines.** We compare our GLA with combinations of fine-tuning protocols and long-tailed recognition methods. We consider three fine-tuning protocols: 1) Linear Probe (LP); 2) End-to-End (E2E) fine-tuning; 3) Prompt Tuning (PT). The three fine-tuning paradigms are introduced in Section 3.1 with details in Appendix B.3. We compare with 5 long-tail learning methods: 1) standard Empirical Risk Minimization (ERM); 2) Learnable Weight Scaling (LWS) [24]; 3) Logit Adjustment (LA) [35]; 4) Balanced Softmax (BS) [43]; and 5) BALLAD [33], which is designed for VLMs. See Appendix B.6 for more details on long-tail baselines. **Implementation Details.** For all combinations of the fine-tuning and long-tail learning baselines, visual backbones are initialized from CLIP-ResNet-50 and classifiers are initialized via zero-shot prompting. We use SGD for 50 epochs with a batch size of 512. See Appendix B.6 for further details. **Results.** Table 6 shows that our GLA method consistently surpasses baselines across all long-tailed datasets. Our approach outperforms PT-based models by \(3.5\%\) and \(4.4\%\) on ImageNet-LT and Places365-LT, respectively. Against E2E approaches, GLA exceeds not just WiSE-FT but also the current SOTA method BALLAD by a clear margin, _e.g._, a 1 percentage point gain on ImageNet-LT. ## 6 Conclusion and Limitation In this paper, we identify the label bias in foundation models and underscore its adverse effects on downstream task performance. We propose the Generalized Logit Adjustment (GLA) framework for fine-tuning foundation models, which boosts performance by effectively eliminating label bias and combining diverse predictions from zero-shot and fine-tuned models. We prove that when presented with zero-shot and fine-tuned models, our GLA is the Bayes optimal classifier for the downstream task. Extensive experiments across a diverse range of tasks and fine-tuning frameworks demonstrate the effectiveness of our approach. We believe that the proposed GLA may partially improve the fairness and credibility of foundation models. The first limitation is that we only focus on the label bias, while other forms of model biases, _e.g._, representation bias [5], cannot be addressed by our algorithm yet. The second limitation is that we primarily focus on enhancing the fine-tuning performance for discriminative models. Applying our GLA framework to generative models presents challenges. For instance, language generation operates as a Markov process, meaning each output depends on previous ones. This implies it is not straightforward to estimate the biasedness of a sequence with our GLA, as we only compute the bias in a pre-defined and independent label space. \begin{table} \end{table} Table 6: Performance on ImageNet-LT and Places365-LT. ## Acknowledgments This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-01-002 and AI Singapore AISG2-RP-2021-022). 
## Appendix A Proofs ### Proof of Lemma 2 **Restated Lemma** (Lemma 2).: _Let \(\pi_{t}\) denote the log probability of the target label distribution, \(\pi_{t}(y)=\log P_{t}(y)\); we have:_ \[P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))=\mathrm{softmax}(f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p}+\pi_{t})(y). \tag{13}\] _For a class-balanced target distribution, Eq. (13) simplifies to:_ \[P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))=\mathrm{softmax}(f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p})(y). \tag{14}\] Proof.: Denote the outputs \(\mathbf{e}=f_{\mathsf{ft}}(\mathbf{x})\) and \(\mathbf{z}=f_{\mathsf{zs}}(\mathbf{x})\). We first use Bayes' rule to decompose \(P_{t}(y|\mathbf{e},\mathbf{z})\) into \(P_{t}(\mathbf{e},\mathbf{z}|y)\), \(P_{t}(y)\) and \(P_{t}(\mathbf{e},\mathbf{z})\) in Eq. (15), then rewrite \(P_{t}(\mathbf{e},\mathbf{z}|y)\) in Eq. (16) according to Assumption 1. Focusing on the label shift problem [35; 20; 30], where \(P(\mathbf{x}|y)\) does not change, we derive Eq. (17) \[P_{t}(y|\mathbf{e},\mathbf{z}) =\frac{P_{t}(\mathbf{e},\mathbf{z}|y)P_{t}(y)}{P_{t}(\mathbf{e},\mathbf{z})} \tag{15}\] \[=P_{t}(\mathbf{e}|y)P_{t}(\mathbf{z}|y)\frac{P_{t}(y)}{P_{t}(\mathbf{e},\mathbf{z})}\] (16) \[=P_{s}(\mathbf{e}|y)P_{p}(\mathbf{z}|y)\frac{P_{t}(y)}{P_{t}(\mathbf{e},\mathbf{z})}\] (17) \[=\frac{P_{s}(y|\mathbf{e})P_{s}(\mathbf{e})}{P_{s}(y)}\frac{P_{p}(y|\mathbf{z})P_{p}(\mathbf{z})}{P_{p}(y)}\frac{P_{t}(y)}{P_{t}(\mathbf{e},\mathbf{z})}\] (18) \[=\frac{P_{s}(y|\mathbf{e})}{P_{s}(y)}\frac{P_{p}(y|\mathbf{z})}{P_{p}(y)}\frac{P_{s}(\mathbf{e})P_{p}(\mathbf{z})P_{t}(y)}{P_{t}(\mathbf{e},\mathbf{z})} \tag{19}\] Since \(\mathbf{e},\mathbf{z}\) are fixed, we can replace the terms that do not rely on \(y\) with a constant \(C_{1}\) in Eq. (21). We replace \(P_{s}(y)=\exp(\log P_{s}(y))=\exp(\pi_{s}(y))\), \(P_{p}(y)=\exp(\log P_{p}(y))=\exp(\pi_{p}(y))\) and \(P_{t}(y)=\exp(\log P_{t}(y))=\exp(\pi_{t}(y))\). Suppose the underlying class-probabilities \(P_{s}(y|\mathbf{e})\propto\exp(\mathbf{e}_{y})\) and \(P_{p}(y|\mathbf{z})\propto\exp(\mathbf{z}_{y})\) for \(y\in[K]\). Denote the constants \(C_{s}\) and \(C_{p}\) for normalizing \(\exp(\mathbf{e}_{y})\) and \(\exp(\mathbf{z}_{y})\) into probabilities, and merge all constants into \(C=\frac{C_{1}}{C_{s}C_{p}}\) to get Eq. (23) \[P_{t}(y|\mathbf{e},\mathbf{z}) =\frac{P_{s}(y|\mathbf{e})}{P_{s}(y)}\frac{P_{p}(y|\mathbf{z})}{P_{p}(y)}P_{t}(y)C_{1} \tag{21}\] \[=\exp(\mathbf{e}+\mathbf{z}-\pi_{s}-\pi_{p}+\pi_{t})(y)\frac{C_{1}}{C_{s}C_{p}}\] (22) \[=C\cdot\exp(\mathbf{e}+\mathbf{z}-\pi_{s}-\pi_{p}+\pi_{t})(y) \tag{23}\] Because the summation of \(P_{t}(y|\mathbf{e},\mathbf{z})\) is 1, \(C=1/\sum_{i\in[K]}\exp(\mathbf{e}+\mathbf{z}-\pi_{s}-\pi_{p}+\pi_{t})(i)\). Therefore, we have: \[P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x})) =P_{t}(y|\mathbf{e},\mathbf{z}) \tag{24}\] \[=\frac{\exp(\mathbf{e}+\mathbf{z}-\pi_{s}-\pi_{p}+\pi_{t})_{y}}{\sum_{i\in[K]}\exp(\mathbf{e}+\mathbf{z}-\pi_{s}-\pi_{p}+\pi_{t})_{i}}\] (25) \[=\mathrm{softmax}(f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p}+\pi_{t})_{y} \tag{26}\] In the class-balanced target distribution case, \(\pi_{t}=\log\frac{1}{K}\) is constant. Since the softmax function is invariant to constant offsets, Eq. 
(26) simplifies to: \[P_{t}(y|f_{\mathsf{ft}}(\mathbf{x}),f_{\mathsf{zs}}(\mathbf{x}))=\mathrm{softmax}(f_{\mathsf{ft}}(\mathbf{x})+f_{\mathsf{zs}}(\mathbf{x})-\pi_{s}-\pi_{p})_{y} \tag{27}\] ### Proof of Proposition 2 **Restated Proposition (Proposition 2)**.: _Suppose that the target distribution \(P_{t}\) is class-balanced. Let \(h:\mathbb{R}^{K}\rightarrow\mathbb{R}^{K}\) be an arbitrary function that predicts labels using the outputs of the zero-shot model \(f_{\mathsf{zs}}(\mathbf{x})\). Let the derived classifier be denoted as \(f_{h}(\mathbf{x})=h(f_{\mathsf{zs}}(\mathbf{x}))\). The classifier \(f_{\mathsf{zs}}-\pi_{p}\) is better than any \(f_{h}(\mathbf{x})\): \(\mathcal{R}_{t}(f_{\mathsf{zs}}-\pi_{p})\leq\mathcal{R}_{t}(f_{h}(\mathbf{x}))\)._ Proof.: Denote the output \(\mathbf{z}=f_{\mathsf{zs}}(\mathbf{x})\). Similar to Eq. (15)-Eq. (26), we have \[P_{t}(y|\mathbf{z}) =\frac{P_{t}(\mathbf{z}|y)P_{t}(y)}{P_{t}(\mathbf{z})} \tag{28}\] \[=\frac{P_{p}(\mathbf{z}|y)P_{t}(y)}{P_{t}(\mathbf{z})}\] (29) \[=\frac{P_{p}(y|\mathbf{z})}{P_{p}(y)}\frac{P_{t}(y)}{P_{t}(\mathbf{z})}\] (30) \[=\exp(\mathbf{z}-\pi_{p})(y)/\sum_{i\in[K]}\exp((\mathbf{z}-\pi_{p})(i))\] (31) \[=\operatorname{softmax}(\mathbf{z}-\pi_{p})=\operatorname{softmax}(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p}) \tag{32}\] Therefore, we have: \[\operatorname*{argmax}_{y\in\mathcal{Y}}(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p})_{y}=\operatorname*{argmax}_{y\in\mathcal{Y}}\operatorname{softmax}(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p})_{y}=\operatorname*{argmax}_{y\in\mathcal{Y}}P_{t}(y|f_{\mathsf{zs}}(\mathbf{x})) \tag{33}\] Again, using Lemma 1, any other classifier \(f_{h}(\mathbf{x})\) has higher risk than \(f_{\mathsf{zs}}(\mathbf{x})-\pi_{p}\), _i.e._, \(\mathcal{R}_{t}(f_{\mathsf{zs}}-\pi_{p})\leq\mathcal{R}_{t}(f_{h}(\mathbf{x}))\). ## Appendix B Experimental Details ### Dataset details **Many-shot and few-shot datasets.** For many-shot learning, we use the ImageNet, CIFAR100, StanfordCars and SUN397 datasets. For few-shot learning, we evaluate models on 15 datasets. The details of each dataset are presented in Table 7. **Long-tail datasets.** We use two standard long-tail benchmarks: Places365-LT and ImageNet-LT [31]. The skewness of a long-tailed training set is typically represented by the imbalanced ratio, which is defined as \(N_{\max}/N_{\min}\). \(N_{\max}\) (\(N_{\min}\)) denotes the largest (smallest) number of instances per class. 
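As a quick illustration of these statistics, the following helper computes the imbalanced ratio together with the many/medium/few class split used by these benchmarks (the thresholds are the ones listed in the next paragraph); the function name is ours, not the paper's.

```python
from collections import Counter

def longtail_stats(train_labels):
    """Imbalanced ratio N_max / N_min and the many/medium/few class split
    (>100, 20-100, <20 training images per class)."""
    counts = Counter(train_labels)
    ratio = max(counts.values()) / min(counts.values())
    many = {c for c, n in counts.items() if n > 100}
    medium = {c for c, n in counts.items() if 20 <= n <= 100}
    few = {c for c, n in counts.items() if n < 20}
    return ratio, many, medium, few
```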
\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & Classes & Train size & Test size & Task \\ \hline ImageNet & 1,000 & 1.28M & 50,000 & Object-level \\ CIFAR100 & 100 & 50,000 & 10,000 & Object-level \\ Caltech101 & 100 & 4,128 & 2,465 & Object-level \\ DTD & 47 & 2,820 & 1,692 & Textures \\ EuroSAT & 10 & 13,500 & 8,100 & Satellite images \\ FGVCAircraft & 100 & 3,334 & 3,333 & Fine-grained aircraft \\ Flowers102 & 102 & 4,093 & 2,463 & Fine-grained flowers \\ Food101 & 101 & 50,500 & 30,300 & Fine-grained food \\ OxfordPets & 37 & 2,944 & 3,669 & Fine-grained pets \\ StanfordCars & 196 & 6,509 & 8,041 & Fine-grained car \\ SUN397 & 397 & 15,880 & 19,850 & Scene-level \\ UCF101 & 101 & 7,639 & 3,783 & Action \\ \hline ImageNetV2 & 1,000 & - & 10,000 & Robustness to collection \\ ImageNet-Sketch & 1000 & - & 50,889 & Robustness to sketch domain \\ ImageNet-A & 200 & - & 7,500 & Robustness to adversarial attack \\ ImageNet-R & 200 & - & 30,000 & Robustness to multi-domains \\ \hline \hline \end{tabular} \end{table} Table 7: The detailed statistics of datasets for many-shot and few-shot learning. A larger imbalanced ratio means a more imbalanced training set. The test sets are divided into three splits: the many-shot subset contains classes with \(>100\) images, the medium-shot subset includes classes with \(\geq 20\) and \(\leq 100\) images, and the few-shot subset covers classes with \(<20\) images. Details are listed in Table 8. ### CLIP zero-shot We use prompt ensembling of 80 prompts provided by CLIP [48] for ImageNet, CIFAR100, and Caltech101 to improve performance, _i.e._, averaging the text embedding of many captions, _e.g._, "a photo of a \(\{c_{k}\}\)." and "an image of a \(\{c_{k}\}\).". For OxfordPets, StanfordCars, Flowers102, Food101, FGVCAircraft, EuroSAT, UCF101, DTD and SUN397, we use the pre-defined prompt from CoOp [54]. ### Fine-tuned models **End-to-end and linear probe fine-tuning.** We follow WiSE-FT [48] to implement fine-tuning. We initialize the classifier with the zero-shot classifier, and the output of the image encoder \(\Phi_{\mathbf{v}}\) is normalized during fine-tuning. We fine-tune for a total of 10 epochs using the AdamW [32] optimizer with default hyper-parameters \(\beta_{1}=0.9,\beta_{2}=0.999,\epsilon=10^{-8}\) and weight decay \(0.1\). We choose a batch size of 512. We use the same data augmentation and cosine-annealing learning rate schedule as [48]. ### Prompt tuning. Prompt tuning like CoOp [54] automates prompt engineering by learning the prompt given a few samples from downstream tasks. CoOp provides two options of prompt design: a unified prompt that is shared among all classes and a class-specific prompt that differs for each class. In this paper, we adopt the class-specific prompt design as the fine-tuned model to implement GLA. Specifically, given the word embedding \(\mathbf{t}_{k}^{0}\) initialized by zero-shot prompts, we aim to learn a collection of class-specific word embeddings \(\mathbf{R}=\{\mathbf{r}_{k}\}_{k=1}^{K}\), such that the text input \(\mathbf{t}_{k}=\mathbf{t}_{k}^{0}+\mathbf{r}_{k}\) minimizes the empirical risk: \(\mathbf{R}^{*}=\operatorname*{argmin}_{\mathbf{R}}\mathbb{E}_{\mathbf{x},y}[y\neq\operatorname*{argmax}_{i}f(\mathbf{x};\mathbf{R})_{i}]\). We follow CoOp and use CLIP ResNet-50 as the image encoder for few-shot classification. The word embedding \(\mathbf{R}\) is initialized from zeros. 
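A minimal sketch of this class-specific prompt tuning, assuming precomputed prompt embeddings and a CLIP wrapper: `encode_text_embeddings` is a hypothetical helper (real CLIP code requires injecting embeddings past the token-embedding layer), and the shapes are illustrative.

```python
import torch

class ClassSpecificPrompt(torch.nn.Module):
    """Learn residuals r_k on top of zero-shot-initialized prompt
    embeddings t_k^0, so the text input becomes t_k = t_k^0 + r_k."""
    def __init__(self, t0, clip_model):     # t0: (K, L, D) prompt embeddings
        super().__init__()
        self.register_buffer("t0", t0)
        self.r = torch.nn.Parameter(torch.zeros_like(t0))  # init from zeros
        self.clip = clip_model

    def forward(self, images):
        # Hypothetical helper that runs CLIP's text tower on raw embeddings.
        txt = self.clip.encode_text_embeddings(self.t0 + self.r)
        img = self.clip.encode_image(images)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        img = img / img.norm(dim=-1, keepdim=True)
        return self.clip.logit_scale.exp() * img @ txt.t()  # (N, K) logits
```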
For the \(m\)-shot classification setting (where \(m\in\{1,2,4,8,16\}\)), we randomly sample \(m\) training and \(m\) validation points from the respective full datasets. For all few-shot datasets except ImageNet, the training epoch is set to 200 for 16/8 shots, 100 for 4/2 shots, and 50 for 1 shot. For ImageNet, the epoch is set to 50 for all shots. We fine-tune the prompt with the SGD optimizer, decayed by the cosine annealing rule. The base initial learning rate and batch size are set to \(10^{-4}\) and \(32\). When given an \(m\)-shot sample setting, we increase the learning rate and batch size by \(m\) times simultaneously to accelerate the training speed. ### Estimation of the class prior To estimate the log-probability of the pre-training distribution \(\hat{\pi}_{p}=\log\mathbf{q}\), we utilize the optimization toolkit Cooper [15] from [https://github.com/cooper-org/cooper](https://github.com/cooper-org/cooper). \(\mathbf{q}\) is initialized as a uniform distribution, \(\mathbf{q}(y)=\frac{1}{K}\) for all \(y\in[K]\). We use standard SGD as the primal and dual optimizers for 2000 steps. ### Long-tail learning baselines and training details We compared with 5 long-tailed classification methods: 1. Standard ERM: We learn the model by standard empirical risk minimization on the long-tailed data. 2. Learnable Weight Scaling (LWS) [24]: We first learn the model by standard ERM, then fix the model and learn to re-scale the magnitude of the classifier using class-balanced sampling. 3. Logit Adjustment (LA) [35]: We first learn the model by standard ERM, then compensate for the long-tailed distribution by subtracting a class-dependent offset from the model outputs (a minimal sketch follows at the end of this subsection). 4. Balanced Softmax (BS) [43] modifies the softmax cross-entropy loss to explicitly accommodate the label distribution shift during optimization. 5. BALLAD [33] first fine-tunes the vision-language models via contrastive loss on long-tailed data, then freezes the backbone and finally employs an adapter to enhance the representations of tail classes with re-sampling strategies. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Dataset & \begin{tabular}{c} Size of \\ all classes \\ \end{tabular} & \begin{tabular}{c} Size of \\ many classes \\ \end{tabular} & \begin{tabular}{c} Size of \\ medium classes \\ \end{tabular} & \begin{tabular}{c} Size of \\ few classes \\ \end{tabular} & \begin{tabular}{c} Size of \\ training samples \\ \end{tabular} & \begin{tabular}{c} Imbalanced \\ ratio \\ \end{tabular} \\ \hline \begin{tabular}{l} Places365-LT \\ ImageNet-LT \\ \end{tabular} & 365 & 131 & 163 & 71 & 62.5K & 996 \\ \begin{tabular}{l} ImageNet-LT \\ \end{tabular} & 1000 & 385 & 479 & 136 & 186K & 256 \\ \hline \hline \end{tabular} \end{table} Table 8: Details of long-tailed datasets. For all combinations of the fine-tuning baselines and long-tailed learning methods, visual backbones are initialized from CLIP-ResNet-50 and all classifiers are initialized by feeding prompts with class names to the text encoder. We use SGD for all experiments with a momentum of 0.9, for 50 epochs with a batch size of 512. The initial learning rate is set to \(1.6\times 10^{-3}\), which is decayed by the cosine annealing rule. To mitigate exploding gradients, we use a warmup learning rate of \(10^{-5}\) during the first epoch. For the sake of fairness in comparison, all hyper-parameters of baselines are carefully searched using grid search on the validation set. 
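For concreteness, here is a minimal sketch of the Logit Adjustment and Balanced Softmax baselines described in items 3 and 4 above; `class_prior` stands for the training label distribution \(P_{s}(y)\), and \(\tau=1\) recovers the standard forms.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, labels, class_prior, tau=1.0):
    """BS: shift logits by the log class prior during optimization so the
    cross-entropy loss accommodates the long-tailed label distribution."""
    return F.cross_entropy(logits + tau * class_prior.log(), labels)

def logit_adjusted_predict(logits, class_prior, tau=1.0):
    """Post-hoc LA: subtract the class-dependent offset at test time."""
    return (logits - tau * class_prior.log()).argmax(dim=-1)
```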
## Appendix C Additional Experiments ### Few-shot learning accuracy We provide the mean and standard deviation in Table 9 for {1, 2, 4, 8, 16} shots on all 11 few-shot learning datasets. ### Experiments on LAION-400M To support our thought experiment in the discussion of Section 4.3, we use the Open-CLIP ViT-B/16 [21], the first 20k images in the LAION-400M dataset, and the bias estimation method proposed by [2] to estimate the expected logits across 8 classes: "dog", "cat", "squirrel", "tiger", "elephant", "horse", "pig" and "bird". The bias estimation proposed by [2] provides a good estimate of the log-probability \(\log P(y)\) over the labels under the pre-training distribution. Our GLA instead estimates the label bias that matches the downstream domain; we consider two downstream domain styles, _i.e._, "photo" and "sketch", from the DomainNet [38] dataset. For each domain, we randomly sampled 50 images for each class. We present the expected logits estimated by [2], along with the ones calculated from "photo" and "sketch" downstream domain data, in Table 10. Since the softmax is invariant to constant offsets, _i.e._, \(\mathrm{softmax}(\mathbf{x}+c)=\mathrm{softmax}(\mathbf{x})\), we align the three label biases to yield the same logit on the "dog" class by subtracting constant values. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & dog & cat & squirrel & tiger & elephant & horse & pig & bird \\ \hline expected logits by [2] & 0.059 & 0.039 & 0.043 & 0.053 & 0.043 & 0.062 & 0.061 & 0.028 \\ \(\pi_{p}\) by “photo” & 0.059 & 0.041 & 0.047 & 0.055 & 0.048 & 0.062 & 0.060 & 0.033 \\ \(\pi_{p}\) by “sketch” & 0.059 & 0.043 & 0.063 & 0.061 & 0.067 & 0.042 & 0.047 & 0.056 \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison among different bias estimations using Open-CLIP-ViT-B/16. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & 1 shot & 2 shots & 4 shots & 8 shots & 16 shots \\ \hline ImageNet & 61.65 \(\pm\) 0.15 & 62.64 \(\pm\) 0.01 & 63.32 \(\pm\) 0.07 & 64.51 \(\pm\) 0.09 & 65.61 \(\pm\) 0.03 \\ Caltech101 & 89.08 \(\pm\) 0.09 & 90.25 \(\pm\) 0.25 & 90.98 \(\pm\) 0.43 & 91.90 \(\pm\) 0.21 & 92.58 \(\pm\) 0.42 \\ OxfordPets & 87.79 \(\pm\) 0.15 & 87.86 \(\pm\) 0.21 & 88.22 \(\pm\) 0.21 & 88.09 \(\pm\) 0.27 & 89.53 \(\pm\) 0.16 \\ StanfordCars & 60.00 \(\pm\) 0.14 & 63.10 \(\pm\) 0.42 & 66.25 \(\pm\) 0.19 & 69.87 \(\pm\) 0.09 & 73.95 \(\pm\) 0.11 \\ Flowers102 & 73.45 \(\pm\) 0.60 & 81.00 \(\pm\) 0.46 & 88.31 \(\pm\) 0.65 & 92.89 \(\pm\) 0.46 & 95.41 \(\pm\) 0.32 \\ Food101 & 78.41 \(\pm\) 0.07 & 78.62 \(\pm\) 0.07 & 78.68 \(\pm\) 0.06 & 78.85 \(\pm\) 0.19 & 79.54 \(\pm\) 0.47 \\ FGVCAircraft & 20.22 \(\pm\) 0.59 & 22.09 \(\pm\) 0.37 & 24.65 \(\pm\) 0.85 & 28.23 \(\pm\) 0.44 & 31.99 \(\pm\) 0.50 \\ SUN397 & 64.29 \(\pm\) 0.19 & 66.32 \(\pm\) 0.16 & 68.01 \(\pm\) 0.08 & 69.99 \(\pm\) 0.18 & 71.64 \(\pm\) 0.21 \\ DTD & 47.38 \(\pm\) 1.23 & 50.75 \(\pm\) 1.46 & 56.90 \(\pm\) 0.20 & 62.73 \(\pm\) 0.80 & 65.78 \(\pm\) 0.49 \\ EuroSAT & 56.50 \(\pm\) 1.34 & 67.26 \(\pm\) 3.58 & 72.40 \(\pm\) 2.43 & 77.59 \(\pm\) 1.84 & 84.93 \(\pm\) 1.89 \\ UCF101 & 65.32 \(\pm\) 0.17 & 68.42 \(\pm\) 0.81 & 70.88 \(\pm\) 0.50 & 74.23 \(\pm\) 0.24 & 76.07 \(\pm\) 0.03 \\ \hline \hline \end{tabular} \end{table} Table 9: GLA Accuracy (%) with standard deviation of few-shot learning on 11 datasets. We observe that the label bias estimated by the "photo" domain aligns closely with [2] due to its close resemblance to the pre-trained domain. 
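The alignment just described is plain shift-invariance arithmetic; a small sketch is below, where the first row is the expected-logits estimate from Table 10 and the raw pre-alignment GLA values are made up for illustration.

```python
import numpy as np

def align(bias, ref=0, target=0.059):
    """Shift by a constant so the reference class ('dog', index 0) matches;
    softmax(x + c) == softmax(x), so predictions are unaffected."""
    return bias - bias[ref] + target

expected = np.array([0.059, 0.039, 0.043, 0.053, 0.043, 0.062, 0.061, 0.028])
raw_photo = np.array([0.112, 0.094, 0.100, 0.108, 0.101, 0.115, 0.113, 0.086])  # hypothetical unaligned estimate
print(align(expected))   # unchanged: 'dog' already at the target value
print(align(raw_photo))  # now directly comparable, as in Table 10
```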
Conversely, the "sketch" image style, which significantly differs from the pre-training domain, results in a more pronounced deviation in the pre-trained label bias. Additionally, we apply the three estimated label biases to debias the zero-shot model and evaluate the classification performance. The results are shown in Table 11, where the superiority of our method becomes evident on "sketch" domain (93.00% vs 92.25%). Applying the label bias from [2] on the "sketch" domain degrades the model's performance (from 92.25% to 89.50%). This is attributed to the overall pre-training label bias does not adequately reflect the bias specific to the"sketch" domain. \begin{table} \begin{tabular}{l l l} \hline \hline Method & Sketch & Real Photo \\ \hline Zero-shot Open-CLIP & 92.25 & 97.00 \\ Debiased by [2] & 89.50 & 97.50 \\ Debiased by GLA & 93.00 & 97.75 \\ \hline \hline \end{tabular} \end{table} Table 11: Performance on sketch and real photo domain.
2301.08647
Image Memorability Prediction with Vision Transformers
Behavioral studies have shown that the memorability of images is similar across groups of people, suggesting that memorability is a function of the intrinsic properties of images, and is unrelated to people's individual experiences and traits. Deep learning networks can be trained on such properties and be used to predict memorability in new data sets. Convolutional neural networks (CNN) have pioneered image memorability prediction, but more recently developed vision transformer (ViT) models may have the potential to yield even better predictions. In this paper, we present the ViTMem, a new memorability model based on ViT, and evaluate memorability predictions obtained by it with state-of-the-art CNN-derived models. Results showed that ViTMem performed equal to or better than state-of-the-art models on all data sets. Additional semantic level analyses revealed that ViTMem is particularly sensitive to the semantic content that drives memorability in images. We conclude that ViTMem provides a new step forward, and propose that ViT-derived models can replace CNNs for computational prediction of image memorability. Researchers, educators, advertisers, visual designers and other interested parties can leverage the model to improve the memorability of their image material.
Thomas Hagen, Thomas Espeseth
2023-01-20T15:55:35Z
http://arxiv.org/abs/2301.08647v1
# Image Memorability Prediction with Vision Transformers ###### Abstract Behavioral studies have shown that the memorability of images is similar across groups of people, suggesting that memorability is a function of the intrinsic properties of images, and is unrelated to people's individual experiences and traits. Deep learning networks can be trained on such properties and be used to predict memorability in new data sets. Convolutional neural networks (CNN) have pioneered image memorability prediction, but more recently developed vision transformer (ViT) models may have the potential to yield even better predictions. In this paper, we present the ViTMem, a new memorability model based on ViT, and evaluate memorability predictions obtained by it with state-of-the-art CNN-derived models. Results showed that ViTMem performed equal to or better than state-of-the-art models on all data sets. Additional semantic level analyses revealed that ViTMem is particularly sensitive to the semantic content that drives memorability in images. We conclude that ViTMem provides a new step forward, and propose that ViT-derived models can replace CNNs for computational prediction of image memorability. Researchers, educators, advertisers, visual designers and other interested parties can leverage the model to improve the memorability of their image material. memorability | vision transformers | psychology | semantic information ## Introduction Everyone knows that our memories depend on the experiences we have had, facts we have encountered, and the abilities we have to remember them. Combinations of these factors differ between individuals and give rise to unique memories in each of us. However, a complementary perspective on memory focuses on the material that is (to be) remembered rather than the individual that does the remembering. In one central study, Isola et al. (1) presented more than 2000 scene images in a continuous repeat-detection task. The participants were asked to respond whenever they saw an identical repeat. The results revealed that the memorability score (percent correct detections) varied considerably between images. Most importantly, by running a consistency analysis in which Spearman's rank correlation was calculated on the memorability scores from random splits of the participant group, Isola and colleagues (1) were able to show that the memorability score ranking was consistent across participants - some images were memorable and some were forgettable. These results indicate that the degree to which an image was correctly detected depended on properties intrinsic to the image itself, not the traits of the observers. This is important because it shows that one can use the memorability scores in a stimulus set to predict memory performance in a new group of participants. These results have been replicated and extended in a number of studies, revealing that similar findings are obtained with different memory tasks (2), different retention times (1, 2), different contexts (3), and independent of whether encoding is intentional or incidental (4). However, although image memorability has proven to be a robust and reliable phenomenon, it has not been straightforward to pinpoint the image properties that drive it. What seems clear though, is that memorability is multifaceted (5, 6). One way to characterize the underpinnings of memorability is to investigate the contribution from processes at different levels of the visual processing stream. 
For example, at the earliest stages of processing of a visual scene, visual attributes such as local contrast, orientation, and color are coded. At an intermediate level, contours are integrated, surfaces, shapes, and depth cues are segmented, and foreground and background are distinguished. At a higher level, object recognition is conducted through matching with templates stored in long term memory. Positive correlations of object brightness and high contrast with memorability have been found (7), but in general, low-level visual factors such as color, contrast, and spatial frequency do not predict memorability well (5, 8, 9). This is consistent with results showing that perceptual features are typically not retained in long term visual memory (10). In contrast to the low-level features, the evidence for a relation between intermediate to high level semantic features and memorability is much stronger. For example, images that contain people, faces, body parts, animals, and food are often associated with high memorability, whereas the opposite is a typical finding for objects like buildings and furniture and images of landscapes and parks (3, 7, 11, 12). Other intermediate to high level features such as object interaction with the context or other objects, saliency factors, and image composition also contribute to memorability (5). Furthermore, although memorability is not reducible to high-level features such as aesthetics (1, 12), interestingness (1, 13), or popularity (12), emotions, particularly of negative valence, seem to predict higher memorability (9, 12). Finally, memorability seems to be relatively independent of cognitive control, attention, or priming (14). Overall, the available evidence indicates that memorability seems to capture intermediate- to high-level properties of semantics, such as objects or actions, and image composition, such as layout and clutter, rather than low-level features [5, 15]. This fits well with the central role of semantic categories in organizing cognition and memory [16]. Generally, the priority of semantic-level information enables us to quickly understand novel scenes and predict future events [17]. For example, when inspecting a novel scene or an image, we do not primarily focus on low-level perceptual features or pixels, but prioritize more abstract visual schemas involving spatial regions, objects, and the relation between them [18]. Also, when people are asked to indicate which regions of an image help them recognize it, there is high consistency between people's responses [18]. Similarly, fixation map data from eye-tracking have shown that there is a positive correlation between fixation map consistency and scene memorability, and this relation is associated with the presence of meaningful objects [3, 7, 19]. Bylinskii et al. [5] suggest that these properties most efficiently signal information of high utility to our species, for example, emotions, social aspects, animate objects (e.g., faces, gestures, interactions), unexpected events, and tangible objects. ### Memorability prediction The finding that the memorability of an image is governed by properties intrinsic to the image itself not only implies that one can predict memory performance in a new set of participants, as described above, but also that one can predict the memorability of a novel set of images (i.e., memorability is an "image computable" feature). 
Given the availability of computational algorithms and high-quality training sets of sufficient size, one can predict memorability in novel sets of images for future (or already conducted) behavioral or neuroimaging studies. Such memorability prediction could also be valuable in a number of applied settings (e.g., within education, marketing and human-computer interaction). Memorability researchers have employed computer vision models such as convolutional neural networks (CNNs) from early on [12], and advancements in the field have allowed researchers to predict image memorability with increasing precision [20, 21, 22]. The inductive bias (the assumptions of the learning algorithms used to generalize to unseen data) of CNNs is inspired by knowledge about the primate visual system, and activations in the networks' layers have, with some success, been used to explain neural activations [23]. However, some vulnerabilities of CNNs have been noted. For example, CNNs appear to depend more on image texture than biological vision systems do [24], and have problems with recognizing images based on the shape of objects (e.g., when texture is suppressed or removed). This vulnerability is reduced, however, when the model's shape bias is increased through training on shape representations [25]. The LaMem train/test splits are a well-established benchmark for memorability prediction [12]. The original MemNet [12], which is based on AlexNet [26], achieved a Spearman rank correlation of 0.64 on this benchmark. There have been several improvements on this benchmark; the leading approaches utilize image captioning to enhance memorability predictions. That is, a CNN produces a textual description of the image, which is then used to provide more high-level semantic information that is embedded into a semantic vector space before being combined with CNN image features in a multi-layer perceptron network. Squalli-Houssaini et al. [21] used this approach to reach a Spearman correlation of 0.72, with a mean squared error (MSE) of approximately 0.0092 [22]. Leonardi et al. [22] used the captioning approach with dual ResNet50s and a soft attention mechanism to reach a rank correlation of 0.687 with an MSE of 0.0079. The ResMem model [20], which is a CNN-based residual neural network architecture (ResNet), uses LaMem, but also takes advantage of a more recently published dataset named MemCat [11]. This is a data set containing 10,000 images based on categories of animals, food, landscape, sports and vehicles. This data set also has a higher split-half correlation than LaMem. Needell and Bainbridge [20] argue that the LaMem dataset on its own is lacking in generalizability due to poor sampling of naturalistic images. That is, the images are more intended as artistic renderings designed to attract an online audience. Hence, combining MemCat with LaMem should potentially yield a more generalizable model. Moreover, the increased size of the combined dataset might help drive model performance further than previous models based on LaMem. The authors of ResMem also noted the importance of semantic information and structured their approach to utilize semantic representations from a ResNet model in order to improve predictions. An added benefit of ResMem is that it is shared on the python package index, which makes it easily accessible to researchers in diverse fields. ### Vision transformers Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks [27]. This architecture was first introduced in the natural language processing field [28] for capturing long-range dependencies in text. This architecture leads to a superior speed/performance balance relative to ResNet architectures [29]. Moreover, ViTs have been shown to produce errors that are more similar to human errors [30], suggesting that they could take similar information into account (see also [31]). A reason for this may be that ViTs are likely to take more of the global context into account and be more dependent on the shape of objects rather than their texture [30]. While it is not entirely clear why such properties may yield better predictions of image memorability, they could still help inform the discourse on which visual characteristics are relevant, as well as potentially yield a better model for predicting image memorability. Hence, we set out to investigate if vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits [12], (ii) train a ViT against the combined LaMem and MemCat data sets [20] to benchmark against the ResMem model [20], (iii) train a final ViT model against a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets and (v) inspect semantic level distributions of memorability scores for behavioral and predicted data. ## Methods As our model uses a ViT to predict memorability, we named it ViTMem. Because it has been shown that low-level visual features are less important for image memorability prediction, it would seem appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach has also been used by others [22], although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion [32]. For training all models we used PyTorch, the Adam optimizer and mean squared error (squared L2 norm) for the loss function. Images were input as batches of 32 in RGB and resized to 256x256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224x224 pixels. For creating ViTMem we used transfer learning on a vision transformer [27] model pretrained on ImageNet 1k (vit_base_patch16_224_mil). The final classification layer was reduced to a single output followed by a sigmoid activation function. As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose to do 10 train/test splits that can be used by future researchers (available at [https://github.com/brainpriority/vitmem_data](https://github.com/brainpriority/vitmem_data)). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set. 
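A condensed sketch of the training setup just described, using the Albumentations and timm libraries; the grouping of augmentations into `OneOf` blocks, the learning rate, and the timm checkpoint name (`vit_base_patch16_224_miil`) are our assumptions for illustration, not the authors' exact configuration.

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2
import timm
import torch

# Augmentation pipeline mirroring the list above; the paper applies
# augmentations with p=0.7 before a 224x224 center crop.
train_tfms = A.Compose([
    A.Resize(256, 256),
    A.HorizontalFlip(p=0.5),
    A.OneOf([A.Sharpen(), A.Blur(), A.MotionBlur()], p=0.7),
    A.OneOf([A.RandomBrightnessContrast(),            # stands in for "random contrast"
             A.HueSaturationValue(), A.CLAHE()], p=0.7),
    A.OneOf([A.ShiftScaleRotate(), A.Perspective(),
             A.OpticalDistortion(), A.GridDistortion()], p=0.7),
    A.CenterCrop(224, 224),
    A.Normalize(),
    ToTensorV2(),
])

# ViT backbone with a single output unit for the memorability score.
model = timm.create_model("vit_base_patch16_224_miil", pretrained=True, num_classes=1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # assumed value

def train_step(images, scores):
    """One MSE training step; scores are memorability values in [0, 1]."""
    preds = torch.sigmoid(model(images)).squeeze(1)
    loss = criterion(preds, scores)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```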
### Vision transformers Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks [27]. This architecture was first introduced in the natural language processing field [28] for capturing long-range dependencies in text. This architecture leads to superior speed/performance balance relativ to ResNet architectures [29]. Moreover, ViTs have been shown to produce errors that are more similar to human errors [30], suggesting that they could take similar information into account (see also [31]). A reason for this may be that ViTs are likely to take more of the global context into account and be more dependent on the shape of objects rather than their texture [30]. While it is not entirely clear why such properties may yield better predictions of image memorability, it could still help inform the discourse on the visual characteristics that are relevant as well as potentially yielding a better model for predicting image memorability. Hence, we set out to investigate if vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits [12], (ii) train a ViT against the combined LaMem and MemCat data sets [20] to benchmark against the ResMem model [20], (iii) train a final ViT model against a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets and (v) inspect semantic level distributions of memorability scores for behavioral and predicted data. ## Methods As our model is based on ViT to predict memorability we named it ViTMem. Because it has been shown that low-level visual features are less important for image memorability prediction, it would seem appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach have also been used by others [(22)], although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion [(32)]. For training all models we used PyTorch, the ADAM optimizer and mean squared error (squared L2 norm) for the loss function. Images were input as batches of 32 in RGB and resized to 256x256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224x224 pixels. For creating ViTMem we used transfer learning on a vision transformer [(27)] model pretrained on ImageNet 1k (vit_base_patch16_224_mil) [(33)]. The final classification layer was reduced to a single output and a sigmoid activation function. As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose to do 10 train/test splits that can be used by future researchers (available at [https://github.com/brainpriority/vitmem_data](https://github.com/brainpriority/vitmem_data)). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set. 
For the semantic level analysis, we chose to use image captioning [(34)] as this provides an efficient method for deriving semantic properties from images at scale. Importantly, as the image captioning model was trained on human image descriptions, it is likely to extract content that humans find meaningful in images, and in particular objects and contexts that are relevant for conveying such meanings. Hence, nouns derived from such descriptions are likely to be representative portions of the content that would convey meaning to humans observing the images. ### Data Sources. For the large-scale image memorability (LaMem) benchmark we used the LaMem dataset [(12)]. The image set used by ResMem is a combination of the image sets LaMem [(12)] and MemCat [(11)]. LaMem containing 58,741 and MemCat 10,000 images, for a total of 68,741 images. ResMem is reported to have used a held-out test set with 5000 images, hence we randomly selected 5000 images as our test set for our 10 train/test splits for this combined data set. For our final model we aimed to clean up the data and combine more of the available data sets on image memorability. As number of duplicated images within and between data sets is unknown and duplicated images may interfere with performance measures, we aimed to deduplicate the data for this model. Duplicated images were identified by simply deriving embeddings from an off-the-shelf CNN model, and then visually inspecting the most similar embeddings. Our analysis of the data sets LaMem and MemCat showed that LaMem have 229 duplicated images while MemCat have 4. Moreover, 295 of the images in LaMem is also in MemCat. We aimed to build a larger and more diverse data set by combining more sources, and for this we chose CVPR2011 [(9)] and FIGRIM [(3)]. CVPR2011 had 6 internal duplicates, 651 duplicates against LaMem, 78 against MemCat og 9 against FIGRIM. FIGRIM had 20 duplicates against MemCat and 70 against LaMem. All identified duplicates were removed before merging the data sets. As the images from FIGRIM and CVPR2011 were cropped, we obtained the original images before including them in the data set. This resulted in a data set with 71,658 images. For this data set we performed a 10% split for the test set. ## Results ### Results on LaMem data set. On the LaMem data set the ViTMem model reached an average Spearman rank correlation of 0.711 and an MSE of 0.0076 (see Table 1). Here we compare our performance to measures obtained by MemNet [(12)], Squalli-Houssaini et al. [(21)] and Leonardi et al. [(22)]. ### Results on the combined LaMem and MemCat data set. Training on 10 train/test splits on the combined data set the results showed that ViTMem performed better than the ResMem model (see Table 2). The average across splits showed a Spearman rank correlation of 0.77 and an MSE of 0.005. ### Results on combined and cleaned data set. To assess model performance on the larger and cleaned data set, we made a train/test split and then performed repeated k-fold cross validation with 10 train/test splits on the training set. This resulted in a mean MSE loss of 0.006 and a mean Spearman rank correlation of 0.76 (see Table 3). In order to provide a model for the community we used the full data \begin{table} \begin{tabular}{l c c} \hline \hline Model & MSE Loss \(\downarrow\) & Spearman \(\rho\uparrow\) \\ \hline MemNet & Unknown & 0.640 \\ Squalli-Houssaini et al. & 0.0092 & 0.720 \\ Leonardi et al. 
& 0.0079 & 0.687 \\ ViTMem & 0.0076 & 0.711 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of model performance on the LaMem data set. \begin{table} \begin{tabular}{l c c} \hline \hline Model & MSE Loss \(\downarrow\) & Spearman \(\rho\uparrow\) \\ \hline ResMem & 0.009 & 0.67 \\ ViTMem & 0.005 & 0.77 \\ \hline \hline \end{tabular} \end{table} Table 2: Model performance on the LaMem and MemCat combined data set. In order to provide a model for the community, we used the full data set to train the final model (ViTMem Final Model), which is published on the python package index as version 1.0.0. This was trained on the full training set and tested on its corresponding test set. The results showed a Spearman rank correlation of 0.77 and an MSE of 0.006 (see Table 3). The train/test splits are available on github. ### Validation on independent data sets. To further validate our model, we used memorability scores from an independent data set by Dubey and colleagues named PASCAL-S [7, 35], consisting of 852 images and cropped objects from the same images. ViTMem achieved a Spearman correlation of 0.44 on the images and 0.21 on the objects. In comparison, ResMem achieved a correlation of 0.36 on the images and 0.14 on the objects. Validation against the THINGS data set [15], which consists of 26,106 images with memorability scores, yielded a Spearman rank correlation of 0.30 for ViTMem and 0.22 for ResMem. ### Semantic level analysis. In order to better understand how the model predictions relate to the semantic content of the images, we performed image captioning [34] on the combined LaMem and MemCat data set and the Places205 data set [36]. We extracted nouns from the resulting image descriptions and averaged behavioral or predicted memorability scores for each noun [37] (a minimal sketch of this procedure follows at the end of this section). That is, the memorability for each image was assigned to each noun derived from the image captioning procedure. For the combined LaMem and MemCat data set we averaged behavioral memorability scores over nouns (see Figure 1), while for the Places205 data set we averaged predicted memorability scores from the ViTMem model (see Figure 2). A general interpretation of the visualizations in Figures 1 and 2 is that they appear to reveal a dimension from nouns usually observed outdoors to more indoor-related nouns, ending with nouns related to animals and, in particular, humans. This would appear to reflect the distributions observed in previous work [9, 15], and hence helps to validate the model in terms of the image content it is sensitive to. To further investigate how similar the memorability associated with nouns was across the models, we selected nouns occurring above the 85th percentile in each set (654 nouns for LaMem and MemCat, 2179 nouns for Places205), which resulted in 633 matched nouns across sets. Analysis of these showed a Spearman rank correlation of 0.89 and an R\({}^{2}\) of 0.79, _p_<0.001 (see Figure 3). This analysis indicates that nouns from image captioning are a strong predictor of image memorability and that the ViTMem model is able to generalize the importance of such aspects from the training set to a new set of images. ## Discussion Using vision transformers we have improved on the state-of-the-art in image memorability prediction. Results showed that ViTMem performed equal to or better than state-of-the-art models on LaMem, and better than ResMem on the LaMem and MemCat hybrid data set. In addition, we assembled a new deduplicated hybrid data set and benchmarked the ViTMem model against this before training a final model. 
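A minimal sketch of the noun-level analysis described above, assuming captions have already been generated; spaCy stands in here for the part-of-speech step (the paper's exact tooling [34, 37] may differ).

```python
import spacy
from collections import defaultdict
from scipy.stats import spearmanr

nlp = spacy.load("en_core_web_sm")

def noun_memorability(captions, scores):
    """Assign each image's memorability score to every noun in its
    caption, then average per noun lemma."""
    pooled = defaultdict(list)
    for caption, score in zip(captions, scores):
        for token in nlp(caption):
            if token.pos_ == "NOUN":
                pooled[token.lemma_].append(score)
    return {noun: sum(v) / len(v) for noun, v in pooled.items()}

def compare(behavioral, predicted):
    """Rank-correlate nouns shared between the behavioral table
    (LaMem+MemCat) and the predicted table (Places205)."""
    shared = sorted(set(behavioral) & set(predicted))
    rho, p = spearmanr([behavioral[n] for n in shared],
                       [predicted[n] for n in shared])
    return rho, p
```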
The model was further validated on additional data sets, and performed better than ResMem on these as well. Finally, we ran a semantic level analysis by using image captioning on the hybrid data set. We ranked the behavioral memorability scores on the images, labeled with nouns extracted from the captioning procedure. The results revealed that images labeled by nouns related to landscapes, cities, buildings and similar were ranked lowest, whereas images labeled by nouns related to animate objects and food were ranked highest. This finding is consistent with known category effects on memorability [3, 7, 11, 12, 15] and suggests that the labels extracted from the captioning procedure are strongly related to factors that drive memorability for those images. Subsequently, we predicted memorability scores on images from a novel data set (Places205), ran the image captioning procedure, and ranked the predicted memorability scores on the images, labeled with nouns extracted from the captioning procedure. Visual inspection of the results revealed that the ranks were similar across samples and methods. This impression was confirmed by a strong correlation between matching pairs of nouns and 79\(\%\) explained variance, suggesting that ViTMem captures the semantic content that drives memorability in images. The use of image augmentations in training the ViTMem model, in combination with state-of-the-art performance, suggests that such augmentations do not disrupt the ability of the model to predict image memorability and hence may further support the importance of semantic level properties in image memorability. That is, the augmentations modify a range of low-level image properties but mostly leave the semantic content intact. In comparison with ResMem, which relies on a CNN-based residual neural network architecture, ViTMem is based on vision transformers, which integrate information in a more global manner [30]. As images are compositions of several semantically identifiable objects or parts of objects, a more holistic approach may be more apt at delineating the relative relevance of objects given their context. That is, we speculate that a broader integration of image features allows for a more complete evaluation of an image's constituent features in relation to each other. Hence, if semantic content is important for predicting image memorability, the model may have weighed the importance of semantic components in relation to each other to a larger degree than models based on CNNs. ViTMem code and train/test sets are shared on github ([https://github.com/brainpriority/](https://github.com/brainpriority/)), and a python package named vitmem is available on the python package index (see supplementary Sup. Note 1 for a tutorial). Researchers and interested parties can use the model to predict memorability in existing or novel stimuli and employ them in research or applied settings. The ViTMem model will allow researchers to more precisely predict image memorability. The release of ViTMem follows up ResMem in providing an accessible method for predicting image memorability. This is important for studies aiming to control for how easily an image can be remembered. \begin{table} \begin{tabular}{l c c} \hline \hline Model & MSE Loss \(\downarrow\) & Spearman \(\rho\uparrow\) \\ \hline ViTMem & 0.006 & 0.76 \\ ViTMem Final Model & 0.006 & 0.77 \\ \hline \hline \end{tabular} \end{table} Table 3: Model performance on the combined and cleaned data set. 
This will, for example, allow experimental psychologists and neuroscientists to better control their research. Similarly, educators, advertisers and visual designers can leverage the model to improve the memorability of their content. Despite state-of-the-art performance in memorability prediction, further improvements may still be possible. Previous works have shown benefits of pretraining their networks on data sets of places and objects prior to fine-tuning for memorability prediction [(39)]. Moreover, ViTMem does not take image captioning into account, which has been done successfully with CNNs [(21, 22)]. Hence there is potentially more to be gained from incorporating image semantics and/or pretraining on data sets of objects and places. In addition, ViTMem is only based on the "base" configuration of the available ViT models; model performance may still increase by adopting the "large" or "huge" configurations of the model. We conclude that ViTMem can be used to predict memorability for images at a level that is equal to or better than state-of-the-art models, and we propose that vision transformers provide a new step forward in the computational prediction of image memorability.

Figure 1: Average behavioral image memorability scores for nouns that were extracted from images in the LaMem and MemCat data sets. The nouns shown are those that occurred most frequently or that are more frequent in the English language [(38)].

Figure 2: Average ViTMem predicted image memorability scores for nouns that were extracted from images in the Places205 data set. The nouns shown are those that occurred most frequently or that are more frequent in the English language [(38)].

Figure 3: Average memorability scores for images with matching nouns in different data sets. The y-axis shows average predicted memorability scores from ViTMem on the Places205 data set. The x-axis shows average behavioral memorability scores on the combined LaMem and MemCat data set.
2302.09009
Invoice discounting using kelly criterion by automated market makers-like implementations
There is a persistent lack of funding, especially for SMEs, that cyclically worsens. The factoring and invoice discounting market exists to address delays in paying commercial invoices: sellers bring still-to-be-paid invoices to financial organizations, intermediaries, typically banks, that provide an advance payment. This article contains research on novel decentralized approaches to said lending services without intermediaries, by using liquidity pools and their associated heuristics, creating an Automated Market Maker. In our approach, the contributed collateral and the invoice are traded, with risk measured by a formula: the Kelly criterion is used to calculate the optimal premium to be contributed to a liquidity pool in the funding of said invoices. The behavior of the algorithm is studied in several scenarios of streams of invoices with representative amounts, collaterals, payment delays, and nonpayment rates or mora. We completed the study with hack scenarios involving bogus, nonpayable invoices. As a result, we have created a resilient solution that performs best with partially collateralized invoices. The outcome is a decentralized market developed with the Kelly criterion that is reasonably resilient to a wide variety of invoicing cases and provides sound profit to liquidity providers; several premium distribution policies were checked that contributed extra resilience to the performance of the algorithm.
Peplluis R. Esteva, Alberto Ballesteros Rodríguez
2023-01-30T09:00:06Z
http://arxiv.org/abs/2302.09009v1
# Invoice Discounting using Kelly Criterion by Automated Market Makers-like Implementations

###### Abstract

There is a persistent lack of funding, especially for SMEs, that cyclically worsens. The factoring and invoice discounting market exists to address delays in paying commercial invoices: sellers bring still-to-be-paid invoices to financial organizations, intermediaries, typically banks, that provide an advance payment. This article contains research on novel decentralized approaches to said lending services without intermediaries, by using liquidity pools and their associated heuristics, creating an Automated Market Maker. In our approach, the contributed collateral and the invoice are traded, with risk measured by a formula: the Kelly criterion is used to calculate the optimal premium to be contributed to a liquidity pool in the funding of said invoices. The behavior of the algorithm is studied in several scenarios of streams of invoices with representative amounts, collaterals, payment delays, and nonpayment rates or mora. We completed the study with hack scenarios involving bogus, nonpayable invoices. As a result, we have created a resilient solution that performs best with partially collateralized invoices. The outcome is a decentralized market developed with the Kelly criterion that is reasonably resilient to a wide variety of invoicing cases and provides sound profit to liquidity providers; several premium distribution policies were checked that contributed extra resilience to the performance of the algorithm.

AMM; Autonomous Market Maker; Invoice Discounting; Kelly Criterion

## 1 Introduction

The market of invoice discounting is a market with double-digit potential growth over the next years in Europe and worldwide [1]. Invoice discounting already represents 10% of banks' provided credit, and it has become a major source of working capital finance globally after the restriction of bank financing due to the 2011 credit crunch [2]. It seems it will become even greater in the next crisis, announced for 2023 and onward. In particular, as an advantage, invoice discounting helps companies, especially Small and Medium Enterprises (SMEs), that have cash flow problems because of late payments from customers (i.e., invoices are usually paid in 30-90 days). For European SMEs it has surpassed loans and other forms of financing over the past decades [2]. Another advantage of invoice discounting is confidentiality: namely, suppliers control the sales ledger by collecting payments as usual and sending out reminders. The customer (debtor) is not involved in the discounting process, hence it is not informed that the supplier (creditor) is getting his/her credit financed. From the point of view of the suppliers' companies, confidentiality is an advantage since, to ease negotiations with partners, they might not want to disclose their use of working capital finance. As [3] claims, the majority of small companies that belong to this financial market are poorly served, despite it being recognized that this market has the potential to address the liquidity needs of small companies. Two critical features of an invoice market are extremely difficult for factors to evaluate: the authenticated identities of the parties involved (seller and buyer), and whether or not an invoice has already been factored. As a result, factors can be victims of fraud by sellers that factor non-existent invoices or factor an invoice to multiple factors (double pledging).
At the same time, buyers might be contacted by fraudulent factors that claim to have factored an invoice of a legitimate seller and demand payment for such invoices. Even in the absence of fraud, paying the invoice to the wrong organization creates a number of issues. A quick solution would have been a centralized system where all invoices could be stored and checked by interested parties. Yet, despite the existence of factoring markets for centuries, no such entities emerged. The main reason for this absence is the quest for confidentiality. For different reasons, buyers, factors, and sellers have no interest in sharing information on past invoices, as its disclosure can go against them. Within the scheme of centralized systems, the work presented in [3] proposes a trusted third party to fix all that. As [4] claimed in their work, the main property that a factoring system needs to fulfill is to prevent an invoice from being factored twice. Blockchain can be used to remake a wide range of finance processes: intercompany transactions (when there are multiple ERPs), procure-to-pay, order-to-cash, rebates, warranties, and financing (such as trade finance, letters of credit, and invoice factoring). Any place where paper piles up presents an opportunity for blockchain to move in and knock it down.

### Invoice discounting and factoring

The main benefit of invoice discounting is the acceleration of cash flow from customers to suppliers: suppliers get advance payments from the bank rather than waiting for the customers to pay. Hence, thanks to the quick availability of capital, businesses can invest in expansion and growth. More specifically, one of the most relevant problems today is how to provide better and faster invoice discounting services while preventing double spending and keeping risk low. Blockchain frameworks have the potential to provide the right solution and thus to revolutionize the invoice discounting process. The benefits for suppliers, customers and financial institutions are related to the increased transparency added to the whole discounting process and the resulting risk reduction for the banks, due to the capability to enhance the entire process and to reduce double spending. As for factoring, it is a financial transaction and a type of debtor finance in which a business sells its accounts receivable (i.e., invoices) to a third party (called a factor) at a discount. A business will sometimes factor its receivable assets to meet its present and immediate cash needs. The actual problems of factoring are those of intermediation and those of the general area of trade finance and supply chain. This paper digs deeper into novel alternative approaches to invoice discounting or factoring, in which we radically decentralise the way invoices are used. We plan to find solutions other than dumping all invoices on a ledger so that algorithms implemented as dApps or smart contracts score companies; we decentralise crowdfunding instead of relying on banks. With byppay we have conceived a new solution to invoice compensation with debt-soothing effects and a radically decentralised approach, other than invoice uploading, so that only those invoices related to the clients and suppliers of an actor are declared on a ledger with a full mechanism of PoE (Proof of Existence), and the inherent benefit of such a decentralised approach to invoice compensation is bolstered [5].
### State of the Art of Blockchain for Funding

For the discounter, the benefits of decentralization and blockchain adoption are as follows [6]: (i) an immutable and time-stamped record of the existence of every invoice emitted by a company, (ii) an immutable and time-stamped record of the debtor's receipt, and (iii) the confirmation and verification of the invoice (against which a discounter would fund). Hence, the overall invoice discounting process will be enhanced. Indeed, the trust and security mechanisms of the blockchain allow for the elimination of on-site audits of receivables and debtors, of receivables' notification and debtors' verification, and of month-end reconciliation processes. Moreover, the adoption of blockchain will also allow for a fast and cheaper value transfer, in particular for cross-border payments. More specifically, the debtor's verification of the validity of the invoice and of the reception of goods and services significantly reduces the risk of dispute and non-payment of that invoice. Moreover, the debtor could have an incentive to acknowledge and confirm its invoices without delay, as his/her own track record of confirming invoices would be visible on the blockchain to his/her suppliers, and thus it could be used to influence the payment terms and the offered contract prices. This immutable debtor's verification could also potentially eliminate the risk of invoice fraud for a discounter, as there would be no "consensus" met for double invoicing transactions. In fact, time-stamping an invoice has a legal value: if a company attempts to assign its invoice more than once, it would prevent any subsequent assignee from being a _bona fide_ purchaser for value without notice, thus protecting the first assignee.

### Proof of Existence

In [1], the authors provided evidence that the invoice discounting service might be improved by adopting approaches based on a distributed ledger. In our opinion, the prevailing approaches, despite using blockchain, are all centralized in nature, based on a full declaration of invoices made available online for graph analysis. We instead advocate a bottom-up, agent-style approach, for which decentralized implementations are more appropriate and which fits naturally and straightforwardly with Blockchain / Distributed Ledger Technologies (DLTs). Distributed intelligence algorithms and multi-agent paradigms require a proper framework, and the computing and ledger capabilities of DLT platforms like Ethereum or Polygon, Polkadot, BitcoinSV, and Binance Smart Chain, among others, are ideal for developing such intelligence. Many distributed bottom-up behaviours and strategies are open for research: coalition formation techniques and distributed autonomous organizations that require the proper platforms and environments to develop fully. There are actors like [7] that use the BitcoinSV platform, or [8] that use blockchain to enhance the PoE.

### On Collateral

Our solution requires no third parties and no collateral (invoices are the basic asset) and is anti-cyclical, similar to complementary currencies like the WIR Bank1 [9]. In fact, currencies are the oldest information system of humanity, and today we have the means to create new information systems through new forms of payment that will complement the strength of the currencies.

Footnote 1: [https://www.wir.ch](https://www.wir.ch)

However, using collateral and liquidity pools will indeed boost invoice discounting, factoring or byppay adoption.
Thus, in this project we cover the areas of distributed intelligence on a DLT, and adopt new DeFi (Decentralized Finance) mechanisms to cover risks or enhance the liquidity of invoices as assets: real-world assets, tokenized on a ledger. The DeFi implementation is backed by the ideas of value in the Internet of Value (IoV), where tokenized invoices back the value and their rights are traded among peers. With our new implementation of collateral, some of the IoV systemic risks are solved; notably, overcollateralization will not be necessary, as stated in our contributions to the recent report of [10].

### Liquidity Pools

Regarding liquidity pools for the crowdfunding of factoring, there are Protocols for Loanable Funds (PLF) that let users borrow/lend digital assets in a decentralized fashion [11]. Automated smart contracts behave as middlemen. They lock assets deposited by the lender and allow borrowers to get liquidity in exchange for collateral. These types of smart contracts are also called Lending Pools [8]. These Lending Pools typically lock a pair of tokens, a loanable token and a collateral token. By providing liquidity, lenders earn interest depending on supply and demand. Because there is no guarantee of repayment, borrowers must over-collateralize their position. On top of that, when returning the amount borrowed, the borrowers must pay an interest rate that is split pro-rata among the lenders and the governance token. Moreover, when borrowers get liquidated, they will have to pay an additional fee.

Figure 1: Lending protocol framework [11]

Figure 1 abstracts and generalizes the lending protocol framework by showing the main actors and interactions. From left to right, lenders can deposit their crypto-assets, Ethereum in this case, to gain additional profits. They receive a PLF's wrapped token or IOU as proof of their deposit. In the centre, the smart contract acts as a middleman. It takes care of the deposited assets, loans, and liquidations, if any. On the right, a borrower must deposit collateral before getting the loan. Finally, at the end of the loan, the borrower will have to return the borrowed amount plus an interest rate; part of this interest rate will be split pro-rata among all lenders, and the rest will generate revenue for the PLF itself.

### From overcollateralization to risk appraisal

In current initiatives on liquidity pools and related DeFi implementations, whatever asset is tokenized (i.e., lending, factoring, insurances, guaranties of service, creative industries, and more) is backed by collateral in the search of collateralizing the risk, by means of virtual currencies or any of the new digital assets [10]. Their drawback lies in overcollateralization, and their volatility, caused by new forms of interdependency at an unprecedented level, is a systemic risk in the coming Internet of Value. In our opinion, overcollateralization does not fit invoice risk management properly, as invoices are assets whose value is already appraised in a currency, and their risk is related to their liquidity. The euros that will be paid through an invoice are a sort of stablecoin, invoice-euro for example, such that the invoice-euro conversion is pegged to the euro at a discount, say 0.5, 1, 2 percent or more, according to the mora or invoice nonpayment rates. It is neither a monetary risk nor a crazy market valuation, but pure risk management, which is well known in financial markets that use sophisticated tools.
Unlike the liquidity pool solutions in the state of the art that use crowdsourced approaches to rate risk, we look for alternative formulas that appraise the risk in a fully automated way. Under this reality, there is room to study novel mechanisms for partly collateralized invoices, using risk management tools that are worth exploring and implementing in a decentralized way that avoids overcollateralization. Techniques like Black-Scholes or the Kelly criterion [12] are today being studied for their implementation in risk appraisal and management. We are interested in the Kelly criterion for calculating the optimal diversification of investments when creating a portfolio, given a volume of money available for investment and some known risk, as it forecasts the optimal achievable benefit. In the case of decentralized markets with liquidity pools for invoices, provided that we know the volume of money in advance (it is the liquidity pool's amount of money), we chose to reformulate the Kelly criterion so that one can decide whether a given invoice that is partly collateralized can be accepted to get fully collateralized by the liquidity pool, and at what benefit to collect in order to compensate the risk. This benefit will be the premium required as a fee for every invoice accepted in the liquidity pool. As this is a new usage of the Kelly criterion, we named it the Reverse Kelly Criterion: a new formula that calculates the premium to be charged given an invoice with some collateral, as we will see in detail in the following section.

## 2 Reverse Kelly AMM (rkAMM) Design

Electronic markets use algorithms that decide the price and amount of liquidity supplied in markets. But this automation is not complete, as algorithms are managed by humans that intervene to improve their design, or turn them off entirely if conditions become unfavorable, leading to withdrawals of liquidity that may exacerbate market volatility. An Automated Market Maker (AMM), instead, is a market consisting of just a single algorithm for facilitating trades: one that is simple, static and deterministic [13]. An AMM follows a constant product rule for swapping assets: if the pool holds two types of assets, then the product of the reserves of those assets must be the same before and after any swap is realized. This notion of keeping the value of a function (e.g., the product) of asset reserves invariant to swaps was later generalized into the idea of constant function market making. The basic construction of constant function market makers was formalized in [14]. Then, a simple formula, \(x\cdot y=k\), sets prices and quantities for thousands of different assets using only 378 lines of code. AMMs today [13] execute upwards of 50bn USD per month in digital assets; Uniswap alone, the largest of the AMMs, accounts for over 39 million transactions in a year of data acquired directly from the public Ethereum blockchain.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **DeFi Protocol** & **Smart Contract** & **Investor** & **User** & **Financial Service** \\ \hline _Protocols for Loanable Funds (PLF)_ & Lending Pool & Lender & Borrower & Lending \\ \hline _Decentralized Exchange (DEX)_ & Liquidity Pool & Liquidity Provider & Buyer/Trader & Exchange \\ \hline _Yield Aggregators_ & Vault & Vault User & - & Asset Management \\ \hline \end{tabular} \end{table} Table 1: DeFi Protocols Comparison.
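As background for the constant product rule just described, the sketch below shows a toy swap that keeps \(x\cdot y\) invariant. It illustrates generic AMM mechanics only, not the rkAMM defined later; all names are ours.

```python
# Toy constant-product swap: a minimal illustration of the x*y = k rule.
# Generic AMM background only; this is not the rkAMM defined below.
def swap_x_for_y(x_reserve: float, y_reserve: float, dx: float):
    """Trader deposits dx of asset X; returns (new_x, new_y, dy_paid_out)."""
    k = x_reserve * y_reserve      # invariant fixed before the swap
    new_x = x_reserve + dx
    new_y = k / new_x              # reserves chosen so that x*y stays at k
    return new_x, new_y, y_reserve - new_y

x, y, out = swap_x_for_y(1000.0, 1000.0, 100.0)
assert abs(x * y - 1_000_000.0) < 1e-6   # product unchanged by the swap
print(f"trader receives {out:.2f} of Y; pool now holds {x:.0f} X and {y:.2f} Y")
```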
In an AMM, anyone can become a Liquidity Provider (LP) by transferring their digital assets to pools. They might add to the liquidity of existing pools, thereby increasing the pool size, or create new ones. In return they receive revenue from a fixed fee on trades (called swaps) made by liquidity demanders on the AMM. Fee revenue is shared among LPs in proportion to their investment, so LP returns vary as a function of the amount of liquidity in a pool (the pool's volume or size). LPs can only add or remove liquidity; prices are set by the AMM algorithm, unlike traditional market makers that can also vary prices [13].

An algorithm has been conceived following the main guidelines of an AMM for the full collateralization of invoices. As said, AMMs were designed to trade two currencies with balances \(x\) and \(y\) in a liquidity pool while keeping a constant \(k\), as \(x\cdot y=k\), so that incoming \(x\) means withdrawing \(y\) to keep the product \(x\cdot y\) constant at \(k\). There are several variants of the same scheme [15] that add resilience and utility to the AMM implementation, and they must follow its axioms. In our implementation we likewise have liquidity providers that contribute and withdraw their funds for a fee, so that the liquidity pool has some volume \(V\) to grant the remaining collateral demanded by invoices to get fully funded, and a premium is required in exchange to compensate the risk, aiming at some benefit. From this point of view, it works as a standard community fund for securing the risk of participants, but with liquidity providers instead, securing particular assets like invoices that are backed by some collateral. Thus, collateral c1 goes into the pool along with the premium \(p\), and in exchange the remaining collateral c2 goes out, with \(V\cdot p=k(V)\); the constant \(k\) is recalculated by the Kelly criterion, so \(k\) changes with the new liquidity volume \(V\), and thus with the premium collected.

As briefly introduced, we use the Kelly criterion to calculate the premium to be contributed to the liquidity pool in exchange for the collateral required for 100% backing of an invoice that is already partly backed by some collateral. It is named rkAMM, after Reverse Kelly AMM. Performance curves for the rkAMM have been analysed and simulated. In its implementation, we have considered the creation of NFTs including terms such as the "backing %" of the invoices, risk, and elaboration, useful for applying the Kelly criterion investment formula, and other considerations. We assume that for an invoice of amount \(X\), the collateral \(p\) is a percentage of \(X\), and the remaining non-collateralized percentage is \(q=1-p\).

The Kelly criterion (or Kelly strategy or Kelly bet) is a formula that determines the optimal theoretical size for a bet. It is valid when the expected returns are known. The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. It was described by [12]. The criterion is also known as the scientific gambling method, as it leads to higher wealth compared to any other strategy in the long run (i.e., the theoretical maximum return as the number of bets goes to infinity). The practical use of the formula has been demonstrated for gambling, and the same idea was used to explain diversification in investment management2.
Footnote 2: [https://en.wikipedia.org/wiki/Kelly_criterion](https://en.wikipedia.org/wiki/Kelly_criterion)

We use the Kelly criterion to calculate how much premium must be contributed to a liquidity pool of size \(V\) for an invoice of amount \(X\) with a collateral of \(p\), when the remaining \(q=1-p\) collateral is demanded out of the rkAMM.

Figure 2: The rkAMM input and output.

Whenever the byppay is created, the Proof of Existence (PoE) of an invoice considers the mutual knowledge among A and C through B, the common acquaintance of A and C. This PoE and A's rating must be enough for most of the byppay cases that require no early payments. If the byppay is open to third parties other than A, B, and C, then some collateral might be required of B to enhance the PoE of the AB invoice, and the collateralization flow might leave the door open to third parties that trust B, or trust A, or trust both, to add collateral to that invoice, privately or openly as a marketplace. Everyone contributing collateral will receive a 2% fee on their share of the contribution to the collateral of the byppaying invoices. Invoices will be paid in 1 to 3 months on average, giving an average APY of 8% to 24%. The collateral contributors will know the due time of the invoices in advance to calculate their benefits. Also, the farther the connection to and knowledge of A and B, the more they would need oracles for the rating/scoring of companies, which might fit well at this stage of the flow. Byppayable invoices with strong collateralization (say from 50% collateral) are eligible for the rkAMM, where C secures the remaining collateral out of a rkAMM liquidity pool in exchange for a premium contributed to said liquidity pool, so that the full amount of the invoice can be earned in advance of the payment due time of the AB invoice by A. Whenever A pays, all collateral is returned to the marketplace collateral providers with the 2% extra, the rkAMM liquidity pool gets back the collateral it provided, again with a 2% extra, and B gets back the collateral he might have contributed, also with the 2% extra, minus the fees and bonuses he might have released for the whole byppay process.

Figure 3: How the rkAMM is included in the byppay platform.

### rkAMM Definition

A model inspired by AMMs is proposed to automatically fulfil the compensation of invoices. It is an underlying protocol to conduct exchanges in a decentralized manner through an autonomous trading mechanism. This allows users to exchange their assets without involving centralized financial authorities. We will use the risk assessment criteria of the Kelly formula. In the proposed AMM, transactions will be executed considering the following two assets:

* The reserve of collateral allocated in the AMM as a liquidity pool, known as \(Q\).
* The reserve of premium collected in the AMM, known as \(Premium\).

The collateral serves as a liquidity asset used for completing the compensation of invoices to pay, while in general terms the premium serves as a protection mechanism of the AMM against the loss of lent collateral and, hopefully, some benefit. Thus, the \(premium\) is considered the amount to be paid based on the risk the AMM assumes when lending collateral. These assets (\(Q\) and \(Premium\)) are exchanged among agents by interacting with the AMM. Agents withdraw liquidity \(Q\) at a premium \(P\) to compensate the non-collateralized amount of their invoices.
This premium contributed by agents is used to compensate the losses the AMM may incur during its life cycle, and the steady remaining premium is considered a benefit attributed to the intrinsic behaviour of the AMM. As well, third parties, the liquidity providers, are allowed to deposit liquidity for \(Q\) into the AMM, which allows them to get a proportional profit out of the accumulated premium based on their shares of the AMM liquidity pool \(Q\). Strategies to decide when and how much of the premium can be withdrawn are discussed in later sections.

\[Profit_{Premium}=Collected_{Premium}-Collateral_{UnpaidInvoices}\]

Recall that the exchange of liquidity for premium happens instantaneously, while obtaining premium for those third parties who have deposited liquidity into the AMM is an exchange that will be conducted eventually. Also, it is important to clarify that if there is no liquidity available to collateralize an invoice, the AMM alternative is to collateralize the invoice with the collected premium. This premium, if sufficient, will collateralize the invoice. The remaining premium, as profit, is expected to be shared in different percentages. However, in the proposed model it is suggested that there is a joint premium withdrawal, simulating this action over a variable time period.

### rkAMM Formula

The formula on which the exchanges of collateral and premium are based is explained here. To do this, we start from Kelly's investment formula (Equation 1) and work out the interpretation of its variables to adapt the formula to our decentralized invoice collateral service:

\[f=\frac{p}{a}-\frac{q}{b} \tag{1}\]

Where the following interpretations and assumptions are made:

* \(f\) is the ratio between the amount of collateral requested by the user and the total volume of the AMM. The total volume of the AMM is computed as the sum of the available collateral \(Q\) and the accumulated \(Premium\) at that time.
* \(p\) is the collateralized percentage of an invoice amount. We consider \(p\) the probability of the invoice being paid.
* \(q\) is the non-collateralized percentage of an invoice amount. We consider \(q\) the probability of the invoice being unpaid.
* \(a=q\): given that \(q\) is the lack of collateral of an invoice to be fully collateralized by the AMM, we interpret it as the amount of money to be lost in the operation, that is, \(a\), which is known as the partial loss in Kelly's investment formula.
* From the above: \(p=1-q\).
* \(b\) in Kelly's investment formula is defined as the partial win. We interpret this value as the ratio between the potential profit and the potential loss.

Simplifying Equation 1 using the assumptions above:

\[f=\frac{1-q}{q}-\frac{q}{b} \tag{2}\]

For our purpose, the variable to isolate is \(b\). Thus, solving for \(b\) from Equation 2:

\[b=\frac{q^{2}}{1-q(f+1)} \tag{3}\]

To clarify, the ratio \(f\) and the probabilities \(p\) and \(q\) are always known for an invoice. The ratio \(f\) can be interpreted in diverse ways, always keeping the same meaning:

\[f=\frac{NonCollateralized_{Amount}}{Volume_{AMM}}=\frac{q\cdot Invoice_{TotalAmount}}{Volume_{AMM}}=\frac{Q}{Volume_{AMM}} \tag{4}\]

From now on, it is important to be aware that the value of \(Q\) represents an amount calculated from the probability \(q\) and the total amount of the invoice. Therefore, \(Q\) should not be confused with \(q\). Also, \(b\), which is calculated from Equation 3, is defined as the ratio between the potential profit and the potential loss. The potential loss is known, since it is the amount of collateral that the AMM lends to the user, that is, \(Q\). Finally, the variable to isolate is the potential profit (i.e., the \(premium\)), since it is the amount the AMM can earn in the operation if the user repays the collateral. It is called potential profit because the AMM is assuming a risk and can lose the collateral if it is not repaid by the user.

\[b=\frac{PotentialProfit}{PotentialLoss}=\frac{Premium}{Q} \tag{5}\]

\[Premium=b\cdot Q \tag{6}\]
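Equations 3 to 6 translate directly into code. The following is a minimal sketch under our own naming, not the project's published implementation; it also rejects the degenerate case where \(q(f+1)\geq 1\) or where the pool cannot cover \(Q\).

```python
# Sketch of the reverse-Kelly premium computation (Equations 3-6).
# Function and variable names are ours, not the rkAMM repository's API.
def rk_premium(invoice_total: float, q: float, pool_volume: float) -> float:
    """Premium required so the pool covers the non-collateralized part.

    q: non-collateralized fraction of the invoice (0.05-0.49 in the paper)
    pool_volume: available collateral plus accumulated premium (AMM volume)
    """
    Q = q * invoice_total              # amount the pool must lend
    f = Q / pool_volume                # Equation 4
    denom = 1.0 - q * (f + 1.0)
    if denom <= 0.0 or Q > pool_volume:
        raise ValueError("invoice not acceptable at the current pool volume")
    b = q * q / denom                  # Equation 3
    return b * Q                       # Equations 5-6: Premium = b * Q

# Matches the illustrative example that follows: V=1800, invoice=2000, q=0.4
print(round(rk_premium(2000.0, 0.4, 1800.0), 2))   # 303.16
```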
### Illustrative Example

This section presents an illustrative example with calculations that could be common in a real scenario. First, let us declare the AMM initial conditions:

\begin{table} \begin{tabular}{|c|c|} \hline **AMM initial condition** & **Value (euro)** \\ \hline Liquidity available & 1,800 \\ \hline Premium deposited & 0 \\ \hline Volume (Liquidity + Premium) & 1,800 \\ \hline \end{tabular} \end{table} Table 2: Initial rkAMM situation.

Then, an invoice with a collateralized share \(p\) and a non-collateralized share \(q\) over a total amount applies for the AMM service:

\begin{table} \begin{tabular}{|c|c|c|} \hline **Invoice** & **\%** & **Amount (euro)** \\ \hline Collateralized amount (\(p\)) & 60 & 1,200 \\ \hline Non-collateralized amount (\(q\)) & 40 & 800 \\ \hline Total & 100 & 2,000 \\ \hline \end{tabular} \end{table} Table 3: Invoice amounts description.

Using Equation 4, the value of \(f\) is:

\[f=\frac{Q}{Volume_{AMM}}=\frac{800}{1800}=0.444...\]

since \(Q=q\cdot Invoice_{TotalAmount}=NonCollateralized_{Amount}\). Then, from Equation 3, the value of \(b\) is:

\[b=\frac{q^{2}}{1-q(f+1)}=\frac{0.4^{2}}{1-0.4(0.444+1)}=0.3789\to 37.89\%\]

Recall that \(Q\) is proportional to but essentially different from \(q\), because with \(p\), \(q\), \(b\) and \(a\) we are working with probabilities and ratios. From Equation 5 we have:

\[b=\frac{Premium}{Q}=\frac{Premium}{800}=0.3789\]

Clearing the premium from the previous equation, as in Equation 6:

\[Premium=800\cdot 0.3789=303.16\text{ (euro)}\]

The AMM situation is now the following:

\begin{table} \begin{tabular}{|c|c|} \hline **AMM intermediate condition** & **Value (euro)** \\ \hline Liquidity available & 1,000 \\ \hline Premium deposited & 303.16 \\ \hline Volume (Liquidity + Premium) & 1,303.16 \\ \hline \end{tabular} \end{table} Table 4: Intermediate rkAMM situation after accepting an invoice.

At this point, collateral has been withdrawn from the AMM and \(premium\) has been deposited. It is conceivable to wonder what will happen if the user does not repay any collateral; that will be a matter of study in the several scenarios covered in the next section. Here, the ideal and common functionality of the AMM is described. So, following this example, at some point the user or his client will repay the collateral, which is eight hundred euros. Finally, the AMM situation is:
\begin{table} \begin{tabular}{|c|c|} \hline **AMM final condition** & **Value (euro)** \\ \hline Liquidity available & 1,800 \\ \hline Premium deposited & 303.16 \\ \hline Volume (Liquidity + Premium) & 2,103.16 \\ \hline \end{tabular} \end{table} Table 5: Final rkAMM situation after collateral repaid.

The AMM has made a profit because it assumed risk by supporting the non-collateralized amount of the invoice for the time between the moment the collateral is provided by the AMM and the moment it is repaid, within the invoice due time, sooner or later.
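The state transitions of Tables 2, 4 and 5 can be replayed with a minimal pool object. This sketch reuses the rk_premium function from the earlier listing; the class and its field names are ours, for illustration only.

```python
# Replaying Tables 2, 4 and 5 with a minimal pool state.
# Reuses rk_premium from the previous sketch; all names here are illustrative.
class RkPool:
    def __init__(self, liquidity: float):
        self.liquidity = liquidity    # collateral reserve Q
        self.premium = 0.0            # accumulated premium reserve

    @property
    def volume(self) -> float:        # AMM volume = liquidity + premium
        return self.liquidity + self.premium

    def accept(self, invoice_total: float, q: float) -> float:
        """Lend the non-collateralized amount and collect the Kelly premium."""
        fee = rk_premium(invoice_total, q, self.volume)
        self.liquidity -= q * invoice_total
        self.premium += fee
        return fee

    def repay(self, invoice_total: float, q: float) -> None:
        """The non-collateralized amount comes back to the pool."""
        self.liquidity += q * invoice_total

pool = RkPool(1800.0)                 # Table 2: initial situation
pool.accept(2000.0, 0.4)              # Table 4: liquidity 1000, premium ~303.16
pool.repay(2000.0, 0.4)               # Table 5: volume ~2103.16
print(round(pool.premium, 2), round(pool.volume, 2))
```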
## 3 Simulations Setup

A set of simulations has been developed, based on several scenarios, to study typical situations that may occur. First, the invoices used in the scenarios have the following structure:

\begin{table} \begin{tabular}{|l|l|} \hline \hline **Value** & **Description** \\ \hline Id Invoice & Invoice identifier \\ \hline Non-coll. \% & Value between 5\% and 49\% \\ \hline Non-coll. amount & Value between 100 and 2,000 (euro) \\ \hline Collateralized date & Day when the invoice is collateralized \\ \hline Collateralized status & True if the invoice is collateralized / False otherwise \\ \hline Payment delay days & Value between 30 and 120 (days) \\ \hline Paid status & True if the non-coll. amount is repaid / False otherwise \\ \hline \hline \end{tabular} \end{table} Table 6: Invoice attributes definition.

Then, the simulation results are based on a set of values. However, it must be noted that only a few key attributes are modified for each scenario, in order to interpret the behavior of the AMM in specific situations. The complete set of attributes used to simulate the scenarios is shown in Table 7. Please note that the simulations are based on uniformly random values, and running a new simulation could give slightly different results. That is why we use a number of simulations which we consider enough to show the behavior of the AMM.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \hline **Description** & **Value** & **Comment** & **Fixed or variable parameter (across different scenarios)** \\ \hline Number of simulations & 100 & Number of simulations to determine the common behaviour of the AMM & Fixed \\ \hline Initial collateral \(Q_{0}\) & 10,000 (euro) & This amount is the initial liquidity in the LP. & Fixed \\ \hline Initial premium & 0 & N/A & Fixed \\ \hline Number of invoices & 500 & Max. value of invoices to process & Fixed \\ \hline Range of \% from invoices to collateralize & 5\% - 49\% & Max. and min. values allowed for collateralization of invoices & Variable only for scenario 5 and Hack scenarios \\ \hline Range of amount from invoices to collateralize & 100 - 2,000 (euro) & Max. and min. amounts allowed to be collateralized. As the initial collateral \(Q_{0}\) is fixed to 10,000, we are instructing the simulations to be in the range 0.1\% - 20\% of the initial liquidity & Variable only for scenario 4 \\ \hline Invoice payment period range & 30 - 120 (days) & Max. and min. delays in days for the payment of invoices & Variable only for scenario 3 \\ \hline Range of liquidity contribution by LPs & 0 - 0 & Max. and min. amounts of liquidity contribution to the AMM & Variable only for scenario 1 \\ \hline Non-payment probability & 0\% & The probability that the collateral on an invoice will never be paid back & Variable only for scenario 2 and Hack scenarios \\ \hline Hack probability & 0\% & The probability of including false invoices that will never be paid, which may affect the performance of the AMM & Variable only for Hack scenarios \\ \hline Probability of contribution by LPs & 0\% & The probability that an LP makes a liquidity deposit to the AMM & Variable only for scenario 1 \\ \hline Maximum days & 500 & Days on which invoice entries are allowed & Fixed to the number of invoices \\ \hline Additional days & 30 & Further days for simulations to observe the return to the steady state of the AMM & Fixed \\ \hline Number of days of the simulation & 650 & Total days of simulation: max. days + max. invoice payment delay + additional days & Fixed to maximum (days + delays) + additional days \\ \hline \hline \end{tabular} \end{table} Table 7: Attributes used to simulate default and hack scenarios.

### Scenarios

Initially, and under normal circumstances, the following scenarios are considered to conduct the simulations:

Scenario 1 - There is a contribution of collateral by liquidity providers (LPs) based on a growing rate with respect to the volume of collateral (\(Q\)) in the AMM. The attributes modified in this scenario are not only the Probability of contribution, fixed at 50%, but also the Range of liquidity contribution by LPs, under four different conditions: 1% of initial \(Q\) for scenario 1.1 (Very Low), 5% of initial \(Q\) for scenario 1.2 (Low), 10% of initial \(Q\) for scenario 1.3 (Medium) and 25% of initial \(Q\) for scenario 1.4 (Max).

Scenario 2 - The non-collateralized amount of a set of invoices is not repaid to the AMM. The only attribute changed in this scenario is the non-payment probability, defining three possible situations: 2% of invoices for scenario 2.1 (Low), 5% for scenario 2.2 (Medium), 20% for scenario 2.3 (Max).

Scenario 3 - Increasing delay in invoice payments. The payment period range of invoices is modified from what is considered short to long. Payment period range: 30-60 days for scenario 3.1 (Short), 60-90 days for scenario 3.2 (Medium), 90-120 days for scenario 3.3 (Long).

Scenario 4 - The amount to be collateralized depends on a variable percentage of the initial volume of collateral (\(Q_{0}\)) in the AMM. Only the range of demanded collateral is modified in this scenario. Range of demanded collateral, thus of the liquidity granted by the AMM: 1% of initial \(Q_{0}\) for scenario 4.1 (Low), 10% of initial \(Q_{0}\) for scenario 4.2 (Medium), 25% of initial \(Q_{0}\) for scenario 4.3 (Max).

Scenario 5 - Different % to be collateralized with the same amount of invoices. The range of the % collateral \(p\) of invoices is modified through three possible cases. Range of % collateral \(p\): 55% for scenario 5.1 (Low), 75% for scenario 5.2 (Medium), 90% for scenario 5.3 (Max).

In addition to the scenarios proposed above, there is an additional set consisting of those situations in which there may be malicious actors trying to take advantage of the AMM operation. These scenarios foresee a hack of the AMM, that is, a high number of false, bogus invoices. A false invoice is considered one for which the collateral given away by the AMM will never be paid back. The purpose of this hack, from the attacker's point of view, is to drain the AMM liquidity.
Hacking scenario - To hack the rkAMM, the following attributes from the default scenario are modified to produce a set of different hacking scenarios:

* Range of % from invoices to collateralize: 49% (Max), 30% (Medium) and 10% (Low), for all possible hack probabilities.
* Non-payment probability: 10% (Low), 50% (Medium) and 100% (Max), for all previous ranges of % from invoices to collateralize.

### Technical Description of the Simulator

The simulator in charge of performing the simulations has been developed in Python. Additionally, common analysis and data visualisation libraries have been used for the analysis and presentation of the computed metrics. The invoices have been structured as a list of dictionaries to execute the simulations: each invoice is defined by a dictionary of seven keys and values, and this dictionary in turn occupies a position in a list of invoices. Regarding the functions of the simulation, those that allow modifying the volumes of liquidity, collateral and premium have been defined. In addition, a function has been developed to calculate the premium based on the requested collateral, as explained in section 2.2 of this document.

To implement the simulation there is a main function that randomly generates a set of invoices based on the parameters previously defined in Table 6. The flow of invoices is steady at 1 invoice per day, with uniformly random amount and collateral within the range of parameters described for all the simulations in Table 7. When invoices are accepted into the liquidity pool, the premium is contributed and the collateral is withdrawn on the same day. Otherwise, when they are not accepted, they are simply discarded, and the next day brings the next invoice. The simulation also synchronizes, once a day, the liquidity contributions of uniformly random amounts in the range established by the parameters. Invoices that are repaid do so on a uniformly random day within the range set by the parameters, from a minimum number of days (often 30 days) to a maximum of 120 days. On the other hand, invoices that are not repaid are given a delay value guaranteed to be higher than the maximum delay, namely 100,000; since Python 3's int does not have a maximum size, using math.inf could impact the efficiency of our code. In this way we ensure that these invoices will not be considered in any scenario. Then, the number of days for each simulation is calculated from the number of days for the flow of all invoices, plus the maximum delay and an additional number of extra days. For instance, for 500 invoices, a maximum delay of 120 days, and 30 extra days, the limit is \(500+120+30=650\) days per simulation.

Every scenario is defined by the parameters mentioned in Table 7, and a batch of 100 simulations per scenario is run; the metrics of the batch are calculated as the average of each index over all simulations. To simulate the several scenarios of unpaid invoices, a uniformly random number of invoices is chosen not to be repaid: they pay the premium to be accepted into the AMM, but the collateral they receive is never returned to the AMM. The AMM will try to compensate the losses by means of the premium, and we will see how resilient it is at the end of the run of all simulations. Also, the AMM contemplates the possibility of withdrawing the premium collected from the users through a function. This mechanism can be adapted: both the withdrawal period and the premium percentage to be withdrawn can be modified. Last, but not least, there is the hack of the system, consisting of exhausting the AMM with a flow of bogus invoices, intentionally never to be paid. This is done by increasing the hack probability, which in turn increases the non-payment probability of invoices.
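The description above maps onto a short daily loop. The sketch below is our condensed reading of it, reusing RkPool and rk_premium from the earlier listings; the dictionary keys follow Table 6, while the remaining names are illustrative rather than taken from the repository.

```python
# Condensed sketch of the simulator's daily loop, as described above.
# Reuses RkPool / rk_premium from earlier sketches; keys follow Table 6.
import random

UNPAID_MARKER = 100_000     # delay larger than any horizon marks "never repaid"

def make_invoice(day: int) -> dict:
    """One invoice per day with uniformly random terms (Tables 6-7)."""
    return {
        "id_invoice": day,
        "non_coll_pct": random.uniform(0.05, 0.49),
        "non_coll_amount": random.uniform(100.0, 2000.0),
        "collateralized_date": None,
        "collateralized_status": False,
        "payment_delay_days": random.randint(30, 120),
        "paid_status": False,
    }

def simulate(days: int = 650, entry_days: int = 500,
             q0: float = 10_000.0, nonpayment_prob: float = 0.0) -> "RkPool":
    pool, invoices = RkPool(q0), []
    for day in range(days):
        if day < entry_days:                       # invoice entries allowed
            inv = make_invoice(day)
            if random.random() < nonpayment_prob:
                inv["payment_delay_days"] = UNPAID_MARKER
            q = inv["non_coll_pct"]
            total = inv["non_coll_amount"] / q     # full invoice amount
            try:
                pool.accept(total, q)              # premium in, collateral out
                inv["collateralized_date"] = day
                inv["collateralized_status"] = True
                invoices.append(inv)
            except ValueError:
                pass                               # discarded: pool cannot cover it
        for inv in invoices:                       # repayments falling due today
            if (not inv["paid_status"]
                    and day == inv["collateralized_date"] + inv["payment_delay_days"]):
                pool.repay(inv["non_coll_amount"] / inv["non_coll_pct"],
                           inv["non_coll_pct"])
                inv["paid_status"] = True
    return pool

print(round(simulate().volume, 2))                 # default scenario, one run
```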
### Repository of the rkAMM

There is a GitHub repository3 of the rkAMM for running and testing the simulations described above.

Footnote 3: [https://github.com/ballesterosbr/rkAMM](https://github.com/ballesterosbr/rkAMM)

## 4 Results

A batch of simulations probing the behavior of the AMM, based on the scenarios defined in section 3.1 of this document, was successfully performed. For each scenario, the values that suffer the greatest variations according to the input values will be presented and discussed. Also, a set of curves will be presented for the most relevant scenarios.

### Limitations

In the event that the AMM has neither liquidity nor premium to collateralize an invoice, that invoice is discarded and the next one is evaluated. When the AMM gets a payback of the collateral granted to a formerly collateralized invoice, or the amount to be collateralized of a new invoice is less than the available liquidity in the AMM, then the invoice might be granted collateralization. In the simulations, the profit percentage shows the result of the AMM. This benefit will be distributed among those LPs that have deposited liquidity into the AMM, so it is expected that this will have an impact on the AMM profit. After inspecting the scenarios, our proposal is that the invoice percentage to collateralize should never be more than 49% or lower than 5%.

### Metrics

A set of metrics is defined to describe the behaviour of the AMM and to analyse the results of the simulations; from these, several curves will be plotted for the volume of the AMM, the premium collected, the premium withdrawn, and the resulting liquidity as the sum of the volume and the remaining premium. To better understand the parameters considered in the metrics, Table 8 shows an output example. At the end of the series of 100 simulations, we have these averaged measures. Tables 9, 10, 11, 12, 13, and 14 show the results of all scenarios, which will be discussed in the next section. The result curves are also presented for a better understanding of the results. To avoid overloading this section and to facilitate reading, only the curves from scenario 5 with a withdrawal period of 30 days are presented, since it is considered one of the most likely scenarios to occur in a real environment. The rest of the curves for the other scenarios can be found in the Appendix A section.
Table 11: rkAMM results. Every 30 days a 50% of the premium obtained is withdrawn.

Table 13: rkAMM results. Every 90 days a 50% of the premium obtained is withdrawn.

(Only the captions above survive for the results tables; the numerical contents of Tables 8-14 are not recoverable from the source. Per scenario, and both with and without premium withdrawal, they report: % of accepted (collateralized) invoices, % of unpaid invoices, average loss due to unpaid invoices, total collateral covered as a multiple of the initial collateral \(Q_{0}\), total premium withdrawn, remaining premium, and AMM profit percentage.)
Figure 4: AMM Curves from simulation of scenario 5 and 30 days of withdrawal period.

In addition to the results for the different scenarios in the tables presented above, Figures 5 and 6 show a comparative analysis of the AMM profit across all scenarios. Figure 5 shows a comparison between the AMM profit when there is no premium withdrawal and when there is, depending on the period in which the premium is withdrawn. Figure 6 shows the AMM profit percentage difference between scenarios where the premium is withdrawn and those where it is not. The premium withdrawal intervals are 1, 30 and 90 days. As can be seen, the AMM profit is not affected by premium withdrawal in the majority of the analyzed scenarios. It can even be concluded that the period in which the premium is withdrawn does not have a great impact either. However, in scenarios where the premium is withdrawn every day, the performance of the AMM does suffer, and the results obtained in this case are worse. Furthermore, although in some scenarios the daily premium withdrawal causes the AMM profit to be only a few points below the AMM profit without withdrawal, in other scenarios the AMM profit obtained is almost 70% lower when there is premium withdrawal.

Figure 5: Premium vs. No Premium Withdrawn % AMM profit.

Figure 6: Premium vs. No Premium Withdrawn AMM profit difference from scenario 5.

Figure 7: No Premium Withdrawn vs. Premium Withdrawn.

Figure 8: AMM Curves from simulation of hack scenario and 30 days of withdrawal period.

From Figure 9 it can be seen that practically a third of the scenarios are affected to a greater extent by the premium withdrawal, and therefore the AMM profit is altered. Further, this impact turns out to be positive, and the performance of the AMM for this set of scenarios is better when premium is withdrawn periodically, especially with invoices collateralized at 51%. The best cases are those in which the invoices are collateralized at 51%, although the AMM profit decreases as the hack probability increases.
As for the percentage difference between withdrawing the premium or not, shown in Figure 10, the scenarios where premium is withdrawn every 30 days with 51% collateralized invoices present an improvement of 5%, 89% and 249% for hacking probabilities of 10%, 50% and 100% respectively. Regarding daily withdrawals, the only favorable case is found when invoices are collateralized at 51% with a hacking probability of 10%. Finally, for premium withdrawals every 90 days, the most remarkable case is the one in which invoices are collateralized at 51% with a hack probability of 50%. The rest of the scenarios have presented a worse performance when there is premium withdrawal.

Figure 9: Premium vs. no premium withdrawn: AMM profit absolute values from the hack scenario.

Figure 10: Premium vs. no premium withdrawn: AMM profit difference from the hack scenario.

Figure 11: No premium withdrawn vs. premium withdrawn (hack scenarios).

## 5 Withdrawn Premium Analysis

In this section, a detailed analysis of the results obtained and presented in the previous section is carried out. First, it must be taken into account that the AMM compensates invoices from the premium if it does not have sufficient liquidity. This happens in all scenarios and affects both the final premium values and the total volume of the AMM. Considering the results obtained, the more adverse the scenario, the lower the premium withdrawn from the AMM. The same happens with the premium remaining in the AMM: the worse the scenario, the smaller the amount available. This is because the AMM, lacking liquidity, will try to collateralize with the premium, which is the expected behaviour in adverse scenarios.

_Scenario 1: The greater the increase in liquidity, the closer the results come to those obtained when no premium is withdrawn._

As liquidity increases, the gap in accepted/paid invoices narrows from approximately -50% to about -1%. In all cases, all collateralized invoices are paid. When there is a greater liquidity contribution, the total collateral covered increases. However, in the worst case of liquidity contribution (scenario 1.1) there is a 60% difference in the total collateral covered; this difference is drastically reduced in scenario 1.4, where it is only 1%. The premium managed by the AMM in these cases increases as greater liquidity is provided, which was to be expected given that a greater number of invoices are collateralized. Regarding the benefit of the AMM, the greater the contribution of liquidity, the greater the AMM profit. However, the withdrawal of premium does not by itself reduce this profit, since there are scenarios in which premium is withdrawn and a greater benefit is obtained (even if only by 1%) than in those without withdrawal. This is because in scenarios where there is no premium withdrawal, less premium is collected than in cases where premium is withdrawn.

_Scenario 2: Faced with an increase in unpaid invoices, the relationship between the scenarios with premium withdrawal and those without it is maintained, although the absolute values are affected and the results worsen as the number of unpaid invoices grows._

The number of collateralized invoices decreases, but the relationship between the scenarios with premium withdrawal and those without it is maintained. The same happens for the amount of collateral covered and the AMM profit.
Regarding the losses due to unpaid invoices, the relationship between scenarios is maintained despite the increase in the number of these invoices.

_Scenario 3: The greater the delay in the payment of the invoices, the lower the premium that can be withdrawn, and the lower the remaining premium in the case where the premium is not withdrawn. The shorter the payment term, the better the liquidity of the rkAMM._

The number of collateralized invoices is smaller, and the relationship between the scenarios with premium withdrawal and those without is variable, worsening in the scenarios with greater delays in the invoice payments. The same happens for the amount of collateral covered. The AMM profit is lower as the delay in the payment of invoices increases. However, this is not directly related to the relationship between the scenarios with premium withdrawal and those without it. In fact, it can be verified that the scenario with the greatest decompensation is 3.2, where the delay in payments is not the worst of all the possible ones.

_Scenario 4: The greater the amount to be collateralized, the better the results of the AMM in terms of its performance, since more liquidity is collateralized. Therefore, a higher premium is obtained and the benefit of the AMM is greater. In summary, better results are achieved the higher the risk the rkAMM assumes._

The number of collateralized invoices decreases, and the relationship between the scenarios with premium withdrawal and those without is variable, worsening in the scenarios with increasingly large invoices. Although the collateral covered and the AMM profit increase, the relationship between scenarios shows the same behavior.

_Scenario 5: Similar pattern to scenario 4. The higher the percentage to be collateralized, the worse the results of the AMM in terms of its performance; moreover, the higher the percentage of collateralization of the invoices, the more similar the results are between the scenarios where premium is withdrawn and those where it is not._

The number of collateralized invoices decreases; however, the gap between the scenarios in which premium is withdrawn and those in which it is not shrinks, until it is almost zero in the scenario where the invoices are almost completely collateralized. The same happens for the amount of collateral covered and the AMM profit as the amount to be collateralized increases.

_Hack scenario: The greater the hack probability and the amount to collateralize, the worse the results of the AMM in terms of its performance. However, in some cases, premium withdrawal performs better in terms of AMM profit for withdrawal periods of 30 and 90 days._

Losses due to unpaid invoices, as expected, increase as the probability of hacking increases. On the contrary, the amount of collateral covered decreases as the probability of hacking and the amount to be collateralized increase. The same happens for the AMM profit. Finally, as a conclusion to this section, the following comments are made.

* Distributing the premium can give better results in scenarios with greater liquidity (those with shorter payment terms or more liquidity contribution) and with higher risk (fewer guarantees or greater defaulting).
* Thus, withdrawing the benefits turns out to be an "aseptic" measure with respect to the performance of the rkAMM. The policy of giving out premium nevertheless affects the design and performance of the rkAMM, and the following could be recommended:
  1. Not sharing profits at all is not an adequate policy for the operation of the rkAMM.
  2. Make the withdrawal policy variable, based on liquidity, defaulting and payment terms.

* Apart from that, distributing benefits early is expected to have a bandwagon effect on the demand for rkAMM products, attracting more and more liquidity contributors.

## 6 Discussion

The following considerations and comments are made on the different scenarios analyzed, over the results of the previous sections, regarding the impact on the number of accepted invoices, the number of unpaid invoices, the average losses, the collateral covered, the remaining and withdrawn premium, and finally the profit achieved. The comments below do not differentiate between premium withdrawal and no withdrawal, but analyze both together. In fact, it is important to highlight that in both the cases where premium is withdrawn and those where it is not, the same behavioral trend exists between the scenarios.

The rkAMM will be able to accept more invoices provided that the liquidity stream flows into the LP with constant contribution probability (as in scenario 1). Otherwise, if there is no liquidity stream, the liquidity of the rkAMM is depleted progressively; the rkAMM then continues collateralizing invoices using the remaining premium until liquidity is replenished with premium or repaid invoices. It may also happen that the premium runs out and invoices cannot be collateralized, in which case the liquidity in the rkAMM is expected to be replenished; a sketch of this mechanism is given below. Scenarios with invoices that are small in the amount to be collateralized and carry a small percentage to be collateralized present the best results when it comes to the number of accepted invoices.

As for losses due to unpaid invoices, only scenario 2 shows values above zero. This result was expected, since unpaid invoices are intentionally included to be processed by the rkAMM. The hack scenarios, logically, present losses, because that is an input condition of these cases. The more unpaid invoices, the greater the losses; these losses increase considerably when the invoice amount to be collateralized is greater.

Continuing with the total collateral covered, the highest values are achieved in those scenarios where there is a liquidity contribution by the LPs, since this gives the rkAMM margin to cover more collateral. In the rest of the scenarios, as a general rule, the values of collateral covered do not reach those of scenario 1. However, it is worth mentioning that similar values of total collateral covered are achieved for scenarios 3.1 and 5.1 when there is no premium withdrawal.

Regarding the premium withdrawn from the rkAMM when there is premium withdrawal, those scenarios with a greater total collateral covered will, in turn, have a greater amount of premium withdrawn, as expected. On the other hand, making premium withdrawals at increasingly long intervals means that the premium withdrawn is lower. For the premium remaining in the rkAMM when there is no premium withdrawal, the behavior is analogous. The values obtained in both cases show that the premium is in constant circulation and that it is used for the collateralization of invoices.

Finally, regarding the rkAMM profit percentage, this variable is analyzed in more detail in the previous sections. However, it can be concluded that all scenarios, except the hacking ones, achieve a positive profit.
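The acceptance-and-collateralization loop just described can be summarized in a short simulation sketch. The fragment below is only illustrative: the class and method names (`RkAMMSketch`, `process_invoice`, and so on) are our own choices for this note, and the flat `premium_rate` is a deliberate simplification; the real rkAMM prices the premium via the reverse Kelly criterion, which we do not reproduce here.

```python
class RkAMMSketch:
    """Illustrative sketch of the collateralization loop discussed above.

    An invoice asks the pool to lock `amount` as collateral and pays a
    premium for it. When free liquidity is insufficient, the pool falls
    back on the premium it has accrued; if that also runs out, the
    invoice is rejected until liquidity is replenished.
    """

    def __init__(self, initial_liquidity, premium_rate=0.05):
        # premium_rate is a hypothetical flat fee used only in this sketch.
        self.liquidity = initial_liquidity  # funds contributed by LPs
        self.premium = 0.0                  # accrued premium, not yet withdrawn
        self.locked = 0.0                   # collateral currently covering invoices
        self.premium_rate = premium_rate

    def process_invoice(self, amount):
        """Try to collateralize an invoice; return True if accepted."""
        if self.liquidity >= amount:
            self.liquidity -= amount        # collateralize from liquidity
        elif self.liquidity + self.premium >= amount:
            shortfall = amount - self.liquidity
            self.liquidity = 0.0
            self.premium -= shortfall       # fall back on the accrued premium
        else:
            return False                    # cannot collateralize: reject
        self.locked += amount
        self.premium += amount * self.premium_rate
        return True

    def settle_invoice(self, amount, paid=True):
        """Release collateral; an unpaid invoice is a loss for the pool."""
        self.locked -= amount
        if paid:
            self.liquidity += amount        # collateral flows back to the pool

    def withdraw_premium(self, fraction=0.5):
        """E.g. withdraw 50% of the accrued premium every 90 days (Table 13)."""
        out = self.premium * fraction
        self.premium -= out
        return out
```

Run under a steady stream of invoices, this reproduces the qualitative behaviour noted above: with no fresh LP contributions the free liquidity is depleted first, the accrued premium is consumed next, and acceptance stalls until settled invoices return collateral to the pool.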
And if we make a comparison between scenarios where premium is withdrawn and those in which it is not, we can conclude that the scenarios that best adapt and achieve values closest to the no-withdrawal scenarios are the ones where premium is withdrawn every 30 days.

In the case of hack scenarios, an analysis of the variables is not carried out, since what should be verified is the resilience of the rkAMM in an adversarial situation. We consider the rkAMM to be resilient when it obtains a positive AMM profit at the end of the simulations. Regardless of whether premium is withdrawn or not, it is observed that the rkAMM is resilient and can withstand an attack in scenarios with a small hack probability and a low percentage of invoice collateralization, specifically from 49% down to 30%. This 30% limit is considered flexible, and with high probability the AMM admits a lower percentage of collateralization, such as 20%, while remaining resilient. It is important to highlight the scenarios in which the premium is withdrawn every 30 or 90 days with a 100% probability of hacking when the invoices are 51% collateralized: in these cases the rkAMM is also resilient. As a conclusion and summary of the hack scenario, the optimal attack is to send bogus invoices with the highest percentage to collateralize allowed, that is, 90% in our experiments. We consider these results promising, as there might be mechanisms to resist the hack: checking the invoices, enhancing the KYC, retaining the collateral in custody, or even imposing the loss of the collateral as a penalty. We need to investigate these mechanisms further in a forthcoming paper. Likewise, further research must be performed on how to withdraw the profit out of the rkAMM without compromising its operation and long-term sustainability.

## 7 Conclusions

As a summary of the experiments, the optimal operation mode of our rkAMM is with invoices of collateral percentage \(p\) in the 50%-70% range, which proves strong resilience even in scenarios with up to 10% bogus invoices, delays of 30 to 120 days and 5% of nonpayments. In addition, in the event that premium withdrawal exists, it is recommended to use an approach with periodic withdrawals every 30 days. Under these conditions, there might be room for mechanisms of PoE to check bogus invoices, or for double-factor invoicing on the blockchain, to work along with our rkAMM, notably crowd collateralization, verifiable credentials, on-chain scoring, or our own ByPay referral among groups of three actors. After experimenting with several hack scenarios, our proposal is that the invoice percentage to collateralize with our rkAMM should never be more than 49% or lower than 5%.

## Funding

This research was funded by the grant of University College London UCL - CBT 3rd Call for Research Proposals 2022 on Distributed Ledger Technologies to the project New Decentralized Compensations of Invoices - ByPay.

## Acknowledgments

This work has been tested thanks to the insights and comments of the Centre de Blockchain de Catalunya CBCat.io and the TECNIO Centre EASY of the University of Girona.
## Abbreviations

The following abbreviations are used in this manuscript:

- AMM: Automated Market Maker
- APY: Annual Percentage Yield
- DeFi: Decentralized Finance
- DEX: Decentralized Exchange
- DLT: Distributed Ledger Technology
- ERP: Enterprise Resource Planning
- IOU: I Owe You
- IoV: Internet of Value
- LP: Liquidity Pool
- PLF: Protocols for Loanable Funds
- PoE: Proof of Existence
- rkAMM: Reverse Kelly Automated Market Maker
- SME: Small and Medium-sized Enterprise
2303.09320
Stability estimates for semigroups in the Banach case
The purpose of this paper is to revisit previous works of the author with J. Sj\"ostrand (2010--2021), proved in the Hilbert case, by considering the Banach case in the light of a paper by Y.~Latushkin and V.~Yurov (2013).
Bernard Helffer
2023-03-16T13:51:28Z
http://arxiv.org/abs/2303.09320v2
# Stability estimates for semigroups in the Banach case ###### Abstract The purpose of this paper is to revisit previous works of the author with J. Sjostrand (2010-2021), proved in the Hilbert case, by considering the Banach case in the light of a paper by Y. Latushkin and V. Yurov (2013). ## 1 Introduction Let \(\mathcal{B}\) be a complex Banach space and let \([0,+\infty[\ni t\mapsto S(t)\in\mathcal{L}(\mathcal{B},\mathcal{B})\) be a strongly continuous semigroup with \(S(0)=I\). The purpose of this note is to revisit some results of [6] and [7], which were established in the case of a Hilbert space, and to consider their extension to the Banach case. The idea in the Hilbert space is essentially to use the properties of the inhomogeneous equation \((\partial_{t}-A)u=w\) in exponentially weighted spaces, which are related via the Fourier-Laplace transform, and then to use Plancherel's formula. This approach cannot be used in the Banach case (see [11] for a discussion), but one can see in [9] an alternative approach proposed by Y. Latushkin and V. Yurov which, combined with the approach of [6], permits this extension. We will also describe how to extend all the results of [7]. The main result in [6] was established in the Hilbertian case:

**Theorem 1.1**: _Assume that \(\mathcal{B}\) is a Hilbert space. Let \(r(\omega)\) be defined by_

\[\frac{1}{r(\omega)}:=\sup_{\Re z\geq\omega}||(A-z)^{-1}||_{\mathcal{L}(\mathcal{B})}\,.\]

_Let \(m(t)\geq\|S(t)\|_{\mathcal{L}(\mathcal{B})}\) be a continuous positive function. Then for all \(t,a,b>0\) such that \(t\geq a+b\), we have_

\[\|S(t)\|\leq\frac{e^{\omega t}}{r(\omega)\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{2}(0,a)}\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{2}(0,b)}}. \tag{1.1}\]

Here the norms are always the natural ones obtained from \({\cal B}\) and \(L^{2}\); thus for instance \(\|S(t)\|=\|S(t)\|_{{\cal L}({\cal B},{\cal B})}\), and if \(u\) is a function on \(\mathbb{R}\) with values in \(\mathbb{C}\) or in \({\cal B}\), \(\|u\|\) denotes the natural \(L^{2}\) norm. In (1.1) we also have the natural norm in the exponentially weighted space \(e^{-\omega\cdot}L^{2}(0,a)\), and similarly with \(b\) instead of \(a\); \(\|f\|_{e^{-\omega\cdot}L^{2}(0,a)}=\|e^{\omega\cdot}f(\cdot)\|_{L^{2}(0,a)}\). The extension proposed by [9] (with less generality1) can be obtained by introducing \(K_{\omega,p}\), which is defined by

Footnote 1: The authors consider the case when \(m(t):=Le^{\lambda t}\) and play instead with the consequence of this theorem as in [6].

\[\frac{1}{\hat{r}_{p}(\omega)}:=K_{\omega,p}:=||{\cal K}^{+}||_{{\cal L}(L^{p}_{\omega}(\mathbb{R}_{+};X))}<+\infty \tag{1.2}\]

where

\[({\cal K}^{+}u)(t)=\int\limits_{0}^{t}S(t-s)u(s)ds\,,\ \text{for}\ t\geq 0\,. \tag{1.3}\]

Our first new theorem reads:

**Theorem 1.2**: _Suppose that \(\omega\in\mathbb{R}\), \(p>1\), and that \(K_{\omega,p}\) is finite. Let \(m(t):[0,+\infty[\to]0,+\infty[\) be a continuous positive function such that_

\[\|S(t)\|\leq m(t)\ \text{for all}\ t\geq 0\,. \tag{1.4}\]

_Then for all \(t,a,b>0\) such that \(t\geq a+b\),_

\[\|S(t)\|\leq K_{\omega,p}\frac{e^{\omega t}}{\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{q}(]0,a[)}\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{p}(]0,b[)}}\,, \tag{1.5}\]

_where \(q\) is such that \(\frac{1}{p}+\frac{1}{q}=1\)._

The idea behind this extension by [9] is to replace the use of the Laplace transform and the assumptions on the resolvent of \(A\) by, more directly, the estimate for \({\cal K}^{+}\). Hence \(1/r(\omega)\) is replaced by \(K_{\omega,p}\).
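To make the estimate concrete, the right-hand side of the Hilbert-case bound (1.1) can be evaluated in closed form for the standard bound \(m(t)=Le^{\lambda t}\), since \(\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{2}(0,a)}^{2}=\frac{1}{L^{2}}\int_{0}^{a}e^{-2(\lambda-\omega)s}\,ds\). The following sketch evaluates the resulting bound numerically; all numbers (\(L\), \(\lambda\), \(\omega\), and the resolvent bound \(r\)) are illustrative placeholders, not values from the paper.

```python
import math

def weighted_norm(L, lam, omega, a):
    """|| 1/m ||_{e^{-omega .} L^2(0, a)} for m(t) = L * exp(lam * t)."""
    rate = 2.0 * (lam - omega)
    integral = a if rate == 0 else (1.0 - math.exp(-rate * a)) / rate
    return math.sqrt(integral) / L

def bound_1_1(t, a, b, L, lam, omega, r):
    """Right-hand side of (1.1); valid whenever t >= a + b."""
    assert t >= a + b
    return math.exp(omega * t) / (r * weighted_norm(L, lam, omega, a)
                                    * weighted_norm(L, lam, omega, b))

# Illustrative numbers only: a semigroup with ||S(t)|| <= 2 e^{0.2 t},
# a hypothetical half-plane resolvent bound r(-1) = 0.8, evaluated at t = 10.
L, lam, omega, r, t = 2.0, 0.2, -1.0, 0.8, 10.0
best = min(bound_1_1(t, a, a, L, lam, omega, r)
           for a in [0.1 * k for k in range(1, 50)] if t >= 2 * a)
print(best)
```

Optimizing over the split \(t\geq a+b\) in this way is how decay statements like Theorem 2.1 below are extracted from bounds of this type.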
The proof will be given in Section 2. Another aim of this note is to see if we can also extend to the Banach case the results of [7]. Some of the steps are independent of the Banach assumption, so we just explain the modifications needed in the case of a reflexive Banach space. Let \(\Phi\) satisfy

\[0\leq\Phi\in C^{1}([0,+\infty[)\ \mbox{with}\ \Phi(0)=0\ \mbox{and}\ \Phi(t)>0\ \mbox{for}\ t>0\,, \tag{1.6}\]

and assume that \(\Psi\) has the same properties2. For \(t>0\), let \(\iota_{t}\) be the reflection with respect to \(t/2\): \(\iota_{t}u(s)=u(t-s)\). With this notation, we have the following theorem:

Footnote 2: By a density argument we can replace \(C^{1}([0,+\infty[)\) in (1.6) by the space of locally Lipschitz functions on \([0,+\infty[\).

**Theorem 1.3**: _We assume that \(\mathcal{B}\) is a complex reflexive Banach space. Under the assumptions of Theorem 1.2, for any \(\Phi\) and \(\Psi\) satisfying (1.6) and for any \(\epsilon_{1},\epsilon_{2}\in\{-,+\}\), we have_

\[||S(t)||_{\mathcal{L}(\mathcal{B})}\leq e^{\omega t}\frac{\|(\hat{r}_{p}(\omega)^{p}\Phi^{p}-|\Phi^{\prime}|^{p})_{-}^{\frac{1}{p}}m\|_{e^{\omega\cdot}L^{p}([0,t])}\,\|(\hat{r}_{p}(\omega)^{q}\Psi^{q}-|\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}m\|_{e^{\omega\cdot}L^{q}([0,t])}}{\int_{0}^{t}(\hat{r}_{p}(\omega)^{p}\Phi^{p}-|\Phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{p}(\omega)^{q}(\iota_{t}\Psi)^{q}-|\iota_{t}\Psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}ds}\,. \tag{1.7}\]

Here, for \(a\in\mathbb{R}\), \(a_{+}=\max(a,0)\) and \(a_{-}=\max(-a,0)\). This theorem was established in [7] (Theorem 1.6) in the Hilbert case with \(p=2\) and with \(\hat{r}_{2}(\omega)\) replaced by \(r(\omega)\). With this generalization, the other sections of [7] hold in the case \(p=2\), in particular all the consequences obtained with \(\epsilon_{1}\epsilon_{2}=-\) in [7] (Theorem 1.7, Proposition 1.8, Theorem 1.9). Notice that in the Hilbert case the replacement of \(r(\omega)\) by \(\hat{r}_{2}(\omega)=1/K_{\omega,2}\) could appear stronger but is probably equivalent (see the discussion in [9]). The case \(p\neq 2\) will be further discussed in the last sections, with in particular an extension of Proposition 1.8 of [7] and an extension of Wei's theorem. These applications will be stated in Theorem 4.6 and Theorem 6.2.

**Remark 1.4**: _There is a huge literature on the subject, and we refer for example to the recent survey [12] for a description of the different approaches. In particular, J. Rozendaal and M. Veraar [13] obtain more general results than those given here (see for example their Theorem 3.2), albeit with different constants when considering our particular case. The finiteness of \(K_{\omega,p}\) is also an interesting open question. Notice3 that the inequality \(K_{\omega,p}\leq 1/r(\omega)\) also holds, by application of Proposition 3.7 in [13], when \(S\) is a positive semigroup on an \(L^{p}\)-space._

Footnote 3: We thank J. Rozendaal for this remark.

**Acknowledgements** The author was motivated by a question of L. Boulton at the Banff conference (2022) and encouraged later by discussions with Y. Latushkin during the Aspect Conference in Oldenburg (2022) organized by K. Pankrashkin. We thank J. Rozendaal for comments on a previous version of the manuscript and Y. Latushkin for his careful reading and suggestions of simplification.

## 2 Proof of Theorem 1.2 and applications

### Proof of Theorem 1.2

Following [6] and a simplification proposed by Y.
Latushkin, we consider \(v\in D(A)\) and \(u(t)=S(t)v\), solving the Cauchy problem

\[(\partial_{t}-A)u=0\,,\ t\geq 0\,,\qquad u(0)=v\,.\]

Assume \(t>a+b\), and let \(\chi\), \(\tilde{\chi}\) be decreasing Lipschitz functions on \(\mathbb{R}\), equal to \(1\) on \(]-\infty,0]\) and such that \(\operatorname{supp}\chi\subset(-\infty,a)\) and \(\operatorname{supp}\tilde{\chi}\subset(-\infty,b)\). We notice that

\[u(t)=S(t-s)S(s)v=S(t-s)u(s)\,,\ \text{for}\ t\geq s\geq 0\,. \tag{2.8}\]

Then, using (2.8) between the first and the second expression, we obtain, for \(t\geq 0\),

\[(1-\chi(t))u(t)=-\Big{(}\int_{0}^{t}\chi^{\prime}(s)ds\Big{)}u(t)=-\int_{0}^{t}S(t-s)\chi^{\prime}(s)u(s)ds=-\mathcal{K}^{+}(\chi^{\prime}(\cdot)u(\cdot))(t)\,.\]

On the other hand, introducing \(\tilde{\chi}\), we first write

\[u(t)=\tilde{\chi}(0)u(t)=-\Big{(}\int_{t-b}^{t}\tilde{\chi}^{\prime}(t-s)ds\Big{)}u(t)\,.\]

Proceeding similarly as above, we then write (using that \(t-b>a\) for the second equality)

\[u(t)=-\int_{t-b}^{t}\tilde{\chi}^{\prime}(t-s)S(t-s)u(s)ds=-\int_{t-b}^{t}\tilde{\chi}^{\prime}(t-s)S(t-s)(1-\chi(s))u(s)ds=-\int_{t-b}^{t}\tilde{\chi}^{\prime}(t-s)S(t-s)\mathcal{K}^{+}(\chi^{\prime}(\cdot)u(\cdot))(s)ds\,.\]

Thus, keeping in mind the assumption on \(\|S(t)\|\), we get

\[\|e^{-\omega t}S(t)v\|\leq\int_{t-b}^{t}\big{(}e^{-\omega(t-s)}|\tilde{\chi}^{\prime}(t-s)|m(t-s)\big{)}\big{(}e^{-\omega s}\|\mathcal{K}^{+}(\chi^{\prime}(\cdot)u(\cdot))(s)\|\big{)}ds\,,\]

and Hölder's inequality yields

\[e^{-\omega t}\|S(t)\|_{\mathcal{L}(\mathcal{B})}\leq K_{\omega,p}\|m\chi^{\prime}\|_{e^{\omega\cdot}L^{p}}\|m\tilde{\chi}^{\prime}\|_{e^{\omega\cdot}L^{q}}\,, \tag{2.9}\]

since

\[\|\mathcal{K}^{+}(\chi^{\prime}(\cdot)u(\cdot))\|_{e^{\omega\cdot}L^{p}}\leq K_{\omega,p}\|\chi^{\prime}(\cdot)S(\cdot)v\|_{e^{\omega\cdot}L^{p}}\leq K_{\omega,p}\|\chi^{\prime}m\|_{e^{\omega\cdot}L^{p}}\|v\|\,.\]

This is the inequality (3.8) in [9], but with the general weight \(m(t)\) replacing the particular weight \(Le^{\lambda t}\). When \(p=2\), (2.9) is just the Banach analog of (4.12) in [6], where \(1/r(\omega)\) is replaced by \(K_{\omega,2}\). Following the strategy of [6], it remains to optimize the right-hand side by choosing \(\chi\) and \(\tilde{\chi}\) optimally. We look for \(\chi\) such that \(\|m\chi^{\prime}\|_{e^{\omega\cdot}L^{p}(0,a)}\) is as small as possible. By the Hölder inequality,

\[1=\int_{0}^{a}|\chi^{\prime}(s)|ds\leq\|\chi^{\prime}m\|_{e^{\omega\cdot}L^{p}}\|\tfrac{1}{m}\|_{e^{-\omega\cdot}L^{q}(]0,a[)}\,, \tag{2.10}\]

so

\[\|\chi^{\prime}m\|_{e^{\omega\cdot}L^{p}}\geq\frac{1}{\|\frac{1}{m}\|_{e^{-\omega\cdot}L^{q}(]0,a[)}}\,. \tag{2.11}\]

As is classical, equality holds in (2.10) if, for some constant \(C\),

\[(|\chi^{\prime}(s)|m(s)e^{-\omega s})^{p}=C\big{(}\tfrac{1}{m(s)}e^{\omega s}\big{)}^{q}\ \mbox{on}\ [0,a],\]

i.e.

\[\chi^{\prime}(s)m(s)e^{-\omega s}=-C^{1/p}\Big{(}\tfrac{1}{m(s)}e^{\omega s}\Big{)}^{q/p}\ \mbox{on}\ [0,a],\]

where \(C\) is determined by the condition

\[1=\int_{0}^{a}|\chi^{\prime}(s)|ds\,.\]

Doing the same job with \(\tilde{\chi}\), we obtain the theorem.

### The result of Latushkin-Yurov and extensions

In [9], the authors prove directly the following statement:

**Theorem 2.1**: _Let \(\omega,\lambda\in\mathbb{R}\), \(p>1\), and \(L>0\). Let \(\{S(t)\}_{t\geq 0}\) be a strongly continuous semigroup on a Banach space \(\mathcal{B}\)._
_If \(\omega<\lambda\), \(||S(t)||\leq Le^{\lambda t}\) for all \(t\geq 0\) and \(K_{\omega,p}<+\infty\), then_

\[||S(t)||\leq Me^{\omega t}\ \mbox{for all}\ t\geq 0\,,\]

_with \(\frac{1}{p}+\frac{1}{q}=1\) and_

\[M=L(1+4p^{-1/p}q^{-1/q}LK_{\omega,p}(\lambda-\omega))\,.\]

The theorem can also be obtained, as in [6] for the case \(p=2\), as a corollary of Theorem 1.2. When \(p=2\) and \(\mathcal{B}\) is a Hilbert space, it is possible to prove (see [9]) that

\[K_{\omega,2}\leq\sup_{s\in\mathbb{R}}||R(A,\omega+is)||_{\mathcal{L}(\mathcal{B})}\,,\]

where \(R(A,\lambda)=(\lambda-A)^{-1}\), and we recover the statement of [7], which was obtained as a consequence of the \(L^{2}\) version of Theorem 1.2 with \(p=2\).

## 3 Proof of Theorem 1.3 in the reflexive Banach case

### Flux

We assume that \({\cal B}\) is a reflexive Banach space and we denote by \({\cal B}^{*}\) its dual4. As before, let \(A\) be the generator of a strongly continuous semigroup and let \(u(t)\in C^{1}([0,+\infty[;{\cal B})\cap C^{0}([0,+\infty[;{\cal D}(A))\) solve \((A-\partial_{t})u=0\) on \([0,+\infty[\). As \({\cal B}\) is reflexive, we can define \(A^{*}\) as the infinitesimal generator of the dual semigroup, which is a strongly continuous semigroup on \({\cal B}^{*}\). Let \(u^{*}(t)\in C^{1}(]-\infty,T];{\cal B}^{*})\cap C^{0}(]-\infty,T];{\cal D}(A^{*}))\) solve \((A^{*}+\partial_{t})u^{*}=0\) on \(]-\infty,T]\). We refer to [1] (and references therein) for the properties of the dual semigroup. Then the flux (or Wronskian) \(<u(t),u^{*}(t)>_{{\cal B},{\cal B}^{*}}\) (where the bracket indicates the duality bracket between \({\cal B}\) and \({\cal B}^{*}\)) is constant on \([0,T]\), as can be seen by computing the derivative with respect to \(t\).

Footnote 4: The more common notation is \({\cal B}^{\prime}\).

### \(L^{p}\) estimate

Write \(L^{p}_{\phi}(I)=L^{p}(I;e^{-p\phi}dt)=e^{\phi}L^{p}(I)\), \(\|u\|_{p,\phi}=\|u\|_{p,\phi,I}=\|u\|_{L^{p}_{\phi}(I)}\), where \(I\) is an interval and our functions take values in \({\cal B}\). We assume (see (1.2)) that \(K_{\omega,p}=1/\hat{r}_{p}(\omega)<+\infty\). Consider \((A-\partial_{t})u=0\) on \([0,+\infty[\) with \(u\in L^{p}_{\omega\cdot}([0,+\infty[)\). Let \(\Phi\) satisfy (1.6) and add temporarily the assumption that \(\Phi(s)\) is constant for \(s\gg 0\). Then \(\Phi u\), \(\Phi^{\prime}u\) can be viewed as elements of \(L^{p}_{\omega\cdot}(\mathbb{R})\), and from

\[(A-\partial_{t})\Phi u=-\Phi^{\prime}u\,,\]

we get, by the definition of \(\hat{r}_{p}(\omega)\),

\[\|\Phi u\|_{p,\omega\cdot}\leq\frac{1}{\hat{r}_{p}(\omega)}\|\Phi^{\prime}u\|_{p,\omega\cdot}\,.\]

Taking the power \(p\), we get

\[\int_{-\infty}^{+\infty}(\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p})\,||u(t)||^{p}_{{\cal B}}\,e^{-p\omega t}dt\leq 0\,,\]

which can be rewritten as

\[\int_{-\infty}^{+\infty}(\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p})_{+}||u(t)||^{p}_{{\cal B}}e^{-p\omega t}dt\leq\int_{-\infty}^{+\infty}(\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p})_{-}||u(t)||^{p}_{{\cal B}}e^{-p\omega t}dt\,,\]

or finally in the form

\[\|(\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p})_{+}^{1/p}u\|_{p,\omega\cdot}\leq\|(\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p})_{-}^{1/p}u\|_{p,\omega\cdot}.
\tag{3.1}\]

Writing \(\Phi=e^{\phi}\), \(\phi\in C^{1}(]0,+\infty[)\), \(\phi(t)\to-\infty\) when \(t\to 0\), we have

\[\hat{r}_{p}(\omega)^{p}|\Phi|^{p}-|\Phi^{\prime}|^{p}=(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})e^{p\phi}\,,\]

and (3.1) becomes

\[\|\big{(}\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p}\big{)}_{+}^{1/p}u\|_{p,\omega\cdot-\phi}\leq\|\big{(}\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p}\big{)}_{-}^{1/p}u\|_{p,\omega\cdot-\phi}\,. \tag{3.2}\]

Let \(S(t)=e^{tA}\), \(t\geq 0\), and let \(m(t)>0\) be a continuous function such that

\[\|S(t)\|\leq m(t),\ t\geq 0\,. \tag{3.3}\]

Then we get

\[\|\big{(}\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p}\big{)}_{+}^{1/p}u\|_{p,\omega\cdot-\phi}\leq\|\big{(}\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p}\big{)}_{-}^{1/p}m\|_{p,\omega\cdot-\phi}\,|u(0)|_{\mathcal{B}}\,. \tag{3.4}\]

Note that we also have, trivially,

\[\|(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{-}^{1/p}u\|_{p,\omega\cdot-\phi}\leq\|\big{(}\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p}\big{)}_{-}^{1/p}m\|_{p,\omega\cdot-\phi}\,|u(0)|_{\mathcal{B}}\,. \tag{3.5}\]

We get the same bound for the forward solution of \(A^{*}-\partial_{t}\) and, after changing the orientation of time, for the backward solution of \(A^{*}+\partial_{t}=(A-\partial_{t})^{*}\). Then for \(u^{*}(s)\), solving

\[(A^{*}+\partial_{s})u^{*}(s)=0,\ s\leq t,\]

with \(u^{*}(t)\) prescribed in \(\mathcal{B}^{*}\), we get

\[\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\phi^{\prime}|^{q}\big{)}_{+}^{1/q}u^{*}\|_{q,\omega(t-\cdot)-\iota_{t}\phi}\leq\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\phi^{\prime}|^{q}\big{)}_{-}^{1/q}\iota_{t}m\|_{q,\omega(t-\cdot)-\iota_{t}\phi}\,|u^{*}(t)|_{\mathcal{B}^{*}}\,,\]

where \(\iota_{t}\phi\) and \(\iota_{t}m\) denote the compositions of \(\phi\) and \(m\) respectively with the reflection \(\iota_{t}\) in \(t/2\), so that

\[\iota_{t}m(s)=m(t-s),\quad\iota_{t}\phi(s)=\phi(t-s)\,.\]

Here \(\hat{r}_{q}^{*}(\omega)\) is associated with the dual semigroup as in (1.2)-(1.3). By duality, one can show in the reflexive case that

\[\hat{r}_{q}^{*}(\omega)=\hat{r}_{p}(\omega)\,. \tag{3.6}\]

More generally, we can replace \(\phi\) by \(\psi\) with the same properties (see (1.6)) and consider \(\Psi=\exp\psi\). Note that we have

\[\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q}\big{)}_{+}^{1/q}u^{*}\|_{q,\omega(t-\cdot)-\iota_{t}\psi}\leq\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\psi^{\prime}|^{q}\big{)}_{-}^{1/q}m\|_{q,\omega\cdot-\psi}\,|u^{*}(t)|_{\mathcal{B}^{*}}, \tag{3.7}\]

and also trivially

\[\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q}\big{)}_{-}^{1/q}u^{*}\|_{q,\omega(t-\cdot)-\iota_{t}\psi}\leq\|\big{(}\hat{r}_{q}^{*}(\omega)^{q}-|\psi^{\prime}|^{q}\big{)}_{-}^{1/q}m\|_{q,\omega\cdot-\psi}\,|u^{*}(t)|_{\mathcal{B}^{*}}. \tag{3.8}\]

### From \(L^{p}\) to \(L^{\infty}\) bounds

In order to estimate \(|u(t)|_{\mathcal{B}}\) for a given \(u(0)\), it suffices to estimate \(|<u(t),u^{*}(t)>_{\mathcal{B},\mathcal{B}^{*}}|\) for arbitrary \(u^{*}(t)\in\mathcal{B}^{*}\). Extend \(u^{*}(t)\) to a backward solution \(u^{*}(s)\) of \((A^{*}+\partial_{s})u^{*}(s)=0\), so that

\[<u(s),u^{*}(s)>_{\mathcal{B},\mathcal{B}^{*}}=<u(t),u^{*}(t)>_{\mathcal{B},\mathcal{B}^{*}},\ \forall s\in[0,t]. \tag{3.9}\]

Let \(M=M_{t}:[0,t]\to[0,+\infty[\) have mass \(1\):

\[\int_{0}^{t}M(s)ds=1\,.
\tag{3.10}\]

Then, using (3.9),

\[|<u(t),u^{*}(t)>_{\mathcal{B},\mathcal{B}^{*}}|=\Big{|}\int_{0}^{t}M(s)<u(s),u^{*}(s)>_{\mathcal{B},\mathcal{B}^{*}}ds\Big{|}\leq\int_{0}^{t}M(s)|u(s)|_{\mathcal{B}}|u^{*}(s)|_{\mathcal{B}^{*}}ds. \tag{3.11}\]

Let \(\epsilon_{1},\epsilon_{2}\in\{-,+\}\). Assume that

\[\operatorname{supp}M\subset\{s;\ \epsilon_{1}(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}(s)|^{p})>0,\ \epsilon_{2}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}(s)|^{q})>0\}. \tag{3.12}\]

Then, multiplying and dividing by suitable factors in the last member of (3.11), we get (in the reflexive case)

\[|<u(t),u^{*}(t)>_{\mathcal{B},\mathcal{B}^{*}}|\leq e^{\omega t}\int_{0}^{t}\frac{M(s)e^{-\phi(s)-\iota_{t}\psi(s)}}{(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}(s)|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}(s)|^{q})_{\epsilon_{2}}^{\frac{1}{q}}}\times e^{\phi(s)-\omega s}(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}(s)|^{p})_{\epsilon_{1}}^{\frac{1}{p}}|u(s)|_{\mathcal{B}}\times e^{\iota_{t}\psi(s)-\omega(t-s)}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}(s)|^{q})_{\epsilon_{2}}^{\frac{1}{q}}|u^{*}(s)|_{\mathcal{B}^{*}}\,ds\]

\[\leq e^{\omega t}\sup_{[0,t]}\frac{Me^{-\phi-\iota_{t}\psi}}{(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}}\times\|(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}u\|_{p,\omega\cdot-\phi}\,\|(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}u^{*}\|_{q,\omega(t-\cdot)-\iota_{t}\psi}.\]

Using (3.4), (3.7) when \(\epsilon_{j}=+\), or (3.5), (3.8) when \(\epsilon_{j}=-\), we get

\[|<u(t),u^{*}(t)>_{\mathcal{B},\mathcal{B}^{*}}|\leq e^{\omega t}\sup_{[0,t]}\frac{Me^{-\phi-\iota_{t}\psi}}{(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}}\times\|(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{-}^{\frac{1}{p}}m\|_{p,\omega\cdot-\phi}\,\|(\hat{r}_{q}^{*}(\omega)^{q}-|\psi^{\prime}|^{q})_{-}^{\frac{1}{q}}m\|_{q,\omega\cdot-\psi}\,|u(0)|_{\mathcal{B}}\,|u^{*}(t)|_{\mathcal{B}^{*}}. \tag{3.13}\]

This estimate holding for any \(u^{*}(t)\), we get

\[|u(t)|_{\mathcal{B}}\leq e^{\omega t}\sup_{[0,t]}\frac{Me^{-\phi-\iota_{t}\psi}}{(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}}\times\|(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{-}^{\frac{1}{p}}m\|_{p,\omega\cdot-\phi}\,\|(\hat{r}_{q}^{*}(\omega)^{q}-|\psi^{\prime}|^{q})_{-}^{\frac{1}{q}}m\|_{q,\omega\cdot-\psi}\,|u(0)|_{\mathcal{B}}. \tag{3.14}\]

In order to optimize the choice of \(M\), we let \(0\not\equiv F\in C([0,t];[0,+\infty[)\) and study

\[\inf_{\substack{0\leq M\in C([0,t]),\\ \int M\,ds=1}}\ \sup_{s}\frac{M(s)}{F(s)}. \tag{3.15}\]

We first notice that

\[1=\int M\,ds=\int\frac{M}{F}F\,ds\leq\Big{(}\sup_{s}\frac{M}{F}\Big{)}\int F\,ds\,,\]

and hence the quantity (3.15) is \(\geq 1/\int F\,ds\). Choosing \(M=\theta F\) with \(\theta=1/\int F(s)\,ds\), we get equality.
**Lemma 3.1**: _For any continuous function \(F\geq 0\), not identically \(0\),_

\[\inf_{\substack{0\leq M\in C([0,t]),\\ \int M(s)\,ds=1}}\ \Big{(}\sup_{s}\frac{M}{F}\Big{)}=1\Big{/}\int F\,ds\,.\]

Applying the lemma to the supremum in (3.14) with

\[F=e^{\phi+\iota_{t}\psi}\,(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}},\]

we get

\[|u(t)|_{\mathcal{B}}\leq e^{\omega t}\frac{\|(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{-}^{\frac{1}{p}}\,m\|_{p,\omega\cdot-\phi}\,\|(\hat{r}_{q}^{*}(\omega)^{q}-|\psi^{\prime}|^{q})_{-}^{\frac{1}{q}}\,m\|_{q,\omega\cdot-\psi}}{\int_{0}^{t}e^{\phi+\iota_{t}\psi}(\hat{r}_{p}(\omega)^{p}-|\phi^{\prime}|^{p})_{\epsilon_{1}}^{\frac{1}{p}}(\hat{r}_{q}^{*}(\omega)^{q}-|\iota_{t}\psi^{\prime}|^{q})_{\epsilon_{2}}^{\frac{1}{q}}ds}\,|u(0)|_{\mathcal{B}}. \tag{3.16}\]

Since \(u(0)\) is arbitrary, and noting (3.6), we get Theorem 1.3.

## 4 Consequences of Theorem 1.3

### 4.1 Main proposition

An important step is to prove (we assume \(\omega=0\), \(\hat{r}_{p}(0)=1\)), as a consequence of Theorem 1.3 with \(\epsilon_{1}=+\) and \(\epsilon_{2}=-\), the following key proposition:

**Proposition 4.1**: _Assume that \(\omega=0\) and \(\hat{r}_{p}(0)=1\). Let \(a,b\) be positive. Then for \(t\geq a+b\),_

\[||S(t)||\leq\exp(-(t-a-b))\,\frac{\left(\inf_{u}\int_{0}^{a}m(s)^{p}(|u^{\prime}(s)|^{p}-u^{p}(s))_{+}ds\right)^{1/p}}{\left(\sup_{\theta}\int_{0}^{b}\frac{1}{m^{p}}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})\,ds\right)^{1/p}}\,, \tag{4.17}\]

_where_

* \(u\in W^{1,p}(]0,a[)\) _satisfies_ \(u(0)=0\)_,_ \(u(a)=1\)_;_
* \(\theta\in W^{1,p}(]0,b[)\) _satisfies_ \(\theta(b)=1\) _and_ \(|\theta^{\prime}|\leq\theta\)_._

The analysis of the minimizers of

\[I_{inf,p}:=\inf_{u}\int_{0}^{a}m(s)^{p}(|u^{\prime}(s)|^{p}-u^{p}(s))_{+}ds \tag{4.18}\]

and of the maximizers of

\[J_{sup,p}:=\sup_{\theta}\int_{0}^{b}\frac{1}{m^{p}}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})\,ds \tag{4.19}\]

is close to what was done in Section 3 of [7], and we will sketch in Subsection 4.3 what must be modified.

### 4.2 From Proposition 4.1 to Theorem 1.2

This proposition implies Theorem 1.2 rather directly, in the following way. We first observe the trivial lower bound (take \(\theta(s)=1\))

\[\sup_{\theta}\int_{0}^{b}\frac{1}{m^{p}}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})\,ds\geq\int_{0}^{b}\frac{1}{m^{p}}ds\,. \tag{4.20}\]

A more tricky argument, based on the equality case in Hölder's inequality5, gives

\[\inf_{u}\int_{0}^{a}m(s)^{p}(u^{\prime}(s)^{p}-u(s)^{p})_{+}ds\leq\inf_{u}\int_{0}^{a}m(s)^{p}u^{\prime}(s)^{p}ds\leq\left(1\Big{/}\int_{0}^{a}\frac{1}{m^{q}}ds\right)^{p/q}. \tag{4.21}\]

Footnote 5: In [7] we were using Cauchy-Schwarz instead.

More precisely, we start from the upper bound

\[\int_{0}^{a}(|u^{\prime}|^{p}-u^{p})_{+}m(s)^{p}ds\leq\int_{0}^{a}|u^{\prime}(s)|^{p}m(s)^{p}\,ds\]

and minimize the right-hand side. Observing that

\[1=u(a)=\int_{0}^{a}u^{\prime}(s)\,ds=\int_{0}^{a}u^{\prime}(s)m(s)\,m(s)^{-1}\,ds\leq\left(\int_{0}^{a}(u^{\prime}(s)m(s))^{p}ds\right)^{\frac{1}{p}}\left(\int_{0}^{a}\frac{1}{m(s)^{q}}ds\right)^{\frac{1}{q}}\,,\]

we look for a \(u\) for which we have equality.
By a standard criterion for the optimality of the Hölder inequality, this is the case if, for some constant \(C>0\),

\[(|u^{\prime}(s)|m(s))^{p}=\frac{C}{m(s)^{q}}\,,\]

or

\[|u^{\prime}|=C^{1/p}\frac{1}{m^{q}}\,.\]

Hence, we choose

\[u(s)=\hat{C}\int_{0}^{s}\frac{1}{m(\tau)^{q}}\,d\tau\,,\]

where \(\hat{C}\) is determined by imposing \(u(a)=1\). We obtain

**Lemma 4.2**: _For any \(a>0\),_

\[\inf_{\{u\in W^{1,p}(]0,a[),\,u(0)=0,\,u(a)=1\}}\int_{0}^{a}(|u^{\prime}|^{p}-u^{p})_{+}m^{p}\,ds\leq\left(\int_{0}^{a}\frac{1}{m(s)^{q}}ds\right)^{-p/q}. \tag{4.22}\]

Note here that we have no condition on \(a>0\).

**Remark 4.3**: _Nevertheless, one can observe that this implication holds only under the additional condition that \(\mathcal{B}\) is reflexive. This was not needed in the direct proof of Theorem 1.2 given in Section 2._

### 4.3 Proof of Proposition 4.1

We now assume \(\omega=0\) and \(\hat{r}_{p}(0)=1\). In this case, (1.7) takes the form

\[||S(t)||_{\mathcal{L}(\mathcal{B})}\leq\frac{\|(\Phi^{p}-|\Phi^{\prime}|^{p})_{-}^{\frac{1}{p}}\,m\|_{L^{p}([0,t])}\,\|(\Psi^{q}-|\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}\,m\|_{L^{q}([0,t])}}{\int_{0}^{t}(\Phi^{p}-|\Phi^{\prime}|^{p})_{+}^{\frac{1}{p}}((\iota_{t}\Psi)^{q}-|\iota_{t}\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}ds}\,. \tag{4.23}\]

Replacing \((\Phi,\Psi)\) by \((\lambda\Phi,\mu\Psi)\) for any \((\lambda,\mu)\in(\mathbb{R}\setminus\{0\})^{2}\) does not change the right-hand side; hence we may choose a suitable normalization without loss of generality. We also choose \(\Phi\) and \(\Psi\) to be piecewise \(C^{1}([0,t])\). If \(0\leq\sigma<\tau<+\infty\), \(p>1\), and \(S,T\in\mathbb{R}\), we put

\[W^{1,p}_{S,T}(]\sigma,\tau[)=\{u\in W^{1,p}(]\sigma,\tau[);\,u(\sigma)=S,\ u(\tau)=T\}\,, \tag{4.24}\]

where \(W^{1,p}\) denotes the classical Sobolev space associated with \(L^{p}\). Here and in the following, all functions are assumed to be real-valued unless stated otherwise. As in [7], we can here replace \(W^{1,p}_{0,1}\) by a subspace that allows us to avoid the use of positive parts. Put

\[\mathcal{H}^{a,p}=\mathcal{W}^{0,1,p}_{0,a}\,, \tag{4.25}\]

where, for \(\sigma,\tau,S,T\) as above,

\[\mathcal{W}^{S,T,p}_{\sigma,\tau}=\{u\in W^{1,p}_{S,T}(]\sigma,\tau[);\ 0\leq u\leq u^{\prime}\}, \tag{4.26}\]

\[\mathcal{G}^{b,p}=\{\theta\in W^{1,p}(]0,b[);\ |\theta^{\prime}|\leq\theta,\ \theta(b)=1\}\,. \tag{4.27}\]

Given some \(t>a+b\), we now give the conditions satisfied by \(\Phi\):

**Property 4.4** (\(P^{p}_{a,b}\)):

1. \(\Phi=e^{a}u\) on \(]0,a]\), with \(u\in\mathcal{H}^{a,p}\).
2. On \([a,t-b]\), we take \(\Phi(s)=e^{s}\), so \(|\Phi^{\prime}(s)|^{p}-\Phi(s)^{p}=0\).
3. On \([t-b,t]\), we take \(\Phi(s)=e^{t-b}\theta(t-s)\) with \(\theta\in\mathcal{G}^{b,p}\).

Hence, we have

\[\operatorname{supp}\left(\Phi^{p}-|\Phi^{\prime}|^{p}\right)_{+}\subset[t-b,t]\,.\]

Similarly, we assume that \(\Psi\) satisfies property \((P^{q}_{b,a})\), but with \(\theta=1\); hence

1. \(\Psi(s)=e^{b}v(s)\) on \(]0,b[\), with \(v\in\mathcal{H}^{b,q}\),
2. On \([b,t-a]\), we take \(\Psi(s)=e^{s}\).
3. On \([t-a,t]\), \(\Psi(s)=e^{t-a}\).

Recalling the definition of \(\iota_{t}\), we get for \(\iota_{t}\Psi\):

1. On \([0,a]\), \(\iota_{t}\Psi=e^{t-a}\), so that \[|(\iota_{t}\Psi)^{\prime}|^{q}-(\iota_{t}\Psi)^{q}=-e^{q(t-a)}\,.\]
2. On \([a,t-b]\), we have \(\iota_{t}\Psi(s)=e^{t-s}\), hence \[|(\iota_{t}\Psi)^{\prime}(s)|^{q}-\iota_{t}\Psi(s)^{q}=0\,.\]
3.
On \(]t-b,t[\), we have

\[|(\iota_{t}\Psi)^{\prime}(s)|^{q}-(\iota_{t}\Psi)^{q}(s)\geq 0\,.\]

Assuming that \(t>a+b\), we have, under these assumptions on \(\Phi\) and \(\Psi\),

\[\{s;\ \Phi(s)^{p}-|\Phi^{\prime}(s)|^{p}>0,\ \iota_{t}\Psi(s)^{q}-|\iota_{t}\Psi^{\prime}(s)|^{q}<0\}\subset[t-b,t]\,.\]

We now compute or estimate the various quantities appearing in (1.7). We have

\[\|(\Phi^{p}-|\Phi^{\prime}|^{p})_{-}^{\frac{1}{p}}m\|=e^{a}\left(\int_{0}^{a}(|u^{\prime}(s)|^{p}-u^{p}(s))m(s)^{p}ds\right)^{1/p}\,, \tag{4.28}\]

\[\|(\Psi^{q}-|\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}m\|=e^{b}\left(\int_{0}^{b}(|v^{\prime}(s)|^{q}-v(s)^{q})m(s)^{q}ds\right)^{1/q}, \tag{4.29}\]

and

\[\begin{split}\int_{0}^{t}(\Phi^{p}-|\Phi^{\prime}|^{p})_{+}^{\frac{1}{p}}((\iota_{t}\Psi)^{q}-|\iota_{t}\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}ds&=\int_{t-b}^{t}(\Phi^{p}-|\Phi^{\prime}|^{p})_{+}^{\frac{1}{p}}((\iota_{t}\Psi)^{q}-|\iota_{t}\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}ds\\ &=e^{t-b}\int_{t-b}^{t}(\theta(t-s)^{p}-|\theta^{\prime}(t-s)|^{p})_{+}^{\frac{1}{p}}((\iota_{t}\Psi)^{q}-|\iota_{t}\Psi^{\prime}|^{q})_{-}^{\frac{1}{q}}ds\\ &=e^{t}\int_{0}^{b}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})^{\frac{1}{p}}(|v^{\prime}(s)|^{q}-v(s)^{q})^{\frac{1}{q}}ds\,.\end{split} \tag{4.30}\]

So we get from (1.7)

\[e^{t}\,||S(t)||_{\mathcal{L}(\mathcal{B})}\leq e^{a+b}\left(\int_{0}^{a}(|u^{\prime}(s)|^{p}-u^{p}(s))m(s)^{p}ds\right)^{\frac{1}{p}}K^{p}(b,\theta,v)\,, \tag{4.31}\]

where

\[K^{p}(b,\theta,v):=\frac{\left(\int_{0}^{b}(|v^{\prime}(s)|^{q}-v(s)^{q})m(s)^{q}ds\right)^{1/q}}{\int_{0}^{b}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})^{\frac{1}{p}}(|v^{\prime}(s)|^{q}-v(s)^{q})^{\frac{1}{q}}ds}\,. \tag{4.32}\]

We start by considering, for a given \(\theta\in\mathcal{G}^{b,p}\),

\[K^{p}_{\inf}(b,\theta):=\inf_{v\in\mathcal{H}^{b,q}}K^{p}(b,\theta,v)\,,\]

and get the following:

**Lemma 4.5**: _If \(\theta\in\mathcal{G}^{b,p}\) and \(\theta-\theta^{\prime}\) is not identically \(0\) on \(]0,b[\), we have_

\[K^{p}_{\inf}(b,\theta)=\frac{1}{\big{(}\int_{0}^{b}(\theta(s)^{p}-|\theta^{\prime}(s)|^{p})\frac{1}{m^{p}}ds\big{)}^{1/p}}\,. \tag{4.33}\]

**Proof.** With

\[h(s)=\big{(}\theta(s)^{p}-|\theta^{\prime}(s)|^{p}\big{)}^{\frac{1}{p}}\geq 0\,,\]

we consider the denominator in (4.32),

\[\int_{0}^{b}h(s)(|v^{\prime}(s)|^{q}-v(s)^{q})^{\frac{1}{q}}ds=\int_{0}^{b}\big{(}h(s)/m(s)\big{)}\left(m(s)^{q}(|v^{\prime}(s)|^{q}-v(s)^{q})\right)^{\frac{1}{q}}ds\,.\]

By the Hölder inequality, we have

\[\int_{0}^{b}h(s)(|v^{\prime}(s)|^{q}-v(s)^{q})^{\frac{1}{q}}ds\leq\left(\int_{0}^{b}(h(s)/m(s))^{p}ds\right)^{\frac{1}{p}}\left(\int_{0}^{b}m(s)^{q}(|v^{\prime}(s)|^{q}-v(s)^{q})ds\right)^{\frac{1}{q}}\,,\]

which implies that \(K_{\inf}^{p}(b,\theta)\) is bounded from below by the right-hand side of (4.33). We have equality for some \(v\) in \({\cal H}^{b,q}\) if and only if, for some constant \(c>0\),

\[m(s)^{q}(|v^{\prime}(s)|^{q}-v(s)^{q})=c\,h(s)^{p}/m(s)^{p}\,.\]

In order to get such a \(v\), we first consider \(w\in W^{1,p}\) defined by

\[w^{\prime}=(w^{q}+h^{p}m^{-pq})^{1/q}\,,\quad w(0)=0\,,\]

noticing that the right-hand side of the differential equation is Lipschitz continuous in \(w\), so that the Cauchy-Lipschitz theorem applies.
According to our assumption on \(\theta\), we verify that \(w(b)>0\), and we choose

\[v=\frac{1}{w(b)}w\,,\quad c=w(b)^{-q}\,.\]

For this pair \((c,v)\) we get

\[\left(\int_{0}^{b}(|v^{\prime}(s)|^{q}-v(s)^{q})m(s)^{q}ds\right)^{\frac{1}{q}}\Big{/}\int_{0}^{b}\left(\theta(s)^{p}-|\theta^{\prime}(s)|^{p}\right)^{\frac{1}{p}}\left(|v^{\prime}(s)|^{q}-v(s)^{q}\right)^{\frac{1}{q}}ds=1\Big{/}\left(\int_{0}^{b}h^{p}(s)m^{-p}(s)\,ds\right)^{\frac{1}{p}}\,. \tag{4.34}\]

Returning to the definition of \(h\) shows that \(K_{\inf}^{p}(b,\theta)\) is bounded from above by the right-hand side of (4.33), and we get the announced result. \(\Box\)

To conclude the proof of Proposition 4.1, we just combine Lemma 4.5 and (4.31).

### 4.4 Application of Proposition 4.1: Wei's theorem

When \(p=2\), we can directly apply [7], where \(r(\omega)\) is replaced by \(\hat{r}_{2}(\omega)\). The case \(p\neq 2\) is less clear: already in the case \(m=1\) it involves new questions related to the \(p\)-Laplacian instead of the Laplacian. Nevertheless, one can get (probably non-optimal) upper bounds by using the optimizers obtained for \(p=2\). If we consider (4.17) with \(m=1\), \(a\leq\frac{\pi}{4}\), \(b\leq\frac{\pi}{4}\), and take therein \(u(s)=\sin s/\sin a\) and \(\theta(s)=\cos s/\cos b\), we obtain

\[||S(t)||\leq\frac{\cos b}{\sin a}\,\exp(-(t-a-b))\,\frac{\left(\int_{0}^{a}((\cos s)^{p}-(\sin s)^{p})ds\right)^{1/p}}{\left(\int_{0}^{b}((\cos s)^{p}-(\sin s)^{p})\,ds\right)^{1/p}}\,. \tag{4.35}\]

When \(a=b\), we obtain

\[||S(t)||\leq\cot a\,\exp(-(t-2a))\,. \tag{4.36}\]

For \(a=b=\frac{\pi}{4}\) we get an extension of Wei's theorem to the reflexive Banach case.

**Theorem 4.6**: _Let \(p>1\) and let \(S(t)\) be a \(C_{0}\)-semigroup with generator \(A\) in a reflexive Banach space \({\cal B}\) such that_

\[\|S(t)\|\leq 1\ \mbox{for all}\ t\geq 0 \tag{4.37}\]

_holds and \(\hat{r}_{p}(0)>0\). Then we have_

\[||S(t)||\leq e^{-\hat{r}_{p}(0)t+\frac{\pi}{2}}\,,\ \forall t\geq 0\,. \tag{4.38}\]

## 5 Modified Riccati equation and application to the optimization problems

When analyzing the optimality of the statements in [7], an important tool was a fine analysis of a natural Riccati equation (see more precisely Subsection 3.4.2 in [7]). Let us see what happens for \(p\in(1,+\infty)\); we start with the assumption that

\[\hat{r}_{p}(0)=1\,.\]

Let \(f\) be defined on \(]\sigma,\tau[\) such that

\[0<f\leq f^{\prime}. \tag{5.39}\]

Put

\[\mu=m^{\prime}/m\,.\]

If we now assume that \(f\) satisfies

\[(\partial_{s}\circ m^{p}\circ\partial_{s}+m^{p})f^{p-1}=0\,,\]

we get

\[(\partial_{s}^{2}+p\mu\partial_{s}+1)f^{p-1}=0\,. \tag{5.40}\]

In this case, we say that \(f\) is \((m,p)\)-harmonic. Writing

\[\phi=\log f\ \mbox{and}\ \psi=\phi^{\prime}=f^{\prime}/f\,,\]

and noting that

\[f^{\prime\prime}/f=\psi^{\prime}+\psi^{2}\,,\quad\psi\geq 1\,,\]

we get

\[\psi^{\prime}=-\Big{(}(p-1)\psi^{2}+p\mu\psi+\frac{1}{p-1}\Big{)}\,, \tag{5.41}\]

or equivalently

\[\frac{\psi^{\prime}}{\psi}=-\Big{(}(p-1)\psi+p\mu+\frac{1}{(p-1)\psi}\Big{)}\,. \tag{5.42}\]

As in [7], we note that \(\tilde{\psi}:=-1/\psi\) satisfies

\[\frac{\tilde{\psi}^{\prime}}{\tilde{\psi}}=-\Big{(}(q-1)\tilde{\psi}-p\mu+\frac{1}{(q-1)\tilde{\psi}}\Big{)}\,. \tag{5.43}\]

We consider the condition

\[\lim_{x\to 0^{+}}\psi_{p}(x)=+\infty\,. \tag{5.44}\]

With the conditions (5.41) and (5.44), \(\psi=\psi_{m,p}\) is uniquely defined, and we introduce

\[a^{*}=a^{*}(m,p)=\sup\{a>0:\ \psi_{m,p}>1\ \mbox{on}\ (0,a)\}\,, \tag{5.45}\]

so that \(a^{*}(m,p)\in]0,+\infty]\).
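Before turning to the optimizers, it may help to see how \(a^{*}(m,p)\) can be computed in practice. The minimal sketch below integrates (5.41) numerically, starting from the small-\(s\) asymptotics \(\psi_{p}(s)\sim\frac{1}{(p-1)s}\) dictated by (5.44); it assumes \(m\equiv 1\) (so \(\mu=0\)), and the step sizes and function names are our own choices.

```python
def a_star(p, mu=lambda s: 0.0, eps=1e-3, h=1e-5, s_max=5.0):
    """Locate a*(m, p) by integrating the Riccati equation (5.41),

        psi' = -((p-1) psi^2 + p mu(s) psi + 1/(p-1)),

    from the small-s asymptotics psi(s) ~ 1/((p-1) s), stopping when
    psi drops to 1. Here mu = m'/m; mu == 0 corresponds to m == 1.
    """
    def rhs(s, y):
        return -((p - 1.0) * y * y + p * mu(s) * y + 1.0 / (p - 1.0))

    s, psi = eps, 1.0 / ((p - 1.0) * eps)
    while s < s_max and psi > 1.0:
        # classical fourth-order Runge-Kutta step
        k1 = rhs(s, psi)
        k2 = rhs(s + h / 2, psi + h * k1 / 2)
        k3 = rhs(s + h / 2, psi + h * k2 / 2)
        k4 = rhs(s + h, psi + h * k3)
        psi += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return s

print(a_star(2.0))               # ~ 0.7854 = pi/4
print(a_star(1.5), a_star(3.0))  # values for p != 2, for comparison
```

For \(m\equiv 1\), (5.41) integrates in closed form to \((p-1)\psi_{p}(s)=\cot s\), so \(a^{*}(1,p)=\operatorname{arccot}(p-1)\); in particular \(\psi_{2}(s)=\cot s\) and \(a^{*}=\pi/4\approx 0.7854\), consistent with the choice \(a=b=\pi/4\) in Wei's theorem above.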
Following [7], one can prove that in (4.18) the infimum is realized by the \((m,p)\)-harmonic function \(u_{p}\) such that \(u_{p}^{\prime}/u_{p}=\psi_{p}\). Hence, keeping in mind that in (4.18) the infimum is taken over \(u\in{\cal H}^{a,p}\), we get

\[I_{inf,p}=\int_{0}^{a}m(s)^{p}(u_{p}^{\prime}(s)^{p}-u_{p}(s)^{p})\,ds\,.\]

After an integration by parts, we get (since \(u_{p}(0)=0\) and \(u_{p}(a)=1\))

\[I_{inf,p}=m(a)^{p}\,u_{p}^{\prime}(a)^{p-1}u_{p}(a)=m(a)^{p}\,\psi_{p}(a)^{p-1}\,.\]

Similarly, one can prove that in (4.19) the supremum is realized by the \((1/m,p)\)-harmonic function \(\theta_{p}\in{\cal G}^{b,p}\) such that \(\theta_{p}^{\prime}/\theta_{p}=-1/\psi_{p}\). Hence, keeping in mind that in (4.19) the supremum is taken over \(\theta\in{\cal G}^{b,p}\) satisfying \(\theta^{\prime}(0)=0\), we get

\[J_{sup,p}=\int_{0}^{b}\frac{1}{m(s)^{p}}(\theta_{p}(s)^{p}-|\theta_{p}^{\prime}(s)|^{p})\,ds\,. \tag{5.46}\]

After an integration by parts, we get (since \(\theta_{p}^{\prime}(0)=0\) and \(\theta_{p}(b)=1\))

\[J_{sup,p}=m(b)^{-p}\,|\theta_{p}^{\prime}(b)|^{p-1}\theta_{p}(b)=m(b)^{-p}\,\psi_{p}(b)^{-(p-1)}\,.\]

## 6 Final theorem

As in [7], coming back to Proposition 4.1, we immediately get from the previous section:

**Proposition 6.1**: _Let \(p>1\), \(\omega=0\), \(\hat{r}_{p}(0)=1\) and \(a^{*}:=a^{*}(m,p)\in]0,+\infty]\). When \(a,b\in]0,+\infty[\,\cap\,]0,a^{*}]\) and \(t>a+b\), we have_

\[e^{t}\,||S(t)||\leq\exp(a+b)\,m(a)m(b)\,\psi_{p}(a)^{\frac{p-1}{p}}\psi_{p}(b)^{\frac{p-1}{p}}\,. \tag{6.47}\]

_In particular, when \(a^{*}<+\infty\), we have_

\[e^{t}\,||S(t)||\leq\exp(2a^{*})\,m(a^{*})^{2}\,,\quad t>2a^{*}\,. \tag{6.48}\]

This proposition is the analog of Wei's theorem for general weights \(m\) and the \(L^{p}\)-Banach version of Theorem 1.9 in [7]. By the same rescaling procedure, we actually have a more general statement. We consider \(\hat{A}\) with the same properties as \(A\), where the hats are introduced to ease the transition from the particular case above to the general case below. As before, we introduce \(\hat{\omega}\) and \(\hat{r}=\hat{r}_{p}(\hat{\omega})\).

**Theorem 6.2**: _Let \(p>1\) and \(\hat{r}_{p}(\hat{\omega})<+\infty\). Let \(\hat{S}(\hat{t})=e^{\hat{t}\,\hat{A}}\) satisfy_

\[||\hat{S}(\hat{t})||\leq\hat{m}(\hat{t})\,,\ \forall\hat{t}>0\,.\]

_Then there exist uniquely defined \(\hat{a}^{*}:=\hat{a}^{*}(\hat{m},\hat{\omega},\hat{r},p)>0\) and \(\hat{\psi}_{p}:=\hat{\psi}_{p}(\cdot;\hat{m},\hat{\omega},\hat{r})\) on \(]0,\hat{a}^{*}[\), with the same general properties as above, such that, if \(\hat{a},\hat{b}\in]0,+\infty[\,\cap\,]0,\hat{a}^{*}]\) and \(\hat{t}>\hat{a}+\hat{b}\), we have_

\[||\hat{S}(\hat{t})||\leq\exp\big{(}(\hat{\omega}-\hat{r}_{p}(\hat{\omega}))(\hat{t}-(\hat{a}+\hat{b}))\big{)}\,\hat{m}(\hat{a})\hat{m}(\hat{b})\,\hat{\psi}_{p}(\hat{a})^{\frac{p-1}{p}}\,\hat{\psi}_{p}(\hat{b})^{\frac{p-1}{p}}\,. \tag{6.49}\]

_Moreover, when \(\hat{a}^{*}<+\infty\), the estimate is optimal for \(\hat{a}=\hat{b}=\hat{a}^{*}\) and reads_

\[||\hat{S}(\hat{t})||\leq\exp\big{(}(\hat{\omega}-\hat{r}_{p}(\hat{\omega}))(\hat{t}-2\hat{a}^{*})\big{)}\,\hat{m}(\hat{a}^{*})^{2}\,,\quad\hat{t}>2\hat{a}^{*}\,. \tag{6.50}\]

Note that in the statement

\[\hat{a}^{*}(\hat{m},\hat{\omega})=\hat{r}\,a^{*}(e^{-\hat{\omega}\cdot}\hat{m})\,,\qquad\hat{\psi}_{p}(\hat{s};\hat{m},\hat{\omega},\hat{r})=\psi_{p}(\hat{r}\hat{s};e^{-\hat{\omega}\cdot}\hat{m})\,.\]

This theorem is the \(L^{p}\)-Banach version of Theorem 1.10 in [7].
2301.07574
Initial-boundary value problems to semilinear multi-term fractional differential equations
For $\nu,\nu_i,\mu_j\in(0,1)$, we analyze the semilinear integro-differential equation on the one-dimensional domain $\Omega=(a,b)$ in the unknown $u=u(x,t)$ \[ \mathbf{D}_{t}^{\nu}(\varrho_{0}u)+\sum_{i=1}^{M}\mathbf{D}_{t}^{\nu_{i}}(\varrho_{i}u) -\sum_{j=1}^{N}\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u) -\mathcal{L}_{1}u-\mathcal{K}*\mathcal{L}_{2}u+f(u)=g(x,t), \] where $\mathbf{D}_{t}^{\nu},\mathbf{D}_{t}^{\nu_{i}}, \mathbf{D}_{t}^{\mu_{j}}$ are Caputo fractional derivatives, $\varrho_0=\varrho_0(t)>0,$ $\varrho_{i}=\varrho_{i}(t)$, $\gamma_{j}=\gamma_{j}(t)$, $\mathcal{L}_{k}$ are uniform elliptic operators with time-dependent smooth coefficients, $\mathcal{K}$ is a summable convolution kernel. Particular cases of this equation are the recently proposed advanced models of oxygen transport through capillaries. Under certain structural conditions on the nonlinearity $f$ and orders $\nu,\nu_i,\mu_j$, the global existence and uniqueness of classical and strong solutions to the related initial-boundary value problems are established via the so-called continuation arguments method. The crucial point is searching suitable a priori estimates of the solution in the fractional H\"{o}lder and Sobolev spaces. The problems are also studied from the numerical point of view.
Sergii Siryk, Nataliya Vasylyeva
2023-01-18T14:49:56Z
http://arxiv.org/abs/2301.07574v2
# Initial-boundary value problems to semilinear ###### Abstract. For \(\nu,\nu_{i},\mu_{j}\in(0,1)\), we analyze the semilinear integro-differential equation on the one-dimensional domain \(\Omega=(a,b)\) in the unknown \(u=u(x,t)\) \[\mathbf{D}_{t}^{\nu}(\varrho_{0}u)+\sum_{i=1}^{M}\mathbf{D}_{t}^{\nu_{i}}( \varrho_{i}u)-\sum_{j=1}^{N}\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u)-\mathcal{L} _{1}u-\mathcal{K}\ast\mathcal{L}_{2}u+f(u)=g(x,t),\] where \(\mathbf{D}_{t}^{\nu},\mathbf{D}_{t}^{\nu_{i}},\mathbf{D}_{t}^{\mu_{j}}\) are Caputo fractional derivatives, \(\varrho_{0}=\varrho_{0}(t)>0\), \(\varrho_{i}=\varrho_{i}(t)\), \(\gamma_{j}=\gamma_{j}(t)\), \(\mathcal{L}_{k}\) are uniform elliptic operators with time-dependent smooth coefficients, \(\mathcal{K}\) is a summable convolution kernel. Particular cases of this equation are the recently proposed advanced models of oxygen transport through capillaries. Under certain structural conditions on the nonlinearity \(f\) and orders \(\nu,\nu_{i},\mu_{j}\), the global existence and uniqueness of classical and strong solutions to the related initial-boundary value problems are established via the so-called continuation arguments method. The crucial point is searching suitable a priori estimates of the solution in the fractional Holder and Sobolev spaces. The problems are also studied from the numerical point of view. Key words and phrases:a priori estimates, Caputo derivatives, nonlinear oxygen subdiffusion, global solvability, numerical solutions 2000 Mathematics Subject Classification: Primary 35R11, 35B45, 35B65; Secondary 35Q92, 26A33, 65M22 ## 1. Introduction Let \(\Omega=(a,b)\subset\mathbb{R}\) be a segment, with a boundary \(\partial\Omega=\{a\}\cup\{b\}\). For an arbitrary fixed time \(T>0\), we denote \[\Omega_{T}=\Omega\times(0,T)\qquad\text{and}\qquad\partial\Omega_{T}=\partial \Omega\times[0,T].\] We consider the semilinear equation in the unknown function \(u=u(x,t):\Omega_{T}\to\mathbb{R}\), \[\mathbf{D}_{t}u-\mathcal{L}_{1}u-\mathcal{K}\ast\mathcal{L}_{2}u+f(u)=g(x,t), \tag{1.1}\] subject either to the Dirichlet boundary condition (**DBC**) \[u=\psi(x,t)\quad\text{on}\quad\partial\Omega_{T}, \tag{1.2}\] or to the Neumann boundary condition (**NBC**) \[\frac{\partial u}{\partial x}=\psi_{1}(x,t)\qquad\text{on}\quad\partial\Omega _{T}, \tag{1.3}\] where the functions \(g,f,\psi,\psi_{1},\mathcal{K}\) are prescribed. The equation is supplemented with the initial condition \[u(x,0)=u_{0}\quad\text{in}\quad\bar{\Omega} \tag{1.4}\] for some given initial datum \(u_{0}\). Here the \(\ast\) denotes the usual time-convolution product on \((0,t)\), namely \[(\mathfrak{h}_{1}\ast\mathfrak{h}_{2})(t)=\int\limits_{0}^{t}\mathfrak{h}_{1}( t-s)\mathfrak{h}_{2}(s)ds,\] while the symbol \(\mathbf{D}_{t}\) stands for the linear combinations of Caputo fractional derivatives with respect to time, defined as \[\mathbf{D}_{t}u=\mathbf{D}_{t}^{\nu}(\varrho_{0}u)+\sum_{i=1}^{M}\mathbf{D}_{t}^ {\nu_{i}}(\varrho_{i}u)-\sum_{j=1}^{N}\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u) \tag{1.5}\] for any fixed \(\nu\in(0,1)\)\(\nu_{i},\mu_{i}\in(0,\nu)\), and given positive functions \(\varrho_{0}=\varrho_{0}(t)\), \(\varrho_{i}=\varrho_{i}(t)\), \(\gamma_{j}=\gamma_{j}(t)\), \(i=1,...,M\), and \(j=1,...,N\). We agreed that if \(N=0\) or \(M=0\) then the corresponding sum is missing from the above representation. Here \(\mathbf{D}_{t}^{\theta}\) denotes the Caputo fractional derivative of order \(\theta\) with respect to time. 
Let us recall the definition of the Caputo fractional derivative in the case of \(\theta\in(0,1]\),

\[\mathbf{D}_{t}^{\theta}u(x,t)=\begin{cases}\frac{1}{\Gamma(1-\theta)}\frac{\partial}{\partial t}\int\limits_{0}^{t}\frac{u(x,s)-u(x,0)}{(t-s)^{\theta}}ds&\text{if}\quad\theta\in(0,1),\\ \frac{\partial u}{\partial t}(x,t)&\text{if}\quad\theta=1,\end{cases}\]

with \(\Gamma\) being the Euler Gamma-function. An equivalent definition of this derivative in the case of absolutely continuous functions reads

\[\mathbf{D}_{t}^{\theta}u(x,t)=\begin{cases}\frac{1}{\Gamma(1-\theta)}\int\limits_{0}^{t}(t-s)^{-\theta}\frac{\partial u}{\partial s}(x,s)ds&\text{if}\quad\theta\in(0,1),\\ \frac{\partial u}{\partial t}(x,t)&\text{if}\quad\theta=1.\end{cases}\]

Coming to the operators involved, \(\mathcal{L}_{i}\) are linear elliptic operators of the second order with time-dependent coefficients, namely,

\[\mathcal{L}_{1}u=a_{2}\frac{\partial^{2}u}{\partial x^{2}}+a_{1}\frac{\partial u}{\partial x}+a_{0}u,\]
\[\mathcal{L}_{2}u=b_{2}\frac{\partial^{2}u}{\partial x^{2}}+b_{1}\frac{\partial u}{\partial x}+b_{0}u,\]

where \(a_{i}=a_{i}(x,t)\), \(b_{i}=b_{i}(x,t)\).

Evolution equations with fractional derivatives play an important role in the modeling of the so-called anomalous phenomena arising in Biology, Geophysics, Chemistry and Physics (see e.g. [30, 35, 36, 2, 4] and references therein). It turns out that for certain processes the order of the time-fractional derivative in the corresponding model equation does not remain constant. A possible method to capture these phenomena is to exploit the multi-term time-fractional diffusion-wave equation, see e.g. [32]. For instance, a particular case of equation (1.1) (\(M=0\), \(N=1\), \(\varrho_{0}=1\) and \(\gamma_{1}=\text{constant}\)) describes oxygen delivery through capillaries [42, 34].

Published works related to multi-term fractional diffusion/wave equations, i.e. equations similar to (1.1) with the operator

\[\mathbf{D}_{t}u=\sum_{i=1}^{N}q_{i}\mathbf{D}_{t}^{\nu_{i}}u, \tag{1.6}\]

with \(q_{i}\) positive and \(0\leq\nu_{1}<\nu_{2}<...<\nu_{N}\), are quite limited, in spite of the rich literature on their single-term version. Exact solutions of linear multi-term fractional diffusion equations with positive constant \(q_{i}\) on bounded domains are constructed via eigenfunction expansions in [6, 7, 14, 34, 45]. We quote [41, 42, 34], where certain numerical solutions are built for the corresponding initial-boundary value problems for evolution equations with \(\mathbf{D}_{t}\) given via (1.6). Abstract multi-term time-fractional equations in Banach spaces are discussed in [27]. Well-posedness, along with a maximum principle and the long-time asymptotic behavior of solutions, for initial-boundary value problems for these equations is studied in [31, 33, 19, 29] (see also references therein). Finally, we refer to [28], where initial-boundary value problems for this equation with \(x\)-dependent coefficients \(q_{i}\) are analyzed.

The principal distinction of equation (1.1) from the equations in the aforementioned works is related to the representation of the operator \(\mathbf{D}_{t}\) (see (1.5)) as a linear combination of multi-term fractional derivatives.
Because of this representation, for certain \(\varrho_{0}\) and \(\gamma_{j}\), the operator \(\mathbf{D}_{t}u\) can be rewritten in the form

\[\frac{\partial}{\partial t}\int_{0}^{t}\mathcal{N}(t-\tau)[u(x,\tau)-u(x,0)]d\tau\]

with the kernel \(\mathcal{N}\) being either a negative function or a function alternating in sign. Indeed, choosing \(M=0\), \(N=1\), and

\[\gamma_{1}=1+\varrho_{0},\quad\varrho_{0}\equiv C_{\varrho}>0,\]

and appealing to [13, Lemma 4], we end up with the equality

\[\mathbf{D}_{t}^{\nu}(\varrho_{0}u)-\mathbf{D}_{t}^{\mu_{1}}(\gamma_{1}u)=\frac{\partial}{\partial t}[\mathcal{N}*(u-u(x,0))],\]

where the kernel

\[\mathcal{N}=C_{\varrho}\frac{t^{-\nu}}{\Gamma(1-\nu)}-(1+C_{\varrho})\frac{t^{-\mu_{1}}}{\Gamma(1-\mu_{1})}\]

is _negative_ for \(t>e^{-\gamma}\), \(\gamma\) being the Euler-Mascheroni constant. It is worth noting that the nonnegativity of the kernel \(\mathcal{N}\) plays a crucial role in the previous investigations of fractional partial differential equations and related initial/initial-boundary value problems. This assumption is removed in our research. Moreover, equation (1.1) contains fractional derivatives calculated from the product of two functions: the desired solution \(u\) and the prescribed coefficients \(\varrho_{0}\), \(\varrho_{i}\), \(\gamma_{j}\). This peculiarity provides additional difficulties, since the usual Leibniz rule fails for fractional derivatives.

The first result on the global classical solvability of the linear version of (1.1) with the operator \(\mathbf{D}_{t}\) given by (1.5), where the coefficients \(\varrho_{i}=\varrho_{i}(x,t)\), \(\gamma_{j}=\gamma_{j}(x,t)\) may change sign, is discussed in [37]. To the authors' best knowledge, there are no works in the literature addressing the unique global solvability of the semilinear equation (1.1) in the case of \(N\geq 1\) and positive \(\gamma_{j}\). The aim of the present paper is to fill this gap, providing a well-posedness result, along with the regularity of solutions in fractional Hölder and Sobolev classes for any fixed time \(T\), in the case of a "power law" memory kernel [38], i.e. one satisfying for every \(t\in[0,T]\) the bound

\[|\mathcal{K}(t)|\leq Ct^{-\beta}\]

for some positive constant \(C\) and \(\beta\in[0,1)\). Indeed, boundary problems with kernels of this kind do have practical interest: many viscoelastic materials have rapidly fading memory at small times, and are therefore better described by kernels with singularities at the origin. The technique of this paper relies heavily on the fact that we work in a one-dimensional domain. Besides, the main tools in our analysis are a priori estimates in fractional Sobolev and Hölder spaces. Our analysis is complemented by numerical simulations.

### Outline of the paper

The paper is organized as follows: in Section 2, we introduce the notation and the functional setting. The main assumptions are discussed in Section 3. The principal results (Theorems 4.1 and 4.4) are stated in Section 4. Theorem 4.1 concerns the classical global solvability of (1.1)-(1.5), while Theorem 4.4 addresses the existence and uniqueness of strong solutions to these problems. In Section 5, we recall some definitions together with some auxiliary technical results from fractional calculus, playing a key role in the course of this study. Section 6 is devoted to obtaining a priori estimates in the fractional Sobolev and Hölder spaces. The proof of Theorem 4.1 is carried out in Section 7.
To this end, we exploit the continuation method, treating the family of problems depending on the parameter \(\lambda\in[0,1]\),

\[\mathbf{D}_{t}u-\mathcal{L}_{1}u-\mathcal{K}*\mathcal{L}_{2}u=\lambda[f(u_{0})-f(u)]+g(x,t)-f(u_{0}),\]

subject to the conditions (1.2)-(1.4). Then, in Section 8, we prove Theorem 4.4 via the construction of a strong solution as a limit of approximate smooth solutions. In the final Section 9, we study (1.1)-(1.5) from the numerical side.

## 2. Functional Spaces and Notation

Throughout this work, the symbol \(C\) will denote a generic positive constant, depending only on the structural quantities of the problem. We will carry out our analysis in the framework of the fractional Hölder spaces. To this end, in what follows we take two arbitrary (but fixed) parameters

\[\alpha\in(0,1)\quad\text{and}\quad\theta\in(0,1).\]

For any nonnegative integer \(l\), any Banach space \((\mathbf{X},\|\cdot\|_{\mathbf{X}})\), and any \(p\geq 1\), \(s\geq 0\), we consider the usual spaces

\[\mathcal{C}^{s}([0,T],\mathbf{X}),\quad\mathcal{C}^{l+\alpha}(\bar{\Omega}),\quad W^{s,p}(\Omega),\quad L_{p}(\Omega),\quad W^{s,p}((0,T),\mathbf{X}).\]

Recall that for noninteger \(s\), \(W^{s,p}\) is called the Sobolev-Slobodeckii space (for its definition and properties see, e.g., [1, Chapter 1], [12, Chapter 1]). Denoting for \(\beta\in(0,1)\)

\[\langle v\rangle_{x,\Omega_{T}}^{(\beta)} =\sup\Big{\{}\frac{|v(x_{1},t)-v(x_{2},t)|}{|x_{1}-x_{2}|^{\beta}}:\quad x_{2}\neq x_{1},\quad x_{1},x_{2}\in\bar{\Omega},\quad t\in[0,T]\Big{\}},\]
\[\langle v\rangle_{t,\Omega_{T}}^{(\beta)} =\sup\Big{\{}\frac{|v(x,t_{1})-v(x,t_{2})|}{|t_{1}-t_{2}|^{\beta}}:\quad t_{2}\neq t_{1},\quad x\in\bar{\Omega},\quad t_{1},t_{2}\in[0,T]\Big{\}},\]

we state the following definition.

**Definition 2.1**.: A function \(v=v(x,t)\) belongs to the class \(\mathcal{C}^{l+\alpha,\frac{l+\alpha}{2}\theta}(\bar{\Omega}_{T})\), for \(l=0,1,2\), if the function \(v\) and its corresponding derivatives are continuous, and the norms here below are finite:

\[\|v\|_{\mathcal{C}^{l+\alpha,\frac{l+\alpha}{2}\theta}(\bar{\Omega}_{T})}=\begin{cases}\|v\|_{\mathcal{C}([0,T],\mathcal{C}^{l+\alpha}(\bar{\Omega}))}+\sum_{|j|=0}^{l}\langle D_{x}^{j}v\rangle_{t,\Omega_{T}}^{(\frac{l+\alpha-|j|}{2}\theta)},&l=0,1,\\ \\ \|v\|_{\mathcal{C}([0,T],\mathcal{C}^{2+\alpha}(\bar{\Omega}))}+\|\mathbf{D}_{t}^{\theta}v\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\theta}(\bar{\Omega}_{T})}+\sum_{|j|=1}^{2}\langle D_{x}^{j}v\rangle_{t,\Omega_{T}}^{(\frac{2+\alpha-|j|}{2}\theta)},&l=2.\end{cases}\]

In a similar way, for \(l=0,1,2\), we introduce the space \(\mathcal{C}^{l+\alpha,\frac{l+\alpha}{2}\theta}(\partial\Omega_{T})\). The properties of these spaces have been discussed in [21, Section 2]. It is worth noting that, in the limiting case \(\theta=1\), the class \(\mathcal{C}^{l+\alpha,\frac{l+\alpha}{2}\theta}\) coincides with the usual parabolic Hölder space \(H^{l+\alpha,\frac{l+\alpha}{2}}\) (see e.g. [25, (1.10)-(1.12)]).
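When grid functions from the numerical experiments of Section 9 are compared against these classes, the seminorms above have to be estimated from sampled data. The following brute-force utility (our own illustrative sketch, with an \(O(n^{2})\) pair search that a production code would subsample) approximates \(\langle v\rangle_{t,\Omega_{T}}^{(\beta)}\) from values \(v[i,k]=v(x_{i},t_{k})\):

```python
import numpy as np

def holder_seminorm_t(v, t, beta):
    """Approximate <v>_t^(beta): the sup over pairs t_k != t_m of
    max_x |v(x, t_k) - v(x, t_m)| / |t_k - t_m|^beta on the sample grid."""
    best = 0.0
    for k in range(len(t)):
        for m in range(k + 1, len(t)):
            num = np.max(np.abs(v[:, k] - v[:, m]))
            best = max(best, num / (t[m] - t[k]) ** beta)
    return best

# Example: v(x, t) = sqrt(t) has <v>_t^(1/2) = 1 (attained against t = 0),
# while the quotient blows up on the grid for any beta > 1/2.
x = np.linspace(0.0, 1.0, 11)
t = np.linspace(0.0, 1.0, 101)
v = np.ones_like(x)[:, None] * np.sqrt(t)[None, :]
print(holder_seminorm_t(v, t, 0.5))   # prints 1.0
```

Finally, exploiting [44, Proposition 1], we introduce the space \(\mathcal{H}^{s}((0,T),\mathbf{X})\) for \(s\in(0,1)\).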
**Definition 2.2**.: For \(s\in(0,1)\) we define the space

\[\mathcal{H}^{s}((0,T),\mathbf{X})=\begin{cases}W^{s,2}((0,T),\mathbf{X}),&s\in(0,1/2),\\ \{v\in W^{1/2,2}((0,T),\mathbf{X}),\quad\int_{0}^{T}\|v\|_{\mathbf{X}}^{2}\frac{dt}{t}<+\infty\},&s=1/2,\\ \{v\in W^{s,2}((0,T),\mathbf{X}),\quad v|_{t=0}=0\},&s\in(1/2,1),\end{cases}\]

endowed with the norms

\[\|v\|_{\mathcal{H}^{s}((0,T),\mathbf{X})}=\begin{cases}\|v\|_{W^{s,2}((0,T),\mathbf{X})},&s\in(0,1),\ \ s\neq 1/2,\\ \Big{(}\|v\|_{W^{1/2,2}((0,T),\mathbf{X})}^{2}+\int_{0}^{T}\|v\|_{\mathbf{X}}^{2}\frac{dt}{t}\Big{)}^{1/2},&s=1/2.\end{cases}\]

Setting \(v(x,0)=v_{0}\) and taking into account [44, Propositions 3 and 7], we arrive at the following norm equivalence

\[C^{-1}\|v-v_{0}\|_{\mathcal{H}^{s}((0,T),\mathbf{X})}\leq\|\mathbf{D}_{t}^{s}v\|_{L_{2}((0,T),\mathbf{X})}\leq C\|v-v_{0}\|_{\mathcal{H}^{s}((0,T),\mathbf{X})}, \tag{2.1}\]

for all \((v-v_{0})\in\mathcal{H}^{s}((0,T),\mathbf{X})\) and \(s\in(0,1)\).

## 3. General Assumptions

First, we state our general hypotheses on the structural terms of the model. To this end, introducing

\[\omega_{1-\nu}(t)=\frac{t^{-\nu}}{\Gamma(1-\nu)},\]

we define the positive values \(\nu^{*}\) and \(T^{*}\) such that the kernels

\[\mathcal{N}(t;\nu,\mu_{j})=\omega_{1-\nu}(t)-\omega_{1-\mu_{j}}(t),\quad j=1,2,...,N,\]

are nonnegative for all \(t\in[0,T^{*}]\) and \(0<\mu_{j}<\nu\leq\nu^{*}<1\).

**h1 (Conditions on the fractional order of the derivatives):** We assume that

\[\nu\in\begin{cases}(0,\nu^{*})&\text{if}\quad N\geq 1,\\ (0,1)&\text{if}\quad N=0,\end{cases}\qquad\text{and}\quad\nu_{i},\mu_{j}\in\Big{(}0,\frac{\nu(2-\alpha)}{2}\Big{)},\quad i=1,2,...,M,\,j=1,2,...,N,\]

\(0<\mu_{1}<...<\mu_{N}<\nu,\quad 0<\nu_{1}<...<\nu_{M}<\nu,\quad\nu_{i}\neq\mu_{j}\quad\text{for all}\quad i=1,2,...,M,\,j=1,2,...,N.\)

**h2 (Ellipticity conditions):** There are positive constants \(\delta_{i}\), \(i=0,1,2,3\), such that

\[a_{2}(x,t) \geq\delta_{0}>0,\quad\varrho_{0}(t)\geq\delta_{1}>0,\]
\[\varrho_{i}(t) \geq\delta_{2}>0,\quad\gamma_{j}(t)\geq\delta_{3}>0,\quad i=1,2,...,M,\,j=1,2,...,N,\]

for any \((x,t)\in\bar{\Omega}_{T}\) and any \(t\in[0,T]\), respectively.

**h3 (Regularity of the coefficients):** We require

\[a_{k},b_{k}\in\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T}),\quad k=0,1,2,\quad\frac{\partial a_{2}}{\partial x},\frac{\partial b_{2}}{\partial x}\in\mathcal{C}(\bar{\Omega}_{T}),\]

\[\varrho_{0},\varrho_{i},\gamma_{j}\in\mathcal{C}^{1}([0,T]),\quad i=1,2,...,M,\,j=1,2,...,N,\]

and

\[\frac{\partial\varrho_{i}}{\partial t},\frac{\partial\varrho_{0}}{\partial t},\frac{\partial\gamma_{j}}{\partial t}\geq 0,\]

for all \(t\in[0,T]\) and \(i=1,...,M\), \(j=1,2,...,N\). Besides, in the case of \(N\geq 1\), the representation

\[\varrho_{0}=\varrho+\sum_{j=1}^{N}\gamma_{j}\]

holds with the positive function \(\varrho\) having the properties of the function \(\varrho_{0}\).

**h4 (Condition on the kernel):** The summable kernel \(\mathcal{K}\) fulfills the estimate

\[|\mathcal{K}(t)|\leq\frac{C}{t^{\beta}},\quad\beta\in(0,1-\nu]\]

for any \(t\in[0,T]\).
**h5 (Conditions on the given functions):** We require that the given functions possess the regularity:

**(i)** either

\[u_{0}(x) \in\mathcal{C}^{2+\alpha}(\bar{\Omega}), g\in\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T}),\]
\[\psi(x,t) \in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\partial\Omega_{T}), \quad\psi_{1}(x,t)\in\mathcal{C}^{1+\alpha,\frac{1+\alpha}{2}\nu}(\partial\Omega_{T}),\]

**(ii)** or

\[\psi,\psi_{1},g\equiv 0,\qquad u_{0}\in\begin{cases}W^{2,2}(\Omega)\cap\overset{0}{W}{}^{1,2}(\Omega)&\text{in the case of }\mathbf{DBC},\\ W^{2,2}(\Omega)&\text{in the case of }\mathbf{NBC}.\end{cases}\]

**h6 (Conditions on the nonlinearity):** The function \(f(u)\) satisfies one of the following two conditions:

**(i)** either \(f(u)\) is locally Lipschitz, i.e. for every \(\rho>0\) there exists a positive constant \(C_{\rho}\) such that

\[|f(u_{1})-f(u_{2})|\leq C_{\rho}|u_{1}-u_{2}|\]

for any \(u_{1},u_{2}\in[-\rho,\rho]\), and there is a positive constant \(L\) such that

\[|f(u)|\leq L(1+|u|)\quad\text{for any}\quad u\in\mathbb{R}; \tag{3.1}\]

**(ii)** or

\[\begin{cases}f\in\mathcal{C}^{1}(\mathbb{R}),\\ |f(u)|\leq L_{1}(1+|u|^{r}),\\ uf(u)\geq-L_{2}+L_{3}|u|^{r+1},\\ f^{\prime}(u)\geq-L_{4},\end{cases} \tag{3.2}\]

for some nonnegative constants \(r\) and \(L_{i}\), \(i=1,2,3,4\).

**h7 (Compatibility conditions):** The following compatibility conditions hold for every \(x\in\partial\Omega\) at the initial time \(t=0\),

\[\psi(x,0)=u_{0}(x)\quad\text{and}\quad\mathbf{D}_{t}\psi|_{t=0}=\mathcal{L}_{1}u_{0}(x)|_{t=0}-f(u_{0})+g(x,0)\]

if the **DBC** (1.2) holds, and

\[\frac{\partial u_{0}}{\partial x}(x)=\psi_{1}(x,0),\]

if the **NBC** (1.3) holds.

**Remark 3.1**.: Thanks to Lemma 4.1 in [19], the equality

\[(\mathcal{K}*\mathcal{L}_{2}u)(x,0)=0\]

holds for any \(u\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})\) and any \(x\in\partial\Omega\). That explains the absence of the memory term \((\mathcal{K}*\mathcal{L}_{2}u)\) in the compatibility conditions **h7**.

**Remark 3.2**.: It is worth noting that the existence of \(\nu^{*}\) and \(T^{*}\) in assumption **h1** is provided by [13, Lemma 4]. Indeed, this lemma establishes the existence of the pair \((\nu_{\gamma},T_{\gamma})\), \(0<\nu_{\gamma}<1\) and \(T_{\gamma}=e^{-\gamma}\) (\(\gamma\approx 0.577\) is the Euler-Mascheroni constant), such that the function \(\omega_{1-\nu}(t)\) is strictly increasing with respect to \(\nu\) for all \(\nu\in(0,\nu_{\gamma})\) and each \(t\in[0,T_{\gamma}]\). Thus, this assertion tells us that the kernels \(\mathcal{N}(t;\nu,\mu_{j})\) are positive if \(0<\mu_{j}<\nu<\nu_{\gamma}\) and \(t\in[0,T_{\gamma}]\). Hence, we can select \(\nu^{*}=\nu_{\gamma}\) and \(T^{*}=T_{\gamma}\) in **h1**. Nevertheless, if \(T^{*}<T_{\gamma}\), then the value \(\nu^{*}\) can be chosen greater than \(\nu_{\gamma}\). Unfortunately, an analytical proof of such a conjecture, as well as explicit values of \(\nu^{*}\) and \(T^{*}\), seems to be out of reach. This is the point where numerics steps in. Indeed, let us examine the case \(\mu_{j}=\frac{\nu}{j+1}\), \(j=1,2,3\), with \(T^{*}\leq 0.11\).
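Such a computation is elementary to organize. Since \(\mathcal{N}(t;\nu,\mu)\geq 0\) holds exactly for \(t\leq(\Gamma(1-\mu)/\Gamma(1-\nu))^{1/(\nu-\mu)}\) when \(0<\mu<\nu<1\), the threshold for \(\nu\) can be located by a simple scan. The sketch below is our own illustration (it is not the code behind Figure 1 and Table 1), and it assumes, as asserted in **h1**, that the admissible values of \(\nu\) form an interval:

```python
import numpy as np
from math import gamma

def t_crit(nu, mu):
    """N(t; nu, mu) = t^{-nu}/Gamma(1-nu) - t^{-mu}/Gamma(1-mu) is nonnegative
    exactly for t <= (Gamma(1-mu)/Gamma(1-nu))^{1/(nu-mu)}, where 0 < mu < nu < 1."""
    return (gamma(1.0 - mu) / gamma(1.0 - nu)) ** (1.0 / (nu - mu))

def nu_star_j(j, T_star):
    """Largest nu (up to grid resolution) with N(t; nu, nu/(j+1)) >= 0
    on all of [0, T_star]."""
    last_ok = 0.0
    for nu in np.linspace(1e-3, 0.999, 2000):
        if t_crit(nu, nu / (j + 1)) >= T_star:
            last_ok = nu
        else:
            break   # assumed (cf. h1): the admissible range is an interval (0, nu_j*]
    return last_ok

print(min(nu_star_j(j, 0.1) for j in (1, 2, 3)))   # about 0.72 for T* = 0.1
```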
In this way, we find numerically \(\nu_{j}^{*}=\nu^{*}(T^{*},\mu_{j})\) and \(\hat{\nu}_{\gamma}^{*}=\hat{\nu}_{\gamma}(T^{*})\), which provide for all \(t\in[0,T^{*}]\):

\[\mathcal{N}\bigg{(}t;\nu,\frac{\nu}{j+1}\bigg{)}\geq 0\quad\text{for}\quad\frac{\nu}{j+1}<\nu\leq\nu_{j}^{*},\quad\text{and}\]
\[\omega_{1-\nu}(t)\quad\text{is strictly increasing in }\nu\quad\text{for}\quad\nu<\hat{\nu}_{\gamma}^{*}.\]

Then, setting \(\nu^{*}=\min\limits_{j\in\{1,2,3\}}\nu_{j}^{*}\), we ensure the fulfillment of assumption **h1** in the considered case. In particular, our numerical calculations (presented in Figure 1 and Table 1) demonstrate that if \(T^{*}=0.1\), then \(\nu^{*}=0.7200\), while \(\hat{\nu}_{\gamma}^{*}=0.5614\).

**Remark 3.3**.: We remark that examples of the nonlinearity \(f(u)\) and of the kernel \(\mathcal{K}\) satisfying assumptions **h6** and **h4** are given in [21, Example 3.3], [20, Remark 3.2] and [10, (1.4)], respectively.

## 4. Main Results

Now we are in a position to state our first main result, related to the classical solvability of (1.1)-(1.4).

**Theorem 4.1**.: _Let \(T>0\) be arbitrarily given, and let assumptions **h1-h4**, **h5 (i)** and **h7** hold. Moreover, we assume that \(f(u)\) meets requirement **h6(i)** if \(N\geq 1\), while in the case of \(N=0\), \(f(u)\) satisfies **h6**. Then, equation (1.1) with the initial condition (1.4), subject either to the **DBC** (1.2) or to the **NBC** (1.3), admits a unique classical solution \(u=u(x,t)\) satisfying the regularity_

\[u\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T}),\quad\mathbf{D}_{t}^{\nu_{i}}u,\,\mathbf{D}_{t}^{\mu_{j}}u\in\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T}),\,i=1,2,...,M,\,j=1,2,...,N.\]

**Remark 4.2**.: Actually, with inessential modifications in the proof, the very same result holds for the boundary value problem (1.1), (1.4) subject to either the mixed boundary conditions, e.g.

\[u(a,t)=\psi(a,t),\quad\frac{\partial u}{\partial x}(b,t)=\psi_{1}(b,t),\qquad t\in[0,T],\]

or the boundary conditions of the third kind.

Before stating our next result, we specify the notion of a strong solution to (1.1)-(1.4).

**Definition 4.3**.: A function \(u\) is called a strong solution to problems (1.1)-(1.4) if

\(\bullet\)\(u\in\mathcal{C}(\bar{\Omega}_{T})\cap L_{2}((0,T),W^{2,2}(\Omega))\), \(\mathbf{D}_{t}^{\nu}u,\mathbf{D}_{t}^{\nu_{i}}u\), \(\mathbf{D}_{t}^{\mu_{j}}u\in L_{2}(\Omega_{T})\);

\(\bullet\) the boundary and initial conditions (1.2)-(1.4) hold;

\(\bullet\) for any fixed \(T>0\) and any \(\phi\in L_{2}(\Omega_{T})\),

\[\int_{\Omega}\int_{0}^{T}(\mathbf{D}_{t}u-\mathcal{L}_{1}u-\mathcal{K}*\mathcal{L}_{2}u+f(u))\phi dxdt=\int_{\Omega}\int_{0}^{T}g\phi dxdt.\]

Now we are ready to state the next result, concerning the existence of a strong solution to (1.1)-(1.4).

**Theorem 4.4**.: _Let assumptions **h1-h4**, **h5(ii)** hold. Let the nonlinearity \(f(u)\) satisfy **h6(i)** if \(N\geq 1\), while \(f(u)\) meets requirement **h6** if \(N=0\). Besides, in the case of **NBC**, we require that_

\[\frac{\partial u_{0}}{\partial x}(a)=\frac{\partial u_{0}}{\partial x}(b)=0.\]

_Then problem (1.1)-(1.4) admits a unique strong solution \(u=u(x,t)\)._

**Remark 4.5**.: In general, assumption **h3** on the coefficient \(\varrho_{0}\) can be relaxed under additional requirements on the orders \(\nu_{i}\) and \(\mu_{j}\) and the coefficients \(\varrho_{i}\).
Indeed, the results of Theorems 4.1 and 4.4 hold if, instead of the condition

\[\varrho_{0}(t)=\varrho(t)+\sum_{j=1}^{N}\gamma_{j}(t),\]

we require that

\[N \leq M,\quad 0<\mu_{1}<\nu_{1}<\mu_{2}<\nu_{2}<...<\nu_{N-1}<\mu_{N}<\nu_{N}<...<\nu_{M}<\nu,\]
\[\varrho_{i}(t) =\hat{\varrho}_{i}(t)+\gamma_{i}(t),\quad i=1,2,...,N,\]

with \(\hat{\varrho}_{i}(t)\) having the properties of the function \(\varrho_{i}(t)\). Moreover, if \(M=N=0\), then Theorems 4.1 and 4.4 hold if \(\varrho_{0}=\varrho_{0}(x,t)\in\mathcal{C}^{1}(\bar{\Omega}_{T})\).

**Remark 4.6**.: Theorems 4.1 and 4.4 hold if \(\mathbf{D}_{t}u\) in (1.5) is replaced by

\[\mathbf{D}_{t}u=\varrho(t)\mathbf{D}_{t}^{\nu}u+\sum_{i=1}^{M}\varrho_{i}(t)\mathbf{D}_{t}^{\nu_{i}}u-\sum_{j=1}^{N}\gamma_{j}(t)\mathbf{D}_{t}^{\mu_{j}}u.\]

**Remark 4.7**.: It is worth noting that our assumptions on the kernel \(\mathcal{K}\) include the case \(\mathcal{K}\equiv 0\), meaning that the multi-term subdiffusion equation

\[\mathbf{D}_{t}u-\mathcal{L}_{1}u+f(u)=g(x,t)\]

fits within our analysis and is covered by the theorems above.

Finally, we remark that we do not consider equation (1.1) with \(\nu=1\), since (thanks to the presence of the first-order time derivative in (1.5)) this case can be examined with a simpler approach.

The remaining part of the paper is devoted to the proof of Theorems 4.1 and 4.4.

## 5. Technical Results

In this section we describe some properties of fractional derivatives and integrals, along with several technical assertions which will be used in the course of our analysis. First, we define fractional Riemann-Liouville integrals and derivatives. Throughout this work, for any \(\theta>0\) we denote (as we did before)

\[\omega_{\theta}(t)=\frac{t^{\theta-1}}{\Gamma(\theta)}, \tag{5.1}\]

and define the fractional Riemann-Liouville integral and derivative of order \(\theta\), respectively, of a function \(v=v(x,t)\) with respect to time \(t\) as

\[I_{t}^{\theta}v(x,t)=(\omega_{\theta}*v)(x,t),\quad\partial_{t}^{\theta}v(x,t)=\frac{\partial^{\lceil\theta\rceil}}{\partial t^{\lceil\theta\rceil}}(\omega_{\lceil\theta\rceil-\theta}*v)(x,t),\]

where \(\lceil\theta\rceil\) is the ceiling function of \(\theta\) (i.e. the smallest integer greater than or equal to \(\theta\)). Clearly, for \(\theta\in(0,1)\) we have

\[\partial_{t}^{\theta}v(x,t)=\frac{\partial}{\partial t}(\omega_{1-\theta}*v)(x,t).\]

Accordingly, the Caputo fractional derivative of order \(\theta\in(0,1)\) of the function \(v(x,t)\) can be represented as

\[\mathbf{D}_{t}^{\theta}v(x,t)=\partial_{t}^{\theta}v(x,t)-\omega_{1-\theta}(t)v(x,0), \tag{5.2}\]

if both derivatives exist (see [17, (2.4.8)]).
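As a quick illustration of (5.2) (a worked example of ours, not taken from [17]), consider the affine function \(v(t)=v(0)+ct\). Using \(1\ast\omega_{\theta_{1}}=\omega_{1+\theta_{1}}\) and \(\omega_{\theta_{1}}\ast\omega_{\theta_{2}}=\omega_{\theta_{1}+\theta_{2}}\) (recorded formally in Proposition 5.1(ii) below), one computes directly from the definitions

\[(\omega_{1-\theta}\ast v)(t)=v(0)\,\omega_{2-\theta}(t)+c\,\omega_{3-\theta}(t),\qquad\text{so}\qquad\partial_{t}^{\theta}v(t)=v(0)\,\omega_{1-\theta}(t)+c\,\omega_{2-\theta}(t),\]

while

\[\mathbf{D}_{t}^{\theta}v(t)=(\omega_{1-\theta}\ast v^{\prime})(t)=c\,\omega_{2-\theta}(t),\]

so the two derivatives differ exactly by \(\omega_{1-\theta}(t)v(0)\), in agreement with (5.2).

In the first claim, which subsumes Propositions 4.1 and 4.2 in [21], we recall some important relations for the fractional derivatives and integrals.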
**Proposition 5.1**.: _The following holds._

**(i)**: _Let_ \(\theta,\theta_{1}\in(0,1)\)_,_ \(t\in[0,T]\)_. Given any function_ \(w=w(t)\in\mathcal{C}^{\theta_{1}}([0,T])\)_,_

\[I_{t}^{\theta}\partial_{t}^{\theta}w(t)=w(t).\]

_If in addition_ \(\theta<2\theta_{1}\) _and_ \(p\geq 2\) _is any even integer, it is also true that_

\[\partial_{t}^{\theta}w^{p}(t) \leq\partial_{t}^{\theta}w^{p}(t)+(p-1)w^{p}(t)\omega_{1-\theta}(t)\]
\[\leq pw^{p-1}(t)\partial_{t}^{\theta}w(t).\]

_If_ \(w\) _is a nonnegative function, then these bounds hold for any odd integer_ \(p\)_._

**(ii)**: _For any given positive numbers_ \(\theta_{1}\) _and_ \(\theta_{2}\)_, and any function_ \(k\in L_{1}(0,T)\)_, the following relations are fulfilled_

\[\omega_{\theta_{1}}*\omega_{\theta_{2}}=\omega_{\theta_{1}+\theta_{2}}(t),\;1*\omega_{\theta_{1}}=\omega_{1+\theta_{1}}(t),\;\omega_{\theta_{1}}(t)\geq CT^{\theta_{1}-1},\,\omega_{\theta_{1}}*k\leq C\omega_{\theta_{1}+1}\leq C\omega_{\theta_{1}},\]

_for any_ \(t\in[0,T]\)_. The positive constant_ \(C\) _depends only on_ \(T\)_,_ \(\theta_{1}\) _and_ \(\|k\|_{L_{1}(0,T)}\)_._

Our next assertion is similar to point (i) of Proposition 5.1, stated for the convolution with the kernel

\[\mathcal{N}_{\theta}(t)=\mathcal{N}(t;\theta_{1},\theta_{2})=\omega_{1-\theta_{1}}(t)-\omega_{1-\theta_{2}}(t). \tag{5.3}\]

**Corollary 5.2**.: _Let \(\theta^{*}\) and \(T^{*}\leq T\) be numbers such that the kernel \(\mathcal{N}_{\theta}(t)\) is positive for \(0<\theta_{2}<\theta_{1}\leq\theta^{*}\leq 1\) and \(t\in[0,T^{*}]\). Then for any given function \(w\in\mathcal{C}^{\theta_{3}}([0,T])\), \(\theta_{3}\in(\theta_{1},1)\), and any even integer \(p\geq 2\), the following inequalities are fulfilled_

\[\frac{d}{dt}(\mathcal{N}_{\theta}*w^{p})(t) \leq\frac{d}{dt}(\mathcal{N}_{\theta}*w^{p})(t)+(p-1)w^{p}(t)\mathcal{N}_{\theta}(t)\]
\[\leq pw^{p-1}(t)\frac{d}{dt}(\mathcal{N}_{\theta}*w)(t),\quad\forall t\in[0,T^{*}]. \tag{5.4}\]

_If additionally \(w\) is nonnegative, then these bounds hold for any odd integer \(p\)._

Proof.: It is worth noting that this claim is a simple consequence of the following relations:

\[\frac{d}{dt}(\mathcal{N}_{\theta}*w\,w_{1})(t) =w\frac{d}{dt}(\mathcal{N}_{\theta}*w_{1})(t)+w_{1}(t)\frac{d}{dt}(\mathcal{N}_{\theta}*w)(t)-w(t)w_{1}(t)\mathcal{N}_{\theta}(t)\]
\[+\int_{0}^{t}[w(t)-w(s)][w_{1}(t)-w_{1}(s)]\frac{d}{d(t-s)}\mathcal{N}_{\theta}(t-s)ds,\]
\[\frac{d\mathcal{N}_{\theta}}{dt}(t) \leq 0, \tag{5.5}\]

which hold for any \(w_{1}\in\mathcal{C}^{\theta_{3}}([0,T])\) and any \(t\in[0,T^{*}]\). Indeed, substituting

\[w_{1}(t)=\begin{cases}w&\text{if}\quad p=2,\\ w^{2}&\text{if}\quad p=3,\end{cases}\]

in the first equality in (5.5), we derive the relations:

\[\frac{d}{dt}(\mathcal{N}_{\theta}*w^{2})(t) =2w\frac{d}{dt}(\mathcal{N}_{\theta}*w)(t)-\mathcal{N}_{\theta}(t)w^{2}(t)+\int_{0}^{t}[w(s)-w(t)]^{2}\frac{d}{d(t-s)}\mathcal{N}_{\theta}(t-s)ds,\]
\[\frac{d}{dt}(\mathcal{N}_{\theta}*w^{3})(t) =w\Big{[}\frac{d}{dt}(\mathcal{N}_{\theta}*w^{2})(t)+w(t)\frac{d}{dt}(\mathcal{N}_{\theta}*w)(t)-\mathcal{N}_{\theta}(t)w^{2}(t)\Big{]}\]
\[+\int_{0}^{t}[w(s)-w(t)]^{2}[w(s)+w(t)]\frac{d}{d(t-s)}\mathcal{N}_{\theta}(t-s)ds.\]

Then, taking into account the positivity of the kernel \(\mathcal{N}_{\theta}(t)\) for \(\theta_{i}\) and \(t\) meeting the requirements of this corollary, and appealing to the second inequality in (5.5), we arrive at the desired estimates for \(p=2,3\) (in the latter case we also used the positivity of \(w\)).
Finally, keeping in mind the obtained inequalities and exploiting induction, we end up with (5.4) for \(p>3\). Thus, in order to complete the verification of Corollary 5.2, we are left to prove (5.5). As for the second inequality in (5.5), straightforward calculations provide

\[\frac{d\mathcal{N}_{\theta}}{dt}(t)=-\theta_{1}t^{-1}\mathcal{N}_{\theta}(t)-(\theta_{1}-\theta_{2})t^{-1}\omega_{1-\theta_{2}}(t)\leq 0\quad\text{for}\quad t\in[0,T^{*}].\]

Coming to the verification of the first equality in (5.5), we take advantage of the definition of the derivative and have

\[\frac{d}{dt}(\mathcal{N}_{\theta}*ww_{1})(t)=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\bigg{[}\int_{0}^{t+\varepsilon}\mathcal{N}_{\theta}(t+\varepsilon-s)w_{1}(s)w(s)ds-\int_{0}^{t}\mathcal{N}_{\theta}(t-s)w_{1}(s)w(s)ds\bigg{]}.\]

Then, exploiting the easily verified equality

\[w(s)w_{1}(s)=[w(s)-w(t)][w_{1}(s)-w_{1}(t)]-w(t)w_{1}(t)+w(t)w_{1}(s)+w(s)w_{1}(t),\]

we end up with the equality

\[\frac{d}{dt}(\mathcal{N}_{\theta}*w_{1}w)(t)= \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{[}\int_{0}^{t+\varepsilon}\mathcal{N}_{\theta}(t+\varepsilon-s)[w_{1}(s)-w_{1}(t)][w(s)-w(t)]ds\]
\[-\int_{0}^{t}\mathcal{N}_{\theta}(t-s)[w_{1}(s)-w_{1}(t)][w(s)-w(t)]ds\Big{]}\]
\[+w_{1}(t)\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{[}\int_{0}^{t+\varepsilon}\mathcal{N}_{\theta}(t+\varepsilon-s)w(s)ds-\int_{0}^{t}\mathcal{N}_{\theta}(t-s)w(s)ds\Big{]}\]
\[+w(t)\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{[}\int_{0}^{t+\varepsilon}\mathcal{N}_{\theta}(t+\varepsilon-s)w_{1}(s)ds-\int_{0}^{t}\mathcal{N}_{\theta}(t-s)w_{1}(s)ds\Big{]}\]
\[-w_{1}(t)w(t)\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{[}\int_{0}^{t+\varepsilon}\mathcal{N}_{\theta}(t+\varepsilon-s)ds-\int_{0}^{t}\mathcal{N}_{\theta}(t-s)ds\Big{]}.\]

Finally, taking into account the smoothness of the functions \(w\) and \(w_{1}\), we obtain the desired equality. This finishes the proof of Corollary 5.2.

**Corollary 5.3**.: _Let \(0<\theta_{2}<\theta_{1}<\theta\leq 1\) and \(w=w(t)\in\mathcal{C}([0,T])\) be a positive function, and let for any fixed \(T>0\)_

\[0<T_{1}=T_{1}(\theta_{1},\theta_{2})<\min\Big{\{}T,\Big{(}\frac{\theta_{1}\Gamma(1+\theta_{1}-\theta_{2})}{\theta_{2}}\Big{)}^{\frac{1}{\theta_{1}-\theta_{2}}}\Big{\}},\]
\[0<T_{2}=T_{2}(\theta_{1},\theta_{2},\theta)<\min\Big{\{}T,\Big{(}\frac{\theta_{1}\Gamma(1+\theta-\theta_{2})}{\theta_{2}\Gamma(1+\theta-\theta_{1})}\Big{)}^{\frac{1}{\theta_{1}-\theta_{2}}}\Big{\}}. \tag{5.6}\]

_Then the following inequalities hold:_

\[\theta_{1}I_{t}^{1}w(t)-\theta_{2}I_{t}^{1+\theta_{1}-\theta_{2}}w(t) \geq 0\quad\text{for each}\quad t\in[0,T_{1}],\]
\[\theta_{1}I_{t}^{1+\theta-\theta_{1}}w(t)-\theta_{2}I_{t}^{1+\theta-\theta_{2}}w(t) \geq 0\quad\text{for each}\quad t\in[0,T_{2}].\]

Proof.: It is apparent that the second estimate is proved with arguments similar to those for the first one. Thus, we restrict ourselves to the verification of the first inequality. It is worth noting that this bound follows from the definition of the fractional Riemann-Liouville integral and straightforward calculations.
Namely, appealing to (5.1), (5.6) and the assumptions on \(\theta_{i}\), we easily conclude that

\[\theta_{1}-\theta_{2}\omega_{1+\theta_{1}-\theta_{2}}(t)=\theta_{1}-\frac{\theta_{2}t^{\theta_{1}-\theta_{2}}}{\Gamma(1+\theta_{1}-\theta_{2})}\geq\theta_{1}-\frac{\theta_{2}T_{1}^{\theta_{1}-\theta_{2}}}{\Gamma(1+\theta_{1}-\theta_{2})}\geq 0.\]

Thus, collecting these inequalities with the positivity of \(w\), we end up with the desired estimate

\[\theta_{1}I_{t}^{1}w(t)-\theta_{2}I_{t}^{1+\theta_{1}-\theta_{2}}w(t)=\int_{0}^{t}[\theta_{1}-\theta_{2}\omega_{1+\theta_{1}-\theta_{2}}(\tau)]w(t-\tau)d\tau\geq 0,\quad t\in[0,T_{1}],\]

which completes the proof of this corollary.

Our next result deals with the fractional differentiation of a product, i.e. \(\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\). We remark that a similar result, under a stronger assumption on the function \(w_{2}\), is established in [22, Corollary 3.1].

**Proposition 5.4**.: _Let \(\theta\in(0,1)\), and \(w_{1}\in\mathcal{C}^{1}([0,T])\), \(w_{2}\in\mathcal{C}([0,T])\)._

**(i)**: _If_ \(\mathbf{D}_{t}^{\theta}w_{2}\) _belongs either to_ \(\mathcal{C}([0,T])\) _or to_ \(L_{2}(0,T)\)_, then the equality holds_

\[\mathbf{D}_{t}^{\theta}(w_{1}w_{2})=w_{1}(t)\mathbf{D}_{t}^{\theta}w_{2}(t)+w_{2}(0)\mathbf{D}_{t}^{\theta}w_{1}(t)+\frac{\theta}{\Gamma(1-\theta)}\mathfrak{I}_{\theta}(t)\]

_with_

\[\mathfrak{I}_{\theta}(t)=\mathfrak{I}_{\theta}(t;w_{1},w_{2})=\int\limits_{0}^{t}\frac{[w_{1}(t)-w_{1}(s)][w_{2}(s)-w_{2}(0)]}{(t-s)^{1+\theta}}ds.\]

_Besides, the estimates hold_

\[\|\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\|_{\mathcal{C}([0,T])} \leq C[\|\mathbf{D}_{t}^{\theta}w_{2}\|_{\mathcal{C}([0,T])}+\|w_{2}\|_{\mathcal{C}([0,T])}],\]
\[\|\mathfrak{I}_{\theta}(t)\|_{L_{2}(0,T)} \leq C\|w_{2}-w_{2}(0)\|_{L_{2}(0,T)},\]
\[\|\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\|_{L_{2}(0,T)} \leq C[\|\mathbf{D}_{t}^{\theta}w_{2}\|_{L_{2}(0,T)}+\|w_{2}-w_{2}(0)\|_{L_{2}(0,T)}],\]

_where the positive constant_ \(C\) _depends only on_ \(T\)_,_ \(\theta\) _and the norm of the function_ \(w_{1}\)_._

**(ii)**: _If_ \(\partial_{t}^{\theta}w_{2}\in\mathcal{C}([0,T])\)_, then for any_ \(\theta_{1}\geq\theta\) _and all_ \(t\in[0,T]\) _the equality holds_

\[I_{t}^{\theta_{1}}(w_{1}\partial_{t}^{\theta}w_{2})(t) =I_{t}^{\theta_{1}-\theta}(w_{1}w_{2})(t)-w_{2}(0)[I_{t}^{\theta_{1}-\theta}w_{1}(t)-I_{t}^{\theta_{1}}(w_{1}\omega_{1-\theta})(t)]\]
\[-\theta I_{t}^{1+\theta_{1}-\theta}(\mathcal{W}(w_{1})w_{2})(t)\]

_with_

\[\mathcal{W}(w_{1})=\mathcal{W}(w_{1};t,\tau)=\int\limits_{0}^{1}\frac{\partial w_{1}}{\partial z}(z)ds,\quad z=st+(1-s)\tau,\quad 0<\tau<t<T.\]

Proof.: As for the representation of \(\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\) stated in (i) of this claim, it follows from the definition of the Caputo fractional derivative and the smoothness of \(w_{1}\) and \(w_{2}\). Namely, we have

\[\mathbf{D}_{t}^{\theta}(w_{1}w_{2}) =w_{1}(t)\mathbf{D}_{t}^{\theta}w_{2}+w_{2}(0)\mathbf{D}_{t}^{\theta}w_{1}+w_{1}^{\prime}(t)I_{t}^{1-\theta}(w_{2}-w_{2}(0))(t)\]
\[-\frac{1}{\Gamma(1-\theta)}\int_{0}^{t}[w_{2}(s)-w_{2}(0)]\frac{\partial}{\partial t}\frac{[w_{1}(t)-w_{1}(s)]}{(t-s)^{\theta}}ds.\]

Performing the differentiation in the last integral, we arrive at the desired representation. Concerning the regularity of \(\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\) and \(\mathfrak{I}_{\theta}(t)\), it is a simple consequence of the representation of \(\mathbf{D}_{t}^{\theta}(w_{1}w_{2})\), of the properties of \(w_{1},w_{2}\), and of the Young inequality for convolutions.
Finally, point (ii) of this proposition follows from (5.2) and point (i) of this claim, if one takes into account [17, Proposition 2.2] and the semigroup property of the fractional Riemann-Liouville integral.

We now state and prove some inequalities that will be needed to obtain a priori estimates of solutions to (1.1)-(1.4) in Section 6.5. First, for each positive fixed \(T_{3}\) and \(T\), \(0<T_{3}<T\), we introduce the function

\[\xi=\xi(t)\in\mathcal{C}_{0}^{\infty}(\mathbb{R}_{+}),\quad\xi\in[0,1],\quad\xi=\begin{cases}1,&t\in[0,T_{3}/2],\\ 0,&t\geq 3T_{3}/4.\end{cases} \tag{5.7}\]

**Lemma 5.5**.: _Let \(T,T_{3}\) be arbitrarily fixed, \(T>T_{3}>0\), \(\theta\in(0,1)\) and \(\theta_{1}\in(0,\theta]\). Then for any \(w\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\Omega}_{T_{3}})\) and \(w_{1}\in\mathcal{C}^{1}([0,T])\) the following estimates hold:_

**(i)** \(\|\xi w\|_{\mathcal{C}([0,T],\mathcal{C}^{1}(\bar{\Omega}))}\leq C\|w\|_{\mathcal{C}([0,T_{3}],\mathcal{C}^{1}(\bar{\Omega}))};\)

**(ii)** \(\sum_{j=1}^{2}\left[\left\|\xi\frac{\partial^{j}w}{\partial x^{j}}\right\|_{\mathcal{C}^{\alpha,\frac{2+\alpha-j}{2}\theta}(\bar{\Omega}_{T})}+\left\|\mathcal{K}\ast\xi\frac{\partial^{j}w}{\partial x^{j}}\right\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T})}\right]\leq C\|w\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\Omega}_{T_{3}})};\)

**(iii)** \(\langle\xi w\rangle_{t,\Omega_{T}}^{(\theta/2)}\leq C[\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}+\|w\|_{\mathcal{C}(\Omega_{T_{3}})}];\)

**(iv)** \(\|\mathfrak{I}_{\theta_{1}}(t;\xi,w_{1}w)\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T})}\leq C\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}([0,T_{3}],\mathcal{C}^{\alpha}(\bar{\Omega}))}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}];\)

**(v)** \(\|\mathbf{D}_{t}^{\theta_{1}}(\xi ww_{1})\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T})}\leq C[1+\|w_{1}\|_{\mathcal{C}^{1}([0,T])}][\|\mathbf{D}_{t}^{\theta_{1}}w\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T_{3}})}+\|w\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T_{3}})}+\|w_{0}\|_{\mathcal{C}^{\alpha}(\bar{\Omega})}]\) _and_

\(\|\mathbf{D}_{t}^{\theta_{1}}(\xi ww_{1})-\xi\mathbf{D}_{t}^{\theta_{1}}(ww_{1})\|_{\mathcal{C}^{\alpha,\frac{\theta\alpha}{2}}(\bar{\Omega}_{T})}\leq C[1+\|w_{1}\|_{\mathcal{C}^{1}([0,T])}][\langle w\rangle_{t,\Omega_{T_{3}}}^{(\frac{\theta}{2})}+\|w\|_{\mathcal{C}([0,T_{3}],\mathcal{C}^{\alpha}(\bar{\Omega}))}+\|w_{0}\|_{\mathcal{C}^{\alpha}(\bar{\Omega})}],\) _where_ \(w_{0}=w(x,0);\)

**(vi)** \(\|\mathbf{D}_{t}^{\theta_{1}}(\xi w)\|_{L_{2}(0,T)}+\|\xi w\|_{L_{2}((0,T),W^{2,2}(\Omega))}\leq C[\|\mathbf{D}_{t}^{\theta_{1}}(w)\|_{L_{2}(0,T_{3})}+\|w\|_{L_{2}((0,T_{3}),W^{2,2}(\Omega))}].\)

_Here the positive constant \(C\) depends only on \(T,\theta,\theta_{1}\) and the norm of the function \(\xi\)._

Proof.: It is worth noting that points (v) and (vi) are simple consequences of Proposition 5.4 and the inequality in (iv) of this claim. Thus, to complete the proof of this lemma, we are left to verify the estimates in (i)-(iv). The definition of the function \(\xi(t)\) and the regularity of \(w(x,t)\) lead to the inequality

\[\|\xi w\|_{\mathcal{C}([0,T],\mathcal{C}^{2+\alpha}(\bar{\Omega}))}\leq C\|w\|_{\mathcal{C}([0,T_{3}],\mathcal{C}^{2+\alpha}(\bar{\Omega}))},\]

which in turn proves point (i) of this claim.
Besides, this bound tells us that the verification of the estimate in (ii) will immediately follow from the inequality

\[\sum_{j=1}^{2}\biggl{\langle}\xi\frac{\partial^{j}w}{\partial x^{j}}\biggr{\rangle}_{t,\Omega_{T}}^{(\frac{2+\alpha-j}{2}\theta)}\leq C\sum_{j=1}^{2}\biggl{[}\biggl{\langle}\frac{\partial^{j}w}{\partial x^{j}}\biggr{\rangle}_{t,\Omega_{T_{3}}}^{(\frac{2+\alpha-j}{2}\theta)}+\biggl{\|}\frac{\partial^{j}w}{\partial x^{j}}\biggr{\|}_{\mathcal{C}(\bar{\Omega}_{T_{3}})}\biggr{]}. \tag{5.8}\]

For simplicity, we first assume \(0<t_{1}<t_{2}<T\), and set \(\Delta t=t_{2}-t_{1}\). Then we discuss three options for the arrangement of \(t_{1}\) and \(t_{2}\):

\[\text{either}\quad 0<t_{1}<t_{2}\leq 3T_{3}/4,\]
\[\text{or}\quad 0<t_{1}\leq 3T_{3}/4<t_{2}\leq T,\]
\[\text{or}\quad 0<3T_{3}/4<t_{1}<t_{2}\leq T. \tag{5.9}\]

In the first case, we have

\[\bigg{|}\xi(t_{2})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{2})-\xi(t_{1})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{1})\bigg{|}\leq|\xi(t_{2})-\xi(t_{1})|\bigg{\|}\frac{\partial^{j}w}{\partial x^{j}}\bigg{\|}_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\xi(t_{1})\bigg{|}\frac{\partial^{j}w}{\partial x^{j}}(x,t_{2})-\frac{\partial^{j}w}{\partial x^{j}}(x,t_{1})\bigg{|}.\]

Collecting this estimate with the smoothness of the functions \(w\) and \(\xi\), we immediately obtain (5.8). Coming to the second case, i.e. \(t_{1}\leq\frac{3T_{3}}{4}<t_{2}\), the easily verified relations

\[\Delta t >\frac{3T_{3}}{4}-t_{1},\]
\[\xi(t_{2})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{2})-\xi(t_{1})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{1})=\xi\bigg{(}\frac{3T_{3}}{4}\bigg{)}\frac{\partial^{j}w}{\partial x^{j}}\bigg{(}x,\frac{3T_{3}}{4}\bigg{)}-\xi(t_{1})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{1}),\]

provide the desired bound (5.8). In the last case in (5.9), thanks to \(\xi(t)=0\) for \(t\geq\frac{3T_{3}}{4}\), we have

\[\xi(t_{2})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{2})-\xi(t_{1})\frac{\partial^{j}w}{\partial x^{j}}(x,t_{1})=0,\]

which means that (5.8) holds. As a result, gathering all the estimates, we complete the proof of (5.8) and, besides, obtain

\[\sum_{j=1}^{2}\bigg{\|}\xi\frac{\partial^{j}w}{\partial x^{j}}\bigg{\|}_{\mathcal{C}^{\alpha,\frac{2+\alpha-j}{2}\theta}(\bar{\Omega}_{T})}\leq C\|w\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\Omega}_{T_{3}})}.\]

Therefore, in order to complete the verification of the estimate in (ii), we first take advantage of the representation

\[\mathcal{K}\ast\xi\frac{\partial^{j}w}{\partial x^{j}}=\begin{cases}\int_{0}^{t}\mathcal{K}(t-s)\xi(s)\frac{\partial^{j}w}{\partial x^{j}}(x,s)ds,\qquad 0<t<\frac{3T_{3}}{4},\\ \\ \int_{0}^{\frac{3T_{3}}{4}}\mathcal{K}(t-s)\xi(s)\frac{\partial^{j}w}{\partial x^{j}}(x,s)ds,\qquad t\geq\frac{3T_{3}}{4},\end{cases}\]

and of [19, Lemma 4.1]; then, performing standard calculations, we conclude that

\[\bigg{\|}\mathcal{K}\ast\xi\frac{\partial^{j}w}{\partial x^{j}}\bigg{\|}_{\mathcal{C}^{\alpha,\frac{\alpha\theta}{2}}(\bar{\Omega}_{T})}\leq C\|\mathcal{K}\|_{L_{1}(0,T)}\bigg{\|}\frac{\partial^{j}w}{\partial x^{j}}\bigg{\|}_{\mathcal{C}^{\alpha,\frac{\alpha\theta}{2}}(\bar{\Omega}_{T_{3}})}.\]

Thus, the proof of the estimate in (ii) is finished. Concerning the inequalities in (iii), they are obtained with the arguments leading to (5.8). At this point we examine the bound in (iv). Here we restrict ourselves to the case \(\theta_{1}=\theta\); the other case is verified in a similar manner.
In light of the definition of \(\xi(t)\) (see (5.7)) and of \(\mathfrak{I}_{\theta}(t)\), we deduce

\[\mathfrak{I}_{\theta}(t) =\begin{cases}0,\qquad\quad t\leq T_{3}/2,\\ \int_{0}^{t}\frac{[\xi(t)-\xi(\tau)][w(x,\tau)w_{1}(\tau)-w(x,0)w_{1}(0)]d\tau}{(t-\tau)^{1+\theta}},\quad t\in(T_{3}/2,3T_{3}/4),\\ \int_{0}^{3T_{3}/4}\frac{[\xi(t)-\xi(\tau)][w(x,\tau)w_{1}(\tau)-w(x,0)w_{1}(0)]d\tau}{(t-\tau)^{1+\theta}},\quad t\geq 3T_{3}/4,\end{cases}\]
\[=\begin{cases}0,\qquad\quad t\leq T_{3}/2,\\ \int_{0}^{t}\frac{\mathcal{W}(\xi;t,\tau)[w(x,\tau)-w(x,0)]w_{1}(\tau)d\tau}{(t-\tau)^{\theta}}+\int_{0}^{t}\frac{\mathcal{W}(\xi;t,\tau)[w_{1}(\tau)-w_{1}(0)]w(x,0)d\tau}{(t-\tau)^{\theta}},\quad t\in(T_{3}/2,3T_{3}/4),\\ \int_{0}^{3T_{3}/4}\frac{\mathcal{W}(\xi;t,\tau)[w(x,\tau)-w(x,0)]w_{1}(\tau)d\tau}{(t-\tau)^{\theta}}+\int_{0}^{3T_{3}/4}\frac{\mathcal{W}(\xi;t,\tau)[w_{1}(\tau)-w_{1}(0)]w(x,0)d\tau}{(t-\tau)^{\theta}},\quad t\geq 3T_{3}/4,\end{cases} \tag{5.10}\]

where the function \(\mathcal{W}(\cdot)\) is defined in point (ii) of Proposition 5.4. Taking into account these equalities and performing standard technical calculations, we end up with the estimate

\[\|\mathfrak{I}_{\theta}(t)\|_{\mathcal{C}([0,T],\mathcal{C}^{\alpha}(\bar{\Omega}))}\leq C\|\xi\|_{\mathcal{C}^{1}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}([0,T_{3}],\mathcal{C}^{\alpha}(\bar{\Omega}))}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}]. \tag{5.11}\]

As for the Hölder regularity of \(\mathfrak{I}_{\theta}(t)\) with respect to time, we first assume that

\[\Delta t=t_{2}-t_{1}<T_{3}/8, \tag{5.12}\]

otherwise the bound of \(\langle\mathfrak{I}_{\theta}(t)\rangle_{t,\Omega_{T}}^{(\alpha\theta/2)}\) follows from (5.11). Next we consider again the options (5.9) for the location of \(t_{1},t_{2}\). If \(t_{1},t_{2}\in[0,3T_{3}/4]\), then, using (5.10), we can write

\[\mathfrak{I}_{\theta}(t_{2})-\mathfrak{I}_{\theta}(t_{1})\equiv\sum_{i=1}^{3}J_{i},\]

where we set

\[J_{1} =\int_{0}^{t_{1}}\frac{\mathcal{W}(\xi;t_{2},t_{2}-\tau)[w(x,t_{2}-\tau)w_{1}(t_{2}-\tau)-w(x,t_{1}-\tau)w_{1}(t_{1}-\tau)]d\tau}{\tau^{\theta}},\]
\[J_{2} =\int_{0}^{t_{1}}\frac{[\mathcal{W}(\xi;t_{2},t_{2}-\tau)-\mathcal{W}(\xi;t_{1},t_{1}-\tau)][w(x,t_{1}-\tau)w_{1}(t_{1}-\tau)-w(x,0)w_{1}(0)]d\tau}{\tau^{\theta}},\]
\[J_{3} =\int_{t_{1}}^{t_{2}}\frac{\mathcal{W}(\xi;t_{2},t_{2}-\tau)[w(x,t_{2}-\tau)w_{1}(t_{2}-\tau)-w(x,0)w_{1}(0)]d\tau}{\tau^{\theta}}.\]

According to the properties of \(\xi,w\) and \(w_{1}\), we immediately deduce

\[|J_{1}|+|J_{2}|\leq C\Delta t^{\theta/2}\|\xi\|_{\mathcal{C}^{2}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}].\]

As for \(J_{3}\), we have

\[|J_{3}| \leq C\|\xi\|_{\mathcal{C}^{1}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}]\int_{t_{1}}^{t_{2}}\tau^{-\theta}(t_{2}-\tau)^{\theta/2}[1+(t_{2}-\tau)^{1-\theta/2}]d\tau\]
\[\leq C\Delta t^{\theta/2}\|\xi\|_{\mathcal{C}^{2}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}].\]

It is worth noting that the positive constant \(C\) depends only on \(T\) and \(\theta\).
Collecting these estimates, we arrive at the bound

\[\langle\mathfrak{I}_{\theta}(t)\rangle_{t,\Omega_{T}}^{(\theta/2)}\leq C\|\xi\|_{\mathcal{C}^{2}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}]. \tag{5.13}\]

Concerning the case \(t_{1}\leq\frac{3T_{3}}{4}<t_{2}<T\), assumption (5.12) tells us that

\[t_{2}<\frac{7T_{3}}{8}<T_{3}.\]

Hence, recasting the arguments leading to (5.13), we obtain the desired estimate in the case of the second option in (5.9). Finally, analyzing the case \(\frac{3T_{3}}{4}\leq t_{1}<t_{2}\leq T\), two possibilities occur:

* either \(t_{1}-\frac{3T_{3}}{4}\leq\frac{T_{3}}{8}\);
* or \(t_{1}-\frac{3T_{3}}{4}>\frac{T_{3}}{8}\).

It is apparent that option (i) is treated with arguments similar to those leading to (5.13). As for case (ii), exploiting (5.10), we have

\[\mathfrak{I}_{\theta}(t_{2})-\mathfrak{I}_{\theta}(t_{1})\equiv i_{1}+i_{2},\]

where

\[i_{1} =\int_{0}^{3T_{3}/4}[w(x,\tau)w_{1}(\tau)-w(x,0)w_{1}(0)]\frac{\mathcal{W}(\xi;t_{2},\tau)-\mathcal{W}(\xi;t_{1},\tau)}{(t_{2}-\tau)^{\theta}}d\tau,\]
\[i_{2} =\int_{0}^{3T_{3}/4}[w(x,\tau)w_{1}(\tau)-w(x,0)w_{1}(0)]\mathcal{W}(\xi;t_{2},\tau)[(t_{2}-\tau)^{-\theta}-(t_{1}-\tau)^{-\theta}]d\tau.\]

According to the regularity of \(\xi(t)\), we arrive at

\[|i_{1}|\leq C\Delta t\|w_{1}\|_{\mathcal{C}^{1}([0,T])}\|\xi\|_{\mathcal{C}^{2}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}]\int_{0}^{3T_{3}/4}[\tau+\tau^{\theta/2}](t_{2}-\tau)^{-\theta}d\tau.\]

To estimate the term \(i_{2}\), we use the mean value theorem and the fact that \(t_{1}-\frac{3T_{3}}{4}>\frac{T_{3}}{8}\). Thus, we achieve

\[|i_{2}|\leq C\frac{\Delta t}{T_{3}^{2\theta}}\|\xi\|_{\mathcal{C}^{1}([0,T])}\|w_{1}\|_{\mathcal{C}^{1}([0,T])}[\|w\|_{\mathcal{C}(\bar{\Omega}_{T_{3}})}+\langle w\rangle_{t,\Omega_{T_{3}}}^{(\theta/2)}].\]

As a result, gathering this inequality with the estimate of \(i_{1}\), we end up with bound (5.13) in the third case in (5.9). Finally, (5.11) and (5.13) complete the proof of point (iv) of this lemma.

Next, we state and prove inequalities which generalize the bounds in [20, Lemma 4.2] and will be used later in this article.

**Lemma 5.6**.: _Let \(w_{2}=w_{2}(x,t)\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\Omega}_{T})\), \(\theta\in(0,1)\), and let \(w_{1}=w_{1}(t)\in\mathcal{C}^{1}([0,T])\) be a positive function.
We assume that_

\[\text{either}\quad w_{2}|_{\partial\Omega_{T}}=0\quad\text{or}\quad\frac{\partial w_{2}}{\partial x}|_{\partial\Omega_{T}}=0.\]

_Then for any \(0<\theta_{2}<\theta_{1}\leq\theta\) and any even integer \(p\geq 2\) the following inequalities hold:_

**(i)**

\[-pI_{t}^{\theta}\bigg{(}\int_{\Omega}\partial_{t}^{\theta_{1}}(w_{1}w_{2})\frac{\partial}{\partial x}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t)\]
\[\geq(p-1)I_{t}^{\theta}\bigg{(}w_{1}\omega_{1-\theta_{1}}\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)+I_{t}^{\theta-\theta_{1}}\bigg{(}w_{1}\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[-w_{1}^{p}(0)\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}(x,0)\bigg{)}^{p}dx[I_{t}^{\theta-\theta_{1}}(w_{1}^{1-p})(t)-I_{t}^{\theta}(w_{1}^{1-p}\omega_{1-\theta_{1}})(t)]\]
\[-\theta_{1}I_{t}^{1-\theta_{1}+\theta}\bigg{(}w_{1}^{p}\mathcal{W}(w_{1}^{1-p})\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\quad\text{for all}\quad t\in[0,T],\]

**(ii)**

\[-pI_{t}^{\theta}\bigg{(}\int_{\Omega}\frac{\partial}{\partial t}(\mathcal{N}_{\theta}*w_{1}w_{2})\frac{\partial}{\partial x}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t)\]
\[\geq(p-1)I_{t}^{\theta}\Big{(}w_{1}\mathcal{N}_{\theta}\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\Big{)}(t)+I_{t}^{\theta-\theta_{1}}\bigg{(}w_{1}\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[-w_{1}^{p}(0)\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}(x,0)\bigg{)}^{p}dx\,I_{t}^{\theta-\theta_{1}}\bigg{(}w_{1}^{1-p}+I_{t}^{\theta_{1}}(\omega_{1-\theta_{1}}w_{1}^{1-p})\bigg{)}(t)\]
\[-I_{t}^{\theta-\theta_{2}}\bigg{(}w_{1}\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)-\theta_{1}I_{t}^{1-\theta_{1}+\theta}\bigg{(}w_{1}^{p}\mathcal{W}(w_{1}^{1-p})\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[+\theta_{2}I_{t}^{1-\theta_{2}+\theta}\bigg{(}w_{1}^{p}\mathcal{W}(w_{1}^{1-p})\int_{\Omega}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\quad\text{for all}\quad t\in[0,T^{*}],\]

_where \(\mathcal{N}_{\theta}\) is given by (5.3) and \(\mathcal{W}(\cdot)\) is defined in (ii) of Proposition 5.4._

Proof.: First we consider the case of the homogeneous Dirichlet boundary condition. We preliminarily observe that the proof of the estimate in point (ii) is the same as that of the bound in (i) (where, instead of Proposition 5.1, one should use Corollary 5.2). For this reason, we are left to tackle the inequality in (i). To this end, similarly to the proof of [20, Lemma 4.2], we first construct a mollification of the function \(w_{2}\).
For any positive \(\mathfrak{d}<|b-a|\), we introduce a cut-off function \(\eta\in\mathcal{C}_{0}^{\infty}(\mathbb{R})\) taking values in \([0,1]\):

\[\begin{cases}\eta=1,\quad\text{if}\quad x\in(a-\frac{\mathfrak{d}}{4},b+\frac{\mathfrak{d}}{4}),\\ \eta=0,\quad\text{if}\quad x\in\mathbb{R}\backslash(a-\frac{\mathfrak{d}}{2},b+\frac{\mathfrak{d}}{2}),\end{cases}\]

and then define the odd extension \(W_{\mathfrak{d}}(x,t)\) of the function \(w_{2}(x,t)\) to the segment \((a-\mathfrak{d},b+\mathfrak{d})\) as

\[W_{\mathfrak{d}}(x,t)=\begin{cases}-w_{2}(2a-x,t),&\text{if}\quad x\in(a-\mathfrak{d},a),\\ w_{2}(x,t),&\text{if}\quad x\in[a,b],\\ -w_{2}(2b-x,t),&\text{if}\quad x\in(b,b+\mathfrak{d}).\end{cases}\]

Then we build the zero extension of \(W_{\mathfrak{d}}\) outside the segment \((a-\mathfrak{d},b+\mathfrak{d})\) as

\[W_{\eta}=W_{\eta}(x,t)=\eta(x)W_{\mathfrak{d}}(x,t).\]

It is easily verified that \(W_{\eta}\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\mathbb{R}}_{T})\), and

\[\|W_{\eta}\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\mathbb{R}}_{T})}\leq C\|w_{2}\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\theta}(\bar{\Omega}_{T})}.\]

Taking a mollifier \(J_{\varepsilon}(x)\) satisfying the properties:

\[J_{\varepsilon}(x)\in\mathcal{C}_{0}^{\infty}(\mathbb{R}),\quad J_{\varepsilon}(x)=0\quad\text{if}\quad|x|\geq\varepsilon,\quad\int_{-\infty}^{+\infty}J_{\varepsilon}(x)dx=1,\]

we define the mollification of the function \(W_{\eta}\) as

\[W_{\eta,\varepsilon}=\int_{\mathbb{R}}J_{\varepsilon}(x-y)W_{\eta}(y,t)dy.\]

Clearly,

\[W_{\eta,\varepsilon}\in\mathcal{C}([0,T],\mathcal{C}^{\infty}(\mathbb{R})),\quad\mathbf{D}_{t}^{\theta_{i}}W_{\eta,\varepsilon}\in\mathcal{C}([0,T],\mathcal{C}^{\infty}(\mathbb{R})),\,0<\theta_{i}\leq\theta,\,i=1,2,\]

and

\[W_{\eta,\varepsilon}=0\qquad\text{on}\quad\partial\Omega_{T}.\]

The last equality tells us that

\[\partial_{t}^{\theta_{2}}W_{\eta,\varepsilon}=\partial_{t}^{\theta_{1}}W_{\eta,\varepsilon}=\partial_{t}^{\theta}W_{\eta,\varepsilon}=0\qquad\text{for}\quad(x,t)\in\partial\Omega_{T}.\]

Following standard approximation arguments (see e.g. [1, Chapter 1]), we have the uniform convergences on \(\bar{\Omega}_{T}\) as \(\varepsilon\to 0\)

\[\frac{\partial^{i}W_{\eta,\varepsilon}}{\partial x^{i}} \to\frac{\partial^{i}w_{2}}{\partial x^{i}},\quad i=0,1,2,\quad\text{and}\]
\[\mathbf{D}_{t}^{\theta_{i}}W_{\eta,\varepsilon} \to\mathbf{D}_{t}^{\theta_{i}}w_{2},\quad 0<\theta_{2}<\theta_{1}\leq\theta<1,\]
\[\frac{\partial}{\partial x}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p-1} \to\frac{\partial}{\partial x}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p-1},\quad p\geq 2.\]

Moreover, exploiting [22, Corollary 3.1] and standard technical calculations, we arrive at

\[\partial_{t}^{\theta_{i}}(w_{1}W_{\eta,\varepsilon}) \to\partial_{t}^{\theta_{i}}(w_{1}w_{2}),\quad i=1,2,\]
\[\mathbf{D}_{t}^{\theta_{i}}(w_{1}W_{\eta,\varepsilon}) \to\mathbf{D}_{t}^{\theta_{i}}(w_{1}w_{2}).\]

Now, we begin to prove the first inequality of this lemma for \(W_{\eta,\varepsilon}\).
Namely, integration by parts together with Propositions 5.1 and 5.4 yields

\[-pI_{t}^{\theta}\bigg{(}\int_{\Omega}\partial_{t}^{\theta_{1}}(w_{1}W_{\eta,\varepsilon})\frac{\partial}{\partial x}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t)=pI_{t}^{\theta}\bigg{(}w_{1}^{1-p}\int_{\Omega}\partial_{t}^{\theta_{1}}\Big{(}\frac{\partial}{\partial x}(w_{1}W_{\eta,\varepsilon})\Big{)}\bigg{(}\frac{\partial}{\partial x}(w_{1}W_{\eta,\varepsilon})\bigg{)}^{p-1}dx\bigg{)}(t)\]
\[\geq I_{t}^{\theta}\bigg{(}w_{1}^{1-p}\int_{\Omega}\partial_{t}^{\theta_{1}}\bigg{(}\frac{\partial}{\partial x}(w_{1}W_{\eta,\varepsilon})\bigg{)}^{p}dx\bigg{)}(t)+(p-1)I_{t}^{\theta}\bigg{(}w_{1}\omega_{1-\theta_{1}}\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[=(p-1)I_{t}^{\theta}\bigg{(}w_{1}\omega_{1-\theta_{1}}\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)+I_{t}^{\theta-\theta_{1}}\bigg{(}w_{1}\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[-w_{1}^{p}(0)\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}(x,0)\bigg{)}^{p}dxI_{t}^{\theta-\theta_{1}}(w_{1}^{1-p})(t)-\theta_{1}I_{t}^{1-\theta_{1}+\theta}\bigg{(}\mathcal{W}(w_{1}^{1-p})w_{1}^{p}\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\]
\[+w_{1}^{p}(0)\int_{\Omega}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}(x,0)\bigg{)}^{p}dxI_{t}^{\theta}(w_{1}^{1-p}\omega_{1-\theta_{1}})(t)\equiv\mathcal{I}\bigg{(}t,w_{1},\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}.\]

Finally, taking into account the positivity of \(w_{1}\) and the evenness of \(p\) to control the last term on the right-hand side of the last inequality, we arrive at the desired bound for the function \(W_{\eta,\varepsilon}\). The conclusion can then be easily drawn from the uniform convergences

\[I_{t}^{\theta}\bigg{(}\int_{\Omega}\partial_{t}^{\theta_{1}}(w_{1}W_{\eta,\varepsilon})\frac{\partial}{\partial x}\bigg{(}\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t)\to I_{t}^{\theta}\bigg{(}\int_{\Omega}\partial_{t}^{\theta_{1}}(w_{1}w_{2})\frac{\partial}{\partial x}\bigg{(}\frac{\partial w_{2}}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t),\]

and

\[\mathcal{I}\bigg{(}t,w_{1},\frac{\partial W_{\eta,\varepsilon}}{\partial x}\bigg{)}\rightarrow\mathcal{I}\bigg{(}t,w_{1},\frac{\partial w_{2}}{\partial x}\bigg{)}.\]

The case of \(\frac{\partial w_{2}}{\partial x}=0\) on \(\partial\Omega_{T}\) is similar and left to the reader. This completes the proof of this lemma.

For the reader's convenience, we now recall the global classical solvability of the linear version of problem (1.1)-(1.4). This result, stated as a lemma, is proved in our previous work [37, Theorem 4.1, Remark 4.4], and will be a key point in our analysis in Sections 6-7.

**Lemma 5.7**.: _Let \(\partial\Omega\in\mathcal{C}^{2+\alpha}\), \(f(u)\equiv 0\), and let \(\nu\in(0,1]\), while \(\nu_{i},\mu_{j}\) meet requirement **h1**. Then, for any fixed \(T>0\), under assumptions **h2**-**h4**, **h5(i)** and **h7**, the conclusions of Theorem 4.1 hold.
Besides, this solution fulfills the estimate_

\[\|u\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}\]
\[\leq C[\|u_{0}\|_{\mathcal{C}^{2+\alpha}(\bar{\Omega})}+\|g\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}+\|\psi\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\partial\Omega_{T})}]\]

_within the **DBC** (1.2), or_

\[\|u\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}\]
\[\leq C[\|u_{0}\|_{\mathcal{C}^{2+\alpha}(\bar{\Omega})}+\|g\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}+\|\psi_{1}\|_{\mathcal{C}^{1+\alpha,\frac{1+\alpha}{2}\nu}(\partial\Omega_{T})}]\]

_within the **NBC** (1.3). The generic constant \(C\) is independent of the right-hand sides of (1.1)-(1.4)._

We conclude this preliminary section with two inequalities, valid for every \(v\in W^{1,2}(\Omega)\), that will play a key role in the proof of Theorem 4.1 in Subsection 6.2:

\[\|v\|_{L_{\infty}(\Omega)}\leq C\|v\|_{W^{1,2}(\Omega)}^{1/3}\|v\|_{L_{1}(\Omega)}^{2/3},\]

and

\[\|v\|_{L_{\infty}(\Omega)}\leq\varepsilon\|v\|_{W^{1,2}(\Omega)}+C\varepsilon^{-1/2}\|v\|_{L_{1}(\Omega)}. \tag{5.14}\]

It is worth noting that the first inequality is the bound (2.19) in [26], while estimate (5.14) is a simple consequence of the first one and the Young inequality.

## 6. A Priori Estimates

In this section, we provide a priori estimates for the classical solutions to the following family of problems depending on the parameter \(\lambda\in[0,1]\):

\[\begin{cases}\mathbf{D}_{t}u-\mathcal{L}_{1}u-\mathcal{K}*\mathcal{L}_{2}u+\lambda f(u)=g(x,t)\quad\text{in}\quad\Omega_{T},\\ u(x,0)=u_{0}(x)\qquad\text{in}\qquad\bar{\Omega},\\ u(x,t)=0\qquad\text{on}\qquad\partial\Omega_{T}.\end{cases} \tag{6.1}\]

These estimates are the crucial point in the proof of Theorems 4.1 and 4.4.

**Lemma 6.1**.: _Let the assumptions of Theorem 4.1 hold with \(\psi=\psi_{1}\equiv 0\), and let \(u\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})\) be a classical solution to (6.1). Then for any \(\lambda\in[0,1]\) the following inequalities are fulfilled:_

\[\|u\|_{\mathcal{C}([0,T],\mathcal{C}^{1}(\bar{\Omega}))}+\langle u\rangle_{t,\Omega_{T}}^{(\nu/2)}\leq C(1+\|u_{0}\|_{W^{2,2}(\Omega)}+[\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}]^{\frac{1}{2}}), \tag{6.2}\]

\[\|u\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{\mathcal{C}^{2+\alpha}(\bar{\Omega})}+\|g\|_{\mathcal{C}^{\alpha,\frac{\alpha}{2}\nu}(\bar{\Omega}_{T})}], \tag{6.3}\]

\[\|u\|_{L_{2}((0,T),W^{2,2}(\Omega))}+\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T})}\leq C[1+\|u_{0}\|_{W^{2,2}(\Omega)}+[\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}]^{\frac{1}{2}}]. \tag{6.4}\]
_Here the positive constant \(C\) is independent of \(\lambda\) and depends only on \(\nu,\nu_{i},\mu_{j}\), \(T,L,L_{i}\), \(r\) and the corresponding norms of the coefficients \(a_{i},b_{i}\), \(\varrho_{0}\), \(\varrho_{i}\), \(\gamma_{j}\) and of the kernel \(\mathcal{K}\)._

**Remark 6.2**.: Actually, our arguments in Section 6.1 tell us that the term \(\|u\|_{\mathcal{C}(\bar{\Omega}_{T})}\) can be evaluated via weaker norms:

\[\|u\|_{\mathcal{C}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{W^{1,2}(\Omega)}+[\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}]^{\frac{1}{2}}].\]

First, we notice that estimate (6.3) is verified with the standard Schauder approach, by means of Lemma 5.7 and bound (6.2). Thus, to prove Lemma 6.1, we are left to produce inequalities (6.2) and (6.4). We preliminarily observe that the verification of these estimates in the absence of the terms \(\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u)\), \(j=1,2,...,N\) (i.e. \(N=0\)), is simpler and repeats, with minor changes, the main steps of the arguments for the case \(N\geq 1\). Hence, here we focus on the case of the presence of at least one fractional derivative \(\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u)\) in the operator \(\mathbf{D}_{t}u\). To this end, appealing to assumption **h3**, we rewrite \(\mathbf{D}_{t}u\) in a form more suitable for our analysis:

\[\mathbf{D}_{t}u= \ {}_{1}\mathbf{D}_{t}u+\ {}_{2}\mathbf{D}_{t}u,\quad\ {}_{1}\mathbf{D}_{t}u=\mathbf{D}_{t}^{\nu}(\varrho u)+\sum_{i=1}^{M}\mathbf{D}_{t}^{\nu_{i}}(\varrho_{i}u),\]
\[{}_{2}\mathbf{D}_{t}u= \sum_{j=1}^{N}[\mathbf{D}_{t}^{\nu}(\gamma_{j}u)-\mathbf{D}_{t}^{\mu_{j}}(\gamma_{j}u)]=\sum_{j=1}^{N}\frac{\partial}{\partial t}\bigg{(}\mathcal{N}(t;\nu,\mu_{j})*(\gamma_{j}u-\gamma_{j}(0)u_{0})\bigg{)}\]

(see (5.3) for the definition of the kernel \(\mathcal{N}(t;\nu,\mu_{j})\)). Thanks to the positivity of the kernel \(\mathcal{N}(t;\nu,\mu_{j})\) for \(t\in[0,T^{*}]\) (see **h1**), we first prove estimates (6.2) and (6.4) for \(t\in[0,T_{0}]\), where

\[T_{0}<\min\bigg{\{}T^{*},\min_{j\in\{1,2,...,N\}}\bigg{(}\frac{\nu\Gamma(1+\nu-\mu_{j})}{\mu_{j}}\bigg{)}^{\frac{1}{\nu-\mu_{j}}}\bigg{\}}. \tag{6.5}\]

After that, if \(T>T_{0}\), we discuss the extension of these bounds to the interval \((T_{0},T]\). It is worth noting that this step is absent in the case of \(N=0\), since in that case estimates (6.2) and (6.4) are proved immediately on the entire time interval.

To verify (6.2) and (6.4) for \(t\in[0,T_{0}]\), we follow a strategy consisting of four main steps. In the first step, we estimate the function \(u(x,t)\) in the class \(\mathcal{C}([0,T_{0}],W^{1,2}(\Omega))\cap L_{2}((0,T_{0}),W^{2,2}(\Omega))\). Then the Sobolev embedding theorem (see, e.g., [1, Subsection 5.4]) allows us to readily obtain the bound of \(\|u\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\), exploiting only the estimate on \(\|u\|_{\mathcal{C}([0,T_{0}],W^{1,2}(\Omega))}\). In the second step, we evaluate the term \(\|\frac{\partial u}{\partial x}\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\) via an integral iteration technique adapted to the case of multi-term fractional derivatives. After that, to complete the proof of (6.2), appealing to the estimate of \(\|u\|_{\mathcal{C}([0,T_{0}],\mathcal{C}^{1}(\bar{\Omega}))}\), we arrive at the bound of the Hölder seminorm of \(u\) with respect to time.
Finally, taking into account (6.2), we achieve estimate (6.4) via the evaluation of \(\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})}\), \(\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}\) and \(\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}\).

Note that assumption **h3** provides the existence of a constant \(C_{0}\) such that

\[\sum_{i=0}^{2}\bigg{[}\sup_{\bar{\Omega}_{T}}|a_{i}(x,t)|+\sup_{\bar{\Omega}_{T}}|b_{i}(x,t)|\bigg{]}+\sup_{\bar{\Omega}_{T}}\bigg{|}\frac{\partial a_{2}}{\partial x}(x,t)\bigg{|}+\sup_{\bar{\Omega}_{T}}\bigg{|}\frac{\partial b_{2}}{\partial x}(x,t)\bigg{|}\leq C_{0}. \tag{6.6}\]

### Estimate of \(\|u\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\)

We begin by evaluating the norm of \(u\) in the space \(\mathcal{C}([0,T_{0}],L_{2}(\Omega))\). To this end, we multiply the equation in (6.1) by \(u(x,\tau)\), and then we integrate over \(\Omega\) and compute the fractional integral \(I_{t}^{\nu}\). Thus, we have

\[\sum_{j=1}^{5}\mathcal{R}_{j}(t)=0, \tag{6.7}\]

where we put

\[\mathcal{R}_{1}(t)=I_{t}^{\nu}\bigg{(}\int_{\Omega}{}_{1}\mathbf{D}_{\tau}u\,u\,dx\bigg{)}(t),\quad\mathcal{R}_{2}(t)=I_{t}^{\nu}\bigg{(}\int_{\Omega}{}_{2}\mathbf{D}_{\tau}u\,u\,dx\bigg{)}(t),\quad\mathcal{R}_{3}(t)=-I_{t}^{\nu}\bigg{(}\int_{\Omega}\mathcal{L}_{1}u\,u\,dx\bigg{)}(t),\]
\[\mathcal{R}_{4}(t)=-I_{t}^{\nu}\bigg{(}\int_{\Omega}(\mathcal{K}*\mathcal{L}_{2}u)u\,dx\bigg{)}(t),\quad\mathcal{R}_{5}(t)=I_{t}^{\nu}\bigg{(}\int_{\Omega}[\lambda f(u)-g]u\,dx\bigg{)}(t).\]

At this point, we estimate each term \(\mathcal{R}_{j}\) separately.

\(\bullet\) By Propositions 5.1 and 5.4 and assumptions **h2** and **h3**,

\[\mathcal{R}_{1}(t)\geq\frac{1}{2}I_{t}^{\nu}\Big{(}\varrho^{-1}\int_{\Omega}\partial_{t}^{\nu}(\varrho u)^{2}dx\Big{)}(t)+\frac{1}{2}\sum_{i=1}^{M}I_{t}^{\nu}\Big{(}\varrho_{i}^{-1}\int_{\Omega}\partial_{t}^{\nu_{i}}(\varrho_{i}u)^{2}dx\Big{)}(t)-\frac{1}{2}\int_{\Omega}u_{0}^{2}dx\Big{[}\varrho^{2}(0)I_{t}^{\nu}(\varrho^{-1}\omega_{1-\nu})+\sum_{i=1}^{M}\varrho_{i}^{2}(0)I_{t}^{\nu}(\varrho_{i}^{-1}\omega_{1-\nu_{i}})\Big{]}\]
\[\geq\frac{\delta_{1}}{2}\int_{\Omega}u^{2}(x,t)dx-\int_{\Omega}u_{0}^{2}dx\Big{[}2\varrho(0)+\sum_{i=1}^{M}\varrho_{i}(0)\omega_{1+\nu-\nu_{i}}(t)\Big{]}.\]

\(\bullet\) As for the term \(\mathcal{R}_{2}(t)\), we recast the arguments leading to the bound of \(\mathcal{R}_{1}(t)\). Thus, exploiting Corollary 5.2, Proposition 5.4, and conditions **h2**, **h3**, we immediately arrive at

\[\mathcal{R}_{2}(t)\geq\frac{1}{2}\sum_{j=1}^{N}I_{t}^{\nu}\bigg{(}\gamma_{j}^{-1}\int_{\Omega}\frac{\partial}{\partial\tau}(\mathcal{N}(\tau;\nu,\mu_{j})*(\gamma_{j}u)^{2})dx\bigg{)}(t)-\frac{1}{2}\int_{\Omega}u_{0}^{2}dx\sum_{j=1}^{N}I_{t}^{\nu}\bigg{(}\mathcal{N}(\tau;\nu,\mu_{j})\frac{\gamma_{j}^{2}(0)}{\gamma_{j}(\tau)}\bigg{)}(t)\]
\[\geq\frac{1}{2}\sum_{j=1}^{N}\gamma_{j}(t)\int_{\Omega}u^{2}(x,t)dx-\frac{1}{2}\sum_{j=1}^{N}I_{t}^{\nu-\mu_{j}}\bigg{(}\gamma_{j}\int_{\Omega}u^{2}dx\bigg{)}(t)-\frac{1}{2}\sum_{j=1}^{N}\gamma_{j}^{2}(0)\gamma_{j}^{-1}(t)\int_{\Omega}u_{0}^{2}dx\]
\[+\frac{1}{2}\sum_{j=1}^{N}\bigg{[}I_{t}^{1}\bigg{(}-\nu\mathcal{W}(\gamma_{j}^{-1})\int_{\Omega}(\gamma_{j}u)^{2}dx\bigg{)}(t)-I_{t}^{1+\nu-\mu_{j}}\bigg{(}-\mu_{j}\mathcal{W}(\gamma_{j}^{-1})\int_{\Omega}(\gamma_{j}u)^{2}dx\bigg{)}(t)\bigg{]}.\]

After that, Corollary 5.3 with \(T_{1}=T_{0}\) (see restriction (6.5)) and assumption **h3** ensure the positivity of the last sum on the right-hand side of this inequality.
Thus, we have \[\mathcal{R}_{2}(t) \geq\frac{N}{2}\delta_{3}\int_{\Omega}u^{2}(x,t)dx-\frac{1}{2}\sum_{j= 1}^{N}\gamma_{j}(T)I_{t}^{\nu-\mu_{j}}\bigg{(}\int_{\Omega}u^{2}dx\bigg{)}(t)- \frac{1}{2}\sum_{j=1}^{N}\gamma_{j}(0)\|u_{0}\|_{L_{2}(\Omega)}^{2}.\] \(\bullet\) Integrating by parts and keeping in mind the homogeneous Dirichlet boundary condition and according to assumptions **h2-h3** and (6.6), we deduce \[\mathcal{R}_{3}(t)\geq\frac{3\delta_{0}}{4}I_{t}^{\nu}\bigg{(}\int_{\Omega} \bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx\bigg{)}(t)-C_{1}I_{t}^{\nu} \bigg{(}\int_{\Omega}u^{2}dx\bigg{)},\] where \[C_{1}=C_{0}(1+4C_{0}/\delta_{0}).\] It is worth noting that, we used the Cauchy inequality to evaluate the term \(u\frac{\partial u}{\partial x}\). \(\bullet\) Coming to the term \(\mathcal{R}_{4}(t)\), we integrate by parts and take advantage of the Cauchy and the Poincare inequalities and requirements **h2-h3**, (6.6). In summary, we obtain \[\mathcal{R}_{4}(t)\geq-\frac{\delta_{0}}{4}I_{t}^{\nu}\bigg{(}\int_{\Omega} \bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx\bigg{)}(t)-CI_{t}^{\nu} \bigg{(}\mathcal{K}*\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)} ^{2}dx\bigg{)}(t)-CI_{t}^{\nu}\bigg{(}\int_{\Omega}u^{2}dx\bigg{)}(t),\] where the positive constant \(C\) is independent of \(\lambda\) and \(T_{0}\). \(\bullet\) Exploiting assumption **h6 (i)** with the Cauchy inequality and point (ii) of Proposition 5.1, we have \[\mathcal{R}_{5}(t)\geq-L|\Omega|\omega_{1+\nu}(t)-(L+2)I_{t}^{\nu}\bigg{(}\int _{\Omega}u^{2}dx\bigg{)}(t)-I_{t}^{\nu}\bigg{(}\int_{\Omega}g^{2}dx\bigg{)}(t).\] Finally, collecting all estimates of \(\mathcal{R}_{j}(t)\), we end up with \[\int_{\Omega}u^{2}(x,t)dx+I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(} \frac{\partial u}{\partial x}\bigg{)}^{2}dx\bigg{)}(t) \leq C\Big{\{}\big{(}I_{t}^{\nu}+\sum_{i=1}^{N}I_{t}^{\nu-\mu_{i} }\big{)}\Big{(}\int_{\Omega}u^{2}dx\Big{)}(t)+I_{t}^{\nu}\Big{(}\mathcal{K}* \int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx\Big{)}(t)\] \[+1+\|u_{0}\|_{L_{2}(\Omega)}+I_{t}^{\nu}\bigg{(}\int_{\Omega}g^{ 2}dx\bigg{)}(t)\Big{\}}.\] Appealing to associative properties of a convolution to handle the second term in the right-hand side of this estimate, we use the Gronwall-type inequality (4.3) in [21] and arrive at the desired bound \[\|u\|^{2}_{\mathcal{C}([0,T_{0}],L_{2}(\Omega))}+\sup_{t\in[0,T_{0}]}I_{t}^{ \nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx \bigg{)}(t)\leq C[1+\|u_{0}\|_{L_{2}(\Omega)}+\sup_{t\in[0,T]}I_{t}^{\nu}\|g\| ^{2}_{L_{2}(\Omega)}] \tag{6.8}\] with the constant \(C\) being independent of \(\lambda\) and \(T_{0}\). Moreover, inequality (6.8) provides the estimate \[\bigg{\|}\frac{\partial u}{\partial x}\bigg{\|}_{L_{2}(\Omega_{T_{0}})}\leq C [1+\|u_{0}\|^{2}_{L_{2}(\Omega)}+\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|^{2}_{L_{2}( \Omega)}]. \tag{6.9}\] In order to complete the estimate of \(\|u\|_{\mathcal{C}(\Omega_{T_{0}})}\), we need a similar bound for the derivative \(\frac{\partial u}{\partial x}\). To this end, we multiply the equation in (6.1) by \(\frac{\partial^{2}u}{\partial x^{2}}(x,\tau)\) and then we integrate over \(\Omega\) and compute the fractional integral \(I_{t}^{\nu}\). 
Taking into account **h2-h4** and **h6 (i)** and applying Lemma 5.6 with \(p=2\) and (6.9), we obtain

\[\frac{1}{2}[\delta_{1}+N\delta_{3}]\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx+\sum_{j=1}^{N}(\nu I_{t}^{1}-\mu_{j}I_{t}^{1+\nu-\mu_{j}})\bigg{(}\gamma_{j}^{2}(-\mathcal{W}(\gamma_{j}^{-1}))\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx\bigg{)}(t)+\frac{\delta_{0}}{2}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}\bigg{)}^{2}dx\bigg{)}(t)\]
\[\leq\frac{1}{2}\sum_{j=1}^{N}\gamma_{j}(T)I_{t}^{\nu-\mu_{j}}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx\bigg{)}(t)+\frac{3\|\mathcal{K}\|_{L_{1}(0,T)}C_{0}^{2}}{\delta_{0}}|\mathcal{K}|*I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}\bigg{)}^{2}dx\bigg{)}+C[1+\|u_{0}\|_{W^{1,2}(\Omega)}^{2}+\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}]. \tag{6.10}\]

At last, using Corollary 5.3 with \(T_{1}=T_{0}\) (see (6.5)) to handle the second term on the left-hand side of this inequality, and then applying the Gronwall-type inequality (4.3) in [21], we conclude that

\[\sup_{t\in[0,T_{0}]}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{2}dx+\sup_{t\in[0,T_{0}]}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}\bigg{)}^{2}dx\bigg{)}\leq C[1+\|u_{0}\|_{W^{1,2}(\Omega)}^{2}+\sup_{t\in[0,T]}I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}], \tag{6.11}\]

where the positive constant \(C\) is independent of \(\lambda\) and \(T_{0}\). Collecting this estimate with (6.8) and (6.9) and applying the Sobolev embedding theorem (as noted above), we arrive at the desired estimate

\[\|u\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}+\|u\|_{L_{2}((0,T_{0}),W^{2,2}(\Omega))}+\|u\|_{\mathcal{C}([0,T_{0}],W^{1,2}(\Omega))}\leq C[1+\|u_{0}\|_{W^{1,2}(\Omega)}+\sqrt{\sup_{t\in[0,T]}I^{\nu}_{t}\|g\|^{2}_{L_{2}(\Omega)}}]\equiv C\mathcal{F}(u_{0},g) \tag{6.12}\]

with the positive constant \(C\) being independent of \(\lambda,T_{0}\).

Obviously, the last estimate allows us to evaluate the term \(\|\mathcal{K}*\mathcal{L}_{2}u\|_{L_{2}(\Omega_{T_{0}})}\). Indeed, the Young inequality for convolutions (see, e.g. [9]) provides

\[\|\mathcal{K}*\mathcal{L}_{2}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\|\mathcal{K}\|_{L_{1}(0,T)}\|\mathcal{L}_{2}u\|_{L_{2}(\Omega_{T_{0}})}.\]

To manage the term \(\|\mathcal{L}_{2}u\|_{L_{2}(\Omega_{T_{0}})}\), we apply (6.12) and exploit the smoothness of the coefficients \(b_{j}\), \(j=0,1,2\). Hence, we have

\[\|\mathcal{K}*\mathcal{L}_{2}u\|^{2}_{L_{2}(\Omega_{T_{0}})}\leq C\|u\|^{2}_{L_{2}((0,T_{0}),W^{2,2}(\Omega))}\leq C[1+\|u_{0}\|^{2}_{W^{1,2}(\Omega)}+\sup_{t\in[0,T]}I^{\nu}_{t}\|g\|^{2}_{L_{2}(\Omega)}], \tag{6.13}\]

where the positive constant \(C\) is independent of \(\lambda\) and \(T_{0}\).

**Remark 6.3**.: The treatment of the term \(I^{\nu}_{t}(\int_{\Omega}f(u)u_{xx}dx)(t)\) in (6.10) in the case of (3.1) differs from the case of \(f(u)\) satisfying (3.2). Indeed, if (3.2) holds, we first rewrite the term \(I^{\nu}_{t}(\int_{\Omega}f(u)u_{xx}dx)(t)\) in the form

\[I^{\nu}_{t}\bigg{(}\int_{\Omega}f(u)u_{xx}dx\bigg{)}(t)=I^{\nu}_{t}\bigg{(}\int_{\Omega}\bar{f}(u)u_{xx}dx\bigg{)}(t)+I^{\nu}_{t}\bigg{(}\int_{\Omega}[f(0)-L_{4}u]u_{xx}dx\bigg{)}(t)\]

with

\[\bar{f}(u)=f(u)-f(0)+L_{4}u.\]

It is apparent that the second term in this representation is controlled by the arguments leading to the estimate of \(\mathcal{R}_{5}\).
Coming to the first term, we note that the function \(\bar{f}(u)\) meets the first three requirements in (3.2) and, besides,

\[\bar{f}(u)=0\quad\text{on}\quad\partial\Omega_{T},\quad\bar{f}^{\prime}(u)\geq 0.\]

Thus, taking these relations into account and integrating by parts, we arrive at

\[I^{\nu}_{t}\bigg{(}\int_{\Omega}\bar{f}(u)u_{xx}dx\bigg{)}(t)=I^{\nu}_{t}\bigg{(}\int_{\Omega}\bar{f}^{\prime}(u)(u_{x})^{2}dx\bigg{)}(t)\geq 0,\]

which in turn provides (6.10).

### The bound of \(\|\frac{\partial u}{\partial x}\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\)

In this subsection we aim to prove the bound

\[\bigg{\|}\frac{\partial u}{\partial x}\bigg{\|}_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\leq\mathfrak{C}[1+\|u_{0}\|_{W^{2,2}(\Omega)}+(\sup_{t\in[0,T]}I^{\nu}_{t}\|g\|^{2}_{L_{2}(\Omega)})^{1/2}] \tag{6.14}\]

with the positive constant \(\mathfrak{C}\) being independent of \(\lambda\) and \(T_{0}\) and depending only on \(T,\nu,\nu_{i},\mu_{j},L\), \(\|\mathcal{K}\|_{L_{1}(0,T)}\), and the corresponding norms of the coefficients.

For simplicity of presentation, we assume that

\[\bigg{\|}\frac{\partial u}{\partial x}\bigg{\|}_{\mathcal{C}(\bar{\Omega}_{T_{0}})}\geq\frac{\mathfrak{C}}{2}\mathcal{F}(u_{0},g), \tag{6.15}\]

where \(\mathcal{F}(u_{0},g)\) is defined in (6.12); otherwise we immediately arrive at (6.14).

To verify (6.14) in the case of (6.15), we reason similarly to the case analyzed in Subsection 6.1. Thus, we first multiply the equation in (6.1) by \(p\frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}\), then integrate over \(\Omega\) and compute the fractional integral \(I_{t}^{\nu}\):

\[\varrho(0)\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p}dx+\frac{3\delta_{0}}{4}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-2}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}\bigg{)}^{2}dx\bigg{)}(t)\]
\[\leq C\bigg{\{}p(p-1)[1+\|u\|_{\mathcal{C}(\bar{\Omega}_{T_{0}})}]^{2}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-2}dx\bigg{)}(t)+p(p-1)I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)+\int_{\Omega}\bigg{(}\frac{\partial u_{0}}{\partial x}\bigg{)}^{p}dx\]
\[+(p-1)N\Gamma(1+\nu)\sup_{j\in\{1,2,\ldots,N\}}\|\gamma_{j}\|_{\mathcal{C}^{1}([0,T])}T^{1-\nu}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\bigg{\}}+I_{t}^{\nu}\bigg{(}\int_{\Omega}(\mathcal{K}*\mathcal{L}_{2}u)p\frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}dx\bigg{)}(t)\]

with \(C\) being independent of \(\lambda\) and \(T_{0}\). Here we used Corollary 5.3, Lemma 5.6 and assumptions **h1-h3** to manage the term \(I_{t}^{\nu}\bigg{(}\int_{\Omega}\mathbf{D}_{t}u\,p\frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}dx\bigg{)}\), while to handle the term \(I_{t}^{\nu}\bigg{(}\int_{\Omega}\mathcal{L}_{1}u\,p\frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}dx\bigg{)}\) we appealed to **h2-h3** and the Young inequality. Finally, we take advantage of (3.1) and again the Young inequality to treat \(I_{t}^{\nu}\bigg{(}\int_{\Omega}[\lambda f(u)-g]p\frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}dx\bigg{)}\).
After that, keeping in mind restriction (6.15) and inequality (6.12), we conclude that \[\varrho(0)\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x} \bigg{)}^{p}dx+\frac{3\delta_{0}}{4}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(} \frac{\partial u}{\partial x}\bigg{)}^{p-2}\bigg{(}\frac{\partial^{2}u}{ \partial x^{2}}\bigg{)}^{2}dx\bigg{)}(t)\] \[\leq C\bigg{\{}p(p-1)I_{t}^{\nu}\bigg{(}\sup_{\Omega}\bigg{|} \frac{\partial u}{\partial x}\bigg{|}^{p}\bigg{)}(t)+I_{t}^{\nu-\mu_{N}}\bigg{(} \int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)\] \[+\int_{\Omega}\bigg{(}\frac{\partial u_{0}}{\partial x}\bigg{)}^{ p}dx\bigg{\}}+I_{t}^{\nu}\bigg{(}\int_{\Omega}(\mathcal{K}*\mathcal{L}_{2}u)p \frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p -1}dx\bigg{)}(t), \tag{6.16}\] where the positive constant \(C\) is independent of \(T_{0},\lambda,p\) and the corresponding norms of \(u_{0},g\). Now we are left to evaluate the last term in the right-hand side of (6.16). To this end, applying the Young inequality and restriction (6.6) provides the estimate \[\int_{\Omega}(\mathcal{K}*\mathcal{L}_{2}u)p\frac{\partial}{ \partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1}dx \leq C_{0}\frac{2}{\varepsilon_{1}}p(p-1)\underset{\Omega}{\sup} \bigg{|}\frac{\partial u}{\partial x}(x,t)\bigg{|}^{p-2}\bigg{(}|\mathcal{K}|* \|u(\cdot,t)\|_{W^{2,2}(\Omega)}^{2}\bigg{)}(t)\] \[+C_{0}p(p-1)\varepsilon_{1}(|\mathcal{K}|*1)(t)\int_{\Omega} \bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p-2}\bigg{(}\frac{\partial ^{2}u}{\partial x^{2}}(x,t)\bigg{)}^{2}dx\] where the small quantity \(\varepsilon_{1}\) being specified later. Exploiting assumption **h4** and estimate (6.11) to manage the first term in the right-hand side of this inequality, we easily conclude that \[I_{t}^{\nu}\bigg{(}\int_{\Omega}(\mathcal{K}*\mathcal{L}_{2}u)p \frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1} dx\bigg{)}\] \[\leq\frac{Cp(p-1)}{\varepsilon_{1}}I_{t}^{\nu}\bigg{(}\underset{ \Omega}{\sup}\bigg{|}\frac{\partial u}{\partial x}(x,t)\bigg{|}^{p-2}\bigg{)}(t )\mathcal{F}^{2}(u_{0},g)\] \[+C_{0}p(p-1)\varepsilon_{1}\|\mathcal{K}\|_{L_{1}(0,T)}I_{t}^{\nu }\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p-2} \bigg{(}\frac{\partial^{2}u}{\partial x^{2}}(x,t)\bigg{)}^{2}dx\bigg{)}(t).\] Applying restriction (6.15) to control the first term in the right-hand side leads to the estimate \[I_{t}^{\nu}\bigg{(}\int_{\Omega}(\mathcal{K}\ast\mathcal{L}_{2}u)p \frac{\partial}{\partial x}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p-1} dx\bigg{)}\] \[\leq\frac{2Cp(p-1)}{\mathfrak{C}^{2}\varepsilon_{1}}I_{t}^{\nu} \bigg{(}\sup_{\bar{\Omega}}\biggl{|}\frac{\partial u}{\partial x}(x,t)\biggr{|} ^{p}\bigg{)}(t)\] \[+C_{0}p(p-1)\varepsilon_{1}\|\mathcal{K}\|_{L_{1}(0,T)}I_{t}^{ \nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p- 2}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}(x,t)\bigg{)}^{2}dx\bigg{)}(t).\] Choosing \[\varepsilon_{1}=\frac{\delta_{0}}{4C_{0}\|\mathcal{K}\|_{L_{1}(0,T)}}\] and coming to estimate (6.16), we easily draw \[\varrho(0)\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x} \bigg{)}^{p}dx+\frac{\delta_{0}}{2}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(} \frac{\partial u}{\partial x}\bigg{)}^{p-2}\bigg{(}\frac{\partial^{2}u}{ \partial x^{2}}\bigg{)}^{2}dx\bigg{)}(t)\] \[\leq C_{2}\bigg{\{}p(p-1)I_{t}^{\nu}\bigg{(}\sup_{\bar{\Omega}} \biggl{|}\frac{\partial u}{\partial x}\bigg{|}^{p}\bigg{)}(t)+I_{t}^{\nu-\mu_{ 
N}}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}\bigg{)}^{p}dx\bigg{)}(t)+\int_{\Omega}\bigg{(}\frac{\partial u_{0}}{\partial x}\bigg{)}^{p}dx\bigg{\}}, \tag{6.17}\]

where the positive constant \(C_{2}\) is independent of \(T_{0}\), \(\lambda\), \(p\) and of the norms of \(u_{0}\) and \(g\). Finally, to handle the term \(I_{t}^{\nu}\bigg{(}\sup_{\bar{\Omega}}\bigl{|}\frac{\partial u}{\partial x}(x,t)\bigr{|}^{p}\bigg{)}\), we exploit bound (5.14) with \(v=\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p/2}\) and have

\[\sup_{\bar{\Omega}}\biggl{|}\frac{\partial u}{\partial x}(x,t)\biggr{|}^{p}\leq\frac{\varepsilon^{2}p^{2}}{4}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p-2}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}(x,t)\bigg{)}^{2}dx+\varepsilon^{2}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p}dx+\frac{C}{\varepsilon}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p/2}dx\bigg{)}^{2}\]

with a small positive \(\varepsilon\). Then, setting here

\[\varepsilon=\frac{\varepsilon_{0}}{\sqrt{|\Omega|}}\]

with some positive \(\varepsilon_{0}<1\) and computing the fractional integral, we arrive at

\[I_{t}^{\nu}\bigg{(}\sup_{\bar{\Omega}}\biggl{|}\frac{\partial u}{\partial x}(x,t)\biggr{|}^{p}\bigg{)}(t)\leq\frac{\varepsilon_{0}^{2}p^{2}}{4[1-\varepsilon_{0}^{2}]|\Omega|}I_{t}^{\nu}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p-2}\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}(x,t)\bigg{)}^{2}dx\bigg{)}(t)+\frac{C\sqrt{|\Omega|}}{\varepsilon_{0}(1-\varepsilon_{0}^{2})}I_{t}^{\nu}\bigg{(}\bigg{(}\int_{\Omega}\bigg{(}\frac{\partial u}{\partial x}(x,t)\bigg{)}^{p/2}dx\bigg{)}^{2}\bigg{)}(t).\]

Collecting this estimate with (6.17) and choosing

\[\varepsilon_{0}^{2}=\frac{\delta_{0}|\Omega|}{\delta_{0}|\Omega|+p^{2}C_{2}},\]

we achieve the bound

\[\int_{\Omega}\left(\frac{\partial u}{\partial x}(x,t)\right)^{p}\!dx+p(p-1)I_{t}^{\nu}\bigg{(}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p-2}\!\!\left(\frac{\partial^{2}u}{\partial x^{2}}\right)^{2}\!dx\bigg{)}(t)\]
\[\leq C\bigg{\{}I_{t}^{\nu-\mu_{N}}\bigg{(}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p}\!dx\bigg{)}(t)+\int_{\Omega}\left(\frac{\partial u_{0}}{\partial x}\right)^{p}\!dx+p^{3}I_{t}^{\nu}\bigg{(}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p/2}\!dx\bigg{)}^{2}(t)\bigg{\}}, \tag{6.18}\]

where the positive constant \(C\) is independent of \(T_{0}\), \(\lambda\), \(p\) and of the norms of \(u_{0}\) and \(g\).

In order to evaluate the term \(I_{t}^{\nu-\mu_{N}}\bigg{(}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p}\!dx\bigg{)}(t)\) on the right-hand side of (6.18), we exploit the Gronwall-type inequality [21, Proposition 4.3] and obtain

\[\int_{\Omega}\left(\frac{\partial u}{\partial x}(x,t)\right)^{p}\!dx\leq AE_{\nu-\mu_{N}}(Ct^{\nu-\mu_{N}})\quad\text{for any}\quad t\in[0,T_{0}],\]

where we put

\[A=C\bigg{[}\int_{\Omega}\left(\frac{\partial u_{0}}{\partial x}\right)^{p}\!dx+p^{3}\bigg{\|}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p/2}\!dx\bigg{\|}_{\mathcal{C}([0,T_{0}])}^{2}\bigg{]}\]

and

\[E_{\theta}(z)=\sum_{m=0}^{+\infty}\frac{z^{m}}{\Gamma(1+m\theta)}\]

is the classical Mittag-Leffler function of order \(\theta\) (see, e.g., its definition in [11, (2.2.4)]).
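For readers who wish to evaluate the bounds above numerically, the Mittag-Leffler series can be summed directly for moderate arguments. The following sketch is ours, not part of the original text; the function name and tolerances are illustrative assumptions, and dedicated algorithms should be preferred for large arguments.

```python
import math

def mittag_leffler(z, theta, tol=1e-15, max_terms=150):
    """Truncated series E_theta(z) = sum_{m>=0} z**m / Gamma(1 + m*theta).

    Plain summation; adequate only for moderate |z| and theta in (0, 1].
    """
    total = 0.0
    for m in range(max_terms):
        term = z**m / math.gamma(1.0 + m * theta)
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
    return total

# Sanity checks against two classical identities:
# E_1(z) = exp(z) and E_{1/2}(z) = exp(z^2) * erfc(-z).
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
assert abs(mittag_leffler(1.0, 0.5) - math.e * math.erfc(-1.0)) < 1e-10
```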
Applying the bound \(\int_{\Omega}(\frac{\partial u}{\partial x})^{p}dx\leq AE_{\nu-\mu_{N}}(Ct^{\nu-\mu_{N}})\) to handle the first term on the right-hand side of (6.18), and then taking into account formula (3.7.44) in [11] to compute the fractional integral of the Mittag-Leffler function, we produce the inequality

\[\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p}\!dx\leq CE_{\nu-\mu_{N}}(CT^{\nu-\mu_{N}})\bigg{\{}1+\int_{\Omega}\left(\frac{\partial u_{0}}{\partial x}\right)^{p}\!dx+p^{3}\bigg{\|}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p/2}\!dx\bigg{\|}_{\mathcal{C}([0,T_{0}])}^{2}\bigg{\}}.\]

At last, denoting

\[\mathcal{B}_{0}=CE_{\nu-\mu_{N}}(CT^{\nu-\mu_{N}}),\qquad\mathcal{A}_{m}=\sup_{t\in[0,T_{0}]}\bigg{(}\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{p}\!dx\bigg{)}^{1/p}\]

with \(p=2^{m}\), \(m\geq 1\), we derive the bound

\[\mathcal{A}_{m}\leq\mathcal{B}_{0}^{2^{-m}}[1+\|u_{0}\|_{W^{1,p}(\Omega)}+(2^{3m2^{-m}})\mathcal{A}_{m-1}]\leq\mathcal{B}_{0}^{2^{-m}}[1+\|u_{0}\|_{\mathcal{C}^{1}(\Omega)}+(2^{3m2^{-m}})\mathcal{A}_{m-1}]\leq\mathcal{B}_{0}^{2^{-m}}[1+\|u_{0}\|_{W^{2,2}(\Omega)}+(2^{3m2^{-m}})\mathcal{A}_{m-1}]. \tag{6.19}\]

To handle the term \(\|u_{0}\|_{\mathcal{C}^{1}(\Omega)}\) in these inequalities, we used the Sobolev embedding theorem. At this point, we analyze two possibilities:

**(i)** either \(\max\left\{1+\|u_{0}\|_{W^{2,2}(\Omega)};\mathcal{A}_{m-1}\right\}=1+\|u_{0}\|_{W^{2,2}(\Omega)}\),

**(ii)** or \(\max\left\{1+\|u_{0}\|_{W^{2,2}(\Omega)};\mathcal{A}_{m-1}\right\}=\mathcal{A}_{m-1}\).

Obviously, in the case of **(i)**, passing to the limit as \(m\to+\infty\) in (6.19), we end up with the desired bound

\[\sup_{\Omega_{T_{0}}}\biggl{|}\frac{\partial u}{\partial x}\biggr{|}\leq C[1+\|u_{0}\|_{W^{2,2}(\Omega)}].\]

Conversely, if **(ii)** holds, then

\[\mathcal{A}_{m}\leq\mathcal{B}_{0}^{2^{-m}}\mathcal{A}_{m-1}\leq C\exp\bigg{\{}\sum_{m=1}^{+\infty}m2^{-m}\bigg{\}}\mathcal{A}_{1}\leq C\mathcal{A}_{1}\]

(recall that \(\sum_{m\geq 1}2^{-m}=1\) and \(\sum_{m\geq 1}m2^{-m}=2\), so the accumulated factors \(\prod_{m}\mathcal{B}_{0}^{2^{-m}}\) and \(\prod_{m}2^{3m2^{-m}}\) stay bounded), and letting \(m\to+\infty\), we deduce that

\[\sup_{\Omega_{T_{0}}}\biggl{|}\frac{\partial u}{\partial x}\biggr{|}\leq C\mathcal{A}_{1}.\]

Finally, applying (6.11) to control the term \(\mathcal{A}_{1}\), we obtain (6.14).

### The estimate of \(\langle u\rangle_{t,\Omega_{T_{0}}}^{(\nu/2)}\)

This subsection is devoted to obtaining Hölder regularity in time of the solution \(u\). Namely, we aim to achieve the bound

\[\langle u\rangle_{t,\Omega_{T_{0}}}^{(\nu/2)}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\biggl{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\biggr{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

It is apparent that this estimate is a simple consequence of the inequality

\[\Delta_{h}u=|u(x,t+h)-u(y,t)|\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\biggl{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\biggr{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}[h^{\nu/2}+|x-y|] \tag{6.20}\]

for any \(x,y\in\bar{\Omega}\) and \(h\in(0,1)\) such that \(x+h^{\nu/2}\in\bar{\Omega}\) and \(t\in[0,T_{0}]\). Indeed, substituting \(x=y\) in (6.20), we immediately arrive at the desired bound.

First, appealing to the integral mean value theorem, we conclude that

\[\int_{x}^{x+h^{\nu/2}}[u(z,t+h)-u(z,t)]dz=h^{\nu/2}[u(x^{*},t+h)-u(x^{*},t)] \tag{6.21}\]

for some \(x^{*}\in[x,x+h^{\nu/2}]\) and \(t\in[0,T_{0}]\).
Taking into account this relation and (6.14), we deduce that

\[\Delta_{h}u\leq|u(x,t+h)-u(x^{*},t+h)|+|u(x^{*},t+h)-u(x^{*},t)|+|u(x^{*},t)-u(y,t)|\]
\[\leq C\{|x-x^{*}|+|x^{*}-y|\}\sup_{\Omega_{T}}\biggl{|}\frac{\partial u}{\partial x}\biggr{|}+|u(x^{*},t+h)-u(x^{*},t)|\]
\[\leq C[1+\|u_{0}\|_{W^{2,2}(\Omega)}+\|I_{t}^{\nu}\|g\|_{L_{2}(\Omega)}^{2}\|_{\mathcal{C}(\bar{\Omega}_{T})}^{1/2}][h^{\nu/2}+|x^{*}-y|]+|u(x^{*},t+h)-u(x^{*},t)|\equiv d_{1}+d_{2}. \tag{6.22}\]

At this point, we evaluate each term \(d_{i}\) separately.

\(\bullet\) As for \(d_{1}\), we need to evaluate \(|x^{*}-y|\) in an appropriate way. To this end, we analyze three possibilities for the location of \(y\):

**i:** if \(y\geq x^{*}\), then \(|y-x^{*}|\leq|y-x|\);

**ii:** if \(x\leq y<x^{*}\), then \(|y-x^{*}|\leq h^{\nu/2}\);

**iii:** if \(y<x\), then \(|y-x^{*}|\leq|x-y|+h^{\nu/2}\).

In summary, we derive the bound

\[d_{1}\leq C[|x-y|+h^{\nu/2}]\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\biggl{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\biggr{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

\(\bullet\) Concerning the estimate of \(d_{2}\), we have to evaluate the left-hand side of (6.21). To this end, exploiting (3.5.4) in [17] and [39, Chapter 1, Corollary 2], we arrive at the easily verified relations

\[u(z,t+h)-u(z,t)=I_{t+h}^{\nu}\mathbf{D}_{t+h}^{\nu}u(z,t+h)-I_{t}^{\nu}\mathbf{D}_{t}^{\nu}u(z,t),\]

and, therefore,

\[|u(x^{*},t+h)-u(x^{*},t)|\leq Ch^{\nu/2}\sup_{[0,T_{0}]}\biggl{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{t}^{\nu}u(z,s)dz\biggr{|}. \tag{6.23}\]

Thus, we are left to evaluate the term \(\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{t}^{\nu}u(z,s)dz\). Appealing to (i) in Proposition 5.4 provides the representation

\[\mathbf{D}_{t}u=\sum_{j=1}^{3}\mathfrak{D}_{j}u+\varrho_{0}\mathbf{D}_{t}^{\nu}u, \tag{6.24}\]

where we set

\[\mathfrak{D}_{1}u=\sum_{i=1}^{M}\varrho_{i}\mathbf{D}_{t}^{\nu_{i}}u-\sum_{j=1}^{N}\gamma_{j}\mathbf{D}_{t}^{\mu_{j}}u,\quad\mathfrak{D}_{2}u=u_{0}\{\mathbf{D}_{t}^{\nu}\varrho_{0}+\sum_{i=1}^{M}\mathbf{D}_{t}^{\nu_{i}}\varrho_{i}-\sum_{j=1}^{N}\mathbf{D}_{t}^{\mu_{j}}\gamma_{j}\},\]
\[\mathfrak{D}_{3}u=\frac{\nu}{\Gamma(1-\nu)}\int_{0}^{t}(t-s)^{-1-\nu}[\varrho_{0}(t)-\varrho_{0}(s)][u(x,s)-u_{0}(x)]ds+\sum_{i=1}^{M}\frac{\nu_{i}}{\Gamma(1-\nu_{i})}\int_{0}^{t}(t-s)^{-1-\nu_{i}}[\varrho_{i}(t)-\varrho_{i}(s)][u(x,s)-u_{0}(x)]ds\]
\[-\sum_{j=1}^{N}\frac{\mu_{j}}{\Gamma(1-\mu_{j})}\int_{0}^{t}(t-s)^{-1-\mu_{j}}[\gamma_{j}(t)-\gamma_{j}(s)][u(x,s)-u_{0}(x)]ds.\]

Using representation (6.24) and the equation in (6.1), we end up with

\[\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}\leq\sum_{j=1}^{6}|\mathcal{G}_{j}(u)|, \tag{6.25}\]

where

\[\mathcal{G}_{1}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}\mathcal{L}_{1}u(z,s)dz,\qquad\mathcal{G}_{2}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}(\mathcal{K}*\mathcal{L}_{2}u)(z,s)dz,\]
\[\mathcal{G}_{3}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}[\lambda f(u)-g(z,s)]dz,\qquad\mathcal{G}_{4}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}\mathfrak{D}_{3}u(z,s)dz,\]
\[\mathcal{G}_{5}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}\mathfrak{D}_{2}u(z,s)dz,\qquad\mathcal{G}_{6}u=\varrho_{0}^{-1}\int_{x}^{x+h^{\nu/2}}\mathfrak{D}_{1}u(z,s)dz.\]

Hence, we are left to treat each \(\mathcal{G}_{i}u\).
\(\bullet\) Taking into account (6.12) and (6.14), assumptions **h2-h3**, and the easily verified relation

\[\int_{x}^{x+h^{\nu/2}}a_{2}(z,s)\frac{\partial^{2}u}{\partial z^{2}}(z,s)dz=\int_{x}^{x+h^{\nu/2}}\frac{\partial}{\partial z}\bigg{[}a_{2}(z,s)\frac{\partial u}{\partial z}(z,s)\bigg{]}dz-\int_{x}^{x+h^{\nu/2}}\frac{\partial u}{\partial z}(z,s)\frac{\partial a_{2}}{\partial z}(z,s)dz,\]

we immediately deduce that

\[|\mathcal{G}_{1}u|\leq\delta_{1}^{-1}[1+h^{\nu/2}]C_{0}\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\left(\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\right)^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

\(\bullet\) It is worth noting that, using similar arguments and appealing to assumption **h4** on the kernel \(\mathcal{K}\), we get the inequality

\[|\mathcal{G}_{2}u|\leq\frac{C_{0}(1+h^{\nu/4})\|\mathcal{K}\|_{L_{1}(0,T)}}{\delta_{1}}\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

\(\bullet\) Assumption (3.1) on \(f(u)\), assumption **h5** on \(g\) and estimate (6.12) yield

\[|\mathcal{G}_{3}u|\leq\delta_{1}^{-1}h^{\nu/2}[1+L]\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\left(\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\right)^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

\(\bullet\) In light of the regularity of the coefficients \(\varrho_{0},\varrho_{i},\gamma_{j}\) and of the function \(u_{0}\) (see **h3** and **h5**), we use (6.12) and obtain the bound

\[|\mathcal{G}_{4}u|+|\mathcal{G}_{5}u|\leq C\delta_{1}^{-1}h^{\nu/2}\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}\]

with the constant \(C\) depending only on \(T,\nu,\mu_{j},\nu_{i},N,M\) and the norms of the coefficients.

\(\bullet\) Concerning \(\mathcal{G}_{6}u\), we exploit the representation (10.34) in [23] and have

\[|\mathcal{G}_{6}u|\leq\sum_{i=1}^{M}\varrho_{i}I_{s}^{\nu-\nu_{i}}\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}+\sum_{j=1}^{N}\gamma_{j}I_{s}^{\nu-\mu_{j}}\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}.\]

Finally, gathering these estimates with (6.25), we obtain

\[\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}\leq C[1+h^{\frac{\nu}{2}}]\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}+\bigg{[}\sum_{i=1}^{M}\varrho_{i}I_{s}^{\nu-\nu_{i}}+\sum_{j=1}^{N}\gamma_{j}I_{s}^{\nu-\mu_{j}}\bigg{]}\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}.\]

Then, the Gronwall-type inequality (4.4) in [21] tells us that

\[\bigg{|}\int_{x}^{x+h^{\nu/2}}\mathbf{D}_{s}^{\nu}u(z,s)dz\bigg{|}\leq C[1+h^{\frac{\nu}{2}}]\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}, \tag{6.26}\]

where \(C\) is independent of \(T_{0}\), \(\lambda\), \(h\) and depends only on \(\nu,\mu_{j},\nu_{i},N,M\), \(\|\mathcal{K}\|_{L_{1}(0,T)}\), and the corresponding norms of the coefficients.
As a result, (6.26) and (6.23) lead to

\[d_{2}=|u(x^{*},t+h)-u(x^{*},t)|\leq Ch^{\frac{\nu}{2}}[1+h^{\frac{\nu}{2}}]\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

Coming back to (6.22) and taking into account the estimate of \(d_{1}\), we achieve

\[\Delta_{h}u\leq C\{h^{\nu}+h^{\nu/2}+|x-y|\}\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\sup_{t\in[0,T]}\|g\|_{L_{2}(\Omega)}\bigg{]}.\]

It is apparent that (as noted above) this inequality provides the desired estimate. In summary, this bound completes the proof of (6.2), and therefore the verification of (6.3) is finished.

### Verification of (6.4) if \(t\in[0,T_{0}]\)

In light of estimates (6.12) and (6.13), we conclude that (6.4) with \(T=T_{0}\) will be proved if we obtain the inequality

\[\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]}. \tag{6.27}\]

To this end, we use again representation (6.24) and the easily verified estimates

\[\|\mathbf{D}_{t}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]},\]
\[\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})}\leq\frac{C}{\delta_{1}}\|\mathbf{D}_{t}u-\sum_{j=1}^{3}\mathfrak{D}_{j}u\|_{L_{2}(\Omega_{T_{0}})}\leq\frac{C}{\delta_{1}}[\|\mathbf{D}_{t}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{j=1}^{3}\|\mathfrak{D}_{j}u\|_{L_{2}(\Omega_{T_{0}})}],\]
\[\|\mathfrak{D}_{2}u\|_{L_{2}(\Omega_{T_{0}})}+\|\mathfrak{D}_{3}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]}\times[\|\varrho_{0}\|_{\mathcal{C}^{1}([0,T_{0}])}+\sum_{i=1}^{M}\|\varrho_{i}\|_{\mathcal{C}^{1}([0,T_{0}])}+\sum_{j=1}^{N}\|\gamma_{j}\|_{\mathcal{C}^{1}([0,T_{0}])}] \tag{6.28}\]

with the constants being independent of \(\lambda\) and \(T_{0}\). We remark that the last inequality in (6.28) is a simple consequence of assumption **h3** and estimates (6.12) and (6.14). Besides, collecting all these estimates provides the bound

\[\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]}+C_{3}[\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}]. \tag{6.29}\]

As a result, by virtue of the last estimate, we are left to evaluate \(\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}\) and \(\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}\). To this end, we exploit [43, Theorem 2.2] (taking into account Definition 2.2 and estimate (2.1)) and the embedding Theorem 1.4.3.3 in [12].
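Schematically (our paraphrase, not a quotation of [43] or [12]), the mechanism combines an interpolation bound with the Young inequality: for \(0<s<r\),

\[\|v\|_{W^{s,2}((0,T_{0}),L_{2}(\Omega))}\leq C\|v\|_{W^{r,2}((0,T_{0}),L_{2}(\Omega))}^{s/r}\|v\|_{L_{2}(\Omega_{T_{0}})}^{1-s/r}\leq\varepsilon\|v\|_{W^{r,2}((0,T_{0}),L_{2}(\Omega))}+C\varepsilon^{-\frac{s}{r-s}}\|v\|_{L_{2}(\Omega_{T_{0}})},\]

which, applied with \(r=\nu-\bar{\nu}\) and \(s=\nu_{i}+\nu_{0}\) or \(s=\mu_{j}+\mu_{0}\), produces exactly the \(\varepsilon\)-exponents appearing in (6.30) below.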
Indeed, setting

\[\mu_{0}=\frac{1}{4}\min\{\mu_{1};\nu-\mu_{N}\}\quad\text{and}\quad\nu_{0}=\frac{1}{4}\min\{\nu_{1};\nu-\nu_{M}\},\]

and choosing \(\bar{\nu}\) satisfying the inequalities

\[0<\bar{\nu}<\min\{\nu-\mu_{N}-\mu_{0};\nu-\nu_{M}-\nu_{0}\},\]

we take advantage of [43, Theorem 2.2], [44, Propositions 3 and 7] and then the embedding Theorem 1.4.3.3 in [12] to conclude that

\[C_{\nu}\|u-u_{0}\|_{W^{\nu-\bar{\nu}}((0,T_{0}),L_{2}(\Omega))}\leq\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})},\]
\[\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}\leq\sum_{i=1}^{M}C_{\nu_{i}}\|u-u_{0}\|_{W^{\nu_{i}+\nu_{0}}((0,T_{0}),L_{2}(\Omega))}+\sum_{j=1}^{N}C_{\mu_{j}}\|u-u_{0}\|_{W^{\mu_{j}+\mu_{0}}((0,T_{0}),L_{2}(\Omega))}\]
\[\leq\varepsilon[\sum_{i=1}^{M}C_{\nu_{i}}+\sum_{j=1}^{N}C_{\mu_{j}}]\|u-u_{0}\|_{W^{\nu-\bar{\nu}}((0,T_{0}),L_{2}(\Omega))}+C\bigg{\{}\sum_{j=1}^{N}C_{\mu_{j}}\varepsilon^{-\frac{\mu_{j}+\mu_{0}}{\nu-\bar{\nu}-\mu_{j}-\mu_{0}}}+\sum_{i=1}^{M}C_{\nu_{i}}\varepsilon^{-\frac{\nu_{i}+\nu_{0}}{\nu-\bar{\nu}-\nu_{i}-\nu_{0}}}\bigg{\}}\|u-u_{0}\|_{L_{2}(\Omega_{T_{0}})}, \tag{6.30}\]

where \(C_{\nu}=C(\nu,\bar{\nu})\), \(C_{\nu_{i}}=C(\nu_{i},\nu_{0})\), \(C_{\mu_{j}}=C(\mu_{j},\mu_{0})\) are positive constants defined in (i) of Theorem 2.2 in [43]. Gathering (6.29) and (6.30) and setting

\[\varepsilon=\frac{C_{\nu}}{4C_{3}[\sum_{j=1}^{N}C_{\mu_{j}}+\sum_{i=1}^{M}C_{\nu_{i}}]},\]

we end up with the bound

\[\|u-u_{0}\|_{W^{\nu-\bar{\nu}}((0,T_{0}),L_{2}(\Omega))}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]} \tag{6.31}\]

with the positive quantity \(C\) being independent of \(\lambda\) and \(T_{0}\). We notice that we exploited (6.12) to control the term \(\|u-u_{0}\|_{L_{2}(\Omega_{T_{0}})}\) on the right-hand side of the second inequality in (6.30). After that, (6.31) together with the second inequality in (6.30) yields the estimate

\[\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}u\|_{L_{2}(\Omega_{T_{0}})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]},\]

which in turn (see (6.29)) provides the desired bound

\[\|\mathbf{D}_{t}^{\nu}u\|_{L_{2}(\Omega_{T_{0}})}\leq C\bigg{[}1+\|u_{0}\|_{W^{2,2}(\Omega)}+\bigg{(}\sup_{t\in[0,T]}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}\bigg{]}.\]

This finishes the proof of (6.27) and, hence, of (6.4) with \(T=T_{0}\).

### Conclusion of the proof of Lemma 6.1

In light of the results described in Subsections 6.1-6.4, we are left to extend estimates (6.2)-(6.4) to the whole time interval \([0,T]\). To this end, we first discuss the technique which allows us to extend these estimates to the interval \([0,3T_{0}/2]\); this procedure is then repeated a finite number of times until the entire interval \([0,T]\) is exhausted. First, we need a new function

\[\mathcal{U}(x,t)=\xi(t)u(x,t), \tag{6.32}\]

where \(\xi(t)\) is defined in (5.7) with \(T_{3}=T_{0}\), and \(u(x,t)\) is a solution of (6.1). The next statement describes the main properties of this function.
**Corollary 6.4**.: _The function \(\mathcal{U}(x,t)\) solves the problem (6.1) in \(\bar{\Omega}_{T_{0}/2}\) and satisfies estimates:_ \[\|\mathcal{U}\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}( \bar{\Omega}_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}\mathcal{U}\|_{ \mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T})}+\sum_{i=1}^{M}\| \mathbf{D}_{t}^{\nu_{i}}\mathcal{U}\|_{\mathcal{C}^{\alpha,\frac{\alpha\nu}{2 }}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{\mathcal{C}^{2+\alpha}(\bar{\Omega} )}+\|g\|_{\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T})}],\] \[\|\mathcal{U}\|_{\mathcal{C}([0,T],\mathcal{C}^{1}(\bar{\Omega} ))}+\langle\mathcal{U}\rangle_{t,\Omega_{T}}^{(\nu/2)}\leq C\bigg{[}1+\bigg{(} \underset{t\in[0,T]}{\sup}I_{t}^{\nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{ 1/2}+\underset{t\in[0,T]}{\sup}\|g\|_{L_{2}(\Omega)}+\|u_{0}\|_{W^{2,2}(\Omega) }\bigg{]},\] \[\|\mathcal{U}\|_{L_{2}((0,T),W^{2,2}(\Omega))}+\|\mathbf{D}_{t}^{ \nu}\mathcal{U}\|_{L_{2}(\Omega_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j} }\mathcal{U}\|_{L_{2}(\Omega_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}} \mathcal{U}\|_{L_{2}(\Omega_{T})}+\bigg{(}\underset{t\in[0,T]}{\sup}I_{t}^{\nu }\|\mathcal{U}_{xx}\|_{L_{2}(\Omega)}^{2}\bigg{)}^{\frac{1}{2}}\] \[\leq C\bigg{[}1+\bigg{(}\underset{t\in[0,T]}{\sup}I_{t}^{\nu}(\|g \|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{1/2}+\|u_{0}\|_{W^{2,2}(\Omega)}\bigg{]},\] \[\|\mathbf{D}_{t}\mathcal{U}-\mathcal{L}_{1}\mathcal{U}-\mathcal{K }\ast\mathcal{L}_{2}\mathcal{U}\|_{\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}( \bar{\Omega}_{T})}\leq C\bigg{[}1+\bigg{(}\underset{t\in[0,T]}{\sup}I_{t}^{ \nu}(\|g\|_{L_{2}(\Omega)}^{2})(t)\bigg{)}^{\frac{1}{2}}+\|u_{0}\|_{W^{2,2}( \Omega)}+\|g\|_{\mathcal{C}^{\alpha,\frac{\alpha\nu}{2}}(\bar{\Omega}_{T})} \bigg{]},\] \[\underset{t\in[0,T]}{\sup}I_{t}^{\nu}(\|\mathbf{D}_{t}\mathcal{U}- \mathcal{L}_{1}\mathcal{U}-\mathcal{K}\ast\mathcal{L}_{2}\mathcal{U}\|_{L_{2}( \Omega)}^{2})\leq C\bigg{[}1+\underset{t\in[0,T]}{\sup}I_{t}^{\nu}(\|g\|_{L _{2}(\Omega)}^{2})(t)+\|u_{0}\|_{W^{2,2}(\Omega)}^{2}\bigg{]}\] _Here the positive constant \(C\) depends only on \(T,\mu_{j},\nu_{i},\nu\) and the corresponding norms of the coefficients of the operators \(\mathcal{L}_{1},\,\mathcal{L}_{2}\) and \(\mathbf{D}_{t}\)._ Proof.: Clearly, the first three inequalities are simple consequences of definition (6.32), Lemma 5.5 (where \(T_{3}=T_{0}\)) and estimates (6.2)-(6.4) with \(T=T_{0}\), which are proved in Sections 6.1-6.4. As for the fourth bound, taking into account that \(u\) solves (6.1), we have the representation \[\mathbf{D}_{t}\mathcal{U}-\mathcal{L}_{1}\mathcal{U}-\mathcal{K}\ast\mathcal{L} _{2}\mathcal{U}=\sum_{j=1}^{3}s_{j},\] where \[s_{1}=\xi[g(x,t)-\lambda f(u)],\quad s_{2}=\mathbf{D}_{t}(\xi u)-\xi\mathbf{D}_{t} u,\quad s_{3}=\xi(\mathcal{K}*\mathcal{L}_{2}u)(t)-(\mathcal{K}*\xi\mathcal{L}_{2}u)(t).\] At this point, we evaluate each \(s_{j}\), separately. \(\bullet\) Appealing to **h5** and **h6 (i)**, we derive \[\|s_{1}\|_{\mathcal{C}^{\alpha,\alpha\omega/2}(\bar{\Omega}_{T})}\leq C[1+\|g \|_{\mathcal{C}^{\alpha,\alpha\omega/2}(\bar{\Omega}_{T})}+\|u\|_{\mathcal{C}^ {\alpha,\alpha\omega/2}(\bar{\Omega}_{T_{0}})}],\] where the constant \(C\) depends only on \(\|\xi\|_{\mathcal{C}^{1}([0,T])}\), \(L,C_{\rho}\). 
Then exploiting inequality (6.2), we end up with

\[\|s_{1}\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{T})}\leq C[1+\|g\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{T})}+\|u_{0}\|_{W^{2,2}(\Omega)}].\]

\(\bullet\) Collecting the representation of the operator \(\mathbf{D}_{t}\) with statement (v) in Lemma 5.2 and the bound (6.2) with \(T=T_{0}\) provides the estimate

\[\|s_{2}\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{\mathcal{C}^{\alpha}(\bar{\Omega})}+\|u\|_{\mathcal{C}^{\alpha,\nu/2}(\bar{\Omega}_{T_{0}})}]\leq C\bigg{[}\|u_{0}\|_{W^{2,2}(\Omega)}+1+\left(\sup_{t\in[0,T]}I^{\nu}_{t}(\|g\|^{2}_{L_{2}(\Omega)})(t)\right)^{\frac{1}{2}}\bigg{]}\]

with the constant \(C\) being independent of \(\lambda\) and \(T_{0}\).

\(\bullet\) Standard calculations allow us to rewrite \(s_{3}\) in the form

\[s_{3}=\begin{cases}\xi(t)(\mathcal{K}*\mathcal{L}_{2}u)(t)-(\mathcal{K}*\xi\mathcal{L}_{2}u)(t),&t\in(0,3T_{0}/2],\\ \int_{0}^{3T_{0}/2}\mathcal{K}(t-\tau)\xi(\tau)\mathcal{L}_{2}u(x,\tau)d\tau,&t>3T_{0}/2.\end{cases}\]

After that, [19, Lemma 4.1], Lemma 5.5 and assumptions **h2**, **h4** lead to the estimate

\[\|s_{3}\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{T})}\leq C[\|u_{0}\|_{W^{2,2}(\Omega)}+1+\|I^{\nu}_{t}\|g\|^{2}_{L_{2}(\Omega)}\|^{1/2}_{\mathcal{C}([0,T])}].\]

As a result, gathering all the estimates of \(s_{i}\) leads to the desired estimate. Finally, we remark that the verification of the last inequality in this corollary is carried out with similar arguments, exploiting the second and the third inequalities of this claim. This completes the proof of Corollary 6.4.

Now we introduce a new unknown function

\[\mathcal{V}=u-\mathcal{U} \tag{6.33}\]

which solves the problem

\[\begin{cases}\mathbf{D}_{t}\mathcal{V}-\mathcal{L}_{1}\mathcal{V}-\mathcal{K}*\mathcal{L}_{2}\mathcal{V}=g^{*}(x,t)-\lambda f^{*}(\mathcal{V})\quad\text{in}\quad\Omega_{3T_{0}/2},\\ \mathcal{V}(x,0)=0\quad\text{in}\quad\bar{\Omega},\\ \mathcal{V}(x,t)=0\quad\text{on}\quad\partial\Omega_{3T_{0}/2},\end{cases}\]

where we put

\[g^{*}(x,t)=g(x,t)-\mathbf{D}_{t}\mathcal{U}+\mathcal{L}_{1}\mathcal{U}+\mathcal{K}*\mathcal{L}_{2}\mathcal{U},\quad f^{*}(\mathcal{V})=f(\mathcal{V}+\mathcal{U}).\]

The definition of the function \(\mathcal{U}\) and Corollary 6.4 readily yield

\[\|g^{*}\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{3T_{0}/2})}\leq C[1+\|u_{0}\|_{W^{2,2}(\Omega)}+\|g\|_{\mathcal{C}^{\alpha,\alpha\nu/2}(\bar{\Omega}_{T})}],\]
\[g^{*}(x,t)-\lambda f^{*}(\mathcal{V})=0\quad\text{if}\quad t\in[0,T_{0}/2],\quad x\in\bar{\Omega},\]

and \(f^{*}(\mathcal{V})\) meets requirements **h6 (i)**. Besides, the last equality here means that the compatibility conditions hold. Finally, we introduce the new time variable

\[\sigma=t-\frac{T_{0}}{2},\quad\sigma\in[-T_{0}/2,T_{0}],\]

and recast the arguments at the end of Section 6.3 in [37]. Thus, we deduce

\[\begin{cases}\bar{\mathbf{D}}_{\sigma}\bar{\mathcal{V}}-\bar{\mathcal{L}}_{1}\bar{\mathcal{V}}-\mathcal{K}*\bar{\mathcal{L}}_{2}\bar{\mathcal{V}}=\bar{g}^{*}(x,\sigma)-\lambda\bar{f}^{*}(\bar{\mathcal{V}})\quad\text{in}\quad\Omega_{T_{0}},\\ \bar{\mathcal{V}}(x,0)=0\quad\text{in}\quad\bar{\Omega},\\ \bar{\mathcal{V}}(x,\sigma)=0\quad\text{on}\quad\partial\Omega_{T_{0}},\end{cases} \tag{6.34}\]

and \(\bar{\mathcal{V}}(x,\sigma)=0\) if \(\sigma\in[-T_{0}/2,0]\).
Here we denote

\[\bar{\mathcal{V}}(x,\sigma)=\mathcal{V}(x,\sigma+T_{0}/2),\quad\bar{g}^{*}(x,\sigma)=g^{*}(x,\sigma+T_{0}/2),\quad\bar{f}^{*}(\bar{\mathcal{V}})=f^{*}(\mathcal{V})|_{t=\sigma+T_{0}/2},\]

and we denote by \(\bar{\mathcal{L}}_{i}\), \(\bar{\mathbf{D}}_{\sigma}\) the operators \(\mathcal{L}_{i}\) and \(\mathbf{D}_{\sigma}\), respectively, with the correspondingly shifted (barred) coefficients. It is easy to verify that the coefficients of \(\bar{\mathcal{L}}_{i}\), \(\bar{\mathbf{D}}_{\sigma}\) and the functions \(\bar{g}^{*}\), \(\bar{f}^{*}\) meet the requirements of Lemma 6.1. Then, we repeat the arguments of Subsections 6.1-6.4 in the case of problem (6.34) and obtain estimates (6.2)-(6.4) for the function \(\bar{\mathcal{V}}\). Finally, taking into account representation (6.33) and Corollary 6.4, we extend these estimates to the interval \([0,3T_{0}/2]\). Therefore, after repeating this procedure finitely many times, we get (6.2)-(6.4) for all \(t\in[0,T]\).

## 7. Proof of Theorem 4.1

Here we proceed with a detailed proof of this theorem in the case of **DBC** (1.2) and \(f(u)\) satisfying **h6 (i)**. The other cases are analyzed with similar arguments and are left to the reader. First of all, we reduce problem (1.1), (1.2), (1.4) to a problem with homogeneous initial and boundary conditions. To this end, we apply [21, Remark 3.1] and Lemma 5.7 (in this article) to the linear problem for the unknown function \(\mathfrak{U}=\mathfrak{U}(x,t):\Omega_{T}\to\mathbb{R}\):

\[\begin{cases}\mathbf{D}_{t}\mathfrak{U}-\mathcal{L}_{1}\mathfrak{U}-\mathcal{K}*\mathcal{L}_{2}\mathfrak{U}=g(x,t)-f(u_{0})\quad\text{in}\,\Omega_{T},\\ \mathfrak{U}(x,0)=u_{0}(x)\quad\text{in}\quad\bar{\Omega},\\ \mathfrak{U}(x,t)=\psi(x,t)\quad\text{on}\quad\partial\Omega_{T},\end{cases}\]

and obtain the unique global classical solvability of this problem, with \(\mathfrak{U}\in\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})\) satisfying the bound

\[\|\mathfrak{U}\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\bar{\Omega}_{T})}+\sum_{i=1}^{M}\|\mathbf{D}_{t}^{\nu_{i}}\mathfrak{U}\|_{\mathcal{C}^{\alpha,\frac{\nu\alpha}{2}}(\bar{\Omega}_{T})}+\sum_{j=1}^{N}\|\mathbf{D}_{t}^{\mu_{j}}\mathfrak{U}\|_{\mathcal{C}^{\alpha,\frac{\nu\alpha}{2}}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{\mathcal{C}^{2+\alpha}(\bar{\Omega})}+\|g\|_{\mathcal{C}^{\alpha,\frac{\nu\alpha}{2}}(\bar{\Omega}_{T})}+\|\psi\|_{\mathcal{C}^{2+\alpha,\frac{2+\alpha}{2}\nu}(\partial\Omega_{T})}]\equiv C\mathfrak{G}(u_{0},f,\psi).\]

Here we used assumption **h6 (i)** and [21, Remark 3.1] to control the term \(\|f(u_{0})\|_{\mathcal{C}^{\alpha,\frac{\nu\alpha}{2}}(\bar{\Omega}_{T})}\).
Then we look for a solution of the original problem (1.1), (1.2), (1.4) in the form

\[u(x,t)=v(x,t)+\mathfrak{U}(x,t),\]

where the unknown function \(v\) solves the problem

\[\begin{cases}\mathbf{D}_{t}v-\mathcal{L}_{1}v-\mathcal{K}*\mathcal{L}_{2}v=G(x,t)-F(v)\quad\text{in}\quad\Omega_{T},\\ v(x,0)=0\quad\text{in}\quad\bar{\Omega},\\ v(x,t)=0\quad\text{on}\quad\partial\Omega_{T}.\end{cases} \tag{7.1}\]

Here we set

\[G(x,t)=f(u_{0})-f(\mathfrak{U}),\qquad F(v)=f(v+\mathfrak{U})-f(\mathfrak{U}).\]

Indeed, subtracting the problem for \(\mathfrak{U}\) from (1.1) yields \(\mathbf{D}_{t}v-\mathcal{L}_{1}v-\mathcal{K}*\mathcal{L}_{2}v=f(u_{0})-f(v+\mathfrak{U})=G(x,t)-F(v)\), while \(v(x,0)=u_{0}-u_{0}=0\) in \(\bar{\Omega}\) and \(v=\psi-\psi=0\) on \(\partial\Omega_{T}\).

**Remark 7.1**.: Assumptions **h6 (i)** and the estimate of \(\mathfrak{U}\) readily ensure the following relations for the functions \(F\) and \(G\):

\[\|G\|_{\mathcal{C}^{\alpha,\frac{\nu\alpha}{2}}(\bar{\Omega}_{T})}\leq C\mathfrak{G}(u_{0},f,\psi),\]

and for all \(v_{i}\in[-\rho,\rho]\) and \(v\in\mathbb{R}\) there holds

\[|F(v_{1})-F(v_{2})|\leq C_{\rho}|v_{1}-v_{2}|,\quad|F(v)|\leq L^{*}(1+|v|),\]

with

\[L^{*}=L(1+2\sup_{\bar{\Omega}_{T}}|\mathfrak{U}|)\leq L[1+2C\mathfrak{G}(u_{0},f,\psi)].\]

Moreover, direct calculations and the properties of the function \(\mathfrak{U}\) yield the equalities:

\[G(x,0)=0\quad\text{for}\quad x\in\bar{\Omega},\qquad F(0)=0\quad\text{for each}\quad(x,t)\in\bar{\Omega}_{T}.\]

Thus, we easily conclude that \(F\) and \(G\) meet the requirements of Theorem 4.1 and, hence, we are left to prove this theorem in the case of problem (7.1). To this end, we rely on so-called continuation arguments, in analogy with the case of semilinear subdiffusion equations with a single-term fractional derivative described in [21, Section 5.2]. This approach deals with the analysis of the family of problems for \(\lambda\in[0,1]\):

\[\begin{cases}\mathbf{D}_{t}v-\mathcal{L}_{1}v-\mathcal{K}*\mathcal{L}_{2}v=G(x,t)-\lambda F(v)\quad\text{in}\,\Omega_{T},\\ v(x,0)=0\quad\text{in}\quad\bar{\Omega},\\ v(x,t)=0\quad\text{on}\quad\partial\Omega_{T}.\end{cases} \tag{7.2}\]

Let \(\Lambda\) be the set of those \(\lambda\) for which (7.2) is solvable on \([0,T]\). For \(\lambda=0\), (7.2) is a linear problem studied in detail in [37] (see also Lemma 5.7 here). Hence, keeping in mind assumptions **h1**-**h5**, **h7** and Remark 7.1, we can apply Lemma 5.7 to (7.2) with \(\lambda=0\) and obtain the global classical solvability in the corresponding classes. Therefore, \(0\in\Lambda\). The next step is to demonstrate that \(\Lambda\) is simultaneously open and closed in \([0,1]\). To this end, we repeat step by step the arguments given in [21, Section 5.2] and, exploiting Lemma 6.1 (with \(f=F\), \(g=G\), \(u_{0}=0\)), we complete the proof of Theorem 4.1.

## 8. Proof of Theorem 4.4

Here we focus on the proof of this theorem in the case of **DBC** (1.2). The case of **NBC** (1.3) is treated with similar arguments. We will follow a strategy consisting of two steps. In the first, we assume the existence of functions \(U_{0,n}\) which approximate the initial data \(u_{0}\in\overset{0}{W}^{1,2}(\Omega)\cap W^{2,2}(\Omega)\) and satisfy assumptions **h5 (i)** and **h7**.
In other words, the function \(U_{0,n}\in\mathcal{C}_{0}^{\infty}(\bar{\Omega})\) has the properties:

\[U_{0,n}\quad\text{converges to }u_{0}\quad\text{in}\quad W^{2,2}(\Omega);\quad\begin{cases}U_{0,n}(a)=U_{0,n}(b)=0,\\ a_{2}(a,0)\frac{\partial^{2}U_{0,n}}{\partial x^{2}}(a)+a_{1}(a,0)\frac{\partial U_{0,n}}{\partial x}(a)+f(0)=0,\\ a_{2}(b,0)\frac{\partial^{2}U_{0,n}}{\partial x^{2}}(b)+a_{1}(b,0)\frac{\partial U_{0,n}}{\partial x}(b)+f(0)=0.\end{cases} \tag{8.1}\]

The technique of constructing \(U_{0,n}\) is discussed below. Appealing to Theorem 4.1, we build smooth solutions \(u_{n}=u_{n}(x,t):\Omega_{T}\to\mathbb{R}\) to the approximated problems

\[\mathbf{D}_{t}u_{n}-\mathcal{L}_{1}u_{n}-\mathcal{K}*\mathcal{L}_{2}u_{n}+f(u_{n})=0\quad\text{in}\quad\Omega_{T},\quad u_{n}(x,0)=U_{0,n}\quad\text{in}\quad\bar{\Omega},\quad u_{n}(x,t)=0\quad\text{on}\quad\partial\Omega_{T}. \tag{8.2}\]

Then we extract a convergent subsequence and pass to the limit in the equation. Using the Banach-Alaoglu Theorem and the estimates stated in Lemma 6.1 (with \(\lambda=1\), \(g\equiv 0\)), we can apply standard arguments to choose a subsequence of \(u_{n}\) (which we relabel) such that, for any fixed time \(T>0\),

\[u_{n}\rightharpoonup u\quad\text{weakly in}\quad L_{2}((0,T),W^{2,2}(\Omega)),\]
\[\mathcal{K}*u_{n}\rightharpoonup\mathcal{K}*u\quad\text{weakly in}\quad L_{2}((0,T),W^{2,2}(\Omega)),\]
\[u_{n}\to u\quad\text{strongly in}\quad\mathcal{C}^{\alpha^{*},\frac{\alpha^{*}}{2}}(\bar{\Omega}_{T})\quad\text{for any}\quad\alpha^{*}\in(0,1),\]
\[f(u_{n})\to f(u)\quad\text{uniformly in}\quad\mathcal{C}(\bar{\Omega}_{T}),\]
\[\mathbf{D}_{t}^{\nu}u_{n}\rightharpoonup\mathbf{D}_{t}^{\nu}u\quad\text{weakly in}\quad L_{2}(\Omega_{T}),\]
\[\mathbf{D}_{t}^{\nu_{i}}u_{n}\rightharpoonup\mathbf{D}_{t}^{\nu_{i}}u,\ i=1,2,...,M,\quad\text{weakly in}\quad L_{2}(\Omega_{T}),\]
\[\mathbf{D}_{t}^{\mu_{j}}u_{n}\rightharpoonup\mathbf{D}_{t}^{\mu_{j}}u,\ j=1,2,...,N,\quad\text{weakly in}\quad L_{2}(\Omega_{T}),\]

for some \(u\) belonging to all the spaces above. Therefore, such \(u\) satisfies estimate (6.4) (with \(g\equiv 0\)) and

\[\|u\|_{\mathcal{C}^{\alpha^{*},\frac{\alpha\nu}{2}}(\bar{\Omega}_{T})}\leq C[1+\|u_{0}\|_{W^{2,2}(\Omega)}];\]

besides, \(u\) satisfies equation (1.1) a.e. in \(\Omega_{T}\), along with (1.4) and the homogeneous boundary condition (1.2).

As for the uniqueness of this solution, it is proved by standard arguments. Namely, assuming the existence of two solutions \(u\) and \(u^{*}\) with the same data, we consider the homogeneous problem (1.1), (1.2) and (1.4) for the difference \(U^{*}=u-u^{*}\), and, recasting the proof leading to estimate (6.11) (see also Subsection 6.5 concerning the extension of this estimate to the whole time interval), we get

\[\|U^{*}\|_{\mathcal{C}([0,T],L_{2}(\Omega))}\leq 0\quad\Rightarrow\quad u\equiv u^{*}.\]

In summary, in order to complete the proof of the existence of a unique strong solution to (1.1), (1.2) and (1.4) satisfying the regularity required by Theorem 4.4, we are left to build the functions \(U_{0,n}\) satisfying (8.1).
To this end, we introduce a new function

\[U_{0}(x)=u_{0}(x)-P_{u_{0}}(x),\]

where \(P_{u_{0}}(x)\) is the fifth-degree polynomial satisfying the equalities

\[\begin{cases}P_{u_{0}}(a)=P_{u_{0}}(b)=0,\\ P_{u_{0}}^{\prime}(a)=u_{0}^{\prime}(a),\\ P_{u_{0}}^{\prime}(b)=u_{0}^{\prime}(b),\\ P_{u_{0}}^{\prime\prime}(a)=\frac{1}{a_{2}(a,0)}[-f(0)-a_{1}(a,0)u_{0}^{\prime}(a)],\\ P_{u_{0}}^{\prime\prime}(b)=\frac{1}{a_{2}(b,0)}[-f(0)-a_{1}(b,0)u_{0}^{\prime}(b)].\end{cases}\]

Here we used the smoothness of \(u_{0}\) and the embedding theorem, which provides the existence (in the classical sense) of \(u_{0}^{\prime}(a)\) and \(u_{0}^{\prime}(b)\). We easily conclude that \(U_{0}(x)\in\overset{0}{W}^{2,2}(\Omega)\) and, hence, we can approximate \(U_{0}(x)\) by functions \(\bar{U}_{0,n}\in\mathcal{C}_{0}^{\infty}(\bar{\Omega})\) in the norm of \(W^{2,2}(\Omega)\). Then, setting

\[U_{0,n}(x)=\bar{U}_{0,n}(x)+P_{u_{0}}(x)\]

and performing standard calculations, we conclude that \(U_{0,n}(x)\in\mathcal{C}^{2+\alpha}(\bar{\Omega})\) and satisfies conditions (8.1). This means that \(U_{0,n}\) satisfies the assumptions of Theorem 4.1 and approximates the initial data \(u_{0}\) in the corresponding classes. Finally, coming to (8.2), we carry out the procedure described above and complete the proof of Theorem 4.4.

## 9. Numerical Simulations

We finally present some numerical tests aimed at illustrating our theoretical results (Theorems 4.1 and 4.4) and at demonstrating some effects of fractional derivatives which do not yet have analytical proofs. Namely, if all derivatives in the operator \(\mathbf{D}_{t}\) (see (1.5)) are fractional, then we observe rapid changes in the solutions for small time, and the solutions slow down as time increases (see Example 9.2). This differs from the case \(\nu=1\) in (1.5), where this effect is negligible (and it diminishes as the fractional orders grow, as we observe in Example 9.2).

For simplicity, we focus on the initial-boundary value problem in the one-dimensional domain \(\Omega=(0,1)\) (the multi-dimensional generalization of the proposed finite-difference scheme is straightforward and boils down to adding terms that approximate the new spatial partial derivatives in a completely similar manner):

\[\begin{cases}\mathbf{D}^{\nu}_{t}(\varrho_{0}u)+\mathbf{D}^{\nu_{1}}_{t}(\varrho_{1}u)-\mathbf{D}^{\mu_{1}}_{t}(\gamma_{1}u)-\mathfrak{a}\frac{\partial^{2}u}{\partial x^{2}}+\mathfrak{d}\frac{\partial u}{\partial x}-(\mathcal{K}*b\frac{\partial^{2}u}{\partial x^{2}})=f(x,t,u)+g(x,t)\quad\text{in}\ \Omega_{T},\\ u(x,0)=u_{0}(x),\qquad x\in[0,1],\\ \mathfrak{c}_{1}\frac{\partial u}{\partial x}(0,t)+\mathfrak{c}_{2}u(0,t)=\varphi_{1}(t),\quad t\in[0,T],\\ \mathfrak{c}_{3}\frac{\partial u}{\partial x}(1,t)+\mathfrak{c}_{4}u(1,t)=\varphi_{2}(t),\quad t\in[0,T].\end{cases} \tag{9.1}\]

We introduce the space-time mesh with nodes

\[x_{k}=kh,\quad\sigma_{j}=j\sigma,\quad k=0,1,\ldots,K,\quad j=0,1,\ldots,J,\quad h=L/K,\quad\sigma=T/J.\]

For these examples, we take \(L=1\).
Denoting the finite-difference approximation of the solution \(u\) at the point \((x_{k},\sigma_{j})\) by \(u_{k}^{j}\) and calling

\[\mathfrak{a}_{k}^{j+1}=\mathfrak{a}(x_{k},\sigma_{j+1}),\qquad\mathfrak{d}_{k}^{j+1}=\mathfrak{d}(x_{k},\sigma_{j+1}),\qquad b_{k}^{j}=b(x_{k},\sigma_{j}),\]
\[\mathcal{K}_{m,j}=\int_{\sigma_{m}}^{\sigma_{m+1}}\mathcal{K}(\sigma_{j+1}-s)ds,\qquad\rho_{m}=(-1)^{m}\binom{\nu}{m},\qquad\tilde{\rho}_{m}=(-1)^{m}\binom{\nu_{1}}{m},\qquad\bar{\rho}_{m}=(-1)^{m}\binom{\mu_{1}}{m},\]
\[\varrho_{0,k}^{j+1}=\varrho_{0}(x_{k},\sigma_{j+1}),\qquad\varrho_{1,k}^{j+1}=\varrho_{1}(x_{k},\sigma_{j+1}),\qquad\gamma_{1,k}^{j+1}=\gamma_{1}(x_{k},\sigma_{j+1}),\]

we approximate the differential equation in (9.1) at each time level \(\sigma_{j+1}\) and spatial point \(x_{k}\), so as to obtain the finite-difference scheme

\[\sigma^{-\nu}\sum_{m=0}^{j+1}(\varrho_{0,k}^{j+1-m}u_{k}^{j+1-m}-\varrho_{0,k}^{0}u_{0}(x_{k}))\rho_{m}+\sigma^{-\nu_{1}}\sum_{m=0}^{j+1}(\varrho_{1,k}^{j+1-m}u_{k}^{j+1-m}-\varrho_{1,k}^{0}u_{0}(x_{k}))\tilde{\rho}_{m}\]
\[-\sigma^{-\mu_{1}}\sum_{m=0}^{j+1}(\gamma_{1,k}^{j+1-m}u_{k}^{j+1-m}-\gamma_{1,k}^{0}u_{0}(x_{k}))\bar{\rho}_{m}-\frac{\mathfrak{a}_{k}^{j+1}}{h^{2}}(u_{k-1}^{j+1}-2u_{k}^{j+1}+u_{k+1}^{j+1})+\frac{\mathfrak{d}_{k}^{j+1}}{2h}(u_{k+1}^{j+1}-u_{k-1}^{j+1})\]
\[=\sum_{m=0}^{j}\left(b_{k}^{m}\frac{u_{k-1}^{m}-2u_{k}^{m}+u_{k+1}^{m}}{h^{2}}+b_{k}^{m+1}\frac{u_{k-1}^{m+1}-2u_{k}^{m+1}+u_{k+1}^{m+1}}{h^{2}}\right)\!\frac{\mathcal{K}_{m,j}}{2}+f(x_{k},\sigma_{j},u_{k}^{j})+g(x_{k},\sigma_{j+1}),\]

for

\[k=1,\ldots,K-1\qquad\text{and}\qquad j=0,1,\ldots,J-1.\]

Here, the derivatives \(u_{x}\) and \(u_{xx}\) are approximated by the standard central finite-difference formulas, the trapezoid rule is employed to approximate the integrals in the sum

\[\sum_{m=0}^{j}\int_{\sigma_{m}}^{\sigma_{m+1}}\mathcal{K}(\sigma_{j+1}-s)b(x,s)u_{xx}(x,s)ds\]

representing the convolution term in (9.1), and the Grunwald-Letnikov (GL) formula [8, 15, 17] is applied to approximate the fractional derivatives \(\mathbf{D}_{t}^{\nu}(\varrho_{0}u)\), \(\mathbf{D}_{t}^{\nu_{1}}(\varrho_{1}u)\) and \(\mathbf{D}_{t}^{\mu_{1}}(\gamma_{1}u)\).

Alternatively, one might also use the "Leibniz rule" for Caputo derivatives (see [22, Corollary 3.1]) to reduce the treatment of the fractional-derivative terms in (9.1) to the approximation of fractional derivatives of the function \(u\) alone (rather than of its products with \(\varrho_{i}\) or \(\gamma_{i}\)), at the cost of adding extra integral terms (which can be approximated with the same approach as used for the convolution term in (9.1)) and of making slight changes in the right-hand side \(g\). However, we do not pursue this route further here. Also, in order to improve the temporal discretization accuracy (limited primarily by the approximation of the fractional derivatives), we apply Richardson extrapolation (see [8]). Finally, two fictitious mesh points outside the spatial domain are used to approximate the derivatives in the boundary conditions with second-order accuracy [23, 37]. Further improvement in the accuracy of the calculations may be achieved by resorting to finite element methods [15, 40], although we do not pursue this direction here.
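To make the GL discretization concrete, the following minimal sketch (our illustration, not the authors' code; all names are ours) computes the weights \(\rho_{m}=(-1)^{m}\binom{\nu}{m}\) by the standard recursion and verifies the expected first-order accuracy on a function whose Caputo derivative is known in closed form.

```python
import math

def gl_weights(order, n):
    """Weights rho_m = (-1)^m * binom(order, m), m = 0..n,
    via the recursion rho_m = rho_{m-1} * (m - 1 - order) / m."""
    rho = [1.0]
    for m in range(1, n + 1):
        rho.append(rho[-1] * (m - 1 - order) / m)
    return rho

def gl_derivative_at_end(f_vals, order, step):
    """GL approximation of the Caputo derivative at the last grid point;
    f_vals[0] = f(0) is subtracted, as in the scheme above."""
    j = len(f_vals) - 1
    rho = gl_weights(order, j)
    return step ** (-order) * sum(
        rho[m] * (f_vals[j - m] - f_vals[0]) for m in range(j + 1)
    )

# Test on f(t) = t: its Caputo derivative of order nu is t^(1-nu)/Gamma(2-nu),
# so the exact value at t = 1 with nu = 1/2 is 1/Gamma(3/2).
nu, exact = 0.5, 1.0 / math.gamma(1.5)
for J in (100, 200, 400):
    sigma = 1.0 / J
    grid = [m * sigma for m in range(J + 1)]
    err = abs(gl_derivative_at_end(grid, nu, sigma) - exact)
    print(J, err)  # the error roughly halves as J doubles (first order)
```

Assembling the full implicit scheme then amounts to collecting the \(u^{j+1}\)-terms into a tridiagonal system at each time level, while the history sums and the convolution quadrature stay on the right-hand side.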
**Example 9.1**.: Consider problem (9.1) with \(T=1\) and \[\begin{split}\mathcal{K}(t)&=t^{-1/3},\quad\mathfrak{a} (x,t)=\cos(\pi x/4)+t,\quad\mathfrak{d}(x,t)=x+t,\quad b(x,t)=t^{1/3}+\sin(\pi x ),\quad u_{0}(x)=\cos(\pi x),\\ \varrho_{0}(t)&=1+t,\quad\varrho_{1}=1/2,\quad\gamma _{1}(t)=(1+t^{2})/2,\quad\mathfrak{c}_{1}=\mathfrak{c}_{3}=1,\quad\mathfrak{c }_{2}=\mathfrak{c}_{4}=0,\quad\varphi_{1}(t)=\varphi_{2}(t)=0,\\ g(x,t)&=\pi^{2}\Big{(}\cos\frac{\pi x}{4}+t+\frac{3t ^{2/3}\sin(\pi x)}{2}+\frac{t\pi}{3\sin(\pi/3)}\Big{)}\cos(\pi x)-xt\sin((\cos( \pi x)+t^{\nu}/\Gamma(1+\nu))^{2})\\ &-(x+t)\pi\sin(\pi x)+1+\frac{t^{1-\nu}\cos(\pi x)}{\Gamma(2-\nu )}+(1+\nu)t+\frac{t^{\nu-\nu_{1}}}{2\Gamma(1+\nu-\nu_{1})}-\frac{1}{2}\Big{(} \frac{t^{\nu-\mu_{1}}}{\Gamma(1+\nu-\mu_{1})}\\ &+\frac{2t^{2-\mu_{1}}\cos(\pi x)}{\Gamma(3-\mu_{1})}+\frac{(2+ \nu)(1+\nu)t^{2+\nu-\mu_{1}}}{\Gamma(3+\nu-\mu_{1})}\Big{)},\quad f(x,t,u)= xt\sin(u^{2}).\end{split}\] It is easy to verify that the function \[u(x,t)=\cos(\pi x)+\frac{t^{\nu}}{\Gamma(1+\nu)}\] solves initial-boundary value problem (9.1) with the parameters specified above. The outcomes of this example (the absolute error \(\mathfrak{J}=\max|u-u_{\mathfrak{N}}|\) between \(u\) and the numerical solution \(u_{\mathfrak{N}}\), where the maximum is taken over all the grid points in the space-time mesh) are listed in Table 2. One can observe from Table 2 the rapid decaying of errors as mesh refines and that relatively coarse meshes provide quite small errors (as compared to the exact solution magnitude); one can also observe that decreasing fractional orders leads to a decrease in the computation accuracy, which is in line with the asymptotic estimates obtained in [5] for the accuracy of GL approximations (asserting that, in general, their accuracy degrades with decreasing a fractional order in case of weakly singular solutions and close to the singularity points, see [5, Theorems 3.1, 4.1] for details). **Example 9.2**.: Consider problem (9.1) with the constant coefficients \(\varrho_{0}=1\), \(\varrho_{1}=1/2\), \(\gamma_{1}=1/2\), \(g=0\), while the remaining coefficients being as in Example 9.1. As for the function \(f\), we test here two options: * \(f(x,t,u)=0\) (linear problem), * \(f(x,t,u)=xt\cos(u^{2})\). Solutions to this example are drawn in Figures 2 and 3 for different fractional orders \(\nu\) (with \(\nu_{1}=\nu/3\) and \(\mu_{1}=\nu/2\)) and time points \(t\); the steps \(\sigma=h=10^{-3}\) are employed. One can observe in these figures that adding the non-linearity can noticeably change the solution behavior (especially for low values of fractional orders). **Acknowledgments.** The second author was partially supported by the Foundation of The European Federation of Academy of Sciences and Humanities (ALLEA), the Grant EFDS-FL2-08. Figure 2. Solutions to Example 9.2 with \(f(x,t,u)=0\), \(\nu_{1}=\nu/3\), \(\mu_{1}=\nu/2\). Figure 3. Solutions to Example 9.2 with \(f(x,t,u)=xt\cos(u^{2})\), \(\nu_{1}=\nu/3\), \(\mu_{1}=\nu/2\).
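As a quick sanity check on Example 9.1 (not part of the original text), one can verify symbolically that the stated exact solution matches the initial and boundary data of (9.1) with \(\mathfrak{c}_{1}=\mathfrak{c}_{3}=1\), \(\mathfrak{c}_{2}=\mathfrak{c}_{4}=0\) and \(\varphi_{1}=\varphi_{2}=0\); checking the full fractional equation would require reproducing \(g\), so this sketch only covers the side conditions.

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu', positive=True)
u = sp.cos(sp.pi * x) + t**nu / sp.gamma(1 + nu)    # candidate solution of Example 9.1

ux = sp.diff(u, x)                                   # Neumann data: c1 = c3 = 1, c2 = c4 = 0
print(sp.simplify(ux.subs(x, 0)))                    # -> 0, matches phi_1(t) = 0
print(sp.simplify(ux.subs(x, 1)))                    # -> 0, matches phi_2(t) = 0
print(sp.simplify(u.subs(t, 0) - sp.cos(sp.pi * x)))  # -> 0 (nu > 0), matches u_0(x)
```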
2302.01455
Reconsidering Fascicles in Soft Pneumatic Actuator Packs
This paper discusses and contests the claims of ``Soft Pneumatic Actuator Fascicles for High Force and Reliability'', a research article which was originally published in the March 2017 issue of the journal Soft Robotics. The original paper claims that the summed forces of multiple thin-walled extending McKibben muscles are greater than a volumetrically equivalent actuator of the same length at the same pressure. The original paper also claims that the purported benefit becomes more pronounced as the number of smaller actuators is increased. Using reasonable assumptions, the analysis of this paper shows that the claims of the original paper violate the law of conservation of energy. This paper also identifies errors in the original methodology that may have led to the erroneous conclusions of the original paper. The goal of this paper is to correct the record and to provide a more accurate framework for considering fascicles used in soft pneumatic actuator packs.
Wyatt Felt
2023-02-02T22:56:25Z
http://arxiv.org/abs/2302.01455v1
# Reconsidering Fascicles in Soft Pneumatic Actuator Packs

###### Abstract

This paper discusses and contests the claims of "Soft Pneumatic Actuator Fascicles for High Force and Reliability," a research article which was originally published in the March 2017 issue of the journal _Soft Robotics_. The original paper claims that the summed forces of multiple thin-walled extending McKibben muscles are greater than a volumetrically equivalent actuator of the same length at the same pressure. The original paper also claims that the purported benefit becomes more pronounced as the number of smaller actuators is increased. Using reasonable assumptions, the analysis of this paper shows that the claims of the original paper violate the law of conservation of energy. This paper also identifies errors in the original methodology that may have led to the erroneous conclusions of the original paper. The goal of this paper is to correct the record and to provide a more accurate framework for considering fascicles used in soft pneumatic actuator packs.

## I Introduction

The March 2017 issue of the journal _Soft Robotics_ included a research article [1] that considered the use of multiple Soft Pneumatic Actuators (SPAs) in parallel combinations. While the original paper is interesting in many ways, the central claims of the paper appear to be based on faulty analysis. The present paper seeks to provide a more thorough analysis, to correct some of the main conclusions of the original, and, thereby, to provide a sound theoretical basis for the design of pneumatic actuator packs with multiple individual actuators working in parallel.

The original paper [1] has several interesting contributions. It explores combining extending McKibben muscle actuators together to sum their forces. Each individual actuator is referred to as a "fascicle" and together they form a "pack" (Fig. 1). The pack configuration is useful to achieve high forces without increasing the size of the individual actuators. By using four actuators, for example, the force output of the pack is quadruple that of the individual fascicles. The authors also show how the relatively small individual actuators can be arranged in patterns that make them less bulky. For instance, arranging the actuators in a row creates a low-profile pack that could be worn under clothing. This flat pack is certainly less cumbersome than a single actuator with a larger diameter.

Though [1] considers _extending_ McKibben muscles, others have looked at the effect of using multiple, thin _contracting_ McKibben muscles. Suzumori's group, for example, has been experimenting with bundles of thin contracting McKibben muscles since at least 2011 [2, 3]. Their work shows that, due to the large diameter change in contracting McKibben muscles, a "multifilament" muscle can exhibit a greater contraction ratio than the individual actuators [3]. Others have looked at how the principles of "variable recruitment" (which is common in mammalian muscle) can be applied to groups of contracting McKibben muscles to achieve gains in efficiency [4, 5, 6, 7, 8].

Though not without merit, the original paper, [1], presents specious claims about the benefits of fascicles. The central claim could be summarized as follows: _By using multiple thin-walled soft pneumatic actuator "fascicles," it is possible to achieve higher forces in a pack than a single larger actuator with an equivalent length and volume at the same pressure could produce_.
The original paper further claims that the purported improvement in volumetric force-density increases with increasing numbers of fascicles. These counter-intuitive (and thus attractive) claims appear to be based on a faulty analysis of the McKibben muscle force equations. The analysis presented in this work demonstrates that the claims in the original paper are not true. The primary contribution of this paper is to provide a more thorough and correct analysis of the models incorrectly used in [1]. It appears that errors in the analysis of these models led to the erroneous claims of the original paper.

## II Analysis

### _Original claim_

The original paper claims that the sum of forces from the individual actuator "fascicles" in a pack is greater than the force from "an equivalent single SPA of comparable volume." The original paper further predicts that this benefit would become more pronounced as the number of fascicles in the pack is increased.

Fig. 1: The original paper makes the erroneous claim that the force output from a pack of \(n\) individual actuators exceeds that of a single volumetrically comparable actuator. The paper further claims that this benefit becomes more pronounced as \(n\) increases.

Examples of the claims in the paper include: From the abstract, "Experimental measurements show an SPA pack to generate over 112 N linear force, while the model indicates the benefit of parallel actuator grouping over a geometrically equivalent single SPA scale as an increasing function of the number of individual actuators in the group. For a module of four actuators, a 23 % increase in force production over a volumetrically equivalent single SPA is predicted and validated, while further gains appear possible up to 50 %." From the methods, "By grouping SPAs in parallel, actuator packs can be formed, which outperform individual SPAs of comparable size. Not only can higher force output be achieved with this configuration but also the benefit mutually increases with the number of constitutive actuator units utilized in the pack. This is to say that the more unit actuators are combined in parallel, the greater the gain in force output can be attained. This improvement in spatial-force efficiency allows for either stronger actuation in a given physical space or more compact actuation for a given required force." And from the discussion (emphasis original), "It is shown in the Results section that _a fascicle arrangement of SPAs is capable of generating more linear force than an equivalent single SPA of comparable volume_. Moreover, the gain in force production continues to increase within practical bounds as the number of units in a parallel configured SPA pack is increased. This model-based finding indicates that high-performance SPA design favors multiplicity and can be exploited as a new actuator design strategy, which takes into consideration the effect of multiple unit actuators in parallel coordination as well as the individual unit actuator design parameters to achieve desired performance." With regard to the word "validated" in the abstract, [1] reports no experimental comparison between a "volumetrically equivalent single SPA" and a "module of [n] actuators." The force output of a four-actuator pack _is_ tested experimentally, but the force data from this pack are not compared against forces from a volumetrically equivalent actuator.
The predicted "23 % increase in force [from a four actuator pack compared to a single equivalent actuator]" is "validated" only against an erroneous prediction from the model. The erroneous claims of [1] have been repeated in subsequent publications. For example, [9] states "[The authors of [1]] have proved that the arrangement of multiple smaller soft pneumatic actuators produces larger output forces compared to one soft actuator with the same total volume." Because the original paper uses only a simple model to establish its claims, this paper uses only the same simple model (correctly analyzed) to refute it. This paper presents no challenge to the experimental data of the original paper, only to the paper's untested predictions and conclusions. ### _Actuator model_ This work seeks to mirror the assumptions of the original paper insofar as possible without duplicating errors. The model used in [1] was presented by Chou and Hannaford [10] and is common in the literature. The model assumes that the McKibben muscle is cylindrical and that its kinematics are governed by an inextensible fiber braid. The outer dimensions of the actuator are given by the axial length \(L\) and diameter \(D\) of the braid \[L =b\cos\theta \tag{1}\] \[D =\frac{b\sin\theta}{N\pi} \tag{2}\] where \(b\) is the unwound length of the fibers in the braid, \(N\) is the number of times they circle the axis and \(\theta\) is a measure of their angle with respect to the long axis. The diameter \(D\) can also be expressed as a fraction of \(D_{0}\), where \(D_{0}=\frac{b}{N\pi}\) is the theoretical diameter of the fibers if the braid angle were 90 \({}^{\circ}\) (i.e. \(\theta=\pi/2\)) \[D=D_{0}\sin\theta. \tag{3}\] The values of \(D_{0}\), \(N\) and \(b\) are determined by the fabrication of the actuator. Unlike the length \(L\), diameter \(D\) and braid angle \(\theta\), they do not change as the actuator creates motion. Neglecting the wall-thickness of the actuator (thin-walled approximation), the force output \(F_{\text{thin}}\) of the actuator is approximated in [10] as \[F_{\text{thin}}=\frac{\pi P^{\prime}}{4}{D_{0}}^{2}\left(3\cos^{2}(\theta)-1\right) \tag{4}\] where P' is the internal gauge pressure of the actuator. This equation is presented in [10] for _contractile_ McKibben muscles (i.e. \(\theta<54.7^{\circ}\)). Accordingly, a positive value of \(F_{\text{thin}}\) corresponds to a tensile force. The _extending_ McKibben muscles considered in this work and in [1] thus have braid angles larger than 54.7 \({}^{\circ}\) and correspondingly negative values of \(F\). Nevertheless, when a force is said to be "higher" or have an "increase" compared to another, it is understood that it is larger in magnitude. The analysis in [1] is based on a form of the equation in [10] that takes into account the thickness \(t_{k}\) of the elastomeric actuator walls. For clarity, the force output predicted from this approximation is designated \(F_{\text{thick}}\) here. In [10] it has the following form \[F_{\text{thick}}= \frac{\pi P^{\prime}}{4}{D_{0}}^{2}\left(3\cos^{2}(\theta)-1\right) \tag{5}\] \[+\pi P^{\prime}\left(D_{0}t_{k}\left(2\sin\theta-\frac{1}{\sin \theta}\right)-{t_{k}}^{2}\right).\] This approximation takes into account the volumetric effect of the wall-thickness but not its elastic forces that resist the actuator motion. 
When presented in [1], the authors use the identity of \(D_{0}\) to rewrite Eq. (5) as follows \[F_{\text{thick}}= \frac{b^{2}P^{\prime}}{4\pi N^{2}}\left(3\cos^{2}(\theta)-1\right) \tag{6}\] \[+\pi P^{\prime}\left(\frac{b}{N\pi}t_{k}\left(2\sin\theta-\frac{1}{\sin\theta}\right)-{t_{k}}^{2}\right).\] They also provide a valid expression for the fiber length \(b\) based on the Pythagorean theorem and the circumference of the cylinder \[b=\sqrt{L^{2}+\left(D\pi N\right)^{2}}. \tag{7}\]

### _Individual actuator parameters_

The individual actuator parameters presented in the original paper are not self-consistent. That is, they do not satisfy the model equations. This inconsistency is corrected in this work by identifying the value of \(\theta\) that preserves the consistency of the rest of the parameters. The parameters of the individual actuators given in the original paper are listed in Table I. These values are not self-consistent. That is, using the listed parameters to calculate \(b\) results in different values from each of the three equations, Eq. (1), Eq. (2) and Eq. (7). Assuming that the easy-to-measure values of \(L\), \(D\) and \(N\) are correct, this work uses Eq. (7) to calculate the value of \(b\) as 866.7 mm. This value of \(b\) results in the same value of \(\theta\) from both Eq. (1) and Eq. (2), \(\theta=\) 80.4 \({}^{\circ}\). The resulting set of self-consistent parameters is listed in Table II. The original paper's parameters do not result in a physically meaningful braid description. This work uses the angle in Table II to avoid duplicating this small error in the original work.

### _Generating "equivalent" actuator parameters_

A more serious problem in the original paper is found in the prescribed methods for generating the parameters of the "equivalent" actuator. Following the methodology described in the original paper results in another inconsistent parameter set for the equivalent actuator. By failing to adjust the number of fiber turns with the change in diameter, the original paper appears to consider a geometrically impossible equivalent actuator. The original paper claims that a pack with many fascicles can produce more force at the same pressure than an "equivalent" actuator of "comparable volume." The volume is preserved by using an "equivalent" actuator with the same length as the pack but with a cross-sectional area that has been scaled to match the total cross-sectional area of the actuators in the pack. The original paper states, "For a cross-sectional area, \(A\), of an individual unit actuator, the "equivalent SPA" is defined to have equal volume to a pack of \(n\) unit actuators, by defining its cross-sectional area as \(A_{\text{eq}}=nA\). This simple relationship for the equivalent area defines the necessary diameter for an equivalent single SPA. All other actuator parameters, as shown in Table I, are held equal." Note that, in this work, the parameters of the "individual unit actuator" and "equivalent" actuators are designated with the subscripts "ind" and "eq" respectively. The problem with the methodology described in the original paper is the holding of "all other actuator parameters" equal while \(D\) is scaled. That is, problems arise when \(L_{\text{ind}}=L_{\text{eq}}\), \(\theta_{\text{ind}}=\theta_{\text{eq}}\), \(N_{\text{ind}}=N_{\text{eq}}\) and \(D_{\text{ind}}\neq D_{\text{eq}}\). Attempting to generate a set of equivalent braid parameters in this way results in a set that is not self-consistent.
The number of turns \(N\) must be defined differently if the diameter of the actuator is different and the other parameters are kept constant. It is possible that this error in calculation led the authors of the original paper to their erroneous conclusions. It is reasonable to hold the length and braid angles constant (i.e. \(L_{\text{ind}}=L_{\text{eq}}\) and \(\theta_{\text{ind}}=\theta_{\text{eq}}\)). Indeed, for the volume to be "comparable" with equivalent cross-sectional areas, the lengths must be equal. Similarly, the braid angle, which changes over the actuation stroke, largely dictates the force output of a McKibben muscle actuator [10] (see also Eq. (4) and Eq. (5) in this work). Two McKibben muscle actuators with the same outer dimensions but different braid angles would produce different forces for the same pressure. To compare apples to apples, as they say, the braid angle must be the same in the equivalent actuator. With these two parameters equal, Eq. (1) makes it clear that the length \(b\) of the fibers in the individual and equivalent actuators must also be equal \[\text{Eq.~{}(1): }L =b\cos\theta \tag{8}\] \[\text{Assuming~{}}L_{\text{ind}} =L_{\text{eq}}\] \[b_{\text{ind}}\cos\theta_{\text{ind}} =b_{\text{eq}}\cos\theta_{\text{eq}}\] \[b_{\text{ind}}\cos\theta_{\text{ind}} =b_{\text{eq}}\cos\theta_{\text{ind}}\] \[b_{\text{ind}} =b_{\text{eq}}.\] If \(b_{\text{ind}}=b_{\text{eq}}\) and \(\theta_{\text{ind}}=\theta_{\text{eq}}\), then the number of fiber wraps in the two actuators is equal (i.e. \(N_{\text{ind}}=N_{\text{eq}}\)) _if and only if_ the diameters are equal (i.e. \(D_{\text{ind}}=D_{\text{eq}}\)) \[\text{Eq.~{}(2): }D =\frac{b\sin\theta}{N\pi} \tag{9}\] \[\text{Assuming~{}}D_{\text{ind}} \neq D_{\text{eq}}\] \[\frac{b_{\text{ind}}\sin\theta_{\text{ind}}}{N_{\text{ind}}\pi} \neq\frac{b_{\text{eq}}\sin\theta_{\text{eq}}}{N_{\text{eq}}\pi}\] \[\frac{b_{\text{ind}}\sin\theta_{\text{ind}}}{N_{\text{ind}}\pi} \neq\frac{b_{\text{ind}}\sin\theta_{\text{ind}}}{N_{\text{eq}}\pi}\] \[N_{\text{ind}} \neq N_{\text{eq}}.\] Thus, if \(L\) and \(\theta\) are kept the same, it is unreasonable to hold \(N\) constant while scaling the diameter \(D\) (as the original paper does). This relationship between \(D\) and \(N\) can be easily verified by wrapping and taping two equal-length strings around two respective household cylinders of different diameters (Fig. 2). If the strings are wrapped around the same axial length of the cylinders, their angle with respect to the axis will be approximately equal. If the diameters of the cylinders are different, the number of turns around the cylinder will be different.

### _Considering the claims without wall-thickness_

In this work, the original paper's problematic claims are first discussed without taking the elastomeric wall-thickness into account. The neglect of the wall-thickness is a reasonable approximation for actuators whose walls are thin relative to their diameters. To indicate that the wall-thickness has been neglected, the variables are marked with the subscript "thin." Neglecting the wall-thickness, the external cross-sectional area \(A_{\text{eq}}\) of the equivalent actuator is \(n\)-times greater than that of the individual actuator \(A_{\text{ind}}\).
The outer diameter \(D_{\text{eq}}\) of the equivalent actuator is the product of the square-root of the number of actuator fascicles \(n\) and the outer diameter of the individual actuator \(D_{\text{ind}}\) \[\begin{split} A_{\text{eq}}&=nA_{\text{ind}}\\ (\pi/4)\,{D_{\text{eq}}}^{2}&=n\left(\pi/4\right){D_ {\text{ind}}}^{2}\\ D_{\text{eq}}&=\sqrt{n}D_{\text{ind}}.\end{split} \tag{10}\] The maximum diameter \(D_{0}\) of the equivalent actuator scales in the same way as \(D\). This can be seen by examining Eq. (3) while considering that the braid angles between the two actuators are the same \[D_{0,\text{eq}}=\frac{D_{\text{eq}}}{D_{\text{ind}}}D_{0,\text{ind}}. \tag{11}\] By Eq. (10) and Eq. (11), the maximum diameter of the equivalent actuator \(D_{0,\text{eq}}\) is similarly scaled from that of the individual actuator \(D_{0,\text{ind}}\) \[D_{0,\text{eq}}=\sqrt{n}D_{0,\text{ind}}. \tag{12}\] #### Iii-E1 Method 1, Force Equation The total force output \(F_{\text{thin,pack}}\) from a "pack" of identical individual fascicles in parallel is the sum of their individual forces given by Eq. (4) (recall that the lowercase "\(n\)" is the count of the fascicles in the pack) \[\begin{split} F_{\text{thin,pack}}&=nF_{\text{ thin,ind}}\\ F_{\text{thin,pack}}&=n\frac{\pi(D_{0,\text{ind}}) ^{2}P^{\prime}}{4}\left(3\cos^{2}(\theta_{\text{ind}})-1\right).\end{split} \tag{13}\] The force output from the single equivalent actuator is also given by Eq. (4) \[F_{\text{thin,eq}}=\frac{\pi(D_{0,\text{eq}})^{2}P^{\prime}}{4}\left(3\cos^{2 }(\theta_{\text{eq}})-1\right). \tag{14}\] Substituting for \(D_{0,\text{eq}}\) from Eq. (12) and considering that \(\theta_{\text{eq}}=\theta_{\text{ind}}\), the force from the equivalent actuator is \[\begin{split} F_{\text{thin,eq}}&=\frac{\pi(\sqrt{n}D _{0,\text{ind}})^{2}P^{\prime}}{4}\left(3\cos^{2}(\theta_{\text{ind}})-1 \right)\\ F_{\text{thin,eq}}&=n\frac{\pi(D_{0,\text{ind}}) ^{2}P^{\prime}}{4}\left(3\cos^{2}(\theta_{\text{ind}})-1\right).\end{split} \tag{15}\] Eq. (15) is equivalent to Eq. (13). Thus, when neglecting the wall-thickness, the force from an equivalent actuator at the same pressure is predicted to be identical to the force from the actuator pack, regardless of the number of fascicles used \[F_{\text{thin,eq}}=F_{\text{thin,pack}}. \tag{16}\] This contradicts the claims of the original paper. Fig. 2: The original paper evaluates a larger diameter actuator while holding the length, braid angle and the number of turns of the fiber equal. This is not possible in a physical actuator. The number of fiber turns must change with the diameter. This can be demonstrated with a simple household experiment. Shown are two equal-length strings wrapped around the same axial-length of cylinders with different diameters. The angles of the fibers with respect to the cylinder axis are approximately equal. It is clear, however, that the fiber wraps around the thin cylinder more times than the larger-diameter cylinder. #### Iii-B2 Method 2, Conservation of Energy The law of conservation of energy can also be used to evaluate the claims. Consider an ideal actuator without losses or energy storage. The input energy \(E_{\text{in}}\) to the actuator is equal to the output energy \(E_{\text{out}}\) \[\begin{split} E_{\text{in}}&=E_{\text{out}}\\ P^{\prime}\Delta V&=F_{\text{avg}}\Delta L.\end{split} \tag{17}\] The input is flow energy, that is, the product of the gauge pressure in the actuator \(P^{\prime}\) (considered to be constant) and the change in volume \(\Delta V\). 
The output energy is mechanical work, that is, the product of the average extension force \(F_{\text{avg}}\) and the distance over which that force is applied (written as a change of length \(\Delta L\)) [11]. Considering two closed-volume fluid-powered actuators actuated by the same constant pressure \(P^{\prime}\) and acting over the same distance \(\Delta L\), the average force of one actuator can only exceed the other if it undergoes a larger change in volume \(\Delta V\). For the claims of the original paper to be valid, the sum of the volume changes of the individual actuators would need to be larger than the volume change of the equivalent actuator. This, however, is not the case. To illustrate this, consider an extending McKibben muscle at an initial state before extension (subscript "1") and final state ("2") after extension. The change of volume \(\Delta V\), with the same assumptions as the force model, is \[\begin{split}\Delta V&=V_{2}-V_{1}\\ &=\frac{\pi}{4}\left({D_{2}}^{2}{L_{2}}-{D_{1}}^{2}{L_{1}}\right). \end{split} \tag{18}\] Though the actuator extends axially, the value of \(D_{0}\) remains the same. Accordingly, Eq. (3) can be re-arranged to find the relationship between the relaxed diameter \(D_{1}\) and the post-extension diameter \(D_{2}\). \(\theta_{1}\) and \(\theta_{2}\) are the respective fiber angles in the relaxed and extended states. \[\begin{split} D_{0}=\frac{D_{1}}{\sin\theta_{1}}&= \frac{D_{2}}{\sin\theta_{2}}\\ D_{2}&=\frac{\sin\theta_{2}}{\sin\theta_{1}}D_{1}\\ D_{2}&=\gamma D_{1}\\ \gamma&=\frac{\sin\theta_{2}}{\sin\theta_{1}}.\end{split} \tag{19}\] When the actuator extends from \(L_{1}\) to \(L_{2}\), \(D_{2}\) is scaled down from the initial diameter \(D_{1}\) by a factor of \(\gamma\). The value of \(\gamma\) depends on \(\theta_{1}\) and \(\theta_{2}\) which, in turn, can be found with length of the actuator \(L\) at the respective angles and the constant fiber-length \(b\) (Eq. (1)). Because the individual and equivalent actuators in this analysis have the same values for \(L_{1}\), \(L_{2}\) and \(b\), they share the same values of \(\theta_{1}\), \(\theta_{2}\) and \(\gamma\). With Eq. (19), the change in volume from Eq. (18) is \[\begin{split}\Delta V&=\frac{\pi}{4}\left({D_{2}} ^{2}{L_{2}}-{D_{1}}^{2}{L_{1}}\right)\\ &=\frac{\pi}{4}\left({\left(\gamma{D_{1}}\right)}^{2}{L_{2}}-{D_ {1}}^{2}{L_{1}}\right)\\ &=\frac{\pi}{4}{D_{1}}^{2}\left({\gamma}^{2}{L_{2}}-{L_{1}} \right).\end{split} \tag{20}\] The total change in volume for the pack of \(n\) actuators \(\Delta V_{\text{pack}}\) is \(n\)-times the change of the individual actuators \(\Delta V_{\text{ind}}\) \[\begin{split}\Delta V_{\text{pack}}&=n\Delta V_{ \text{ind}}\\ &=n\frac{\pi}{4}{D_{1,\text{ind}}}^{2}\left({\gamma}^{2}{L_{2}}- {L_{1}}\right).\end{split} \tag{21}\] By Eq. (10), the change in volume for the equivalent actuator \(\Delta V_{\text{eq}}\) is \[\begin{split}\Delta V_{\text{eq}}&=\frac{\pi}{4}{D_ {1,\text{eq}}}^{2}\left({\gamma}^{2}{L_{2}}-{L_{1}}\right)\\ &=\frac{\pi}{4}{\left(\sqrt{n}{D_{1,\text{ind}}}\right)}^{2}\left( {\gamma}^{2}{L_{2}}-{L_{1}}\right)\\ &=n\frac{\pi}{4}{D_{1,\text{ind}}}^{2}\left({\gamma}^{2}{L_{2}}-{L _{1}}\right).\end{split} \tag{22}\] The change volume is the same \[\Delta V_{\text{pack}}=\Delta V_{\text{eq}}. \tag{23}\] By Eq. 
(17), which assumes ideal actuators, the average force from a pack of individual actuators \(F_{\text{avg,pack}}\) is predicted to be equal to the average force from an equivalent actuator \(F_{\text{avg,eq}}\) at the same pressure, undergoing the same length change \[F_{\text{avg,pack}}=F_{\text{avg,eq}}. \tag{24}\] This is true regardless of how many actuators are used to form the pack. Accordingly, it contradicts the claims of the original paper.

### _Assumptions about wall-thickness_

It has been shown, without considering the wall-thickness, that there is no force benefit from increasing the number of fascicles (compared to an equivalent actuator). Both the force equation used by the original work and the conservation of energy contradict the claims of the original work. This work now considers the wall-thickness and shows, under reasonable assumptions, that the addition of wall-thickness does not add support for the claims of the original paper. One thing that is not explored in the original work is the effect of the elastomeric bladder inside the McKibben muscle. The walls of the bladder create non-trivial elastic forces that resist actuator motion and reduce the output forces. These elastic forces were not considered in the original paper and they are not considered here. What _is_ considered in the original paper is the effect of the wall-thickness \(t_{k}\) on the internal volume of the actuator. Because the force output of the actuator is related to the rate of volume change, this thickness term \(t_{k}\) also appears in the force equation Eq. (5). Its effect is to reduce the force-output of the actuator. The original paper does not discuss the effect of the wall-thickness on the definition of the "equivalent" actuator. An increased wall-thickness reduces the internal cross-sectional area of the actuator. The original paper does not specify whether the cross-sectional area used to define equivalence is the internal or external cross-sectional area. However, the original paper _does_ state that the equivalent actuator has a "comparable volume" to the total volume of the individual actuators in the pack. Accordingly, this work assumes that the _external_ cross-sectional area of the equivalent actuator is equal to the total _external_ cross-sectional area of the individual actuators in the pack. This assumption makes the diameter relationship between the equivalent and individual actuators identical to the relationship found in Eq. (10) (i.e. \(D_{\text{eq}}=\sqrt{n}D_{\text{ind}}\)). The fraction of the external diameter \(D\) made up by the thickness of each wall \(t_{k}\) is designated \(\hat{t}_{k}\) \[0\leq\hat{t}_{k} \leq 0.5 \tag{25}\] \[\hat{t}_{k} =\frac{t_{k}}{D}.\] This quantity is bounded between zero (no wall thickness) and one half (no hollow interior). For the sake of simplicity, this paper assumes that the relative wall-thickness of the equivalent actuator \(\hat{t}_{k,\text{eq}}\) is equal to the relative wall-thickness of the individual actuator \(\hat{t}_{k,\text{ind}}\). That is, if the thickness of the original actuator walls made up 10 % of the overall diameter, the walls of the equivalent actuator would make up 10 % of the equivalent actuator's overall diameter \[\hat{t}_{k,\text{ind}} =\hat{t}_{k,\text{eq}} \tag{26}\] \[\frac{t_{k,\text{ind}}}{D_{\text{ind}}} =\frac{t_{k,\text{eq}}}{D_{\text{eq}}}.\] When \(\hat{t}_{k,\text{ind}}=\hat{t}_{k,\text{eq}}\), the equivalent actuator has the same total volume of elastomeric material as the pack of individual actuators.
This assumed relationship is necessary because the original paper does not discuss the wall-thickness of the equivalent actuator.

### _The effect of wall-thickness_

Eq. (5) can be manipulated algebraically to provide insight into the volumetric effect of the thickness on the actuator force output. The equation is first normalized by the gauge pressure \(P^{\prime}\) and the expression in (3) is substituted for \(D_{0}\) \[\frac{F_{\text{thick}}}{P^{\prime}} =\frac{\pi}{4}D^{2}\left(2\csc^{2}\theta-3\right) \tag{27}\] \[+\pi\left(Dt_{k}\left(2-\csc^{2}\theta\right)-{t_{k}}^{2}\right).\] See the appendix for the derivation of Eq. (27). Note that this form of the equation needs to be used with caution. The form in Eq. (5) is parameterized by \(D_{0}\), which does not change during actuation. The form in Eq. (27) is parameterized by the instantaneous external diameter \(D\) which is itself a function of the braid angle \(\theta\) and changes over the course of the actuation. The expression in Eq. (27) can be further normalized by the external cross-sectional area \(A\) of the actuator defined by \[A=\frac{\pi}{4}D^{2} \tag{28}\] \[\frac{F_{\text{thick}}}{P^{\prime}A}=\left(2\csc^{2}\theta-3\right)+4\left(2-\csc^{2}\theta\right)\frac{t_{k}}{D}-4\left(\frac{t_{k}}{D}\right)^{2}\!\!. \tag{29}\] The normalized force in Eq. (29) is designated \(\hat{F}_{\text{thick}}\), a function of \(\theta\) and the relative wall-thickness \(\hat{t}_{k}\) \[\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k}\right)=\left(-4\right)\hat{t}_{k}^{2}+\left(8-4\csc^{2}\theta\right)\hat{t}_{k}+\left(2\csc^{2}\theta-3\right). \tag{30}\] This function is plotted in Fig. 3. The predicted force output of an actuator at a given diameter is the product of the function in Eq. (30) with the internal gauge pressure \(P^{\prime}\) and the external cross-sectional area \(A\) \[F_{\text{thick}}=\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k}\right)P^{\prime}A. \tag{31}\]

Fig. 3: Shown is the relationship between the normalized extension force \(\hat{F}_{\text{thick}}\), the braid angle \(\theta\) and the relative elastomeric wall-thickness \(\hat{t}_{k}\). The force shown is normalized by pressure and the current external cross-sectional area of the actuator. This idealized extension force is highest at the theoretical braid angle of 90 \({}^{\circ}\) and decreases with the braid angle. An extending McKibben muscle, such as those considered in the original paper, is fabricated with some initial braid angle greater than 54.7 \({}^{\circ}\). This angle decreases as the actuator extends. When there is no thickness, the maximum normalized force is one (equivalent to a piston with the same area). Increasing the relative thickness \(\hat{t}_{k}\) decreases the maximum extension force and increases the angle for which the force is predicted to reach zero.

### _Considering the claims with wall-thickness_

Eq. (31) allows one to consider the claims of the original paper while considering the thickness of the elastomeric walls. The force \(F_{\text{thick,pack}}\) from a pack of \(n\) individual actuators is given by \[F_{\text{thick,pack}} =nF_{\text{thick,ind}} \tag{32}\] \[=n\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k,\text{ind}}\right)P^{\prime}A_{\text{ind}}.\] The force \(F_{\text{thick,eq}}\) from an actuator with an equivalent external cross-sectional area (\(A_{\text{eq}}=nA_{\text{ind}}\)) and the same relative wall-thickness (\(\hat{t}_{k,\text{ind}}=\hat{t}_{k,\text{eq}}\)) is given by \[F_{\text{thick,eq}} =\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k,\text{eq}}\right)P^{\prime}A_{\text{eq}} \tag{33}\] \[=\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k,\text{ind}}\right)P^{\prime}(nA_{\text{ind}})\] \[=n\hat{F}_{\text{thick}}\!\left(\theta,\hat{t}_{k,\text{ind}}\right)P^{\prime}A_{\text{ind}}.\] Eq. (32) is equivalent to Eq. (33). Thus, even when considering the wall-thickness, the force from an equivalent actuator at the same pressure is predicted to be identical to the force from the actuator pack, regardless of the number of fascicles used \[F_{\text{thick,eq}}=F_{\text{thick,pack}}. \tag{34}\] This contradicts the claims of the original paper.

### _Numerical verification_

As a final verification, the claims are considered through numerical experimentation. The force from an SPA pack made up from individual actuators with the parameters in Table II is calculated for a wall-thickness of 1 mm. This is compared to the force from two equivalent actuators, one with the same _relative_ elastomeric wall-thickness and one with the same _absolute_ wall-thickness. This is repeated for various values of \(n\) (i.e. various counts of individual actuators in the pack). All the calculated forces come from Eq. (31), and MATLAB code for the verification is included in the appendix. The results are listed in Table III. As expected, the force from the pack is identical to that from the equivalent actuator with the same relative wall-thickness. This does not change with increasing numbers of individual actuators. The equivalent actuator with the same _absolute_ wall-thickness of 1 mm shows the opposite trend to what is claimed by the original paper. That is, it creates a higher magnitude force than the pack and this disparity grows with an increasing \(n\).

## III Conclusion

This paper demonstrates that the central claims of the popular original paper, "Soft Pneumatic Actuator Fascicles for High Force and Reliability," [1] are not true. The analysis of this paper refutes the original paper's claim that parallel combinations of extending McKibben muscles can produce more force-per-unit-volume than a single actuator of the same kind. It also refutes the original paper's claim that this benefit becomes more pronounced as the number of actuators in the pack increases. These claims have been refuted analytically, with the original paper's force model and with the law of conservation of energy. These claims have also been refuted by direct numerical calculation.
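For readers without access to MATLAB, the following Python sketch re-implements the comparison of Section II-H using Eq. (30). It is an independent illustration, not the appendix code: the actuator numbers below are placeholders rather than the Table II values, so only the trends of Table III should be expected to reproduce (the pack force equals the same-relative-thickness equivalent exactly, while the same-absolute-thickness equivalent grows stronger with \(n\)).

```python
import numpy as np

def f_hat_thick(theta, t_hat):
    # Eq. (30): normalized force; multiply by P'*A for the dimensional force
    csc2 = 1.0 / np.sin(theta) ** 2
    return -4.0 * t_hat**2 + (8.0 - 4.0 * csc2) * t_hat + (2.0 * csc2 - 3.0)

theta = np.deg2rad(80.4)          # braid angle quoted in Table II
D_ind, tk, P = 0.028, 1e-3, 1e5   # placeholder diameter [m], wall [m], pressure [Pa]
A_ind = np.pi / 4.0 * D_ind**2

print(" n    F_pack      F_eq(rel. t)  F_eq(abs. t)")
for n in (1, 2, 4, 8, 16):
    D_eq, A_eq = np.sqrt(n) * D_ind, n * A_ind           # Eq. (10)
    F_pack = n * f_hat_thick(theta, tk / D_ind) * P * A_ind
    F_rel = f_hat_thick(theta, tk / D_ind) * P * A_eq    # same relative thickness
    F_abs = f_hat_thick(theta, tk / D_eq) * P * A_eq     # same absolute thickness
    print(f"{n:2d}  {F_pack:11.2f}  {F_rel:11.2f}  {F_abs:11.2f}")
```

Note that `F_pack == F_rel` holds identically for every \(n\), which is Eq. (34) in numerical form.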
2308.14765
A note on Majorana representation of quantum states
By the Majorana representation, for any $d > 1$ there is a one-one correspondence between a quantum state of dimension $d$ and $d-1$ qubits represented as $d-1$ points in the Bloch sphere. Using the theory of symmetry class of tensors, we present a simple scheme for constructing $d-1$ points on the Bloch sphere and the corresponding $d-1$ qubits representing a $d$-dimensional quantum state. Additionally, we demonstrate how the inner product of two $d$-dimensional quantum states can be expressed as a permanent of a matrix related to their $(d-1)$-qubit state representations. Extension of the result to mixed states is also considered.
Chi-Kwong Li, Mikio Nakahara
2023-08-27T13:29:40Z
http://arxiv.org/abs/2308.14765v4
###### Abstract

We study the Majorana representation of quantum states using symmetry class of tensors. We present a simple method to construct \(d-1\) points on the Bloch sphere and their corresponding \(d-1\) qubits, effectively representing a \(d\)-dimensional quantum state. Additionally, we demonstrate how the inner product of two \(d\)-dimensional quantum states can be expressed as a permanent of a matrix related to their \((d-1)\)-qubit state representations. Furthermore, we discuss the implications of this result on the convexity of a specific decomposable numerical range.

A note on Majorana representation of quantum states

Chi-Kwong Li, Department of Mathematics, College of William & Mary, Williamsburg, VA 23187, USA. Email: [email protected]

Mikio Nakahara, IQM Quantum Computers, Espoo 02150, Finland. Email: [email protected]

Footnote 1: The author’s affiliation with IQM Quantum Computers is provided for identification purposes only and it is not intended to convey or imply IQM’s concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author.

**In memory of Professor Kalyanapuram Rangachari Parthasarthy.**

AMS Classification. 15A90, 15A60.

Keywords. Quantum states, Majorana representation, principal character, permanent.

## 1 Introduction

Quantum states, represented by unit vectors \(\mathbf{a}\in\mathbb{C}^{d}\), form a fundamental aspect of \(d\)-dimensional systems. These vectors are identified up to a phase factor, i.e., \(\mathbf{a}\) and \(e^{it}\mathbf{a}\) are identified for any \(t\in[0,2\pi)\). In the case of \(d=2\), quantum states are commonly referred to as qubits. For qubits, there exists a one-to-one correspondence between the state \(\mathbf{a}=(a_{0},a_{1})^{t}\) with \(|a_{0}|^{2}+|a_{1}|^{2}=1\) and a point \((c_{x},c_{y},c_{z})\) on the Bloch sphere \[\mathbf{B}=\{(c_{x},c_{y},c_{z})\in\mathbb{R}^{3}:\ c_{x}^{2}+c_{y}^{2}+c_{z}^{2}=1\},\] where \(\mathbf{a}\mathbf{a}^{*}\) is related to \((c_{x},c_{y},c_{z})\) by \(\frac{1}{2}\begin{pmatrix}1+c_{z}&c_{x}-ic_{y}\\ c_{x}+ic_{y}&1-c_{z}\end{pmatrix}\). The correspondence is established using \((c_{x},c_{y},c_{z})=(2\Re(\bar{a}_{0}a_{1}),2\Im(\bar{a}_{0}a_{1}),|a_{0}|^{2}-|a_{1}|^{2})\), ensuring that \(c_{x}^{2}+c_{y}^{2}+c_{z}^{2}=1\). Majorana's work [4] proposes a geometric method to represent quantum states \(\mathbf{a}\in\mathbb{C}^{d}\) with \(d>1\) using \(d-1\) qubits. Consequently, a quantum state in \(\mathbb{C}^{d}\) is associated with \(d-1\) points on the Bloch sphere. Moreover, the inner product between quantum states \(\mathbf{a}\) and \(\mathbf{b}\) in \(\mathbb{C}^{d}\) corresponds to the inner product of their respective \((d-1)\) \(\mathbb{C}^{2}\)-vector representations. The Majorana representation provides a visual tool to understand the properties and transformations of quantum states. It allows for direct visualization of qubit rotations, thus proving useful in various domains of quantum information science, such as quantum computation and communication. In this note, we establish the connection between the Majorana representation and the symmetry class of tensors in \(\mathbb{V}^{\otimes(d-1)}\) for \(\mathbb{V}=\mathbb{C}^{2}\) associated with the principal character \(\xi\).
This connection enables easy determination of the \((d-1)\)-qubit representation of a given quantum state \(\mathbf{a}\in\mathbb{C}^{d}\), as well as the computation of the inner product between \(\mathbf{a}\) and \(\mathbf{b}\) in terms of their \((d-1)\)-qubit representations, and vice versa. We will provide straightforward procedures for determining \(v_{1},\ldots,v_{d-1}\in\mathbb{C}^{2}\) associated with a given vector \((a_{0},\ldots,a_{d-1})\in\mathbb{C}^{d}\) in the next section. Additionally, we present a simple formula for the inner product of \(\mathbf{a}\) and \(\mathbf{b}\) in \(\mathbb{C}^{d}\) in terms of their \((d-1)\)-qubit representations. Numerical examples are provided in the last section to illustrate these results.

## 2 Results and Examples

Let us present the following standard setup of a symmetry class of tensors in the \((d-1)\)-fold tensor space \(\mathbb{V}^{\otimes(d-1)}\). In our study we focus on \(\mathbb{V}=\mathbb{C}^{2}\) and the principal character \(\xi\) on the symmetric group \(S_{d-1}\) of degree \(d-1\) such that \(\xi(\sigma)=1\) for all \(\sigma\in S_{d-1}\). Define the symmetrizer on the tensor space \(\mathbb{V}^{\otimes(d-1)}\) by \[T(v_{1}\otimes\cdots\otimes v_{d-1})=\frac{1}{(d-1)!}\sum_{\sigma\in S_{d-1}}\xi(\sigma)v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(d-1)}.\] The vector space \(\mathbb{V}^{(d-1)}_{\xi}=T(\mathbb{V}^{\otimes(d-1)})\) is a subspace of \(\mathbb{V}^{\otimes(d-1)}\) known as the symmetry class of tensors over \(\mathbb{V}\) associated with \(\xi\) on \(S_{d-1}\). The elements in \(\mathbb{V}^{(d-1)}_{\xi}\) of the form \(T(v_{1}\otimes\cdots\otimes v_{d-1})\) are called decomposable tensors and are denoted by \(v_{1}\bullet\cdots\bullet v_{d-1}\), abbreviated as \(v^{\bullet}\) when there is no ambiguity. One may see [2, 5] and also [3] for some general background. For the connection of this setup with boson states, one may see [1].
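The symmetrizer \(T\) is easy to experiment with numerically. The following Python sketch (an illustration, not code from the paper) builds \(T\) explicitly as the average of the factor-permutation operators on \((\mathbb{C}^{2})^{\otimes n}\) and checks two expected properties: \(T\) is a projection (\(T^{2}=T\)), and a decomposable tensor \(v_{1}\bullet\cdots\bullet v_{n}\) is unchanged when the factors are permuted.

```python
import numpy as np
from math import factorial
from itertools import permutations

def symmetrizer(n, dim=2):
    """Average of the n! factor-permutation operators on (C^dim)^{⊗n};
    with the principal character (xi = 1) this is the projection T."""
    shape = (dim,) * n
    N = dim ** n
    T = np.zeros((N, N))
    for sigma in permutations(range(n)):
        P = np.zeros((N, N))
        for idx in range(N):
            digits = np.unravel_index(idx, shape)
            target = tuple(digits[s] for s in sigma)   # permute the factors
            P[np.ravel_multi_index(target, shape), idx] = 1.0
        T += P
    return T / factorial(n)

T = symmetrizer(3)
v = [np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([3.0, 1.0])]
t1 = T @ np.kron(np.kron(v[0], v[1]), v[2])
t2 = T @ np.kron(np.kron(v[2], v[0]), v[1])   # factors in a different order
print(np.allclose(T @ T, T), np.allclose(t1, t2))   # True True
```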
The vectors \(v_{1},\ldots,v_{d-1}\in\mathbb{C}^{2}\) correspond to the decomposable tensor \[v^{\bullet}=v_{1}\bullet\cdots\bullet v_{d-1}=a_{0}f_{0}^{\bullet}+\cdots+a_{ d-1}f_{d-1}^{\bullet} \tag{1}\] with \[a_{j}=\sqrt{{d-1\choose j}}\text{per}(C_{j}^{*}[v_{1}\cdots v_{d-1}])/(d-1)!, \quad j=0,\ldots,d-1,\] where \(C_{j}\in\mathbb{M}_{2,d-1}\) with first \(d-1-j\) columns equal to \(e_{0}\), and the other columns equal to \(e_{1}\), and \(A^{*}\) denotes the conjugate transpose of a complex matrix \(A\). Note that * \(\gamma(v_{1}\bullet\cdots\bullet v_{d-1})=\mu_{1}v_{1}\bullet\cdots\bullet \mu_{d-1}v_{d-1}\) if \(\mu_{1},\ldots,\mu_{d-1},\gamma\in\mathbb{C}\) satisfy \(\mu_{1}\cdots\mu_{d-1}=\gamma\), * \(v_{1}\bullet\cdots\bullet v_{d-1}=v_{\sigma(1)}\bullet\cdots\bullet v_{\sigma (d-1)}\) if \(\sigma\in S_{d-1}\) is a permutation of \((1,\ldots,d-1)\). As a result, suppose \((a_{0},\ldots,a_{d-1})^{t}\in\mathbb{C}^{d}\) is given and we want to find \(v_{1},\ldots,v_{d-1}\in\mathbb{C}^{2}\) to satisfy (1). By (a), we may assume that the first nonzero entry of \((a_{0},\ldots,a_{d-1})\) is \(1\). By (b), if \(v_{1},\ldots,v_{d-1}\in\mathbb{C}^{2}\) corresponds to \((a_{0},\ldots,a_{d-1})^{t}\), then so is \(v_{\sigma(1)},\ldots,v_{\sigma(d-1)}\) for any \(\sigma\in S_{d-1}\). We also need the following fact about the zeros of a polynomial to present our result. Denote by \(E_{k}(\mu_{1},\ldots,\mu_{d-1})\) the \(k\)th symmetric function for \(\mu_{1},\ldots,\mu_{d-1}\) so that \[E_{0}(\mu_{1},\ldots,\mu_{d-1})=1,\quad E_{k}(\mu_{1},\ldots,\mu_{d-1})=\sum_{ 1\leq j_{1}<\cdots<j_{k}\leq d-1}\mu_{j_{1}}\cdots\mu_{j_{k}},\ \ k=1,\ldots,d-1.\] For instance, \(E_{1}(\mu_{1},\ldots,\mu_{d-1})=\mu_{1}+\cdots+\mu_{d-1}\) and \(E_{d-1}(\mu_{1},\ldots,\mu_{d-1})=\mu_{1}\cdots\mu_{d-1}\). Suppose \(g(z)=\sum_{j=0}^{n}c_{j}z^{d-1-j}\) is a complex polynomial with \(c_{0}=1\). Then \(g(z)\) has zeros \(\mu_{1},\ldots,\mu_{d-1}\) (counting multiplicities) if and only if \(c_{j}=(-1)^{j}E_{j}(\mu_{1},\ldots,\mu_{d-1})\) for \(j=1,\ldots,d-1\), **Theorem 2.1**.: _Let \(\mathbb{V}=\mathbb{C}^{2}\) and \(\{f_{0}^{\bullet},\ldots,f_{d-1}^{\bullet}\}\) be the standard orthonormal basis for \(\mathbb{V}_{\xi}^{d-1}\). Suppose \((a_{0},\ldots,a_{d-1})^{t}\in\mathbb{C}^{d}\) is nonzero and \(r\geq 0\) is the smallest integer such that \(a_{r}\neq 0.\) Then \(a_{0}f_{0}^{\bullet}+\cdots+a_{d-1}f_{d-1}^{\bullet}=v_{1}\bullet\cdots \bullet v_{d-1},\) where \(v_{1},\ldots,v_{d-1}\) are constructed as follows._ * _Suppose_ \(r<d-1\)_. Then_ \[v_{j}=(0,1)^{t}\quad\text{ for }j=1,\ldots,r\text{, \quad and \quad}v_{j}=(1,\mu_{j})^{t}\quad\text{ for }j=r+1,\ldots,d-1,\] _such that_ \(\mu_{r},\ldots,\mu_{d-1}\) _are the zeros of the Majorana polynomial_ \[g(z)=\sum_{j=r}^{d-1}(-1)^{j}a_{j}\sqrt{\binom{d-1}{j}}z^{d-1-j}=\sum_{j=0}^{d- 1}(-1)^{j}a_{j}\sqrt{\binom{d-1}{j}}z^{d-1-j}.\] * _Suppose_ \(r=d-1\)_. Then_ \(v_{1}=\cdots=v_{r-1}=(0,1)^{t}\) _and_ \(v_{r}=(0,a_{d-1})^{t}\)_._ _Moreover, if \(b_{0}f_{0}^{\bullet}+\cdots+b_{d-1}f_{d-1}^{\bullet}=u_{1}\bullet\cdots \bullet u_{d-1}\) and \(c_{0}f_{0}^{\bullet}+\cdots+c_{d-1}f_{d-1}^{\bullet}=w_{1}\bullet\cdots \bullet w_{d-1}\), then_ \[\sum_{j=0}^{d-1}\bar{b}_{j}c_{j}=\langle u_{1}\bullet\cdots\bullet u_{d-1},w_{ 1}\bullet\cdots\bullet w_{d-1}\rangle=\mathrm{per}(\langle u_{i},w_{j}\rangle) /(d-1)!.\] _Proof of Theorem 2.1._ Construct the vectors \(v_{1},\ldots,v_{d-1}\) according to the description. 
Let \(Q\in\mathbb{M}_{2,d-1}\) have columns \(v_{1},\ldots,v_{d-1}\), and \(C_{j}\in\mathbb{M}_{2,d-1}\) such that the first \(d-1-j\) columns of \(C_{j}\) are \(e_{0}\) and the rest of the columns of \(C_{j}\) are \(e_{1}\). Then \[C_{j}^{*}Q=\begin{pmatrix}0_{d-1-j,r}&\mathbf{1}_{d-1-j}\mathbf{1}_{d-1-r}^{t}\\ \mathbf{1}_{j}\mathbf{1}_{r}^{t}&\mathbf{1}_{j}(\mu_{r+1},\ldots,\mu_{d-1})\end{pmatrix},\] where \(\mathbf{1}_{k}\in\mathbb{C}^{k}\) has all entries equal to \(1\). Hence, \(\mathrm{per}(C_{j}^{*}Q)=0\) for \(j=0,\ldots,r-1\), and \[\mathrm{per}(C_{j}^{*}Q)=j!(d-1-j)!E_{j-r}(\mu_{r+1},\ldots,\mu_{d-1}),\qquad j=r,\ldots,d-1.\] Thus, \[\langle f_{j}^{\bullet},v_{1}\bullet\cdots\bullet v_{d-1}\rangle=\sqrt{\binom{d-1}{j}}\mathrm{per}(C_{j}^{*}Q)/(d-1)!=0,\qquad j=0,\ldots,r-1,\] and for \(j\geq r\) we have \[\langle f_{j}^{\bullet},v_{1}\bullet\cdots\bullet v_{d-1}\rangle=\sqrt{\binom{d-1}{j}}\mathrm{per}(C_{j}^{*}Q)/(d-1)!=E_{j-r}(\mu_{r+1},\ldots,\mu_{d-1})\Big{/}\sqrt{\binom{d-1}{j}}.\] Since \(v_{1}\bullet\cdots\bullet v_{d-1}\) represents the state \(a_{0}f_{0}^{\bullet}+\cdots+a_{d-1}f_{d-1}^{\bullet}\) up to a nonzero scalar (by property (a)), we have \(\langle f_{j}^{\bullet},v_{1}\bullet\cdots\bullet v_{d-1}\rangle=ca_{j}\) for some constant \(c\); taking \(j=r\), where \(E_{0}=1\) and \(a_{r}=1\) by our normalization, gives \(c=1/\sqrt{\binom{d-1}{r}}\). So, \[E_{j-r}(\mu_{r+1},\ldots,\mu_{d-1})=\frac{a_{j}\sqrt{\binom{d-1}{j}}}{\sqrt{\binom{d-1}{r}}},\quad j=r,\ldots,d-1,\] i.e., \(\mu_{r+1},\ldots,\mu_{d-1}\) are the zeros of \(g(z)\). The last statement follows from the formula of the induced inner product in \(\mathbb{V}_{\xi}^{d-1}\).

**Remark 2.2**.: _Recall that a general quantum state is represented by a density matrix, which is a positive semi-definite matrix with trace 1. If \(\mathbf{a}=(a_{0},\ldots,a_{d-1})^{t}\in\mathbb{C}^{d}\) is a quantum state corresponding to \(u_{1}\bullet\cdots\bullet u_{d-1}\in\mathbb{V}_{\xi}^{d-1}\), where \(u_{1},\ldots,u_{d-1}\) are qubit states, then the corresponding density matrix \(\rho=\mathbf{a}\mathbf{a}^{*}\in\mathbb{M}_{d}\) has \((r,s)\) entry equal to_ \[a_{r}\bar{a}_{s}=\sqrt{\binom{d-1}{r}\binom{d-1}{s}}\,\mathrm{per}(C_{r}^{*}[u_{1}\cdots u_{d-1}])\,\mathrm{per}([u_{1}\cdots u_{d-1}]^{*}C_{s})/((d-1)!)^{2},\qquad 0\leq r,s\leq d-1.\] _One can use Theorem 2.1 to represent mixed states. Let \(\rho=\sum_{j=1}^{r}p_{j}\rho_{j}\in\mathbb{M}_{d}\) where \((p_{1},\ldots,p_{r})\) is a probability vector and \(\rho_{1},\ldots,\rho_{r}\) are pure states. Then we can apply our result to each \(\rho_{j}\). In particular, we can use the spectral decomposition of a density matrix \(\rho\) as a convex combination of pure states corresponding to its eigenprojections, and express every pure state in terms of \(d-1\) qubit states._

Define the numerical range and decomposable numerical range of \(A\in\mathbb{M}_{d}\) by \[W(A)=\{\langle x,Ax\rangle:x\in\mathbb{C}^{d},\|x\|=1\},\] and \[W_{\xi}^{\bullet}(A)=\{\langle\mathbf{u},A\mathbf{u}\rangle:\mathbf{u}=u_{1}\bullet\cdots\bullet u_{d-1}\in\mathbb{V}_{\xi}^{d-1},\|u\|=1\}.\] One may define the decomposable numerical range using different characters on the symmetric group. The numerical range and decomposable numerical range are useful concepts in matrix theory and quantum systems; see [1, 3] and their references for general background. The classical numerical range of a matrix is always convex, but the decomposable numerical range is often a proper subset of the numerical range and is not convex in general. By our theorem, we have the following result for \(W_{\xi}^{\bullet}(A)\).

**Corollary 2.3**.: _Let \(A\in\mathbb{M}_{d}\) act on \(\mathbb{C}^{d}\) identified with \(\mathbb{V}_{\xi}^{d-1}\). Then \(W_{\xi}^{\bullet}(A)=W(A)\) is convex._

**Example 2.4**.: Suppose \(d=5\) and \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4})^{t}\in\mathbb{C}^{5}\).
Then \[a_{0}f_{0}^{\bullet}+a_{1}f_{1}^{\bullet}+a_{2}f_{2}^{\bullet}+a_{3}f_{3}^{\bullet}+a_{4}f_{4}^{\bullet}=v_{1}\bullet v_{2}\bullet v_{3}\bullet v_{4},\] where \(v_{1},v_{2},v_{3},v_{4}\) are constructed as follows.

1. Suppose \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4})^{t}\in\mathbb{C}^{5}\) with \(a_{0}\neq 0\). Then \(v_{j}=(1,\mu_{j})^{t}\) for \(j=1,2,3,4\) such that \(\mu_{1},\mu_{2},\mu_{3},\mu_{4}\) are the zeros of the polynomial \[g(z)=a_{0}z^{4}-\sqrt{4}a_{1}z^{3}+\sqrt{6}a_{2}z^{2}-\sqrt{4}a_{3}z+a_{4}.\]
2. Suppose \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4})^{t}\in\mathbb{C}^{5}\) with \(a_{0}=0\neq a_{1}\). Then \(v_{1}=(0,1)^{t}\), \(v_{j}=(1,\mu_{j})^{t}\) for \(j=2,3,4\) such that \(\mu_{2},\mu_{3},\mu_{4}\) are the zeros of the polynomial \[g(z)=-a_{1}\sqrt{\binom{4}{1}}z^{3}+a_{2}\sqrt{\binom{4}{2}}z^{2}-\sqrt{\binom{4}{3}}a_{3}z+\sqrt{\binom{4}{4}}a_{4}.\]
3. Suppose \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4})^{t}\in\mathbb{C}^{5}\) with \(a_{0}=a_{1}=0\neq a_{2}\). Then \(v_{1}=(0,1)^{t}\), \(v_{2}=(0,1)^{t}\), \(v_{j}=(1,\mu_{j})^{t}\) for \(j=3,4\) such that \(\mu_{3},\mu_{4}\) are the zeros of the polynomial \[g(z)=a_{2}\sqrt{\binom{4}{2}}z^{2}-a_{3}\sqrt{\binom{4}{3}}z+\sqrt{\binom{4}{4}}a_{4}.\]
4. Suppose \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4})^{t}\in\mathbb{C}^{5}\) with \(a_{0}=a_{1}=a_{2}=0\neq a_{3}\). Then \(v_{1}=v_{2}=v_{3}=(0,1)^{t}\), \(v_{4}=(1,\mu_{4})^{t}\) such that \(\mu_{4}\) is the zero of the polynomial \[g(z)=-\sqrt{\binom{4}{3}}a_{3}z+\sqrt{\binom{4}{4}}a_{4}.\]
5. If \(\mathbf{a}=(0,0,0,0,a_{4})\), then \(v_{1}=v_{2}=v_{3}=(0,1)^{t}\) and \(v_{4}=(0,a_{4})^{t}\).

Using the above formula, we see that

1. \(\mathbf{a}=(1,3,13/\sqrt{6},6,4)^{t}\in\mathbb{C}^{5}\) corresponds to \(u_{1}\bullet\cdots\bullet u_{4}\) with \(u_{1}=u_{2}=(1,1)^{t}\) and \(u_{3}=u_{4}=(1,2)^{t}\), where \(1,1,2,2\) are the zeros of \(g(z)=(z-1)^{2}(z-2)^{2}=z^{4}-6z^{3}+13z^{2}-12z+4\);
2. \(\mathbf{b}=(0,1/2,\sqrt{6},11/2,6)^{t}\) corresponds to \(v_{1}\bullet\cdots\bullet v_{4}\) with \(v_{1}=(0,1)^{t}\) and \(v_{j}=(1,j-1)^{t}\) for \(j=2,3,4\), where 1,2,3 are the zeros of \(g(z)=(z-1)(z-2)(z-3)=z^{3}-6z^{2}+11z-6\);
3. \(\mathbf{c}=(0,0,1/\sqrt{6},1,1)^{t}\) corresponds to \(w_{1}\bullet\cdots\bullet w_{4}\) with \(w_{1}=w_{2}=(0,1)^{t}\), \(w_{3}=w_{4}=(1,1)^{t}\), where \(1,1\) are the zeros of \(h(z)=z^{2}-2z+1\).

We have \[\langle\mathbf{a},\mathbf{b}\rangle =\mathrm{per}([u_{1}u_{2}u_{3}u_{4}]^{*}[v_{1}v_{2}v_{3}v_{4}])/4!=143/2,\] \[\langle\mathbf{a},\mathbf{c}\rangle =\mathrm{per}([u_{1}u_{2}u_{3}u_{4}]^{*}[w_{1}w_{2}w_{3}w_{4}])/4!=12+1/6,\text{ and}\] \[\langle\mathbf{b},\mathbf{c}\rangle =\mathrm{per}([v_{1}v_{2}v_{3}v_{4}]^{*}[w_{1}w_{2}w_{3}w_{4}])/4!=25/2.\]

**Acknowledgment** Li is an affiliate member of the Institute for Quantum Computing, University of Waterloo. His research was supported by the Simons Foundation Grant 851334. The authors would like to thank Karol Zyczkowski and Marcin Rudzinski for some helpful comments.
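Theorem 2.1 and Example 2.4 are easy to check numerically. The following Python sketch (an illustration, not code from the paper) recovers the qubit representation of \(\mathbf{a}\) by finding the roots of the Majorana polynomial, and reproduces \(\langle\mathbf{a},\mathbf{b}\rangle=143/2\) through a naive permanent computation.

```python
import numpy as np
from itertools import permutations
from math import factorial, comb

def qubits_from_state(a):
    """Qubit columns for a state a in C^d with a[0] != 0 (case with r = 0):
    roots of g(z) = sum_j (-1)^j a_j sqrt(C(d-1, j)) z^{d-1-j}."""
    d = len(a)
    coeffs = [(-1) ** j * a[j] * np.sqrt(comb(d - 1, j)) for j in range(d)]
    mus = np.roots(coeffs)
    return np.vstack([np.ones(d - 1), mus])      # columns (1, mu_j)^t

def per(X):
    # permanent via the defining sum over permutations (fine for small k)
    k = X.shape[0]
    return sum(np.prod([X[i, s[i]] for i in range(k)]) for s in permutations(range(k)))

a = np.array([1, 3, 13 / np.sqrt(6), 6, 4])
U = qubits_from_state(a)                          # roots 1, 1, 2, 2
V = np.array([[0, 1, 1, 1], [1, 1, 2, 3]], dtype=float)  # from Example 2.4(b)
print(np.sort(U[1].real))                         # [1. 1. 2. 2.]
print(per(U.conj().T @ V) / factorial(4))         # 71.5 = 143/2 = <a, b>
```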
2305.05694
A Local Universe model for constrained simulations
The aim of cosmological simulations is to reproduce the properties of the observed Universe, serving as tools to test structure and galaxy formation models. Constrained simulations of our local cosmological region up to a few hundred Mpc/h, the Local Universe, are designed to reproduce the actual cosmic web of structures as observed. A question that often arises is how to judge the quality of constrained simulations against the observations of the Local Universe. Here we introduce the Local Universe model (LUM), a new methodology whereby many constrained simulations can be judged and the ``best'' initial conditions can be identified. By characterising the Local Universe as a set of rich clusters, the model identifies haloes that serve as simulated counterparts to the observed clusters. Their merit is determined against a null hypothesis, the probability that such a counterpart could be identified in a random, unconstrained simulation. This model is applied to 100 constrained simulations using the Cosmicflows-3 data. Cluster counterparts are found for all constrained simulations, their distribution of separation from the true observed cluster position and their mass distribution are investigated. Lastly, the ``best'' constrained simulation is selected using the LUM and discussed in more detail.
Simon Pfeifer, Aurélien Valade, Stefan Gottlöber, Yehuda Hoffman, Noam I. Libeskind, Wojciech A. Hellwing
2023-05-09T18:00:06Z
http://arxiv.org/abs/2305.05694v1
# A Local Universe model for constrained simulations

###### Abstract

The aim of cosmological simulations is to reproduce the properties of the observed Universe, serving as tools to test structure and galaxy formation models. Constrained simulations of our local cosmological region up to a few hundred \(h^{-1}\,\mathrm{Mpc}\), the Local Universe, are designed to reproduce the actual cosmic web of structures as observed. A question that often arises is how to judge the quality of constrained simulations against the observations of the Local Universe. Here we introduce the Local Universe model (LUM), a new methodology whereby many constrained simulations can be judged and the "best" initial conditions can be identified. By characterising the Local Universe as a set of rich clusters, the model identifies haloes that serve as simulated counterparts to the observed clusters. Their merit is determined against a null hypothesis, the probability that such a counterpart could be identified in a random, unconstrained simulation. This model is applied to 100 constrained simulations using the Cosmicflows-3 data. Cluster counterparts are found for all constrained simulations, their distribution of separation from the true observed cluster position and their mass distribution are investigated. Lastly, the "best" constrained simulation is selected using the LUM and discussed in more detail.

## 1 Introduction

Cosmological simulations play a major role in studying the formation and evolution of the large scale structure (LSS) of the Universe. The majority of such simulations are presently conducted within the standard model of cosmology, the \(\Lambda\)CDM model. The aim of such simulations is to reproduce the properties of the observed Universe, and the fidelity with which the simulations recover the Universe serves as a laboratory for testing structure and galaxy formation models. Cosmological simulations are expected to recover the statistical measures of the LSS such as the power spectrum, high order correlation functions and mass functions of the galaxy and underlying dark matter distributions. Constrained simulations of our local cosmological neighborhood, referred to here as the Local Universe, are designed to reproduce the actual structure of a particular piece of the Universe, such as the cosmic web of clusters, filaments, sheets and voids. Unlike standard cosmological simulations, whose initial conditions consist of random realizations (_e.g._ Angulo et al., 2012; Alimi et al., 2012; Vogelsberger et al., 2014; Schaye et al., 2015; McCarthy et al., 2017; Pillepich et al., 2018; Hernandez-Aguayo et al., 2022), constrained simulations are constrained by observational data pertaining to the particular patch of the Local Universe. Different approaches essentially differ in both the method and the data that are used to generate the initial conditions. The Hoffman and Ribak (1991) method was the first to be able to generate constrained initial conditions and is used in this work. Approaches built on top of this method use peculiar velocity survey data (Tully et al., 2008, 2013, 2016) which trace the gravitational potential and therefore are sensitive to the entire matter distribution. However, these peculiar velocity data are typically plagued by large uncertainties and sparse sampling as they rely on distance estimators.
A relatively recent development applies Bayesian forward modelling (_e.g._ Kitaura et al., 2012; Wang et al., 2014, 2016; Sawala et al., 2022; McAlpine et al., 2022) to the field of constrained simulations (see Jasche and Lavaux, 2019). These employ galaxy redshift surveys (_e.g._ Skrutskie et al., 2006; Lavaux and Hudson, 2011; Huchra et al., 2012) which typically cover larger areas with much denser sampling. However, their downsides are the uncertainty in distance due to line-of-sight velocities (fingers of god) and, more critically, the fact that galaxies are biased tracers of the matter distribution, which is challenging to account and correct for. The continued development of constrained simulation efforts has led to their application to a wide variety of works, such as studies of reionisation (Dixon et al., 2018; Ocvirk et al., 2020), clusters and their formation history (Sorce et al., 2016; Olchanski and Sorce, 2018; Sorce et al., 2020), galaxy distributions (Mathis et al., 2002; Yepes et al., 2009, 2014; Dolag et al., 2023), the Local Group (Libeskind et al., 2010; Forero-Romero et al., 2011; Libeskind et al., 2020), magnetic fields (Dolag et al., 2005) and effects of modified gravity (Naidoo et al., 2023). Yet constrained simulations necessarily also incorporate an element of randomness. Random modes are required to simulate the regions which are unconstrained, namely regions of space that have little data and small, non-linear scales which cannot be constrained with linear methods. Therefore both large and small scales are affected by the introduction of random modes. In this respect, many constrained simulations may be generated by changing the seed of the random fluctuations. Given that one can thus generate an infinite number of these constrained simulations, all being equally probable realizations constrained by the observed data and the prior model, it is unclear exactly how to differentiate between multiple constrained simulations. A question that often arises is how to judge the quality of a given simulation, _i.e._ how well it reproduces the actual Universe (e.g. see Carlesi et al., 2016, for a similar argument regarding the Local Group). Such a question arises in the context of the comparison of simulations constrained by galaxy redshift surveys and by galaxy peculiar velocities. The main purpose of this paper is therefore to introduce a new methodology whereby many constrained simulations can be judged and the "best" initial conditions can be identified, with an eye towards future hydrodynamic simulations. The core principle of the methodology presented in this work is to judge constrained simulations via null-hypothesis testing. The null hypotheses in this case come from random simulations. By calculating a baseline from random simulations, which are completely unconstrained, one can effectively assign a statistical significance to a constrained simulation. In practice, the cluster distribution in the Local Universe is matched by identifying simulated cluster counterparts. The statistical significance of these cluster counterparts is then used to judge the constrained simulations. This work also presents the first ever constrained simulations of the CF3 Universe - the largest constrained simulations ever run with this method - the focus of which will be presented in an upcoming paper (Pfeifer et al., in prep).
The paper is organised as follows: Section 2 presents the observational peculiar velocity data used to generate the constrained simulations and Section 3 describes the methods employed to generate the constrained initial conditions and run the constrained simulations. Section 4 describes how the Local Universe model (LUM) is constructed and Section 5 presents the results of applying the LUM to the constrained simulations. Finally, the conclusions are presented in Section 6. ## 2 Cosmicflows-3 Data The Cosmicflows-3 catalogue consists of 17,669 individual galaxies with measured redshift, angular position and distance moduli (Tully et al., 2016). Galaxies in groups and clusters, whose velocity, and thus redshift, is due largely to non-linear motions, are grouped by taking the arithmetic mean of their redshift and angular positions. Their errors are also reduced by taking the errors on the distance moduli of the group members and dividing by the square root of the number of members. The grouped data partially suppress the virial motions and remove the main contribution of non-linearities in the velocity field. This grouping reduces the number of entries to 11,501 data points and this grouped Cosmicflows-3 catalogue is used henceforth, referred to as CF3. More than 95% of the data lies within a redshift distance of \(\lesssim 16\,000\) km/s \(\approx 160\)\(h^{-1}\) Mpc. ## 3 Methods ### Bias Gaussianization correction The CF3 catalogue suffers from a variety of biases; one of the most important is referred to as the log-normal bias, due to the logarithmic relationship between the distance modulus and the luminosity distance. In CF3, distance moduli have normally distributed errors which become log-normal errors when converted to luminosity distance. This means that distance estimates are more likely to be overestimated even if a galaxy were to be observed many times, as the mean of these observations would not coincide with the true value. This bias also affects the derived peculiar velocities, resulting in underestimations, as these depend on the cosmological redshift and therefore the distance. The bias gaussianization correction (BGc) method proposed by Hoffman et al. (2021) aims at correcting for the log-normal bias by transforming the log-normal distribution, for distance and peculiar velocity, into a normal distribution. The main idea behind the correction is motivated by the fact that in the \(\Lambda\)CDM model, the radial components of peculiar velocities of galaxies in a cosmological shell are described by a Gaussian distribution with a well defined standard deviation. The BGc method transforms the measured log-normal distribution into a normal one using the fact that the median of such a transformation is conserved (while the mean is not). Lastly, the widths of the normal distributions for the distance moduli and peculiar velocities are set to 0.19 and 275 km/s, respectively, consistent with a \(\Lambda\)CDM cosmology. For a more detailed description of the BGc method, we refer the reader to Hoffman et al. (2021).
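The log-normal bias, and the median-conservation property that the BGc method relies on, can be illustrated with a minimal numerical sketch (the distance modulus and error below are hypothetical values, not CF3 data):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma_mu = 34.0, 0.4               # hypothetical distance modulus and Gaussian error
mu_obs = rng.normal(mu_true, sigma_mu, 100_000)

d_obs = 10 ** (mu_obs / 5 - 5)              # luminosity distance in Mpc
d_true = 10 ** (mu_true / 5 - 5)

print(np.median(d_obs) / d_true)            # ~1.00: the median survives the transformation
print(np.mean(d_obs) / d_true)              # >1: the mean is biased high (the log-normal bias)
```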
### Constrained initial conditions To generate cosmological simulations that reproduce large scale structures constrained by observations, initial conditions (ICs) need to be generated which already contain this information. We follow the method of Doumler et al. (2013) to create these constrained ICs. The process starts by reconstructing the linear density and velocity field from observations using the Wiener filter (WF). The WF has been successfully used to reconstruct the large scale, linear density and velocity field of the nearby universe from sparsely distributed observations (_e.g._ Hoffman et al., 2015) and the details of the method have been extensively described (see _e.g._ Hoffman and Ribak (1991); Zaroubi et al. (1995, 1999)). In general, the WF is a Bayesian estimator that estimates the mean field given a set of uncertain data and a prior model, in this case peculiar velocity data with errors and the \(\Lambda\)CDM model, respectively. Constrained ICs generated directly from the observational data have been found to produce structures that are systematically offset from their known positions at redshift zero, due to the bulk motions that shift structures away from their initial density peaks. To correct for this, the reverse Zeldovich approximation (RZA) shifts the constraints, the peculiar velocity measurements, to a position at an initial redshift using the reconstructed velocity field. Although the Zeldovich approximation breaks down at shell crossing, _i.e._ within the non-linear regime, it has been shown to improve the resulting constrained simulations significantly (Doumler et al., 2013). The RZA-shifted constraints are used to generate a constrained realisation (CR). The aim of a constrained realisation is to generate a full Gaussian random field that contains within it a constrained density (or velocity) distribution of the early observed Local Universe. The WF does not fulfill this criterion as it returns the null field in unconstrained regions. These unconstrained regions are therefore supplemented with a random Gaussian field via the Hoffman & Ribak (1991) method. Finally, the CR is rescaled to an initial redshift to form the constrained ICs. ### Constrained simulations The constrained cosmological simulations were run using AREPO (Springel et al., 2019) with \(512^{3}\) dark matter particles in a periodic volume of 500 \(h^{-1}\) Mpc along a side. The assumed cosmology is a flat \(\Lambda\)CDM model using \(\Omega_{m}=0.301\), \(\Omega_{\Lambda}=0.709\), \(n_{S}=0.961\), \(\sigma_{8}=0.8293\) and \(H_{0}=67.77\), consistent with Planck Collaboration et al. (2014). This equates to a dark matter particle mass of \(m_{\rm DM}=7.99\times 10^{10}\)\(h^{-1}\) M\({}_{\odot}\). Constrained ICs were generated at \(z=99\). The power spectra used to generate the ICs were generated using CAMB (Lewis & Challinor, 2011). Haloes are identified with SUBFIND as part of AREPO. A total of 100 constrained simulations were completed. The only difference between them is the seed used in the random field of the CR used to generate the constrained ICs, with the unique seeds contained within the range [100,199]. In addition, a matching set of 100 random, unconstrained simulations with the same seeds was generated. Fig. 1 shows the quasi-linear (QL) density field, which is the geometric mean value taken over the ensemble of the 100 constrained simulations (following Hoffman et al., 2018). The figure shows the logarithm of \(\Delta=\frac{\rho}{\rho_{\rm m}}\) smoothed on 5 \(h^{-1}\) Mpc. The density slices are centred on the supergalactic origin and 15.6 \(h^{-1}\) Mpc thick, going from -7.8 to +7.8 \(h^{-1}\) Mpc. The validity of the simulations was checked at \(z=0\) by examining the total matter power spectrum and the halo mass function. The constrained and random simulations were perfectly consistent with each other, as well as with the linear matter power spectrum on large scales and the Tinker et al. (2008) fitted halo mass function.
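As a sketch, the QL field of Fig. 1 can be built from an ensemble of gridded overdensity fields as follows (the grid handling and the choice to smooth each field in log space before averaging are our assumptions, not a published pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quasi_linear_log_density(delta_fields, cell_size, smooth_scale=5.0):
    """log10 of the geometric mean of Delta = rho/rho_mean over an ensemble.

    delta_fields : iterable of 3D overdensity grids, one per simulation
    cell_size    : grid cell size in h^-1 Mpc
    smooth_scale : Gaussian smoothing scale in h^-1 Mpc (5 h^-1 Mpc in Fig. 1)
    """
    logs = [gaussian_filter(np.log10(d), sigma=smooth_scale / cell_size, mode="wrap")
            for d in delta_fields]      # 'wrap' respects the periodic box
    # the mean of the logs equals the log of the geometric mean
    return np.mean(logs, axis=0)
```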
## 4 Local Universe Model ### Cluster selection In order to judge the quality of the constrained simulations, we need a metric against which to define the quality of a given constrained simulation. In other words, it would be very desirable to have a criterion with which one can measure the likeness of a constrained simulation relative to the observed Local Universe. We have chosen to use the distribution of massive clusters for this purpose for three main reasons. Firstly, since the methods for generating constrained initial conditions rely on peculiar velocities, massive clusters with their many member galaxies produce good peculiar velocity estimates in the CF3 catalogue and therefore produce good constraints. The clusters, especially within \(\approx 200\)\(h^{-1}\) Mpc, are observationally robust and least biased. Secondly, massive clusters are undergoing active formation today, and are therefore more closely linked to the linear regime than other collapsed structures. The overdensities they inhabit cause large scale linear flows which can be reconstructed within the linear limit of the constrained initial conditions methods. Last but not least, the distribution of clusters is among the most basic features that characterise the specific large scale structure we inhabit. The cluster selection is based on the clusters in CF3 with \(\geq 50\) group members with measured velocities. This is a practical choice which reduces the error on their measured velocity. It is however also a realistic choice, as all of the richest clusters in the Local Universe fulfil this criterion. The angular position in supergalactic coordinates (SGL, SGB) is very well measured, but the distance to these objects can have large uncertainties depending on the method used for the distance estimation. We therefore opt to use the redshift distance, since the uncertainty on the redshift measurements is negligible. Since the clusters are selected based on their large number of member galaxies, the grouping of CF3 removes the majority of the redshift space distortion, due to large small-scale velocities, by averaging the velocity measurements of the cluster members. The redshift distance is calculated as \[d=\frac{1}{H_{0}}\left(cz-v_{\rm pec}\right) \tag{1}\] where \(H_{0}=75.0\) km s\({}^{-1}\) Mpc\({}^{-1}\) for CF3, \(c\) is the speed of light, \(z\) is the redshift and \(v_{\rm pec}\) is the peculiar velocity in the CMB reference frame. Column "Vcg" in the CF3 data gives \(cz-v_{\rm pec}\). Although Equation 1 is an approximation, the error is \(<1\%\) for the distances of the selected clusters. The cluster selection and the converted supergalactic coordinates are presented in Table 1. When the simulated haloes, drawn from the constrained simulations, are compared with the data, their positions are also converted to redshift distances using their radial peculiar velocities relative to a virtual observer at the centre of the box. The observed cluster masses are not explicitly considered here, other than the assumption that these clusters are more massive than \(10^{14}\)\(h^{-1}M_{\odot}\). Observational constraints on the masses of the clusters are not directly included in the LUM because of the difficulty in obtaining robust constraints in the first place. We therefore leave the nuances of observational mass estimates to a future study.
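For concreteness, Equation 1 and the coordinate conversion used to build Table 1 can be sketched as follows (the example input values are hypothetical, not taken from CF3):

```python
import numpy as np

H0_CF3 = 75.0  # km/s/Mpc, the value used for CF3

def redshift_distance(vcg):
    """Eq. (1): d = (cz - v_pec)/H0, with vcg the CF3 'Vcg' column in km/s."""
    return vcg / H0_CF3

def supergalactic_cartesian(sgl_deg, sgb_deg, d):
    """Convert supergalactic longitude/latitude (degrees) and distance to SGX, SGY, SGZ."""
    sgl, sgb = np.radians(sgl_deg), np.radians(sgb_deg)
    return (d * np.cos(sgb) * np.cos(sgl),
            d * np.cos(sgb) * np.sin(sgl),
            d * np.sin(sgb))

# hypothetical example: a cluster at SGL = 104 deg, SGB = -2 deg with Vcg = 1140 km/s
print(supergalactic_cartesian(104.0, -2.0, redshift_distance(1140.0)))
```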
Fig. 1 shows the positions of the observed clusters from Table 1, projected onto the density slices (black crosses). Note that the positions of Perseus on the SGX-SGZ plane and Coma on the SGX-SGY plane are in reality outside the region of the slice and thus small projection effects are present. The chosen cluster selection clearly traces the QL density field, generally sitting within density peaks. Ophiuchus is the only cluster in Fig. 1 that is at the boundary of the overdensity region and not within its own density peak. \begin{table} \begin{tabular}{l r r r} \hline Cluster & SGX & SGY & SGZ \\ & [\(h^{-1}\) Mpc] & [\(h^{-1}\) Mpc] & [\(h^{-1}\) Mpc] \\ \hline Virgo & -3.3004 & 14.4336 & -0.6076 \\ Centaurus & -34.3491 & 14.9497 & -7.5807 \\ Hydra & -24.3204 & 20.8672 & -24.6429 \\ Perseus & 49.1633 & -10.5576 & -12.7799 \\ Coma & 0.4447 & 70.7675 & 10.4502 \\ Norma & -49.4307 & -7.1231 & 6.0524 \\ Ophiuchus & -63.7704 & 7.5590 & 60.7693 \\ Pavo II & -39.3256 & -18.5555 & 5.8548 \\ Leo & -23.772 & 67.0677 & -12.4381 \\ Pisces & 41.0872 & -24.7560 & 4.5935 \\ Shapley (A3558) & -124.7189 & 74.4359 & -3.3975 \\ \hline \end{tabular} \end{table} Table 1: The observational cluster data used to compare the constrained simulations against. Supergalactic Cartesian coordinates have been calculated using the SGL, SGB and redshift distance from the CF3 grouped catalogue. ### Local Universe model The WF/CR method is able to constrain regions that are well sampled by the observations. In unconstrained regions with poor sampling, and on small scales within the non-linear regime, the method must introduce random, unconstrained fluctuations. Therefore, different random initial fluctuations can produce constrained simulations of varying quality, enhancing or degrading the cosmographic features of the Local Universe. To assess the quality of constrained simulations, this work presents the LUM. As described in Section 4.1, we characterise the Local Universe via a selection of observed galaxy clusters. The aim of the LUM is thus to quantify the quality of a constrained simulation with respect to the observed cluster distribution. If the separation between observation and simulation is small, one can deem the constrained simulation a good analogue of the Local Universe. Therefore, for each observed cluster, one can attempt to identify the best counterpart from the simulated halo distribution and find a quantitative measure for their similarity, such as the separation between the simulated halo and observed cluster position. However, the difficulty lies in judging the significance of this separation. To establish a baseline, the probability of identifying a cluster counterpart in an unconstrained random simulation is calculated. Figure 1: The contours show the quasi-linear density field, the mean of the logarithm of \(\Delta=\frac{\rho}{\rho_{\rm m}}\) calculated from all 100 constrained simulations. The contours show underdensities (dashed blue lines), mean density (thick black lines) and overdensities (thin black lines). The slices are centred on the supergalactic origin and 15.6 \(h^{-1}\) Mpc thick. The observed cluster positions (black crosses) from Table 1 are also shown.
Locations of all simulated cluster counterparts (blue dots) from the constrained simulations that meet the minimum of \(p<0.05\) are shown, with shades indicating which \(p\)-value threshold they satisfy (darker is lower, as in Fig. 4). The simulated counterparts are projected onto the density slice irrespective of their position. The approximate location of the Zone of Avoidance, of 10 degrees, is indicated in the SGX-SGY and SGY-SGZ planes (dashed grey line). This baseline can then be used to judge constrained simulations. Firstly, only simulated haloes with \(\mathrm{M}_{200c}>10^{14}\)\(h^{-1}\,\mathrm{M}_{\odot}\) are considered, since we are interested in rich cluster counterparts (Sorce, 2018). The observed cluster masses are therefore not explicitly built into this method but encapsulated by this mass cut. The statistic of choice for the baseline, or null hypothesis, is the distribution of separations between observed and simulated clusters in random simulations. To measure this, a virtual observer can be placed at a location uniformly sampled across a random simulation volume and the separation between the position of, _e.g._ Coma, and the closest halo with \(\mathrm{M}_{200c}>10^{14}\)\(h^{-1}\,\mathrm{M}_{\odot}\) can be measured. Although Coma occupies a special place in the Local Universe, attempting to find a halo at the position of Coma in a random simulation is no different from finding a halo at any random point in space. Similarly, one could shift the virtual observer to a new location in the same simulation and repeat the measurement. Therefore, in practice, the separation between 10000 random points and their closest halo is measured for all 100 random simulations. Note that we assume here that the environment of the virtual observer, _i.e._ the Milky Way, has no significant effect on the environment of clusters tens of \(h^{-1}\,\mathrm{Mpc}\) away. Fig. 2 shows the measured probability density functions (PDFs) (top) for haloes with different lower mass cuts (shaded blue lines). The centre and spread of the PDFs increase with mass because the number density of high mass haloes is lower and it is therefore less likely to find a halo at small separations. A statistical significance can now be assigned using the \(p\)-value. The \(p\)-value is generally used in null-hypothesis testing, signifying the probability that a given result, or a more extreme result, can be obtained given a null hypothesis. In this case, the random simulations provide the null hypothesis, which can be rejected with a probability of \(1-p\). Of interest is the significance from zero, _i.e._ separations of a given value and closer, and not from the mean of the PDF. Therefore the one-tailed \(p\)-value is used, calculated from the cumulative PDF (CDF), and shown in Fig. 2 (bottom). To calculate \(p\) for a halo-cluster pair, the mass of the simulated halo and the separation to the observed cluster position are required. However, the CDFs in Fig. 2 are calculated for a set number of mass cuts. To be able to calculate \(p\) as a smooth function of separation and mass, the CDFs for each mass cut are fit using Equation 2, where \(\alpha\) is a free parameter1 and \(r\) is the outer radius of a shell of fixed thickness \(dr=0.5\)\(h^{-1}\,\mathrm{Mpc}\). The values for \(r\) and \(dr\) are taken as the values used to calculate the PDFs in Fig. 2.
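The baseline measurement described above can be sketched in a few lines; the array names and the use of a KD-tree are our assumptions, not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def baseline_separations(halo_pos, halo_m200c, box=500.0, mass_cut=1e14,
                         n_points=10_000, seed=None):
    """Separations between random points and their closest halo above a mass cut.

    halo_pos : (N, 3) positions in h^-1 Mpc inside a periodic box of side `box`.
    Returns the n_points nearest-neighbour separations used to build the PDFs.
    """
    rng = np.random.default_rng(seed)
    tree = cKDTree(halo_pos[halo_m200c > mass_cut] % box, boxsize=box)
    points = rng.uniform(0.0, box, size=(n_points, 3))
    separations, _ = tree.query(points)
    return separations

def one_tailed_p(separations, r_obs):
    """Empirical one-tailed p-value: fraction of baseline draws at r_obs or closer."""
    return np.mean(separations <= r_obs)
```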
Footnote 1: The parameter \(\alpha\) is actually representative of a density, but for clarity and to avoid confusion with the \(p\)-value, we use the letter \(\alpha\) instead of \(\rho\), since the physical meaning is somewhat lost here. See Appendix A for more details. \[P(r,\alpha)=e^{-\frac{4}{3}\pi\alpha r^{3}}-e^{-\frac{4}{3}\pi\alpha(r+dr)^{3}} \tag{2}\] The free parameter, \(\alpha\), is then fit as a function of mass using the quadratic function given by Equation 3. \[\log_{10}(\alpha)=-0.861\log_{10}(M_{200c})^{2}+22.616\log_{10}(M_{200c})-152.154 \tag{3}\] The method uses the mass of the simulated halo to generate the corresponding PDF (normalised to sum to unity over a large range of \(r\)), from which the CDF and therefore \(p\) can be calculated using the separation to the cluster position. The output PDFs and CDFs of these fits for the matching mass cuts are shown in Fig. 2 (red lines). The up-turn of the CDFs at low separations, which is the area of interest, is recovered well (see the inset). Figure 2: The PDF of the separation between the closest halo and a random point, _i.e._ the probability of finding a halo at a given separation within a given mass cut (top). The PDFs are calculated from random, unconstrained simulations (blue) and fitted (red). The \(p\)-values for different mass cuts (bottom) are calculated by taking the cumulative PDF, the CDF. An inset shows a zoom-in on the distributions of \(p\) for small values to show the quality of the fits in the range that is important for the LUM, represented by the black dashed area. Figure 3: The \(p\)-values, calculated from the fits shown in Fig. 2, as a function of lower mass limit and separation. The lines denote contours of constant \(p\) and darker shaded areas show lower \(p\)-values. The distributions above \(10^{15}\)\(h^{-1}\,\mathrm{M}_{\odot}\) are kept constant, as not enough haloes of these masses exist in the simulations to produce stable fits. The fits are not accurate over the full PDF, especially the long tail at large separation. However, for the purposes of the LUM, only the region covered by the inset in Fig. 2 is important. More details on the fitting of these functions are given in Appendix A. Note that it is the mass of the simulated halo and not the mass of the observed cluster that is used. The complete pipeline effectively results in a function \(p(M_{200\mathrm{c}},r)\), and Fig. 3 shows the dependence of \(p\) on mass and separation. The lines denote contours of different \(p\), with darker shades showing lower \(p\)-values. For a mass cut of \(10^{14}\)\(h^{-1}\) M\({}_{\odot}\), a value of \(p=0.05\) corresponds to \(\approx 6\)\(h^{-1}\) Mpc; _i.e._ a halo of that mass which is 6 \(h^{-1}\) Mpc away from the observed position of a cluster would _not_ be found at that separation or closer in a random simulation with a probability of 0.95 (\(2\sigma\) confidence). For the same mass cut and a separation of \(\approx 3\)\(h^{-1}\) Mpc, this probability increases to 0.99, _i.e._\(p=0.01\). Fig. 3 encapsulates the essence of the LUM; it provides a metric for gauging whether a simulated halo, given its mass and separation to an observed target, is significant or random. Note that this approach assumes that finding a cluster counterpart has no effect on the probability of finding another cluster counterpart elsewhere and thus assumes a 1-point probability distribution. The method assigns a probability to a single simulated cluster counterpart of being found in a random simulation, _i.e._ it does not take into account the conditional probability of finding a Virgo, having found a Coma, and instead assumes that each detection is independent.
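Put together, Equations 2 and 3 define the function \(p(M_{200\mathrm{c}},r)\); a minimal sketch of this pipeline (with our own choice of radial grid) reads:

```python
import numpy as np

DR = 0.5  # shell thickness dr in h^-1 Mpc, as in the text

def alpha_of_mass(m200c):
    """Eq. (3): the fitted parameter alpha as a function of halo mass (h^-1 Msun)."""
    lm = np.log10(m200c)
    return 10.0 ** (-0.861 * lm**2 + 22.616 * lm - 152.154)

def p_value(m200c, r, r_max=200.0):
    """One-tailed p-value p(M200c, r) built from the fitted PDF of Eq. (2)."""
    alpha = alpha_of_mass(m200c)
    r_in = np.arange(0.0, r_max, DR)
    pdf = (np.exp(-4.0 / 3.0 * np.pi * alpha * r_in**3)
           - np.exp(-4.0 / 3.0 * np.pi * alpha * (r_in + DR) ** 3))
    pdf /= pdf.sum()                 # normalised to sum to unity over a large range of r
    cdf = np.cumsum(pdf)
    return np.interp(r, r_in + DR, cdf)

# sanity check against the text: p(1e14, 6) should come out close to 0.05
print(round(p_value(1e14, 6.0), 3))
```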
## 5 Results ### Cluster counterpart detection The main purpose of the LUM, presented in Section 4.2, is to determine the significance of finding a halo of a given mass at a particular separation. We now leverage this method to identify the "best" halo counterparts to the cluster selection from CF3 for each constrained simulation. We define "best" as the halo least likely to be found in a random simulation. Therefore, \(p(M_{200\mathrm{c}},r)\) can be minimised to find the "best" halo counterpart. For each of the \(N\) observed clusters, the separation to all \(M\) simulated haloes and their halo masses are used to calculate \(p\) for every halo-cluster pair. This results in an \(N\times M\) matrix of \(p\)-values. The problem here is to assign each observational cluster a unique halo counterpart, _i.e._ each halo is only allowed to be assigned once, such that the sum of \(p\) for the chosen haloes is minimised. This type of problem is referred to as an _assignment problem_, which is solved efficiently with the Hungarian method (Kuhn, 1955); a minimal code sketch of this step is given further below. These haloes can now be accepted or rejected as counterparts to their respective observed clusters based on their value of \(p\). Fig. 4 shows the fraction of identified cluster counterparts for each observed cluster from the 100 constrained simulations for three different thresholds of \(p\). For the majority of clusters, the detection rates are similar. For \(p<0.05\), all clusters except Ophiuchus and Virgo are found in \(\gtrsim 50\%\) of the simulations. For these clusters, the trend with \(p\) thresholds is consistent as well; lower thresholds reduce the detection rate as the criterion becomes more restrictive. It is interesting to see that some clusters are recovered more frequently than others. This plot effectively shows the stability of each cluster counterpart detection, which is a reflection of the quality of the constraints for that particular environment. Therefore, one can expect to find a Perseus and a Centaurus counterpart in most simulations. Fig. 1 gives a visual impression of the distribution of the counterparts with respect to the observed cluster positions. The contours show the QL density calculated from the 100 constrained simulations. All cluster counterparts for their respective cluster are shown (blue dots) with shades indicating which threshold of \(p\) they satisfy, the same as for Fig. 4. The simulated counterparts are spatially distributed around their respective cluster positions, where closer counterparts tend to have lower \(p\) (although the darkest points are often obscured by the cross of the cluster marker). There is a clear correlation between the distribution of counterparts and the shape of the QL overdensity that their respective observed cluster occupies. The counterpart distributions trace out the contours very well. Considering the first of the two outliers in Fig. 4, Ophiuchus has a low detection rate even at \(p<0.05\), and only 6 counterparts are detected at \(p<0.01\). From Fig. 1 it is clear that Ophiuchus does not sit within an overdensity in the QL field and therefore it is more likely that the best counterpart is found further away and thus has a higher \(p\). For Virgo, the detection rate for \(p<0.05\) is also low.
Its immediate environment is within a well constrained, high overdensity region, although the size of the region appears significantly smaller, which is especially apparent in the SGY-SGZ plane. The observed cluster position is directly within a QL overdensity and the spread of the simulated cluster counterparts is very tight. From Fig. 1 it is not obvious why Virgo has such a low detection rate. This is explored in more detail in the next section. The mass distributions of the simulated counterparts show that the minimum mass limit of \(10^{14}\)\(h^{-1}\) M\({}_{\odot}\) is too high for the majority of Virgo counterparts. ### Separation and mass distribution The LUM uses the separations and halo masses to determine how likely an equal or better halo could be drawn from a random simulation. Fig. 5 shows the distributions of separations between the observed clusters and the cluster counterparts (left) and the distribution of simulated counterpart masses (right). The median (crosses), and 16th and 84th percentiles of the distribution (error bars), are shown for different thresholds of \(p\), where darker shades indicate lower thresholds. The distributions for all 100 counterparts, independent of their \(p\)-value, are also shown (grey). Figure 4: The fraction of detected cluster counterparts for each observed cluster from the 100 constrained simulations given different thresholds of \(p\) (blue shades). Each cluster can only have one counterpart per simulation, so a detection rate of 0.5 means that 50 out of 100 simulations contain a counterpart given a \(p\) threshold. The separation distributions in Fig. 5 (left) change significantly with varying threshold of \(p\). Generally, the median separations are \(\sim\)10 \(h^{-1}\) Mpc for \(p<0.05\) and decrease to \(\sim\)5 \(h^{-1}\) Mpc for \(p<0.01\). The spread also decreases significantly with decreasing threshold. The large changes present in the separation distributions are not found in the mass distributions. Fig. 5 (right) shows that the median and spread of the masses are relatively constant with varying \(p\) threshold, although median masses generally increase by a small amount with decreasing threshold. Generally, a trade-off between mass and separation is struck due to the shape of the \(p(M_{200c},r)\) distribution in Fig. 3; counterparts with larger separations also tend to have larger masses. Therefore, as in the case of _e.g._ Leo, the cluster counterparts are allowed to have a large separation only if the mass increases significantly. Again, Virgo is an outlier, with relatively small median separations and mass for all thresholds. Notably, the spread for both quantities is also significantly smaller, except for the distribution containing all counterparts (and therefore predominantly rejected counterparts), which is significantly larger. The mass distribution appears to be cut off by the lower mass limit. To test this, the LUM was rerun for a minimum mass cut of \(M_{200c}>5\times 10^{13}\)\(h^{-1}\) M\({}_{\odot}\). The detection rate for Virgo increased by 40%, while it increased by up to \(\approx\) 10% for the other clusters. The separation and mass distributions were robust to this change, except that the lower percentile of the mass distribution for Virgo decreased, with a median value of \(\approx 10^{14}\)\(h^{-1}\) M\({}_{\odot}\). This suggests that while the overdensity of Virgo is well constrained, it struggles to form haloes of masses \(>10^{14}\)\(h^{-1}\) M\({}_{\odot}\).
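A minimal sketch of the assignment step of Section 5.1, together with the simulation ranking used in the next subsection, assuming a per-simulation matrix of \(p\)-values (e.g. built with the `p_value` helper above):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_counterparts(p_matrix):
    """Hungarian method (Kuhn 1955): give each of the N observed clusters a
    unique halo counterpart such that the total sum of p is minimised.

    p_matrix : (N_clusters, M_haloes) array of p-values, with N <= M.
    """
    cluster_idx, halo_idx = linear_sum_assignment(p_matrix)
    return halo_idx, p_matrix[cluster_idx, halo_idx]

def rank_simulations(p_matrices, p_threshold=0.05):
    """Rank simulations by the total sum of p and by the number of accepted
    counterparts below the threshold (the two criteria used to select S159)."""
    scores = []
    for p_matrix in p_matrices:
        _, p_best = assign_counterparts(p_matrix)
        scores.append((p_best.sum(), int(np.sum(p_best < p_threshold))))
    return scores
```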
### Best constrained Local Universe The LUM can be used to find the best cluster candidate for a given simulation and a given cluster. It follows that the LUM can be used to choose the 'best' candidate halo for zoom simulations of a particular cluster. Another use of the LUM is the selection of the 'best' realisation for performing 'full box' constrained simulations of our local volume. One way to materialise the latter goal is to minimise the sum of \(p\), \(\Sigma p\), over all simulations, _i.e._ the best simulation is the one with the lowest total sum of \(p\) for all counterparts. Another option is to count the number of simulated cluster counterparts that are below a threshold, _e.g._\(p<0.05\). Note that this is done under the assumption that the clusters are uncorrelated, _i.e._ identifying a good Virgo counterpart does not affect the search for a good Coma counterpart. For the set of 100 constrained simulations, the top 5 simulations are within 0.02 of each other for \(\Sigma p\) (which has the range [0.38,2.00] between the best and worst simulation, respectively) and 4 simulations have a total of 9 cluster counterparts with \(p<0.05\), where 2 simulations are in both groups. The simulation with the seed of 159 (S159) has the lowest total \(p\) and the most cluster counterparts with \(p<0.05\), and is therefore deemed the "best". There is room to tailor this approach further by, _e.g._, placing special value on certain clusters that should always have an acceptable counterpart, by enforcing that these must meet the threshold of \(p<0.05\). The locations of the cluster counterparts for S159 are shown with respect to the \(\log_{10}(\Delta)\) field of the constrained simulation in Fig. 6. It is clear that the majority of the cluster counterparts lie very close to the observed cluster positions within each plane. The visible outliers are Perseus, Ophiuchus and Leo (which actually lies closer to Coma in the SGY-SGZ plane). For the majority of clusters in S159, the observed cluster position coincides with an overdensity peak in the constrained simulation, making it likely that a massive halo is found nearby. For some, _e.g._ Perseus, the observed cluster position sits outside or on the border of overdense regions and thus the best counterpart is further away. It is also worth noting the striking similarities between the QL field in Fig. 1 and the S159 density features. Not only do the shapes and sizes of the overdensities match very closely within a region of [-100, 100] \(h^{-1}\) Mpc around the observer, but so do the underdense void regions, such as between Virgo and Perseus in the SGX-SGY plane, and in the centre of the SGX-SGZ plane. The values of the separation of these cluster counterparts relative to the observed cluster positions are shown in Fig. 5 (left; circles). The same outliers are visible here as well, having separations of \(>15\)\(h^{-1}\) Mpc, while all other cluster counterparts have separations of \(<10\)\(h^{-1}\) Mpc. By looking at the masses of the counterparts shown in Fig. 5 (right; circles), it is possible to see again the trade-off between separation and mass in the calculation of \(p\); _e.g._ Perseus has a relatively large separation but is also the most massive of all counterparts, which results in \(p<0.01\). Figure 5: The median (crosses), and 16th and 84th percentiles (error bars), of the separation between the observed cluster position and the simulation counterpart (left) and the mass of the simulation counterparts (right).
The grey line indicates the distribution for all "best" counterparts, while the different shades indicate groups of counterparts satisfying different thresholds of \(p\), where darker is lower. The circles indicate the values for the counterparts from the best constrained simulation, with a seed of 159. Two cluster counterparts, Leo and Pisces, have \(p>0.05\) (grey circles). ## 6 Conclusion This work presents the Local Universe Model, a metric to gauge the quality of constrained simulations by measuring how similar a given simulation, be it random or constrained, is to the actual Universe. This is done by examining the proximity of candidate simulated clusters to a set of prominent rich clusters of the nearby Universe. The LUM is applied to an ensemble of 100 DM-only cosmological simulations constrained by the Cosmicflows-3 data of peculiar velocities. The model calculates the probability, from a random unconstrained simulation, of finding a halo at a given radial separation or closer to a random point in space (which could be the location of an observed cluster) for a range of mass cuts. This is effectively the null hypothesis which the constrained simulations are tested against. A halo candidate from the constrained simulation is then quantitatively evaluated by calculating how likely it is that a halo of equal mass would be found at a given separation or closer in a random simulation, using the one-tailed \(p\)-value. This \(p\)-value depends on the simulated halo mass and the separation to the observational cluster position. The best cluster counterpart is selected to be the halo with the lowest \(p\). It is assumed here that each cluster counterpart detection is independent, _i.e._ the detection of a Virgo counterpart is not affected by the detection of any of the other cluster counterparts. Figure 6: The contours show \(\log_{10}(\Delta)\) from the single constrained simulation with seed 159, the best simulation determined by the LUM. The contours show underdensities (dashed blue lines), mean density (thick black lines) and overdensities (thin black lines). The density slice is centred on the supergalactic origin and \(15.6\;h^{-1}\,\)Mpc thick. The observed cluster positions (black crosses) and the locations of all cluster counterparts (blue dots) from the S159 constrained simulation are also shown. The simulated counterparts are projected onto the density slice irrespective of their position. The approximate location of the Zone of Avoidance, of 10 degrees, is indicated in the SGX-SGY and SGY-SGZ planes (dashed grey line). For all clusters, except Ophiuchus and Virgo, cluster counterparts with a minimum mass cut of \(10^{14}\ h^{-1}\,\mathrm{M}_{\odot}\) are found for \(\gtrsim 50\%\) of the simulations for \(p<0.05\) (which is \(2\sigma\) for a Gaussian distribution). This detection rate is reduced for lower \(p\) thresholds. Good Ophiuchus counterparts are detected less often, since its environment does not contain a density peak, as seen in the QL density, because it is not well constrained. The low detection rate of Virgo counterparts is clearly affected by the \(10^{14}\ h^{-1}\,\mathrm{M}_{\odot}\) mass cut, which is too close to the mass of the simulated Virgo counterparts. All other cluster counterparts have a wide distribution in mass, except for Virgo, which is contained within the low end of the mass distribution in Fig. 5 (right).
Such low mass haloes require a very small separation to have a low \(p\) that satisfies the threshold, making such counterparts unlikely to occur often. Inspection of the QL overdensity shows that the environment is well constrained, which suggests that this region struggles to produce haloes above the minimum mass cut in the constrained simulations. The LUM can also be used to select the best constrained simulation, by calculating the sum of \(p\) for all cluster counterparts and choosing the simulation with the smallest sum, and/or by counting the number of cluster counterparts below a \(p\) threshold. Out of the 100 constrained simulations, the simulation with a seed of 159 was chosen. The majority of cluster counterparts in this simulation are within 10 \(h^{-1}\,\mathrm{Mpc}\) of the observed cluster position, where Virgo is the closest with \(1.83\ h^{-1}\,\mathrm{Mpc}\). A few cluster counterparts have larger separations, due to the fact that the constrained simulation has overdensity peaks, where massive haloes live, that are offset from the observed cluster positions. The visual comparison between the density of the S159 simulation and the QL field shows that many of the general features match, _e.g._ the ring-like overdensity on the SGX-SGZ plane contained within \(\pm 100\)\(h^{-1}\,\mathrm{Mpc}\), and the arch-like feature in the SGY-SGZ plane containing Virgo, Coma and Leo. The underdense void regions, such as between Virgo and Perseus in the SGX-SGY plane and in the centre of the SGX-SGZ plane, are also recovered. This agreement suggests that by selecting the appropriate clusters and constraining their positions, the large scale structure of the Local Universe can also be recovered. In this work, we have presented a model that uses the observational positions of clusters to evaluate the quality of constrained simulations. However, observational data have more to offer than positions. The LUM could be improved by including observational mass estimates of the clusters. Currently, the S159 simulation contains a Virgo counterpart that is more massive than the Coma counterpart. Including mass constraints would allow the relative mass distributions to match, as well as their positions. This could potentially also be extended to include velocity information. ## Acknowledgements This work has been done within the framework of the Constrained Local UniversE Simulations (CLUES) project. SP and NIL acknowledge financial support from the Deutsche Forschungsgemeinschaft joint Polish-German research project LI 2015/7-1 (LUSTRE). YH has been partially supported by the Israel Science Foundation grant ISF 1358/18. WAH is supported by research grants funded by the National Science Center, Poland, under agreements no. 2018/30/E/ST9/00698, 2018/31/G/ST9/03388, and 2020/39/B/ST9/03494. ## Data Availability The simulation data used in this work are available upon reasonable request to SP.
2310.10991
Higher-order protection of quantum gates: Hamiltonian engineering coordinated with dynamical decoupling
Dynamical decoupling represents an active approach towards the protection of quantum memories and quantum gates. Because dynamical decoupling operations can interfere with the system's own time evolution, the protection of quantum gates is more challenging than that of quantum states. In this work, we put forward a simple but general approach towards the realization of higher-order protection of quantum gates and further execute the first cloud-based experimental demonstration of dynamical-decoupling-protected quantum gates at the first order and the second order. The central idea of our approach is to engineer (hence regain control of) the gate Hamiltonian in coordination with higher-order dynamical decoupling sequences originally proposed for the protection of quantum memories. The physical demonstration on an IBM quantum processor indicates the effectiveness and potential of our approach on noisy intermediate-scale quantum computers.
P. Z. Zhao, Tianqi Chen, Sirui Liu, Jiangbin Gong
2023-10-17T04:28:29Z
http://arxiv.org/abs/2310.10991v3
# Higher-order protection of quantum gates: Hamiltonian engineering coordinated with dynamical decoupling ###### Abstract Dynamical decoupling represents an active approach towards the protection of quantum memories and quantum gates. Because dynamical decoupling operations can interfere with the system's own time evolution, the protection of quantum gates is more challenging than that of quantum states. In this work, we put forward a simple but general approach towards the realization of higher-order protection of quantum gates. The central idea of our approach is to engineer (hence regain control of) the quantum gate Hamiltonian in coordination with higher-order dynamical decoupling sequences originally proposed for the protection of quantum memories. In our computational examples presented for illustration, the required engineering can be implemented by only quenching the phase of an external driving field at particular times. _Introduction.--_ A crucial challenge in implementing quantum computation is to overcome the decoherence induced by the interaction between a quantum system and its environment. As an active approach, dynamical decoupling (DD) aims to average out the system-environment interaction and hence protects either quantum memories or quantum gates [1, 2]. For quantum memories, periodic DD (PDD) [3] eliminates the effect of the system-environment interaction to first order (e.g., in the sense of a Magnus expansion), whereas concatenated DD (CDD) [4] and Uhrig DD (UDD) [5, 6, 7] can offer higher-order elimination of decoherence [8, 9]. Experimental advances in DD-based quantum memory protection have been highly successful [10, 11, 12, 13, 14, 15], with recent benchmarking studies of DD carried out on actual noisy quantum computers [16]. However, most of this experimental progress on quantum memory protection has not been translated to the protection of quantum gates, because DD operations often interfere with quantum gate operations, e.g., DD pulses tend to freeze the system's own time evolution. One solution towards the DD protection of quantum gates starts with a driving Hamiltonian that commutes with the DD operations [17, 18, 19], the so-called "non-interference" condition. This stimulating route however needs extra physical qubit resources to encode logical qubits, the implementation of which is often based on particular dynamical symmetries or specific physical interactions to meet the non-interference requirement [20, 21, 22]. The second solution utilizes gate operations inserted in between DD pulses. This approach typically extends the total gate time and can only realize first-order protection [23, 24, 25]. To further fill in the gap between quantum memory protection and quantum gate protection by DD, the dynamically corrected gate (DCG) was proposed as a sophisticated composite quantum gate constructed from primitive building blocks without encoding [26, 27, 28]. DCG may offer higher-order protection with a concatenated design [29], but the total number of required gate operations will be large and thus experimentally challenging. It remains to be seen whether it is possible to reduce the overhead in the realization of higher-order protection of quantum gates. This work aims to show that higher-order protection of quantum gates can be directly achieved on general physical qubits, with straightforward driving Hamiltonians that are piecewise continuous and at no cost in total gate time.
The central idea of our higher-order approach lies in the engineering of the time dependence of the system's driving Hamiltonian [3], in coordination with higher-order DD sequences originally designed for quantum memory protection, such as CDD, UDD, and even nested UDD. Because the realization of DD pulses already requires designing an explicit and rapid time dependence of the system's Hamiltonian, engineering some additional time dependence of the system's driving Hamiltonian does not present a challenge when compared with the task of qubit encoding. As seen below, the required engineering of the system's driving Hamiltonian is to _act against_ the unwanted impact of DD pulses on the system. This being done, only the system-environment interaction part is suppressed to higher orders and we recover our control over the system's driving Hamiltonian. _The DD protection of quantum memories and quantum gates can thus be unified and applied on an equal footing._ Our theoretical proposal is verified by our computational examples below. _Approach.--_ Consider a quantum system coupled to its environment with the total Hamiltonian \(H(t)=H_{\rm S}(t)+H_{\rm E}+H_{\rm I}\). Here, \(H_{\rm S}(t)\) is our system's driving Hamiltonian, \(H_{\rm E}\) is the environment Hamiltonian and \(H_{\rm I}\) is the system-environment interaction. The most general interaction between a qubit and its environment is of the form \(H_{\rm I}=\sigma_{x}\otimes E_{x}+\sigma_{y}\otimes E_{y}+\sigma_{z}\otimes E_{z}\), where \(E_{x(y,z)}\) represent the associated environment operators. To illustrate the underlying principle of our idea, we start with a one-qubit case under PDD pulses, where \[H_{\rm S}(t)=\Omega_{x}(t)\sigma_{x}+\Omega_{y}(t)\sigma_{y}, \tag{1}\] with \(\Omega_{x}(t)\) and \(\Omega_{y}(t)\) being time-dependent system parameters. Let us now consider a periodic sequence of fast and strong symmetrizing pulses with the DD operations \(\{\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3}\}\) (\(\sigma_{0}\equiv I\), \(\sigma_{1}\equiv\sigma_{x}\), \(\sigma_{2}\equiv\sigma_{y}\) and \(\sigma_{3}\equiv\sigma_{z}\), with \(I\) corresponding to no pulse applied), over a duration of \(4\tau\), applied to the native evolution governed by the driving Hamiltonian \(H_{\rm S}(t)\). Assuming that the applied pulses are instantaneous (that is, much shorter than the total gate time \(4\tau\)), the time evolution operator is given by \[\mathcal{U}=\prod_{k=0}^{3}\sigma_{k}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}H(t)dt}\sigma_{k}=\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}[\sigma_{k}H_{\rm S}(t)\sigma_{k}+\sigma_{k}H_{\rm I}\sigma_{k}+H_{\rm E}]dt}=\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}\sigma_{k}H_{\rm S}(t)\sigma_{k}dt}\,e^{-i(\sum_{l=0}^{3}\sigma_{l}H_{\rm I}\sigma_{l}+4H_{\rm E})\tau}+\mathcal{O}(\tau^{2}), \tag{2}\] where the term with the smallest \(k\) is placed at the rightmost position and \(\mathcal{T}\) denotes time ordering. Note that \(\sum_{l=0}^{3}\sigma_{l}H_{\rm I}\sigma_{l}=0\), so we have \[\mathcal{U}=\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}\sigma_{k}H_{\rm S}(t)\sigma_{k}dt}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{2}). \tag{3}\] Clearly then, the first-order effect arising from \(H_{\rm I}\) is eliminated. It is also seen that the time evolution of the quantum system is strongly influenced by the DD pulses. Indeed, during each time interval of duration \(\tau\), the effective Hamiltonian [see the exponentials in Eq. (3)] is updated by the DD pulses.
We now unravel a key observation behind our Hamiltonian engineering approach. Denote the effective Hamiltonians altered by the DD pulses, associated with the time intervals \([k\tau,(k+1)\tau]\), as \(H_{k}(t)\equiv\sigma_{k}H_{\rm S}(t)\sigma_{k}\). Using Eq. (1), one obtains \(H_{0}(t)=\Omega_{x}(t)\sigma_{x}+\Omega_{y}(t)\sigma_{y}\), \(H_{1}(t)=\Omega_{x}(t)\sigma_{x}-\Omega_{y}(t)\sigma_{y}\), \(H_{2}(t)=-\Omega_{x}(t)\sigma_{x}+\Omega_{y}(t)\sigma_{y}\), and \(H_{3}(t)=-\Omega_{x}(t)\sigma_{x}-\Omega_{y}(t)\sigma_{y}\). To fight against these changes to the system Hamiltonian imposed by the DD pulses, we can quench the parameters of \(H_{\rm S}(t)\) during the respective time intervals. Specifically, for \(t\in[0,\tau]\), \(\Omega_{x}(t)=\Omega(t)\cos\varphi\) and \(\Omega_{y}(t)=\Omega(t)\sin\varphi\); for \(t\in(\tau,2\tau]\), \(\Omega_{x}(t)=\Omega(t)\cos\varphi\) and \(\Omega_{y}(t)=-\Omega(t)\sin\varphi\); for \(t\in(2\tau,3\tau]\), \(\Omega_{x}(t)=-\Omega(t)\cos\varphi\) and \(\Omega_{y}(t)=\Omega(t)\sin\varphi\); and finally, for \(t\in(3\tau,4\tau]\), \(\Omega_{x}(t)=-\Omega(t)\cos\varphi\) and \(\Omega_{y}(t)=-\Omega(t)\sin\varphi\). This can be experimentally implemented because \(\Omega(t)\) is identified as the Rabi frequency of the qubit Hamiltonian and \(\varphi\) represents a phase of the driving field coupling the two levels of the qubit. The proposed quenching can be realized by simply adjusting the phase parameter of the driving field; namely, the phase parameter should be chosen to be \(\varphi\) for \(t\in(0,\tau]\), \(-\varphi\) for \(t\in(\tau,2\tau]\), \(-\varphi+\pi\) for \(t\in(2\tau,3\tau]\) and \(\varphi+\pi\) for \(t\in(3\tau,4\tau]\). With the phase parameter quenched in this manner, one has \(H_{k}(t)=\Omega(t)(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})\). That is, the effective Hamiltonians appearing in the exponentials in Eq. (3) all become the same Hamiltonian \(\Omega(t)(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})\). If we further require \(\int_{0}^{4\tau}\Omega(t)dt=\theta/2\), Eq. (3) indicates \[\mathcal{U}=e^{-i\theta(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})/2}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{2}). \tag{4}\] As such, to first order in \(\tau\), the system-environment coupling is eliminated and yet we still implement a gate operation, namely, rotating the qubit about an axis in the \(xy\) plane by an angle \(\theta\). Remarkably, this is achieved without requiring that \(H_{\rm S}(t)\) commutes with the DD operations.
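The phase schedule derived above can be written compactly; a short sketch follows (the two-index form anticipates the two-layer construction developed next, and the sign tables are simply read off from how each pulse conjugates \(\sigma_{x}\) and \(\sigma_{y}\)):

```python
import numpy as np

SIGN_X = [+1, +1, -1, -1]   # sign of sigma_k sigma_x sigma_k for k = 0..3 (I, x, y, z)
SIGN_Y = [+1, -1, +1, -1]   # sign of sigma_k sigma_y sigma_k

def engineered_phase(phi, k, m=0):
    """Drive phase that undoes the pulse conjugation, so that
    sigma_m sigma_k H_S sigma_k sigma_m = Omega (cos(phi) sx + sin(phi) sy)."""
    sx = SIGN_X[k] * SIGN_X[m]
    sy = SIGN_Y[k] * SIGN_Y[m]
    return np.arctan2(sy * np.sin(phi), sx * np.cos(phi))

phi = np.pi / 4
print([engineered_phase(phi, k) for k in range(4)])
# -> [phi, -phi, pi - phi, phi - pi]; the last equals phi + pi modulo 2*pi,
#    matching the quench schedule given in the text
```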
We are now ready to engineer the time dependence of the qubit Hamiltonian for higher-order protection of quantum gates. As an example, we consider the second-order protection of a quantum gate by CDD with two layers. To that end, we use the time evolution protected by the DD operations \(\{\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3}\}\) as the basic building block and then nest it with a second layer of DD operations \(\{\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3}\}\). In doing so, we examine how the time dependence of \(H_{\rm S}(t)\) should be engineered. The evolution operator with two layers of DD operations is given by \[\mathcal{U}=\prod_{m=0}^{3}\sigma_{m}\left[\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{(4m+k)\tau/4}^{(4m+k+1)\tau/4}\sigma_{k}H_{\rm S}(t)\sigma_{k}dt}\right]\sigma_{m}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{3})=\prod_{m=0}^{3}\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{(4m+k)\tau/4}^{(4m+k+1)\tau/4}\sigma_{m}\sigma_{k}H_{\rm S}(t)\sigma_{k}\sigma_{m}dt}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{3}), \tag{5}\] where all the terms in the product are arranged from right to left with increasing \(k\) or \(m\). We can analogously define the function appearing in the exponential as the effective Hamiltonian \(H_{mk}(t)\equiv\sigma_{m}\sigma_{k}H_{\rm S}(t)\sigma_{k}\sigma_{m}\), with the explicit form easily found by use of Eq. (1). Just like the previous example showcasing first-order protection, here we can also quench the phase parameter \(\varphi\), without changing the Rabi frequency parameter \(\Omega(t)\), during the time intervals as well as the subintervals, such that \(H_{mk}(t)=\Omega(t)(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})\). With \(\int_{0}^{4\tau}\Omega(t)dt=\theta/2\), we have \[\mathcal{U}=e^{-i\theta(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})/2}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{3}), \tag{6}\] indicating that our one-qubit gate with second-order protection is achieved. It is straightforward to extend our approach to cases of higher-order protection of quantum gates by increasing the concatenation level of CDD. Let us now turn to more efficient higher-order protection of single-qubit quantum gates, using the same qubit Hamiltonian \(H_{\rm S}(t)\) introduced above. According to the UDD theory proposed for quantum memory protection, the pure dephasing term \(\sigma_{z}\otimes E_{z}\) can be filtered out by applying nonequidistant decoupling operations \(\sigma_{x}\). Likewise, the longitudinal relaxation term \(\sigma_{x}\otimes E_{x}+\sigma_{y}\otimes E_{y}\) can be suppressed by applying \(\sigma_{z}\) operations at special UDD timings [5, 6, 7]. Building upon this, in order to average out the general system-environment interaction \(H_{\rm I}=\sigma_{x}\otimes E_{x}+\sigma_{y}\otimes E_{y}+\sigma_{z}\otimes E_{z}\), we apply \(n\) decoupling operations \(\sigma_{x}\) to the quantum system at times \(t_{j}=4\tau\sin^{2}[j\pi/(2n+2)]\), and during each time interval \([t_{j-1},t_{j}]\), we further apply \(n\) decoupling operations \(\sigma_{z}\) at times \(t_{k}^{j}=t_{j-1}+(t_{j}-t_{j-1})\sin^{2}[k\pi/(2n+2)]\), where \(j,\ k=1,...,n\) and \(n\) is taken to be an even number to facilitate our engineering approach; a short helper computing this nested schedule is sketched below.
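A minimal sketch of the nested UDD schedule just defined, under our reading of the timing formulas (with \(T=4\tau\)):

```python
import numpy as np

def udd_times(n, T):
    """Uhrig times t_j = T sin^2[j pi / (2n+2)], j = 1..n, on an interval of length T."""
    j = np.arange(1, n + 1)
    return T * np.sin(j * np.pi / (2 * n + 2)) ** 2

def nested_udd_times(n, T):
    """Outer sigma_x times on [0, T] and, inside each outer interval
    [t_{j-1}, t_j], the inner sigma_z times at Uhrig fractions of that interval."""
    outer = np.concatenate(([0.0], udd_times(n, T), [T]))
    inner = [t0 + udd_times(n, t1 - t0) for t0, t1 in zip(outer[:-1], outer[1:])]
    return outer[1:-1], inner
```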
Such a nested UDD protocol yields the following time evolution operator: \[\mathcal{U}=\mathcal{U}(4\tau-t_{n})\prod_{j=1}^{n}\sigma_{x}\mathcal{U}(t_{j}-t_{j-1})=\prod_{j=1}^{n+1}\sigma_{x}^{j-1}\mathcal{U}(t_{j}-t_{j-1})\sigma_{x}^{j-1}, \tag{7}\] where \(t_{0}\equiv 0\), \(t_{n+1}\equiv 4\tau\), all the terms in the product from right to left have increasing \(j\), and \(\mathcal{U}(t_{j}-t_{j-1})\), the unitary evolution operator associated with the interval \([t_{j-1},t_{j}]\), is expressed in terms of the subinterval evolution operators as \[\mathcal{U}(t_{j}-t_{j-1})=\mathcal{U}(t_{j}-t_{n}^{j})\prod_{k=1}^{n}\sigma_{z}\mathcal{U}(t_{k}^{j}-t_{k-1}^{j})=\prod_{k=1}^{n+1}\sigma_{z}^{k-1}\mathcal{U}(t_{k}^{j}-t_{k-1}^{j})\sigma_{z}^{k-1}, \tag{8}\] with \(t_{0}^{j}\equiv t_{j-1}\) and \(t_{n+1}^{j}\equiv t_{j}\), and all the terms in the product from right to left have increasing \(k\). The UDD theory guarantees that the effect of the system-environment interaction is suppressed up to the \(n\)th order. That is, \[\mathcal{U}=\prod_{j=1}^{n+1}\prod_{k=1}^{n+1}\mathcal{T}e^{-i\int_{t_{k-1}^{j}}^{t_{k}^{j}}\sigma_{x}^{j-1}\sigma_{z}^{k-1}H_{\rm S}(t)\sigma_{z}^{k-1}\sigma_{x}^{j-1}dt}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{n+1}). \tag{9}\] To find out how to engineer the time dependence of \(H_{\rm S}(t)\), we define the effective Hamiltonians \(H_{k}^{j}(t)\equiv\sigma_{x}^{j-1}\sigma_{z}^{k-1}H_{\rm S}(t)\sigma_{z}^{k-1}\sigma_{x}^{j-1}\), i.e., those terms appearing in the exponentials in the equation above. Further using Eq. (1), one obtains \(H_{k}^{j}(t)=(-1)^{k-1}\Omega_{x}(t)\sigma_{x}+(-1)^{k+j}\Omega_{y}(t)\sigma_{y}\), explicitly depicting the impact of our nested UDD operations on the system Hamiltonian. To combat these unwanted changes to the system Hamiltonian, we can again adjust only the phase parameter of the qubit driving field, such that for \(t\in[t_{k-1}^{j},t_{k}^{j}]\), \(\Omega_{x}(t)=(-1)^{k-1}\Omega(t)\cos\varphi\) and \(\Omega_{y}(t)=(-1)^{k+j}\Omega(t)\sin\varphi\). The evident outcome of this phase quenching approach is that all the effective Hamiltonians become the same, namely, \(H_{k}^{j}(t)=\Omega(t)(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})\). Assuming \(\int_{0}^{4\tau}\Omega(t)dt=\theta/2\), one then arrives at \[\mathcal{U}=e^{-i\theta(\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y})/2}\otimes e^{-i4H_{\rm E}\tau}+\mathcal{O}(\tau^{n+1}), \tag{10}\] which is a quantum gate protected by nested UDD to the \(n\)th order. To our knowledge, such a possibility of higher-order UDD protection of quantum gates has never been shown before. In addition, the quantum gate time is still the same as in the bare case without protection. To see this more explicitly, let \(\Omega\) be the average Rabi frequency during \([0,4\tau]\), namely, \(\Omega=\int_{0}^{4\tau}\Omega(t)dt/(4\tau)\). Then for all the cases illustrated above, the total gate time \(4\tau\) remains \(\theta/(2\Omega)\). _Numerical results.--_ To verify our engineering concept, let us consider a simple form of system-environment interaction of the Heisenberg coupling form \[H_{\rm I}=\epsilon(\sigma_{x}^{s}\sigma_{x}^{b}+\sigma_{y}^{s}\sigma_{y}^{b}+\sigma_{z}^{s}\sigma_{z}^{b}), \tag{11}\] where the first spin represents the qubit and the second spin represents a quantum bath, with \(\epsilon\) being the coupling strength. We also assume that there is no driving field acting solely on the environment spin.
Admittedly, this \(H_{\rm I}\) is extremely simple as compared with a real quantum bath, but it does capture the essence that the qubit system is subject to both dephasing and population relaxation, and it suffices to illustrate how our engineering approach leads to higher-order protection. \(H_{\rm I}\) may also model some residual interaction between a qubit under gate operation and a surrounding qubit. The performance of our higher-order protection is characterized by the fidelity \(F=\langle\phi|\rho|\phi\rangle\). Here, \(|\phi\rangle\) is the target output state and \(\rho\) is the real output state that is obtained by first solving the Liouville equation \(\dot{\varrho}(t)=-i[H(t),\varrho(t)]\) and then performing a partial trace over the environment spin. We take the initial state as \((|0\rangle+|1\rangle)/\sqrt{2}\), the parameters of the driving Hamiltonian as \(\Omega=10\pi\) MHz and \(\varphi=\pi/4\), and the evolution time as \(0.05\,\mu\)s. The ideal quantum gate under these choices is hence \(U=(\sigma_{x}+\sigma_{y})/\sqrt{2}\) and the target output state is \(|\phi\rangle=[\exp(-i\pi/4)|0\rangle+\exp(i\pi/4)|1\rangle]/\sqrt{2}\). To illustrate how higher-order protection can be achieved, we tune the system-environment coupling strength \(\epsilon\) over a wide range, namely, \(\epsilon\in[0,0.5\Omega]\). We then plot the fidelity \(F\) vs the coupling strength \(\epsilon\) for the quantum gate under PDD and CDD with first-order and second-order protection, as shown by the blue and red lines in Fig. 1(a). As a comparison, we also plot the gate fidelity without any DD protection, as shown by the black line in Fig. 1(a). \begin{table} \begin{tabular}{c c c c c c c} \(\epsilon/\Omega\) & 0.05 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline \(F_{\rm bare}\) & 98.75\% & 95.05\% & 81.47\% & 63.57\% & 47.45\% & 38.67\% \\ \(F_{\rm PDD}\) & 99.90\% & 99.61\% & 98.39\% & 96.31\% & 93.37\% & 89.60\% \\ \(F_{\rm CDD}\) & 99.99\% & 99.98\% & 99.92\% & 99.82\% & 99.67\% & 99.49\% \\ \end{tabular} \end{table} Table 1: Computational results of \([\epsilon,F_{\rm bare},F_{\rm PDD},F_{\rm CDD}]\), namely, quantum gate fidelity without protection, with first-order PDD, and with second-order CDD vs the system-environment coupling strength \(\epsilon\). Figure 1: Quantum gate fidelity as a function of the system-environment coupling strength \(\epsilon\). (a) Fidelity \(F\) with the first-order protection (blue), second-order protection (red), and without protection (black) for \(\epsilon\in[0,0.5\Omega]\). (b) More results of fidelity \(F\) under the second-order protection with CDD for \(\epsilon\in[0,2\Omega]\). Figure 1 illustrates that while the first-order protection yields gate fidelities much better than that of the bare gate, the second-order protection further dramatically improves the performance of the quantum gate. In particular, for the worst case \(\epsilon=2\Omega\) (the system-bath coupling strength is twice the Rabi frequency), the gate fidelity with second-order protection can still reach more than \(92\%\), as shown in Fig. 1(b). To appreciate the DD improvement more quantitatively, we present in Table 1 our computational findings based on our simple model. As seen from Table 1, for a moderate coupling strength \(\epsilon=0.05\Omega\), the gate fidelities under first-order and second-order protection can reach \(F_{\text{PDD}}=99.90\%\) and \(F_{\text{CDD}}=99.99\%\), respectively, both being much higher than the fidelity \(F_{\text{bare}}=98.75\%\) without DD protection.
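Because the model is just two spins, this comparison can be checked directly by matrix exponentiation. The sketch below is our own illustration of the bare gate versus the first-order (PDD) protected gate with the quenched drive; the initial state of the bath spin is not specified above, so a maximally mixed bath is assumed, and the printed numbers therefore need not reproduce Table 1 exactly. The second-order CDD case is omitted for brevity.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

# Omega in rad/us so that Omega*T = pi/2, i.e. theta = pi (our unit choice)
Omega, phi, T = 10 * np.pi, np.pi / 4, 0.05
H_drive = Omega * (np.cos(phi) * sx + np.sin(phi) * sy)

def H_int(eps):                       # Heisenberg coupling of Eq. (11)
    return eps * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

def U_bare(eps):
    return expm(-1j * (np.kron(H_drive, I2) + H_int(eps)) * T)

def U_pdd(eps):
    """First-order PDD: 4 intervals with the drive phase quenched so that
    sigma_k H_S(t) sigma_k = H_drive in every interval."""
    tau, U = T / 4, np.eye(4, dtype=complex)
    for s in paulis:                  # k = 0..3, product built right to left
        Hk = s @ H_drive @ s          # engineered (quenched) physical drive
        Uk = expm(-1j * (np.kron(Hk, I2) + H_int(eps)) * tau)
        P = np.kron(s, I2)
        U = P @ Uk @ P @ U
    return U

def fidelity(U):
    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
    phi_t = expm(-1j * H_drive * T) @ psi0               # ideal output state
    rho0 = np.kron(np.outer(psi0, psi0.conj()), I2 / 2)  # bath assumed maximally mixed
    rho = U @ rho0 @ U.conj().T
    rho_s = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out bath
    return float(np.real(phi_t.conj() @ rho_s @ phi_t))

eps = 0.05 * Omega
print(fidelity(U_bare(eps)), fidelity(U_pdd(eps)))
```

In the limit \(\epsilon=0\) each conjugated factor reduces to \(e^{-iH_{\text{drive}}\tau}\), so the protected gate coincides with the ideal gate, which is a useful sanity check of the quenching logic.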
Notably, for the case \(\epsilon=0.5\Omega\), i.e., when the coupling strength is approximately half of the Rabi frequency of the gate Hamiltonian such that the fidelity of the bare gate is as low as \(38.67\%\) (the gate is basically destroyed), the first-order protection can boost the fidelity back to \(90\%\) (which may not suffice for an actual quantum computation application), but the second-order protection can further improve the fidelity to \(99.49\%\). These numerical findings do not represent what one can achieve on an actual quantum computing platform, but they do indicate the promise of higher-order protection of gate operations, now made possible by our engineering approach. Next let us check the performance of our approach when using nested UDD to protect the quantum gate to the \(n\)th order according to the protocol outlined above. Figure 2 presents the corresponding gate fidelity for a wide range of \(\epsilon\in[0,2\Omega]\), with \(n=2\), \(4\) and \(6\). The gate fidelity is seen to be greatly enhanced when we increase the order of protection. We further present our numerics in Table 2. As seen from Table 2, for \(\epsilon\in[0,2\Omega]\), the gate fidelities with fourth- and sixth-order protection always exceed \(98\%\). Remarkably, the fidelity with sixth-order protection can be as high as \(99.54\%\) even when the system-environment coupling strength is twice the Rabi frequency of the system Hamiltonian. _Two-qubit gates.--_ Having computationally verified that the concept underlying our engineering approach is valid, we finally discuss how to protect a nontrivial two-qubit gate, which is important for universal quantum computation under DD protection. As an example, we show below how to engineer a two-qubit Hamiltonian to realize a nontrivial two-qubit gate under second-order protection. Let us now consider a system Hamiltonian different from all single-qubit cases discussed above, namely, \[H_{\text{S}}=J_{1}(t)\sigma_{z}^{(1)}+J_{2}(t)\sigma_{z}^{(2)}+J(t)\sigma_{z}^{(1)}\otimes\sigma_{z}^{(2)}, \tag{12}\] where \(J_{1}\) and \(J_{2}\) are the parameters of local operations acting on the first and second qubits, respectively, and \(J\) is the coupling parameter between the two qubits. We assume each qubit is subject to its own dephasing and population relaxation. To suppress the decoherence to the second order, we exploit the time evolution protected by the PDD operations \(\{\sigma_{0}^{(1)}\otimes\sigma_{0}^{(2)},\sigma_{1}^{(1)}\otimes\sigma_{1}^{(2)},\sigma_{2}^{(1)}\otimes\sigma_{2}^{(2)},\sigma_{3}^{(1)}\otimes\sigma_{3}^{(2)}\}\) as the basic building block and then nest it into a second layer of the same DD operations.
As a result, the time evolution operator over a period of \(4\tau\) is found to be \[\mathcal{U}= \left[\prod_{m=0}^{3}\sigma_{m}^{(1)}\otimes\sigma_{m}^{(2)}\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}\sigma_{k}^{(1)}\otimes\sigma_{k}^{(2)}H_{\text{S}}(t)\sigma_{k}^{(1)}\otimes\sigma_{k}^{(2)}dt}\,\sigma_{m}^{(1)}\otimes\sigma_{m}^{(2)}\right]\otimes e^{-i4H_{E}\tau}+\mathcal{O}(\tau^{3})\] \[= \prod_{m=0}^{3}\prod_{k=0}^{3}\mathcal{T}e^{-i\int_{k\tau}^{(k+1)\tau}\sigma_{m}^{(1)}\sigma_{k}^{(1)}\otimes\sigma_{m}^{(2)}\sigma_{k}^{(2)}H_{\text{S}}(t)\sigma_{k}^{(1)}\sigma_{m}^{(1)}\otimes\sigma_{k}^{(2)}\sigma_{m}^{(2)}dt}\otimes e^{-i4H_{E}\tau}+\mathcal{O}(\tau^{3}). \tag{13}\] Again we may define the effective Hamiltonians \(H_{mk}(t)\equiv[\sigma_{m}^{(1)}\sigma_{k}^{(1)}]\otimes[\sigma_{m}^{(2)}\sigma_{k}^{(2)}]H_{\text{S}}(t)[\sigma_{k}^{(1)}\sigma_{m}^{(1)}]\otimes[\sigma_{k}^{(2)}\sigma_{m}^{(2)}]\). Using the expression of such effective Hamiltonians, it can be observed that the impact of the DD pulses on the system Hamiltonian will not flip the sign of the qubit-qubit interaction term \(J(t)\sigma_{z}^{(1)}\otimes\sigma_{z}^{(2)}\). In other words, only the sign of the single-qubit terms \(J_{1}(t)\sigma_{z}^{(1)}\) and \(J_{2}(t)\sigma_{z}^{(2)}\) may be changed. When this change does occur, we can just reverse the direction of the driving field to cancel the unwanted action of the CDD pulses. As an example where \(|J_{1}(t)|=|J_{2}(t)|=J(t)\), our quench protocol coordinated with the DD pulses will lead to \(H_{mk}=-J(t)[\sigma_{z}^{(1)}+\sigma_{z}^{(2)}-\sigma_{z}^{(1)}\sigma_{z}^{(2)}]\). For \(\int_{0}^{4\tau}J(t)dt=\theta/2\), we obtain \[\mathcal{U}=e^{-i\theta(\sigma_{z}^{(1)}+\sigma_{z}^{(2)}-\sigma_{z}^{(1)}\otimes\sigma_{z}^{(2)})/2}\otimes e^{-i4H_{E}\tau}+\mathcal{O}(\tau^{3}). \tag{14}\] Ignoring an unimportant global phase factor \(\exp(-i\theta/2)\), this gate under second-order protection yields \[U=|00\rangle\langle 00|+|01\rangle\langle 01|+|10\rangle\langle 10|+e^{i\theta}|11\rangle\langle 11|, \tag{15}\] which is a controlled-phase gate. Combined with the protected single-qubit gates illustrated above, we have thus conceptually shown that in principle it is possible to execute universal quantum computation under higher-order protection with Hamiltonian engineering. \begin{table} \begin{tabular}{c c c c c c} \(\epsilon/\Omega\) & 0.2 & 0.6 & 1.0 & 1.4 & 2.0 \\ \hline \(F_{\text{UDD-2}}\) & 99.57\% & 94.96\% & 84.51\% & 70.18\% & 50.28\% \\ \(F_{\text{UDD-4}}\) & 99.9998\% & 99.995\% & 99.93\% & 99.63\% & 98.14\% \\ \(F_{\text{UDD-6}}\) & 99.9999\% & 99.996\% & 99.97\% & 99.88\% & 99.54\% \\ \end{tabular} \end{table} Table 2: Computational results of \([\epsilon,F_{\text{UDD-2}},F_{\text{UDD-4}},F_{\text{UDD-6}}]\), namely, quantum gate fidelity under second-order, fourth-order, and sixth-order UDD protection vs the system-environment coupling strength \(\epsilon\), with a nested UDD scheme. Figure 2: Quantum gate fidelity as a function of the system-environment coupling strength over a wide range of \(\epsilon\in[0,2\Omega]\) under nested UDD protection. Black, blue and red lines respectively correspond to the protection under the orders of \(n=2\), \(4\) and \(6\). _Discussions.--_ We have proposed a simple approach towards the realization of higher-order protection of quantum gates.
The central idea is to quench the gate Hamiltonian to counteract the influence of the DD pulses on the time evolution of the system. In this way, on the one hand we manipulate the quantum states by DD to average out the system-bath interaction to high orders; on the other hand we retain control of the quantum evolution and hence achieve higher-order protection of quantum gates. This work thus indicates that highly efficient schemes such as UDD, originally designed for quantum memory protection, can be integrated with quantum gates to yield higher-order protection. Though in this work we only illustrated our idea using general-purpose PDD, CDD and UDD sequences, our approach can exploit other DD designs as well. In particular, if spectral information about the environment noise is available, then the time intervals between DD pulses can be further optimized according to the noise spectrum [30]; quantum gate protection using our concept can directly benefit from such optimization, which can further drastically improve the performance of decoherence suppression. Our Hamiltonian engineering approach in coordination with DD hence makes the measurement of environment spectra much more relevant than before to the realization of high-fidelity quantum gates. The numerics presented in this work are not aimed at simulating a realistic situation, but at verifying that our approach is conceptually correct. Indeed, as a price for implementing our design, we have assumed DD control pulses that are instantaneous compared with the total gate time. It will be of great interest to examine the actual performance of our simple design on real physical platforms, including noisy intermediate-scale quantum computers. _Acknowledgments.--_ J.G. is grateful to Lorenza Viola for technical discussions and for her highly constructive comments on the first version of our manuscript. This work was supported by the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant.
2308.14003
Fitting for the energy levels of hydrogen
Atomic hydrogen energy levels calculated to high precision are required to assist experimental researchers working on spectroscopy in the pursuit of testing quantum electrodynamics (QED) and probing for physics beyond the Standard Model. There are two important parts to the problem of computing these levels: an accurate evaluation of contributions from QED and using an accurate value for the proton charge radius as an input. Recent progress on QED corrections to the fine structure, as well as increasing evidence that a proton charge radius in the range of 0.84 fm is favored over the previously adopted larger value in the 0.88 fm range, has advanced the field, yet several state-of-the-art measurements remain in contradiction with this smaller value. Motivated by on-going and future work in this area, we present here a simple parameterization for the energy levels of hydrogen at the level of hyperfine structure using the so-called relativistic Ritz approach. The fitting of a finite sample of QED-generated levels at low to intermediate principal quantum number, $n$, gives a generally applicable formula for \emph{all} values of $n$ for each distinct angular momentum channel, given in this work up to orbital angular momentum number $\ell=30$. We also provide a simple linear parameterization for the shift in hydrogen energy levels as a function of the proton radius, providing a useful cross check for extant and future measured energy intervals.
David M. Jacobs, Marko Horbatsch
2023-08-27T04:34:32Z
http://arxiv.org/abs/2308.14003v1
# Fitting for the energy levels of hydrogen ###### Abstract Atomic hydrogen energy levels calculated to high precision are required to assist experimental researchers working on spectroscopy in the pursuit of testing quantum electrodynamics (QED) and probing for physics beyond the Standard Model. There are two important parts to the problem of computing these levels: an accurate evaluation of contributions from QED and using an accurate value for the proton charge radius as an input. Recent progress on QED corrections to the fine structure, as well as increasing evidence that a proton charge radius in the range of 0.84 fm is favored over the previously adopted larger value in the 0.88 fm range, has advanced the field, yet several state-of-the-art measurements remain in contradiction with this smaller value. Motivated by on-going and future work in this area, we present here a simple parameterization for the energy levels of hydrogen at the level of hyperfine structure using the so-called relativistic Ritz approach. The fitting of a finite sample of QED-generated levels at low to intermediate principal quantum number, \(n\), gives a generally applicable formula for _all_ values of \(n\) for each distinct angular momentum channel, given in this work up to orbital angular momentum number \(\ell=30\). We also provide a simple linear parameterization for the shift in hydrogen energy levels as a function of the proton radius, providing a useful cross check for extant and future measured energy intervals. ## I Introduction Precision measurements of atoms are used for metrological purposes and testing the theory of quantum electrodynamics (QED). This is of current interest also in the context of beyond-the-Standard-Model phenomena, as they could manifest themselves in atomic spectroscopy [1]. The theory of bound-state QED is sufficiently mature that the dominant uncertainty in its predictions for the levels of hydrogen and deuterium is due to the nuclear radius. The proton radius puzzle first appeared in 2010 [2] when muonic hydrogen measurements indicated that \(r_{p}\) is 4% smaller than had been previously determined, a value near 0.88 fm [3]. Over the last decade or so, more scattering and spectroscopic experiments have been performed that suggest a value of \(r_{p}\) closer to 0.84 fm [4]. However, discrepancies remain, such as the results of Fleurbaey et al. [5] and Brandt et al. [6], that indicate a value of \(r_{p}\) larger than 0.84 fm with substantial statistical significance. Thus, the puzzle is not entirely solved; more measurements are planned in the near future, including that of the \(1S_{1/2}\to 4S_{1/2}\) interval [7]. The bound-state QED predictions for the energy levels of hydrogen involve a combination of long analytic expressions and numerical results that are cumbersome to use; see, e.g., [8]. Our goal here is to provide a fitting formula that reproduces the bound-state QED predictions for those energy levels to sufficiently high accuracy that bound-state QED need not be used directly. To this end, we use the so-called relativistic Ritz approach, which is a long-distance effective theory describing the bound states of two-particle systems whose binding potential is dominated by the Coulomb interaction [9]. 
In that effective theory, the energy levels of atomic hydrogen were shown to be \[\frac{E}{c^{2}}=\sqrt{m_{e}^{2}+m_{p}^{2}+\frac{2m_{e}m_{p}}{\sqrt{1+\left( \frac{\alpha}{n_{\star}}\right)^{2}}}}-\left(m_{e}+m_{p}\right), \tag{1}\] where \(\alpha\) is the fine-structure constant and the effective quantum number \[n_{\star}=n-\delta\,. \tag{2}\] The quantum defect, \(\delta\), itself depends on the principal quantum number, \(n\), and accounts for interactions that are shorter in range than the Coulomb interaction; it also depends on the orbital, total electronic, and total system quantum numbers, \(\ell\), \(j\), and \(f\), respectively. To make the numerical analyses more efficient, we Taylor expand equation (1) in small \(\alpha\) up to eighth order1 and factor out the Rydberg frequency, \[cR_{\infty}\equiv\frac{m_{e}\alpha^{2}c^{2}}{2h}\,, \tag{3}\] allowing us to write \[\frac{E}{h}=cR_{\infty}\left(\frac{A_{2}}{n_{\star}^{2}}+\frac{A_{4}}{n_{\star} ^{4}}+\frac{A_{6}}{n_{\star}^{6}}+\frac{A_{8}}{n_{\star}^{8}}\right)\,, \tag{4}\] where the \(A_{2k}=\mathcal{O}\big{(}\alpha^{2k-2}\big{)}\). For the value of the fine-structure constant we use \[\alpha^{-1}=137.035\,999\,166(15)\,, \tag{5}\] derived from a recent measurement of the electron g-factor [10]. Together with the mass ratio \[\frac{m_{p}}{m_{e}}=1\,836.152\,673\,349(71)\,, \tag{6}\] inferred from spectroscopy of HD\({}^{+}\)[11]2, this allows us to determine the constants Footnote 2: This value is consistent with that of Ref. [12]. Both values reported in Refs. [11] and [12] rely on the proton-to-deuteron mass ratio obtained by Fink and Myers [13]. \[A_{2} = -0.999\,455\,679\,424\,739(21) \tag{7}\] \[A_{4} = 3.990\,953\,7921(87)\times 10^{-5}\] (8) \[A_{6} = -1.770\,774\times 10^{-9}\] (9) \[A_{8} = 8.25\times 10^{-14}\,. \tag{10}\] There are uncertainties in \(A_{6}\) and \(A_{8}\); however, they are irrelevant at the level of accuracy needed here. The simplest Ritz-like expansion is posited for the quantum defect, namely a series expansion in terms of the energy eigenvalues, which are assumed to be small relative to some high-energy scale, \(\Lambda\): \[\delta=\delta_{0}+\lambda_{1}\frac{E}{\Lambda}+\lambda_{2}\left(\frac{E}{ \Lambda}\right)^{2}+\ldots\,, \tag{11}\] where \(\delta_{0}\) and the \(\lambda_{i}\) are dimensionless coefficients. However, because in this form \(E\) depends implicitly on \(\delta\), it is impractical to use for most theoretical or empirical applications. A _modified_ ansatz written as a series in inverse powers of \((n-\delta_{0})\) is asymptotically \((n\to\infty)\) equivalent to (11) and is significantly easier to use for data fitting. 
Analyzing the large-\(n\) behavior of (4) with (11), it may be verified that \[\delta=\delta_{0}+\frac{\delta_{2}}{\left(n-\delta_{0}\right)^{2}}+\frac{\delta_{4}}{\left(n-\delta_{0}\right)^{4}}+\frac{2\delta_{2}^{2}}{\left(n-\delta_{0}\right)^{5}}+\frac{\delta_{6}}{\left(n-\delta_{0}\right)^{6}}+\frac{6\delta_{2}\delta_{4}}{\left(n-\delta_{0}\right)^{7}}+\frac{\delta_{8}}{\left(n-\delta_{0}\right)^{8}}+\frac{4\delta_{4}^{2}+8\delta_{2}\delta_{6}}{\left(n-\delta_{0}\right)^{9}}\\ +\frac{\delta_{10}}{\left(n-\delta_{0}\right)^{10}}+\frac{-40\delta_{2}^{4}+10\delta_{4}\delta_{6}+10\delta_{2}\delta_{8}}{\left(n-\delta_{0}\right)^{11}}+\frac{\delta_{12}}{\left(n-\delta_{0}\right)^{12}}+\frac{-296\delta_{2}^{3}\delta_{4}+6\delta_{6}^{2}+12\delta_{4}\delta_{8}+12\delta_{2}\delta_{10}}{\left(n-\delta_{0}\right)^{13}}+\ldots\,, \tag{12}\] where the \(\delta_{i}\) are free parameters. As shown in the following section, \(\delta_{0}\) is small (of order \(\alpha^{2}\)) and thus \(1/(n-\delta_{0})\) is small for \(n>1\), but we find that the modified defect expansion (12) satisfactorily reproduces energy levels even for \(n=1\) with the inclusion of a sufficient number of \(\delta_{i}\). A truncation of equation (12) is required for any application and we specify the order of the analysis by the highest inverse power of \((n-\delta_{0})\) included. Actually, truncations made at each successive inverse _odd_ power include one additional defect parameter. Because there is no \(-1\)st or \(-3\)rd term, including terms through \((n-\delta_{0})^{-1}\) requires only \(\delta_{0}\) and is considered lowest order (LO), whereas including terms through \((n-\delta_{0})^{-3}\) requires both \(\delta_{0}\) and \(\delta_{2}\) and is considered next-to-lowest order (NLO). At higher orders we use the abbreviation N\({}^{k}\)LO, where \(k+1\) is equal to the number of defect parameters needed. For practical purposes, as shown below, the largest expansion is needed for \(S\)-states, where we truncate at the level N\({}^{6}\)LO, thereby including defect parameters up to \(\delta_{12}\). For higher angular momentum eigenstates fewer terms are required to reach the same level of accuracy. As outlined below, we use a limited number of precisely calculated energy levels of hydrogen employing the most up-to-date bound-state QED calculations [8], and fit them with equation (4) using the defect formula in equation (12). We determine the necessary \(\delta_{i}\) to reproduce all theoretical energy levels to within their uncertainties and demonstrate the power of our fits by testing our results against higher-\(n\) calculated energies. Because some energies can be predicted with a relative precision that is better than \(10^{-13}\), to ensure this level of reproducibility, the parameters \(A_{2}\) through \(A_{8}\) are given to an absolute precision of \(10^{-15}\) and, likewise, we report our fit values of the \(\delta_{i}\) to the same level of precision. ## II Theoretical inputs, uncertainties, and shifts due to the proton radius According to bound-state QED, the theoretical energy levels of hydrogen can be written as the sum of a gross level structure, fine-structure (FS), and hyperfine-structure (HFS) contribution, \[E_{n\ell jf}=-\frac{cR_{\infty}}{n^{2}}\,\frac{m_{p}}{m_{p}+m_{e}}+E_{n\ell j}^{\rm(FS)}+E_{n\ell jf}^{\rm(HFS)}\,, \tag{13}\] where we have chosen units in which Planck's constant \(h=1\).
The electron's reduced Compton wavelength, muon-to-electron mass ratio, proton g-factor, and electron magnetic-moment anomaly taken from CODATA-18 [8] are \[\lambda_{e} = 3.861\,592\,6796(12)\times 10^{-13}\,{\rm m} \tag{14}\] \[\frac{m_{\mu}}{m_{e}} = 206.768\,2830(46)\] (15) \[g_{p} = 5.585\,694\,6893(16)\] (16) \[a_{e} = 1.159\,652\,181\,28(18)\times 10^{-3}\,. \tag{17}\] We also use the proton radius inferred from the muonic hydrogen spectroscopy of Antognini et al. 2013 [14], \[r_{p}=0.840\,87(39)\,{\rm fm}\,. \tag{18}\] Following the procedure described in [15], the measured \(1S_{1/2}\)[16] and \(2S_{1/2}\)[17] hyperfine intervals may be used to the determine \(E_{n\ell jj}^{\rm(HFS)}\) to sufficient accuracy such that \(cR_{\infty}\) may be determined using the measured \(1S_{1/2}^{(f=1)}\to 2S_{1/2}^{(f=1)}\) interval from [18]; this completely specifies the theory. The theoretical uncertainty in the energy levels is dominated by the uncertainty in \(E_{n\ell j}^{\rm(FS)}\), which affects the levels directly through \(E_{n\ell j}^{\rm(FS)}\) itself and also indirectly through the determination of \(cR_{\infty}\). There are 5 uncertainties relevant to \(E_{n\ell j}^{\rm(FS)}\) at the level of precision needed in this work. Four of these are QED uncertainties taken directly from CODATA-18 [8]: the uncertainty in the two-photon correction term \(B_{60}\) yields \(\delta_{\ell,0}\left(0.94\,{\rm kHz}\right)/n^{3}\) (a reduction by about 50% compared to CODATA-14); the uncertainty in the three-photon correction term \(C_{50}\) yields \(\delta_{\ell,0}\left(0.96\,{\rm kHz}\right)/n^{3}\); nuclear polarizability uncertainty yields \(\delta_{\ell,0}\left(0.39\,{\rm kHz}\right)/n^{3}\); and a radiative recoil uncertainty yields \(\delta_{\ell,0}\left(0.74\,{\rm kHz}\right)/n^{3}\). In addition to the QED uncertainties mentioned above, there is an error in \(E_{n\ell j}^{\rm(FS)}\) due to the proton radius (18) which amounts to \(\delta_{\ell,0}\left(1.03\,{\rm kHz}\right)/n^{3}\). Adding all of these errors in quadrature, the overall uncertainty in the fine-structure correction is \[\delta(E_{n\ell j}^{\rm(FS)})=\frac{\left(1.9\,{\rm kHz}\right)}{n^{3}} \delta_{\ell,0}\,, \tag{19}\] and it follows that \[cR_{\infty}=3\,289\,84\,960\,249.1(2.2)\,{\rm kHz}\,, \tag{20}\] a shift upward of 0.2 kHz compared to the result reported in [15], but well within the uncertainty computed therein. Accounting for the correlated uncertainties in the QED predictions and determination of \(cR_{\infty}\), the theoretical uncertainty on any given level is \[\delta(E_{n\ell jj})=\left|\frac{1.9\,{\rm kHz}}{n^{3}}\delta_{\ell,0}- \frac{2.2\,{\rm kHz}}{n^{2}}\right|\,, \tag{21}\] and is therefore below 0.6 kHz for all levels. Lastly, given the potential issue of a remaining proton radius puzzle, we consider the possible systematic implications of a shift away from the value quoted in equation (18). Defining the proton radius shift, \[\Delta r_{p}=r_{p}-0.840\,87\,{\rm fm}\,, \tag{22}\] it follows that the Rydberg frequency shifts by \[\Delta\left(cR_{\infty}\right)_{r_{p}}=3.1\,{\rm kHz}\left(\frac{\Delta r_{ p}}{0.001\,{\rm fm}}\right)\,, \tag{23}\] and the energy levels shift by \[\Delta\left(E_{n\ell jf}\right)_{r_{p}}=\left(\frac{2.6\,{\rm kHz}}{n^{3}} \delta_{\ell,0}-\frac{3.1\,{\rm kHz}}{n^{2}}\right)\left(\frac{\Delta r_{p}} {0.001\,{\rm fm}}\right)\,. \tag{24}\] Equations (23) and (24) will be utilized below to cross check our results against a selection of experimental results. 
## III Fitting to theoretical levels of hydrogen ### Overview For a given orbital angular momentum value, \(\ell\), we generate values of \(E_{n\ell jf}\) using (13), following the same procedure described in Ref. [15] with updated theoretical inputs from Ref. [8]. Values of Bethe logarithms are taken from Refs. [19] and [20]; however, many levels require numerically-computed QED terms, such as \(B_{60}(n\ell_{j})\), which have not been computed (or made publicly available) for all values of \(n\), \(\ell\), and \(j\). Therefore, we fit the available values of such terms with simple formulas in terms of inverse powers of \(n\) and interpolate or extrapolate to obtain the needed terms. Our conservative estimate of the interpolation/extrapolation error is far below the theoretical (QED) error. We compute energy levels from \(n_{\rm min}=\ell+1\) up to \(n_{\rm max}=\max\left(15,\ell+1\right)\), fit them with equation (4) using the defect formula in equation (12), and weight each data point by the inverse square of the theoretical uncertainty given in (21). The fit order, i.e., the number of necessary defect parameters (\(\delta_{i}\)), is increased until the difference between the fit value and the QED value from (13) falls below the QED error given in equation (21). Some example fits for a subset of \(\ell=0\) and \(\ell=1\) states are shown in Figures 1 and 2, which show the absolute differences between the relativistic Ritz and QED predictions; differences that do not appear in the figures are below \(1\,\)Hz. For these states, we have used QED-predicted levels up to \(n_{\rm max}=15\) for the fits, so all \(n>15\) values represent a true test of the model against QED. Summarizing the findings presented in Figs. 1 and 2, we make the following observations: The number of fit parameters required, i.e., the order of the expansion in (12), depends on the angular momentum value \(\ell\). This is related physically to the fact that states with \(\ell>0\) have a centrifugal barrier preventing the electron from getting close to the proton. For \(S\)-states the complete model as written out in (12) is required, and with increasing integer values of \(\ell\) the required order tends to decrease until \(\ell=9\), beyond which only \(\delta_{0}\) is needed. This trend is partially demonstrated in Fig. 1, where \(\ell=0\) and the N\({}^{6}\)LO model provides an adequate fit, while in Fig. 2 (\(\ell=1\)) the N\({}^{5}\)LO model is shown to provide sufficient accuracy. In either case, the low-\(n\) levels are reproduced with such precision (\(<1\) Hz) that they do not appear in the figures. The full set of fit parameters for \(\ell=0\) and \(\ell=1\) states is shown in Table 1, and Tables 5-8 in Appendix A present fit parameters for states from \(\ell=2\) to \(\ell=30\). The number of fit parameters for each combination of \(\ell\), \(j\), and \(f\) never exceeds the number of energy values used for each fit. In fact, for states with \(\ell\geq 14\) we have only used one QED-predicted level to fit for the one defect parameter (\(\delta_{0}\)) required and have verified our model predictions up to at least \(n=30\); see Fig. 3 for an example in which \(\ell=14\). This points to the efficiency of the relativistic Ritz family of models. To interpret the leading-order defect parameter \(\delta_{0}\), we can approximate \[\frac{E}{h} \simeq -\frac{cR_{\infty}}{\left(n-\delta_{0}\right)^{2}} \tag{25}\] \[\simeq -cR_{\infty}\left(\frac{1}{n^{2}}+\frac{2}{n^{3}}\delta_{0}\right)\,,\] where in the second line we have assumed \(\delta_{0}/n\ll 1\), which is verified below.
It is well known that fine-structure effects contribute to the energy levels a \(j\)-dependent term that scales3 as \(n^{-3}\), Footnote 3: The fine-structure correction that scales as \(n^{-4}\) is a relativistic kinetic energy correction that is already contained within the relativistic Ritz model – see equation (4). \[\Delta E^{\rm(FS)}=-\frac{cR_{\infty}}{n^{3}}\,\frac{\alpha^{2}}{j+1/2}+\ldots\,, \tag{26}\] so we should expect that \[\delta_{0}\simeq\frac{\alpha^{2}}{2j+1}\,. \tag{27}\] Hyperfine structure effects contribute to the energy a leading term (see, e.g., [15]) that is approximately \[\Delta E^{\rm(HFS)}\simeq\begin{cases}1.42\,{\rm GHz}\times\frac{\left(f-\frac{3}{4}\right)}{n^{3}}&(\ell=0)\\ 0.53\,{\rm GHz}\times\frac{f\left(f+1\right)-j\left(j+1\right)-\frac{3}{4}}{n^{3}\left(2\ell+1\right)j\left(j+1\right)}&(\ell\neq 0)\,,\end{cases} \tag{28}\] which means that we should expect deviations in the leading order defect due to HFS effects (at fixed \(\ell\) and \(j\)) that are approximately \[\Delta\left(\delta_{0}\right)_{\rm HFS}\simeq\begin{cases}2.2\times 10^{-7}&(\ell=0)\\ 8.1\times 10^{-8}\times j^{-2}&(\ell\gg 1)\,.\end{cases} \tag{29}\] This accounts for the approximate differences between the \(\delta_{0}\) as seen in Table 1, as well as for the rest of the angular momentum channels, up to \(\ell=30\), listed in Tables 5-8 in Appendix A. We should, however, point out that the precise values for the defect parameters depend somewhat on which QED-predicted levels are used for the fit. For the states of low-lying \(\ell\) we have chosen \(n_{\rm max}=15\), but as an example we reconsider the \(S_{j=1/2}^{(f=0)}\) states by fitting to levels up to \(n_{\rm max}=16\). A comparison of the parameters between the \(n_{\rm max}=15\) and \(n_{\rm max}=16\) fits is shown in Table 2. Minor changes in \(\delta_{0}\) are observed, but more substantial changes are seen for the higher-order parameters. Nevertheless, either set of parameters could be used to reproduce the QED-predicted energy levels at a comparable level of accuracy. \begin{table} \begin{tabular}{c c c} \hline \hline Parameter & \(n_{\rm max}=15\) (\(\times 10^{-5}\)) & \(n_{\rm max}=16\) (\(\times 10^{-5}\)) \\ \hline \(\delta_{0}\) & \(2.5502100611\) & \(2.5502099872\) \\ \(\delta_{2}\) & \(0.0083755621\) & \(0.0083855869\) \\ \(\delta_{4}\) & \(-0.0316626562\) & \(-0.0320835870\) \\ \(\delta_{6}\) & \(0.2714656639\) & \(0.2786622804\) \\ \(\delta_{8}\) & \(-1.4934453489\) & \(-1.5454747300\) \\ \(\delta_{10}\) & \(3.6057260079\) & \(3.7472037620\) \\ \(\delta_{12}\) & \(-2.3560904800\) & \(-2.4523221495\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of fitting parameters for \(S_{j=1/2}^{(f=0)}\) states between the \(n_{\rm max}=15\) and \(n_{\rm max}=16\) fits.
\begin{table} \begin{tabular}{c c c c c} \hline \hline \(\ell\) & \(j\) & \(f\) & \(\delta_{i}\) & Value/\(10^{-5}\) \\ \hline \(0\) & \(\frac{1}{2}\) & \(0\) & \(\delta_{0}\) & \(2.550\,210\,0611\) \\ & & & \(\delta_{2}\) & \(0.008\,375\,5621\) \\ & & & \(\delta_{4}\) & \(-0.031\,662\,6562\) \\ & & & \(\delta_{6}\) & \(0.271\,465\,6639\) \\ & & & \(\delta_{8}\) & \(-1.493\,445\,3489\) \\ & & & \(\delta_{10}\) & \(3.605\,726\,0079\) \\ & & & \(\delta_{12}\) & \(-2.356\,090\,4800\) \\ \(0\) & \(\frac{1}{2}\) & \(1\) & \(\delta_{0}\) & \(2.528\,610\,1667\) \\ & & & \(\delta_{2}\) & \(0.008\,375\,3687\) \\ & & & \(\delta_{4}\) & \(-0.031\,651\,9458\) \\ & & & & \(\delta_{6}\) & \(0.271\,324\,9311\) \\ & & & & \(\delta_{8}\) & \(-1.492\,530\,9405\) \\ & & & & \(\delta_{10}\) & \(3.603\,313\,3005\) \\ & & & & \(\delta_{12}\) & \(-2.354\,461\,9943\) \\ \(1\) & \(\frac{1}{2}\) & \(0\) & \(\delta_{0}\) & \(2.669\,249\,8201\) \\ & & & & \(\delta_{2}\) & \(0.002\,680\,7904\) \\ & & & & \(\delta_{4}\) & \(-0.023\,483\,8927\) \\ & & & & \(\delta_{6}\) & \(0.234\,141\,1690\) \\ & & & & & \(\delta_{8}\) & \(-1.259\,283\,0776\) \\ & & & & & \(\delta_{10}\) & \(2.428\,620\,8124\) \\ \(1\) & \(\frac{1}{2}\) & \(1\) & \(\delta_{0}\) & \(2.662\,051\,7448\) \\ & & & & \(\delta_{2}\) & \(0.002\,681\,8114\) \\ & & & & & \(\delta_{4}\) & \(-0.023\,484\,0422\) \\ & & & & & \(\delta_{6}\) & \(0.234\,143\,0659\) \\ & & & & & \(\delta_{8}\) & \(-1.259\,294\,3395\) \\ & & & & & \(\delta_{10}\) & \(2.428\,643\,5695\) \\ \(1\) & \(\frac{3}{2}\) & \(1\) & \(\delta_{0}\) & \(1.331\,250\,5239\) \\ & & & & & \(\delta_{2}\) & \(0.002\,680\,3758\) \\ & & & & & \(\delta_{4}\) & \(-0.023\,507\,0995\) \\ & & & & & \(\delta_{6}\) & \(0.234\,315\,4776\) \\ & & & & & \(\delta_{8}\) & \(-1.259\,730\,2288\) \\ & & & & & \(\delta_{10}\) & \(2.428\,830\,0568\) \\ \(1\) & \(\frac{3}{2}\) & \(2\) & \(\delta_{0}\) & \(1.328\,373\,2345\) \\ & & & & & \(\delta_{2}\) & \(0.002\,680\,4364\) \\ & & & & & \(\delta_{4}\) & \(-0.023\,507\,0852\) \\ & & & & & \(\delta_{6}\) & \(0.234\,315\,3010\) \\ & & & & & \(\delta_{8}\) & \(-1.259\,729\,2077\) \\ & & & & & \(\delta_{10}\) & \(2.428\,828\,0568\) \\ \hline \hline \end{tabular} \end{table} Table 1: Relativistic Ritz fitting parameters for \(\ell=0\) and \(\ell=1\) HFS states of hydrogen. Note that the numbers are small, since they are to be multiplied by \(10^{-5}\). For states with \(2\leq\ell\leq 30\) see Appendix A. Some comments on this procedure are warranted. The defect parameters, \(\delta_{i}\), are perhaps best viewed as parameters of a particular fitting function, which is not unique, applied to a particular set of input data, which also is not unique. In fact, there are strong correlations between the parameters; see Table 3 for the correlation matrix between defect parameters for the \(S_{j=1/2}^{(f=0)}\) fit. Therefore, these parameters should not be viewed as fundamental, but a given set of them have a practical use in reproducing theoretical energy levels without having to use the QED theory directly. When using these parameters, only the values from a single fit should be used. Furthermore, all reported digits of the parameters up to an absolute precision of \(10^{-15}\) should conservatively be used to reproduce the levels below the theoretical uncertainty (21). 
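As a concrete illustration of how the tables are meant to be used, the sketch below evaluates a level from the fitted parameters, hard-coding the \(S_{j=1/2}^{(f=0)}\) column of Table 1 together with the constants of Eqs. (7)-(10) and (20); the code and its variable names are ours, and only this one truncation order (N\({}^{6}\)LO) is shown.

```python
A = {2: -0.999455679424739, 4: 3.9909537921e-05,
     6: -1.770774e-09, 8: 8.25e-14}          # A_2k, Eqs. (7)-(10)
cR = 3_289_841_960_249.1                     # cR_inf in kHz, Eq. (20)

# delta_i for S_{1/2}^{(f=0)}, Table 1 (tabulated values are x 1e-5)
d0, d2, d4 = 2.5502100611e-5, 0.0083755621e-5, -0.0316626562e-5
d6, d8 = 0.2714656639e-5, -1.4934453489e-5
d10, d12 = 3.6057260079e-5, -2.3560904800e-5

def defect(n):
    """Truncated quantum-defect expansion of Eq. (12) at N^6LO."""
    x = 1.0 / (n - d0)
    return (d0 + d2*x**2 + d4*x**4 + 2*d2**2*x**5 + d6*x**6 + 6*d2*d4*x**7
            + d8*x**8 + (4*d4**2 + 8*d2*d6)*x**9 + d10*x**10
            + (-40*d2**4 + 10*d4*d6 + 10*d2*d8)*x**11 + d12*x**12
            + (-296*d2**3*d4 + 6*d6**2 + 12*d4*d8 + 12*d2*d10)*x**13)

def level_kHz(n):
    """E/h in kHz from Eq. (4) with n* = n - delta."""
    ns = n - defect(n)
    return cR * sum(A[k] / ns**k for k in (2, 4, 6, 8))

print(level_kHz(1))   # the n = 1 level of this channel, in kHz
```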
### Comparison with experiments As an example application of these fits, in Table 4 we provide a selection of recently measured hydrogen transition frequencies and their corresponding theory predictions using equation (4). Weighting by the number of states, the hyperfine centroid is defined as \[E_{n\ell j}^{\rm centroid}=\frac{\sum_{f}(2f+1)E_{n\ell jf}}{\sum_{f}(2f+1)} \tag{30}\] and the fine-structure centroid is defined as \[E_{n\ell}^{\rm centroid}=\frac{\sum_{j}(2j+1)E_{n\ell j}^{\rm centroid}}{ \sum_{j}(2j+1)}\,. \tag{31}\] Following the same rationale leading to equation (21), the theoretical error for any given transition is \[\delta(\nu\left(n_{i}\ell_{i}\to n_{f}\ell_{f}\right))=\\ \left|1.9\,{\rm kHz}\left(\frac{\delta_{\ell_{f},0}}{n_{f}^{3} }-\frac{\delta_{\ell_{i},0}}{n_{i}^{3}}\right)-2.2\,{\rm kHz}\left(\frac{1}{ n_{f}^{2}}-\frac{1}{n_{i}^{2}}\right)\right|\,, \tag{32}\] whereas the shift in a transition due to a shift in the proton radius can be easily computed using (24). In some cases the measurement and theory (columns 2 and 5 of Table 4) disagree. However, the sums of values in columns 5 and 6 are in good agreement with the measured values in column 2, which confirms that these disagreements are still well characterized by shifts in the proton radius. ## IV Discussion Here we have presented a simple fitting formula and parameters, equation (4) and Tables 1, and 5 through 8, that are sufficient to reproduce all hyperfine energy levels of hydrogen up to \(\ell=30\). The theoretical uncertainty of any level is given by equation (21) and additional systematic shifts in those levels due to a proton radius that differs from the one determined by Antognini et al. [14] is parameterized in equation (24). **Acknowledgements** We greatly appreciate the initial suggestion of Eric Hessels to pursue this work. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline & \(\delta_{0}\) & \(\delta_{2}\) & \(\delta_{4}\) & \(\delta_{6}\) & \(\delta_{8}\) & \(\delta_{10}\) & \(\delta_{12}\) \\ \hline \(\delta_{0}\) & 1.00000 & -0.94504 & 0.88620 & -0.84558 & 0.82121 & -0.80876 & 0.80445 \\ \(\delta_{2}\) & -0.94504 & 1.00000 & -0.98466 & 0.96259 & -0.94653 & 0.93770 & -0.93456 \\ \(\delta_{4}\) & 0.88620 & -0.98466 & 1.00000 & -0.99480 & 0.98747 & -0.98274 & 0.98097 \\ \(\delta_{6}\) & -0.84558 & 0.96259 & -0.99480 & 1.00000 & -0.99838 & 0.99641 & -0.99557 \\ \(\delta_{8}\) & 0.82121 & -0.94653 & 0.98747 & -0.99838 & 1.00000 & -0.99961 & 0.99930 \\ \(\delta_{10}\) & -0.80876 & 0.93770 & -0.98274 & 0.99641 & -0.99961 & 1.00000 & -0.99996 \\ \(\delta_{12}\) & 0.80445 & -0.93456 & 0.98097 & -0.99557 & 0.99930 & -0.99996 & 1.00000 \\ \hline \end{tabular} \end{table} Table 3: Correlation Matrix for the \(S_{j=1/2}^{(f=0)}\) fit.
2310.01122
A Fused Deep Denoising Sound Coding Strategy for Bilateral Cochlear Implants
Cochlear implants (CIs) provide a solution for individuals with severe sensorineural hearing loss to regain their hearing abilities. When someone experiences this form of hearing impairment in both ears, they may be equipped with two separate CI devices, which will typically further improve the CI benefits. This spatial hearing is particularly crucial when tackling the challenge of understanding speech in noisy environments, a common issue CI users face. Currently, extensive research is dedicated to developing algorithms that can autonomously filter out undesired background noises from desired speech signals. At present, some research focuses on achieving end-to-end denoising, either as an integral component of the initial CI signal processing or by fully integrating the denoising process into the CI sound coding strategy. This work is presented in the context of bilateral CI (BiCI) systems, where we propose a deep-learning-based bilateral speech enhancement model that shares information between both hearing sides. Specifically, we connect two monaural end-to-end deep denoising sound coding techniques through intermediary latent fusion layers. These layers amalgamate the latent representations generated by these techniques by multiplying them together, resulting in an enhanced ability to reduce noise and improve learning generalization. The objective instrumental results demonstrate that the proposed fused BiCI sound coding strategy achieves higher interaural coherence, superior noise reduction, and enhanced predicted speech intelligibility scores compared to the baseline methods. Furthermore, our speech-in-noise intelligibility results in BiCI users reveal that the deep denoising sound coding strategy can attain scores similar to those achieved in quiet conditions.
Tom Gajecki, Waldo Nogueira
2023-10-02T11:54:49Z
http://arxiv.org/abs/2310.01122v1
# A Fused Deep Denoising Sound Coding Strategy for Bilateral Cochlear Implants ###### Abstract Cochlear implants (CIs) provide a solution for individuals with severe to profound sensorineural hearing loss to regain their hearing abilities. When someone experiences this form of hearing impairment in both ears, they may be equipped with two separate CI devices, which will typically further improve the CI benefits. This spatial hearing is particularly crucial when tackling the challenge of understanding speech in noisy environments, a common issue CI users face. Currently, extensive research is dedicated to developing algorithms that can autonomously filter out undesired background noises from desired speech signals. At present, some research focuses on achieving end-to-end denoising, either as an integral component of the initial CI signal processing or by fully integrating the denoising process into the CI sound coding strategy. This work is presented in the context of bilateral CI (BiCI) systems, where we propose a deep-learning-based bilateral speech enhancement model that shares information between both hearing sides. Specifically, we connect two monaural end-to-end deep denoising sound coding techniques through intermediary latent fusion layers. These layers amalgamate the latent representations generated by these techniques by multiplying them together, resulting in an enhanced ability to reduce noise and improve learning generalization. The objective instrumental results demonstrate that the proposed fused BiCI sound coding strategy achieves higher interaural coherence, superior noise reduction, and enhanced predicted speech intelligibility scores compared to the baseline methods. Furthermore, our speech-in-noise intelligibility results in BiCI users reveal that the deep denoising sound coding strategy can attain scores similar to those achieved in quiet conditions. Cochlear implants, Sound coding strategy, Deep neural networks, End-to-end, Speech enhancement ## I Introduction A cochlear implant (CI) is a medical device surgically implanted to restore the sense of hearing in individuals with severe to profound sensorineural hearing loss. Notably, recent years have seen significant advancements in CI technology [1]. Consequently, individuals with bilateral hearing loss often receive implants on both sides [2]. Those who receive a CI in each ear are known as bilateral CI (BiCI) users and typically demonstrate improved speech understanding, sound localization, reduced listening effort, and enhanced quality of life in comparison to unilateral CI users (e.g., [3, 4, 5, 6]). However, their listening performance remains inferior to that of individuals with normal hearing (NH) (e.g., [7, 8, 9]). The disparity in performance between BiCI users and individuals with NH could potentially stem from differences in electrode array insertion depth, differences between the electrode-nerve interfaces in each ear, and from the independent processing in each CI (e.g., [10, 11, 12, 13]). The computation of stimulation current levels over time and for individual electrodes (referred to as electrodograms) relies on audio captured by microphones embedded within each speech processor.
This computation is achieved by applying the CI sound coding strategy independently to each listening side; this can lead to a lack of effective binaural integration [14], may introduce binaural artifacts [15], may fail to suppress background noise or competing speech present in both ears simultaneously [12], and might not fully transmit interaural cues [16]. Typically, a CI in conjunction with its associated sound coding strategy enables the user to understand speech effectively in quiet environments. However, its effectiveness diminishes when encountering loud interfering signals, characterized by low signal-to-noise ratios (SNRs), such as background noise or multiple speakers talking simultaneously [17]. Several approaches have been proposed to enhance speech understanding in noisy environments for BiCIs. Some of these methods utilize traditional front-end processing techniques like binaural beamforming (e.g., [18, 19, 20]), while others integrate elements of the CI sound coding strategy and establish bilateral connections between certain processing components (e.g., [12, 21, 22]). These conventional approaches have proven effective in augmenting speech understanding in noise and sound source localization for BiCI users. However, with the advent of deep learning technology, the field is increasingly exploring the use of deep neural networks (DNNs) for speech enhancement (e.g., [23, 24, 25, 26, 27]). These methods have proven to be very successful at performing speech denoising while preserving speech quality and maintaining a high degree of generalization capability. To optimize the enhancement of speech for CIs, it could prove advantageous to devise algorithms that take into account the specific processing scheme of CIs. Consequently, there has been research dedicated to CIs, where DNNs are incorporated into their signal pathway [27, 28, 29, 30, 31]. These approaches target noise reduction by directly applying masks within the filter bank utilized by the CI sound coding strategy. Recently, drawing inspiration from the Conv-TasNet [23], an end-to-end CI sound coding strategy based on deep learning, termed "Deep ACE," was proposed [32, 33]. This approach was designed to replace the clinically available ACE sound coding strategy, but it could also be used to replace other commercially available ones. This method completely replaces the CI sound coding strategy with a DNN and achieves high speech understanding improvements in BiCI users (up to 22.8% improvement in word recognition score (WRS) in modulated background noise). Presently, data-driven methodologies have primarily been employed with single CIs. However, these approaches are equally applicable to BiCIs. For instance, there is no inherent justification to assume that utilizing any monaural speech enhancement algorithm within a BiCI framework would not produce comparable auditory advantages as observed in the unilateral configuration. Nevertheless, this may not be the optimal approach, as independent processing could still potentially introduce artifacts that hinder effective binaural listening. A promising avenue to enhance the listening experience of BiCI users involves embracing multi-channel sound processing. Notably, diverse multi-channel front-end speech enhancement methods have been proposed. These strategies not only showcase effectiveness in enhancing speech denoising but also reveal an ability to maintain the integrity of essential binaural auditory cues (e.g., [34, 35]).
A recent advancement is the concept of "fusion layers," introduced in [36]. These layers exchange information between two individual monaural speech denoising algorithms. They achieve this by enabling Hadamard products between latent spaces at specific processing stages, drawing inspiration from multi-task learning methods and emulating the inhibitory and excitatory mechanisms found in the human brain stem for binaural hearing [37]. Specifically, the fusion layers introduce non-linearities into the learning model, enhancing the model's ability to fit the training data while improving generalization, without affecting the number of trainable parameters. This approach of sharing features has proven to be highly effective in enhancing noise reduction compared to independent bilateral models, where processing is performed separately on each side. In our study, we present a novel approach termed "fused Deep ACE," which naturally extends to other CI processing strategies. This approach integrates two Deep ACE algorithms through fusion layers, enabling the sharing of latent representations from particular processing stages. Our hypothesis is that this bilateral sound coding strategy will result in improved speech understanding when contrasted with the conventional clinical approach. Additionally, we postulate that the fusion layer will capitalize on redundant bilateral information, potentially mitigating certain binaural artifacts and leading to the generation of more bilaterally coherent output electrodograms. ## II Methods & Materials ### _Bilateral advanced combination encoder (ACE; unprocessed)_ This is the main baseline algorithm used in this work and is based on a clinical BiCI setup, where each CI processes the sound independently using the ACE sound coding strategy. This setup does not perform any noise reduction and does not share any information between the listening sides. The ACE strategy begins by sampling the acoustic signal at 16 kHz, followed by applying a filter bank through a 128-point fast Fourier transform. This process introduces a 2 ms algorithmic latency, dependent on the channel stimulation rate (CSR). Estimations of the desired envelopes are calculated for each spectral band (\(E_{k}\)) corresponding to an electrode, with \(M\) denoting the total number of channels. In this study, we select the \(N\) most energetic envelopes out of \(M\) based on their amplitudes. These selected envelopes undergo non-linear compression via a loudness growth function (LGF). The LGF output (\(p_{k}\)) represents the normalized stimulation amplitude for electrode \(k\) to stimulate the auditory nerve. Lastly, we map each \(p_{k}\) within the subject's dynamic range, spanning from threshold to comfortable stimulation levels, giving the output current stimulation patterns \(I_{k}\). These \(N\) selected electrodes are stimulated sequentially for each audio frame, defining one stimulation cycle, and the CSR is determined by the number of cycles per second (a schematic code sketch of this envelope path is given below). #### Ii-1 Bilateral Deep ACE This condition closely resembles the baseline scenario (referred to as bilateral ACE) in that it lacks any exchange of information between the listening sides. However, it diverges from clinical ACE sound coding strategies by adopting the newly developed Deep ACE approach, as detailed in [32, 33].
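For concreteness, the N-of-M selection and LGF compression of the baseline ACE path described above can be sketched per stimulation frame as follows; this is our own illustration, where the base level, saturation level, and compression constant `rho` are illustrative placeholders rather than a clinical fitting, and the final mapping to the subject's threshold and comfort levels is omitted.

```python
import numpy as np

def ace_frame(envelopes, n_select=8, base=4/256, sat=1.0, rho=416.2):
    """One ACE stimulation frame: N-of-M band selection followed by
    LGF compression. 'envelopes' holds the M filter-bank envelopes E_k.
    Constants here are illustrative, not a clinical fit."""
    E = np.asarray(envelopes, dtype=float)
    p = np.full_like(E, np.nan)              # NaN marks unselected electrodes
    picked = np.argsort(E)[-n_select:]       # N most energetic bands
    v = np.clip((E[picked] - base) / (sat - base), 0.0, 1.0)
    p[picked] = np.log1p(rho * v) / np.log1p(rho)   # LGF output p_k in [0, 1]
    return p

p_k = ace_frame(np.random.rand(22))          # e.g., M = 22 electrodes
```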
More specifically, Deep ACE replaces the conventional clinical ACE method with a DNN that takes in raw audio as its input and generates the denoised LGF output \(p_{k}\). In the initial step, Deep ACE encodes the left and right signals \(X_{\{l,r\}}\) into a latent representation using a 1-D convolution layer. This operation can be mathematically expressed as a matrix multiplication: \[X^{\prime}_{\{l,r\}}=\Theta(X_{\{l,r\}}\cdot\mathbf{E}_{\{l,r\}}), \tag{1}\] where \(\mathbf{E}_{\{l,r\}}\in\Re^{(F\times L)}\) are the left and right encoder basis functions, \(\Theta(\cdot)\) is the antirectifier activation function used in Deep ACE, and \(F\) and \(L\) are the number and length (in samples) of the filters used, respectively. The signal is then sent to a deep envelope detector (DED) that performs dimensionality reduction (from \(F\) to \(M\)) and to the separator module that generates a deeper latent representation for each side, \(X^{\prime\prime}_{\{l,r\}}=\zeta(X^{\prime}_{\{l,r\}})\in\Re^{(1\times S)}\), where \(\zeta(\cdot)\) is the function learned by the separator and \(S\) is the number of skip connections [23]. Then the DED and separator outputs are fed into a masker that removes the noisy components of the encoded mixture. Finally, the masked signals are decoded through a transposed 1-D convolution to obtain \(p_{k}\) for each CI. #### Ii-2 Fused Deep ACE In this study, we introduce an approach that integrates two monaural Deep ACE [33] models, one associated with each listening side. This integration is achieved through fusion layers [36]. These fusion layers are influenced by the principles of multi-task learning, where model weights are shared across different models to address interconnected tasks. These layers compute element-wise dot products of tensors representing latent representations at identical processing stages. More precisely, we combine the latent representations generated within each Deep ACE model, both following the encoding stage and subsequent to the separator modules, as follows: \[\begin{split} X_{\Lambda}^{\prime}=\rho(X_{l}^{\prime},X_{r}^{\prime})\\ X_{\Lambda}^{\prime\prime}=\rho(X_{l}^{\prime\prime},X_{r}^{\prime\prime}),\end{split} \tag{2}\] where \(\rho(\cdot)\) is the Hadamard product operator. The outcome of these two fusion operations is a model that performs "double fusion." These fused signals are fed into the separator and masker modules in the same way as in the bilateral Deep ACE condition. A visual representation of this model's structure can be observed in Figure 1. It is important to note that, within this model, the band selection and mapping blocks operate independently on each side and are unaffected by one another. ### _Model training setup_ The models were trained for a maximum of 100 epochs, employing batches consisting of two 4-second-long audio segments. The initial learning rate was set to 1e-3. If the validation accuracy did not improve over 3 consecutive epochs, the learning rate was halved. To ensure regularization, early stopping with a patience of 5 epochs was implemented, safeguarding against overfitting. Only the model displaying the highest performance was retained. The models were optimized using the Adam optimizer [38].
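For reference, the double fusion of Eq. (2) used by the model just described can be sketched in a few lines of PyTorch; this is our own schematic, not the authors' implementation, and the layer sizes are illustrative rather than the values in Table I.

```python
import torch
import torch.nn as nn

class DoubleFusionSketch(nn.Module):
    """Two monaural encoder branches joined by the parameter-free
    Hadamard fusion of Eq. (2); sizes are illustrative."""
    def __init__(self, F=64, L=32):
        super().__init__()
        self.enc_l = nn.Conv1d(1, F, kernel_size=L, stride=L // 2, bias=False)
        self.enc_r = nn.Conv1d(1, F, kernel_size=L, stride=L // 2, bias=False)

    def forward(self, x_l, x_r):            # inputs: (batch, 1, samples)
        zl, zr = self.enc_l(x_l), self.enc_r(x_r)
        z_fused = zl * zr                   # first fusion, X'_Lambda
        # ... the per-side separators would run here; their outputs
        # zl2, zr2 are fused the same way (zl2 * zr2), giving X''_Lambda
        return z_fused
```

Because the fusion is a plain element-wise product, it adds no trainable parameters, which is why the training setup above is unchanged relative to the monaural models.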
The model's training employs a cost function based on mean-squared error (MSE) and binary cross entropy (BCE) for each listening side (for a detailed description of this cost function refer to [33]). The hyperparameter configuration used in this study was slightly modified with respect to the ones shown in [32], specifically the separator module was bigger and the deep-envelope-detector (DED) was also increased in size. The model hyperparameters are shown in Table I. ### _Audio material_ In this work, we used a total of three different speech datasets and three noise types to assess the models' performance and generalization abilities. All these audio sets will be described in this section. As a preprocessing stage, all audio material was set to mono and resampled at 16 kHz. The corresponding electrodograms were obtained by processing all audio data with the ACE sound coding strategy at an output CSR of 1,000 pulses per second. All audio signals were generated by convolving source signals with binaural room impulse responses (BRIRs; [39]) and summing. BRIRs were generated for hearing aids located in each listening side 1 and consisted of 4 different rooms of different sizes and acoustic properties, using the front microphone. Footnote 1: [https://github.com/IoSR-Surrey/RealRoomBRIRs](https://github.com/IoSR-Surrey/RealRoomBRIRs) #### Iv-B1 Speech data **LibriVox corpus [40]** This speech data was originally designed for end-to-end speech translation, however, in this study, we mix the speech material with noise to train our models for speech denoising. The speech data consists of fluent spoken sentences with a total duration of 18 hours. The quality of audio and sentence alignments was checked by a manual evaluation, showing that speech alignment is in general very high. In fact, the sentence alignment quality is comparable to well-used parallel translation data. **TIMIT corpus [41]** This corpus contains broadband recordings of 630 people speaking the eight major dialects of American English, each reading ten phonetically rich sentences. In this work, files from 112 male and 56 female speakers in the test set were selected. **HSM corpus [42]** Speech intelligibility in quiet and in noise was measured by means of the Hochmair, Schulz, Moser (HSM) sentence test, based on a dataset composed of 30 lists with 20 everyday sentences each (106 words per list). #### Iv-B2 Noise data **Environmental noises; DEMAND [43]** The environmental noises recorded to create this dataset are split into six categories; four are indoor noises and the other two are outdoor recordings. The indoor environments are further divided into domestic, office, public, and transportation; the open-air environments are divided into streets and nature. There are 3 environment recordings per category. **Synthetic noises; SSN [44]** and ICRA7 [45]** To evaluate the different algorithms, in this work we also use stationary speech-shaped noise (SSN) and non-stationary modulated seven-speaker babble noise (ICRA7) as synthetic interferers. #### Iv-B3 Training, evaluation and testing data The training set was composed of speech from the LibriVox corpus and noise from the DEMAND dataset. Specifically, 30 male (M) and female (F) speakers were randomly selected from the speech corpus, and two environments were randomly selected from each of the noise categories. For validation, 20% of the training data was used. The noise and speech subsets used for training will be referred to as EN\({}_{1}\) and LibriVox\({}_{1}\), respectively. 
Fig. 1: Block diagram of the proposed fused Deep ACE. The model takes the right and left time-domain noisy speech signals (\(X_{r}(n)\) and \(X_{l}(n)\), respectively) and produces the respective denoised current stimulation patterns for each listening side, \(I_{r}(k)\) and \(I_{l}(k)\), for each stimulation frame \(k\). The fusion model performs an element-wise dot product between the latent representations generated in each of the Deep ACE models, and the deep envelope detector (DED) is used for dimensionality reduction.

The testing phase involved the HSM speech dataset, coupled with synthetic noises employed as interfering signals. Speech and noise signals were mixed at SNR values ranging uniformly from -5 to 10 dB. The processed clean speech signals were also included in the listening experiments to assess whether the proposed model introduced perceptually relevant distortions. ### _Objective Evaluation_ To objectively evaluate the performance of each examined algorithm, we gauge the extent of noise reduction accomplished, establish electrode-wise correlation coefficients between the denoised and clean signals, and determine speech intelligibility through the modified binaural short-time objective intelligibility (MBSTOI) index [46]. Notably, our focus in this study is on investigating the complete CI processing chain, so the MBSTOI index is computed from synthesized electrodograms (\(\mathbf{p}\)) derived using a vocoder. This results in a specific variant of MBSTOI referred to as vocoder-MBSTOI (V-MBSTOI). #### IV-D1 SNRi To assess the amount of noise reduction performed by each of the tested algorithms, we compute the SNR improvement (SNRi). This measure is calculated in the electrodogram domain and compares the original input SNR to the one obtained after denoising; it is given by: \[\mathrm{SNRi}=10\cdot\log_{10}\Bigg{(}\frac{\sum_{k=1}^{M}||\mathbf{p}_{k}^{\mathrm{n}}-\mathbf{p}_{k}^{c}||^{2}}{\sum_{k=1}^{M}||\mathbf{p}_{k}^{d}-\mathbf{p}_{k}^{c}||^{2}}\Bigg{)}, \tag{3}\] where \(\mathbf{p}_{k}\) represents the LGF output of band \(k\) and the superscripts \(n\), \(c\), and \(d\) denote the noisy, clean, and denoised electrodograms, respectively.
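As a worked illustration of Eq. (3), the following is a minimal C++ sketch of the SNRi computation; the band-major vector layout is an assumption made for readability, not the actual data structure used in the study.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// SNR improvement (Eq. 3) in the electrodogram domain. Index k runs over
// the bands; inner vectors hold the per-band LGF samples of the noisy
// (p^n), clean (p^c) and denoised (p^d) electrodograms.
double snr_improvement(const std::vector<std::vector<double>>& p_n,
                       const std::vector<std::vector<double>>& p_c,
                       const std::vector<std::vector<double>>& p_d) {
  double num = 0.0, den = 0.0;
  for (std::size_t k = 0; k < p_c.size(); ++k)
    for (std::size_t t = 0; t < p_c[k].size(); ++t) {
      const double e_n = p_n[k][t] - p_c[k][t];  // noise residual before
      const double e_d = p_d[k][t] - p_c[k][t];  // residual after denoising
      num += e_n * e_n;
      den += e_d * e_d;
    }
  return 10.0 * std::log10(num / den);  // positive values mean improvement
}
```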
**V-MBSTOI** To estimate the speech intelligibility performance expected from each of the algorithms, the V-MBSTOI score [47, 48, 49] was used. This metric relies directly on MBSTOI [46], which is modeled on normal-hearing binaural speech performance. The purpose of this metric is to evaluate the potential relative variations in speech performance that could be achieved in behavioral experiments, rather than to provide an exact estimate of an individual's CI performance. The V-MBSTOI score ranges from 0 to 1, where a higher score predicts higher speech performance. **Linear cross-correlation** To characterize potential distortions and artifacts introduced by the tested algorithms, the linear correlation coefficients (LCCs) between the clean ACE electrodograms (\(\mathbf{p}^{c}\)) and the denoised electrodograms (\(\mathbf{p}^{d}\)) were computed. The LCCs were first computed channel-wise (i.e., one correlation coefficient was computed for each of the 22 channels) to assess channel output degradation caused by the denoising process. The \(\mathrm{LCC}_{k}\) for band \(k\) is computed based on the Pearson correlation coefficient [50] as follows: \[\mathrm{LCC}_{k}=\frac{\mathrm{cov}\left(\mathbf{p}_{k}^{c},\mathbf{p}_{k}^{d}\right)}{\sigma_{\mathbf{p}_{k}^{c}}\cdot\sigma_{\mathbf{p}_{k}^{d}}}, \tag{4}\] where \(\mathrm{cov}(X,Y)\) is the covariance between \(X\) and \(Y\), and \(\sigma_{\mathbf{p}_{k}}\) is the standard deviation of the values in the corresponding electrodogram \(\mathbf{p}_{k}\). We also present the LCCs as a function of the noise azimuth, \(\mathrm{LCC}_{\theta}\), computed as follows: \[\mathrm{LCC}_{\theta}=\frac{\mathrm{cov}\left(\mathbf{p}_{\theta}^{c},\mathbf{p}_{\theta}^{d}\right)}{\sigma_{\mathbf{p}_{\theta}^{c}}\cdot\sigma_{\mathbf{p}_{\theta}^{d}}}, \tag{5}\] where \(\mathbf{p}_{\theta}^{c}\) and \(\mathbf{p}_{\theta}^{d}\) are averaged across electrodes for a noise source coming from azimuth \(\theta\). **Electric interaural coherence** Similar to the LCCs, we also use the electric interaural coherence (EIC). Here we compute the channel-wise LCCs between the right electrodograms (\(\mathbf{p}^{r}\)) and the left electrodograms (\(\mathbf{p}^{l}\)) as follows: \[\mathrm{EIC}_{k}=\frac{\mathrm{cov}\left(\mathbf{p}_{k}^{r},\mathbf{p}_{k}^{l}\right)}{\sigma_{\mathbf{p}_{k}^{r}}\cdot\sigma_{\mathbf{p}_{k}^{l}}}. \tag{6}\] We also present the EIC as a function of the noise azimuth, \(\mathrm{EIC}_{\theta}\), computed as follows: \[\mathrm{EIC}_{\theta}=\frac{\mathrm{cov}\left(\mathbf{p}_{\theta}^{r},\mathbf{p}_{\theta}^{l}\right)}{\sigma_{\mathbf{p}_{\theta}^{r}}\cdot\sigma_{\mathbf{p}_{\theta}^{l}}}, \tag{7}\] where \(\mathbf{p}_{\theta}^{r}\) and \(\mathbf{p}_{\theta}^{l}\) are averaged across electrodes for a noise source coming from azimuth \(\theta\).
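Both \(\mathrm{LCC}_{k}\) in Eq. (4) and \(\mathrm{EIC}_{k}\) in Eq. (6) reduce to a Pearson correlation between two per-band pulse sequences, as in this minimal C++ sketch (the flat-vector layout is assumed for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation between two per-band electrodogram sequences.
// Called with (clean, denoised) it yields LCC_k (Eq. 4); called with
// (right, left) it yields EIC_k (Eq. 6).
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
  const std::size_t n = a.size();
  double mean_a = 0.0, mean_b = 0.0;
  for (std::size_t i = 0; i < n; ++i) { mean_a += a[i]; mean_b += b[i]; }
  mean_a /= n;
  mean_b /= n;
  double cov = 0.0, var_a = 0.0, var_b = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    const double da = a[i] - mean_a;
    const double db = b[i] - mean_b;
    cov += da * db;
    var_a += da * da;
    var_b += db * db;
  }
  return cov / (std::sqrt(var_a) * std::sqrt(var_b));
}
```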
### _Behavioral evaluation_ To validate the objective instrumental measures and to assess their impact on actual BiCI hearing, we performed two behavioral experiments, namely a speech intelligibility test and a Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) test. The speech intelligibility experiments are designed to investigate the benefits of the proposed denoising algorithms compared to the clinical setups, and the MUSHRA [51] helps us understand how BiCI users rate the quality of the performed denoising. The stereo signals were transmitted through direct stimulation using a bilaterally synchronized RF GeneratorXS interface from Cochlear Ltd. (Sydney, Australia) in conjunction with MATLAB software (Mathworks, Natick, MA) via the Nucleus Implant Communicator V.3, also from Cochlear Ltd. All testing procedures were conducted on a personal computer equipped with customized MATLAB software. Prior to commencing experiments involving subjects, a hardware check was carried out by analyzing the signals generated by the research interface using an oscilloscope. The stimulation signals were characterized by cathodic-phase-leading, biphasic pulses presented in a monopolar configuration (MP1+2). This stimulation mode utilizes two extracochlear electrodes: one ball electrode positioned under the temporalis muscle and another plate electrode on the receiver case. These pulses consistently featured an 8-\(\mu\)s phase gap and 25-\(\mu\)s phase duration, and they were presented in a base-to-apex sequence. #### II-B1 Speech understanding experiment Speech intelligibility in noisy environments was assessed using the HSM sentence set [42]. To conduct this assessment, each speech token underwent digital downsampling from 44.1 kHz to 16 kHz. During testing, subjects were presented with sentences from the front in a simulated acoustic setting, which included background interference noise (either CCITT or ICRA7) originating from a 55-degree azimuth angle, masking their self-reported better ear. The noise azimuth was selected to be 55 degrees because this angle corresponded to the point where electric interaural coherence (EIC; described in II-D1c) was at its minimum (see Figure 8), thus maximizing the impact on speech understanding. Before the speech tests began, a training phase was implemented, comprising two sets of 20 sentences presented in quiet conditions. This training allowed listeners to adapt to the fitting parameters specific to the study and familiarize themselves with the sound delivery through the research interface. Subjects were instructed to verbally repeat the sentences as accurately as possible during the tests. Two observers were present during the tests: one managed the software interface, while the other recorded the number of correctly identified words by marking them in a printed list corresponding to the sentences. Each listening condition was evaluated twice using different sentence lists, and the final score was computed as the average number of correctly identified words across these repetitions. The subjects were unaware of the specific conditions being tested, and an audiologist, blind to the test conditions, conducted the speech intelligibility assessments. #### II-B2 MUSHRA This test assesses how presented speech sentences are perceived in comparison to a specified reference, using MUSHRA. The scores provided by the listener range from 0 (poor) to 100 (excellent). In the context of this study, the primary goal of this experiment was to establish a relative score for the quality of speech denoising with respect to the clean speech signal generated by the clinical sound coding strategy ACE. To create a reference point, we derived an anchor by applying a low-pass filter with a cut-off frequency of 3.5 kHz to the noisy, unprocessed mixture. Two primary conditions were examined: one with clean audio and the other in a noisy environment (using both CCITT and ICRA7 noise profiles). In the clean condition, we compared the reference clean ACE to the anchor and to the clean speech signals processed separately by the independent BiCI strategy and the fused Deep ACE sound coding strategy. This comparison aimed to determine if there were discernible differences between clinical processing in a quiet setting and the proposed algorithms. In the noisy condition, we compared the reference clean ACE to the anchor, the two proposed algorithms, and the unprocessed ACE signal in a noisy environment. Within each MUSHRA block, corresponding to each primary condition, eight sentences were assessed. These sentences were presented at various SNRs: 2 at -5 dB, 2 at 0 dB, 2 at 5 dB, and 2 at 10 dB. ## III Results ### _Objective instrumental results_ #### III-A1 SNRi Figure 2 shows box plots of the mean SNRi scores across listening sides, in dB, for the tested algorithms in CCITT and ICRA7 noises at the different SNRs using the HSM speech dataset. #### III-A2 V-MBSTOI Figure 3 illustrates the V-MBSTOI scores obtained by the evaluated algorithms in quiet. It can be seen here that the denoising algorithms do not introduce a significant drop in the V-MBSTOI scores relative to the bilateral ACE condition.
Figure 4 presents the V-MBSTOI scores achieved by the assessed algorithms under different speech and noise conditions. Generally, the denoised signals exhibit higher scores than the bilateral ACE condition, and the improvement is roughly proportional to the input SNR (calculated at the better-SNR side). However, it is noteworthy that the bilateral Deep ACE model falls short of the fused speech denoising method, indicating that the artifacts in the latter are comparatively smaller. Additionally, the V-MBSTOI scores computed across various input SNRs exhibit less variability for the fused Deep ACE model when compared to the bilateral Deep ACE and bilateral ACE counterparts. This suggests that the fused Deep ACE model may demonstrate greater robustness in scenarios with low input SNRs, which contributes to the enhancement of speech in BiCI listening. Figure 6 depicts the computed linear cross-correlations with respect to the noise azimuth, considering an average across all electrodes. The data show a trend where the LCCs decrease for the bilateral ACE and Deep ACE conditions when the noise source is located ipsilaterally to the CI processor. In contrast, for the fused Deep ACE model, the LCCs appear to remain relatively constant regardless of the azimuth of the interfering noise signal. This observation suggests that the fused Deep ACE model effectively utilizes the fusion operation by leveraging redundant information present on both listening sides. The unprocessed condition exhibits higher coherence when the noise originates from the listener's front. However, a shift occurs with the bilateral Deep ACE model, which displays greater coherence when noise is in front but reverses this trend when the noise source widens to azimuths beyond 25 degrees. This pattern suggests that the bilateral Deep ACE model may have limitations in handling denoising when target and interfering signals are co-located. ### _Behavioral results_ #### III-C1 Speech intelligibility Figure 9 shows the bar plots of the individual word recognition scores (WRS) obtained by each of the tested BiCI listeners for the three tested conditions (i.e., clean, CCITT, and ICRA7). The tested SNR for each individual and noise type is shown in Table II. Figure 10 displays box plots illustrating the mean WRS measured in the five BiCI subjects across the three noise conditions: clean, CCITT, and ICRA7. A Kruskal-Wallis test did not reveal any significant differences in mean speech intelligibility scores for the clean condition (\(H(2)=0.04\), \(p=0.98\)). However, for the CCITT noisy condition (\(H(2)=7.46\), \(p=0.02\)) and the ICRA7 noisy condition (\(H(2)=9.57\), \(p=0.008\)), the Kruskal-Wallis tests did detect significant differences. Subsequent pairwise comparisons, conducted using Wilcoxon signed-rank tests, indicated a significant distinction between the unprocessed (\(M=54.52\%\), \(SD=23.65\%\)) and the fused Deep ACE (\(M=92.07\%\), \(SD=5.27\%\)) conditions in CCITT noise (\(p=0.008\)). Similarly, significant differences were observed between the unprocessed condition (\(M=57.74\%\), \(SD=20.90\%\)) and the fused Deep ACE condition (\(M=89.81\%\), \(SD=5.49\%\)) in the ICRA7 noise condition (\(p=0.008\)). Additionally, in the ICRA7 noise condition, significant differences were found between the bilateral Deep ACE condition (\(M=60\%\), \(SD=28\%\)) and the fused Deep ACE condition (\(p=0.008\)).
Fig. 8: Polynomial regressions showing the electric interaural coherence (EIC) for each azimuth averaged across electrodes, noises, and listening sides using the HSM dataset. Shaded areas represent the 95% confidence level interval [52]. Fig. 10: Box plots of the group word recognition score measured in the five tested BiCI subjects for the three noise conditions. The black horizontal bars within each box represent the median for each condition, the diamond-shaped marks indicate the mean, and the top and bottom extremes of the boxes indicate the \(Q_{3}=75\%\) and \(Q_{1}=25\%\) quartiles, respectively. The box length is given by the interquartile range (IQR), used to define the whiskers that show the variability of the data above the upper and lower quartiles (the upper whisker is given by \(Q_{3}+1.5\cdot\)IQR and the lower whisker is given by \(Q_{1}-1.5\cdot\)IQR [52]). Fig. 7: Polynomial regressions showing the electric interaural coherence (EIC) for each electrode pair averaged across noises and listening sides using the HSM dataset. Shaded areas represent the 95% confidence level interval [52]. Higher electrode numbers represent lower frequencies. #### III-C2 MUSHRA Figure 11 shows the bar plots of the individual MUSHRA scores obtained by each of the tested BiCI listeners for the three tested noise conditions (i.e., clean, CCITT, and ICRA7). Figure 12 illustrates box plots depicting the group MUSHRA scores obtained from the five BiCI subjects under three distinct noise conditions: clean, CCITT, and ICRA7. Three separate Kruskal-Wallis tests, one for each noise condition, revealed significant differences in the mean MUSHRA scores. Specifically, there were significant differences observed in the quiet condition (\(H(3)=10.99\), \(p=0.01\)), the CCITT noise condition (\(H(3)=14.5\), \(p=0.002\)), and the ICRA7 noise condition (\(H(3)=16.02\), \(p=0.001\)). Subsequent non-parametric Wilcoxon signed-rank pairwise comparisons further elucidated these differences. In the clean condition, the reference (\(M=92.37\), \(SD=2.46\)) obtained higher scores than the anchor (\(M=16.23\), \(SD=10.25\)) condition (\(p=0.008\)). In the CCITT noise condition, the reference (\(M=89.4\), \(SD=10.52\)) received higher ratings than both the bilateral Deep ACE (\(M=65.28\), \(SD=17.2\); \(p=0.03\)) and anchor (\(M=14.88\), \(SD=11.46\); \(p=0.01\)) conditions. Finally, in the ICRA7 noise condition, the reference also achieved higher scores (\(M=96.65\), \(SD=4.46\)) compared to the bilateral Deep ACE (\(M=32.17\), \(SD=13.09\); \(p=0.01\)) and anchor (\(M=14.02\), \(SD=4.1\); \(p=0.01\)) conditions. ## IV Discussion In this study, we introduce and evaluate a novel deep-learning-based strategy for sound coding in BiCIs. Our approach involves the integration of two monaural end-to-end deep denoising CI sound coding methods through fusion layers that facilitate the exchange of information between the listening sides. This exchange is achieved by combining specific latent representations generated in each monaural model. The presented fused Deep ACE model aims to replicate the ACE sound coding strategy while automatically removing unwanted interfering noise from the target speech, all while maintaining minimal processing latency. To be precise, this model introduces the same 2 ms latency as the bilateral ACE setup, allowing for potential real-time application of the proposed approach.
It is important to note that the transmission of the latent representation must also be taken into account, necessitating efficient coding schemes for the latent space to enable the functional use of the fused Deep ACE model. Initially, we assess the impact of fusion (fused Deep ACE) by comparing its speech-denoising effectiveness and performance with the bilateral version (bilateral Deep ACE). Furthermore, we compare our approach with the standard clinical BiCI setup, which does not incorporate any denoising (bilateral ACE). Our evaluation involves testing this method on speech and assessing speech enhancement quality in five BiCI users. The objective instrumental measures reveal that in quiet environments there are no discernible differences in speech intelligibility between the bilateral ACE setup, bilateral Deep ACE, and fused Deep ACE models (as shown in Figure 3). However, in the context of speech denoising, both the bilateral Deep ACE and fused Deep ACE models exhibit improvements in SNR, with the fused Deep ACE model achieving the highest improvement. Surprisingly, the bilateral Deep ACE model performs less effectively when exposed to background ICRA7-modulated noise. This outcome is unexpected, given previous research indicating better results in a similar scenario (as reported in [33]); the discrepancy could be attributed to the lower SNR used in the current study. Nevertheless, both the fused Deep ACE and bilateral Deep ACE models consistently outperform the unprocessed setup in terms of predicted speech intelligibility across all input SNRs. To assess the extent of clean speech preservation after denoising, we employ objective measures such as the cross-channel and cross-noise-azimuth LCCs. These measures demonstrate that the fused Deep ACE model surpasses the bilateral Deep ACE model in terms of speech-denoising effectiveness and introduces fewer artifacts. This improvement is likely associated with the fused model's ability to exploit the redundancy of speech information shared between sides through the fusion layers. This result is consistent across channels and azimuths. Additionally, as expected, the bilateral Deep ACE model generally exhibits higher LCCs than the unprocessed condition, considering that the unprocessed signal retains all the original interfering noise. It is noteworthy that there is an asymmetry in the LCCs observed in both the bilateral Deep ACE and bilateral ACE conditions (as depicted in Figure 6), with lower LCCs measured on the side ipsilateral to the noise source. This asymmetry is also present in the fused Deep ACE model, but to a lesser extent, possibly due to the sharing of speech information between sides.

Fig. 11: Bar plots showing the mean individual MUSHRA scores for the HSM sentence test in quiet (left panel), CCITT noise (center panel), and ICRA7 noise (right panel) for all tested algorithms. Fig. 12: Box plots of the group MUSHRA score measured in the five tested BiCI subjects for the three noise conditions. The black horizontal bars within each box represent the median for each condition, the diamond-shaped marks indicate the mean, and the top and bottom extremes of the boxes indicate the \(Q_{3}=75\%\) and \(Q_{1}=25\%\) quartiles, respectively. The box length is given by the interquartile range (IQR), used to define the whiskers that show the variability of the data above the upper and lower quartiles (the upper whisker is given by \(Q_{3}+1.5\cdot\)IQR and the lower whisker is given by \(Q_{1}-1.5\cdot\)IQR [52]).
In the context of BiCI listening, it is crucial to evaluate the retention of EIC after speech denoising, as low EIC has been shown to negatively affect speech intelligibility in BiCI users, as highlighted in [53]. Our assessment reveals that the fused model achieves the highest EIC scores when measured across azimuths and across electrodes, outperforming the bilateral Deep ACE and bilateral ACE conditions. When observing EIC as a function of the azimuth (as shown in Figure 8), all three conditions exhibit the highest EIC when speech and noise sources are co-located, aligning with expectations. In this scenario, the unprocessed condition achieves EIC scores closer to those of the fused Deep ACE model, surpassing the scores of the bilateral Deep ACE model. This shows that enhancing speech becomes easier, even for the investigated models, when interfering noise and target speech are spatially separated, potentially linked to the binaural unmasking phenomenon observed in human binaural hearing. This underscores the significance of higher-SNR listening sides in BiCI speech denoising, particularly when speech information is shared between sides, as facilitated by the fusion layers in our approach. The behavioral results in quiet conditions reveal no significant differences in speech intelligibility among the ACE, bilateral Deep ACE, and fused Deep ACE sound coding strategies, corroborating the findings from the objective measures. This consistency is further confirmed by the MUSHRA test, where no discrepancies in scores are observed among the reference, bilateral Deep ACE, and fused Deep ACE conditions. However, in noisy speech conditions, the speech intelligibility experiments showed that the fused Deep ACE model outperforms the bilateral ACE and bilateral Deep ACE conditions, while the bilateral Deep ACE condition surpasses ACE only in the presence of CCITT noise, failing to yield improvement when ICRA7 background noise is present. These results align with the observed SNR improvements in these conditions. Furthermore, the MUSHRA scores indicate that all BiCI users were capable of identifying the reference and the anchor. In terms of denoising algorithms, the scores were consistently lower for the bilateral Deep ACE model compared to the reference, particularly for both CCITT and ICRA7 conditions. This concurs with the measured speech intelligibility results, implying that speech intelligibility may be significantly affected not only by the limited SNR improvement in this condition but also by the bilateral distortions introduced by the bilateral Deep ACE model. ## V Conclusion This study underscores the potential of a fused deep-learning-based BiCI sound coding strategy (fused Deep ACE) in enhancing speech, especially when speech and interfering noise sources are spatially separated. Notably, the approach's ability to retain interaural coherence compared to the bilateral Deep ACE model is highlighted. The proposed fused Deep ACE model achieved significant improvement in objective instrumental measures as well as in the listening experiments with BiCI participants. However, it is crucial to recognize that this approach may not be optimal in all listening conditions, as it may compromise binaural and spatial awareness, akin to the effects of front-end beamformers. Further research is warranted to strike a balance between achieving high speech-denoising performance and maintaining spatial awareness through fusion layers, which may entail a trade-off, as outlined in [54].
Nevertheless, our presented approach exhibits promising speech-denoising performance and may prove beneficial in specific listening conditions.
2306.15869
Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics
High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced with the untenable prospect of rewriting millions of lines of x86 CPU code, for the increasingly dominant architectures found in these computational accelerators. This task is made more challenging by the architecture-specific languages and APIs promoted by manufacturers such as NVIDIA, Intel and AMD. Producing multiple, architecture-specific implementations is not a viable scenario, given the available person power and code maintenance issues. The Portable Parallelization Strategies team of the HEP Center for Computational Excellence is investigating the use of Kokkos, SYCL, OpenMP, std::execution::parallel and alpaka as potential portability solutions that promise to execute on multiple architectures from the same source code, using representative use cases from major HEP experiments, including the DUNE experiment of the Long Baseline Neutrino Facility, and the ATLAS and CMS experiments of the Large Hadron Collider. This cross-cutting evaluation of portability solutions using real applications will help inform and guide the HEP community when choosing their software and hardware suites for the next generation of experimental frameworks. We present the outcomes of our studies, including performance metrics, porting challenges, API evaluations, and build system integration.
Mohammad Atif, Meghna Battacharya, Paolo Calafiura, Taylor Childers, Mark Dewing, Zhihua Dong, Oliver Gutsche, Salman Habib, Kyle Knoepfel, Matti Kortelainen, Ka Hei Martin Kwok, Charles Leggett, Meifeng Lin, Vincent Pascuzzi, Alexei Strelchenko, Vakhtang Tsulaia, Brett Viren, Tianle Wang, Beomki Yeo, Haiwang Yu
2023-06-28T02:00:57Z
http://arxiv.org/abs/2306.15869v1
Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics ###### Abstract High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced with the untenable prospect of rewriting millions of lines of x86 CPU code, for the increasingly dominant architectures found in these computational accelerators. This task is made more challenging by the architecture-specific languages and APIs promoted by manufacturers such as NVIDIA, Intel and AMD. Producing multiple, architecture-specific implementations is not a viable scenario, given the available person power and code maintenance issues. The Portable Parallelization Strategies team of the HEP Center for Computational Excellence is investigating the use of Kokkos, SYCL, OpenMP, std::execution::parallel and alpaka as potential portability solutions that promise to execute on multiple architectures from the same source code, using representative use cases from major HEP experiments, including the DUNE experiment of the Long Baseline Neutrino Facility, and the ATLAS and CMS experiments of the Large Hadron Collider. This cross-cutting evaluation of portability solutions using real applications will help inform and guide the HEP community when choosing their software and hardware suites for the next generation of experimental frameworks. We present the outcomes of our studies, including performance metrics, porting challenges, API evaluations, and build system integration. ## 1 Introduction High Energy Physics is facing an enormous challenge in the coming decades as data volumes and processing requirements for precision physics analysis and simulation increase dramatically with experiments such as the Deep Underground Neutrino Experiment (DUNE) and those on the High-Luminosity Large Hadron Collider (HL-LHC). Traditionally, computing for HEP has been undertaken at a mixture of institutional clusters, distributed grid sites, and HPC centers. Recently there has been increased use of commercial cloud resources, but this fraction is still small. All these computing resources have been almost entirely comprised of the same hardware architecture: x86-based CPUs. However, as HPC centers attempt to increase their computational power, energy consumption limits of this architecture have necessitated the introduction of GPU-based accelerators to provide the majority of the computing. NVIDIA, AMD and Intel GPUs are now being heavily used in recent and next generation HPC centers. Programming techniques for GPUs and other massively parallel accelerators are very different from that of traditional CPUs, requiring a significant understanding of the hardware architecture to achieve good performance. Special languages and compilers have been developed to target these architectures, such as CUDA for NVIDIA GPUs, HIP for AMD GPUs, and SYCL for Intel GPUs. HEP experiments have developed millions of lines of code, and in order to use these new computational accelerator facilities would need to rewrite large amounts of their code bases. In order to be able to run on all the different hardware architectures, this task would have to be repeated multiple times in each architecture's preferred language. 
This exercise, and the validation necessary to keep all versions coherent, is both time and cost prohibitive. A solution needs to be found where code can be written once, and then run on all available architectures without modification. In recent years, a number of portability layers have been developed which address this problem, and the ANONYMOUS group was created in order to test and evaluate these portability layers in relation to their usage in current and future HEP experiments. The work presented in this paper represents one of the largest-scale portability studies using multiple programming models and portability layers in a diverse set of HEP applications. In addition to the achieved computational performance across different compute architectures, we also evaluate the different portability approaches using a comprehensive set of metrics most relevant to HEP software, such as modifications needed to the existing code, event data model, build system, etc. In this paper, we describe the portability layers studied (Section 2), the representative HEP testbeds (Section 3), the metrics considered (Section 4), and the performance evaluations of the different portability layers (Section 5). We end the paper with a summary of our non-performance evaluations of the different programming models in Section 6. ## 2 Portability Layers There are a number of currently existing portability layers (Figure 1) which permit user code to execute on various accelerator hardware architectures with little or no modification to the source code. In general, selecting a new architecture will require a re-compilation of the code, though in certain cases an entirely different compiler must be used. These layers usually provide their own memory and data management facilities to attempt to abstract away the specifics of the backend hardware. We have chosen to evaluate Kokkos, SYCL, OpenMP, alpaka, and std::execution::parallel. In this study, we have used CUDA as a baseline standard and comparison metric, as it is the most widely used GPU programming language in the field, and several of our testbeds already had pre-existing CUDA versions. ### Kokkos Kokkos [1, 2] is a portable, performant, C++ based shared-memory programming model that is single source, i.e., it lets you write algorithms once and run them on any supported backend architecture, such as a traditional CPU, NVIDIA, AMD, and Intel GPUs, and manycore CPUs, minimizing the amount of architecture-specific implementation detail that users need to know. It provides a number of different parallel abstractions, such as parallel_for, reductions, and scans, and also provides utilities such as random number generators, as well as support for atomic operations, chained kernels and callbacks. The library, which is mostly header based, is compiled for a selected set of backends: one serial, one host-parallel, and one specific accelerator device can be chosen in a single binary. These backends must be selected at compile time. Though it provides constructs for allocating and managing data on the host and accelerator devices, these can be wrapped around pre-existing data objects. Execution kernels can also use bare pointers to operate on data allocated and transferred by other means.
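As a concrete illustration of this model (a minimal sketch, not code from any of the testbeds), the following example initializes the Kokkos runtime, allocates Views, and expresses a parallel loop and a reduction; the same source can be recompiled against a serial, host-parallel, or GPU backend:

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);  // explicit runtime initialization
  {
    const int N = 1 << 20;
    // Views are Kokkos' portable arrays; they live in the memory space of
    // the backend selected when the library was configured.
    Kokkos::View<double*> x("x", N), y("y", N);

    Kokkos::parallel_for("fill", N, KOKKOS_LAMBDA(const int i) {
      x(i) = 1.0;
      y(i) = 2.0;
    });

    // The same single-source kernel runs on CPU threads or a GPU,
    // depending on the compiled backend.
    Kokkos::parallel_for("axpy", N, KOKKOS_LAMBDA(const int i) {
      y(i) += 2.0 * x(i);
    });

    double sum = 0.0;
    Kokkos::parallel_reduce("sum", N, KOKKOS_LAMBDA(const int i, double& acc) {
      acc += y(i);
    }, sum);
  }
  Kokkos::finalize();  // explicit finalization, required by the runtime
  return 0;
}
```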
### SYCL SYCL is a cross-platform abstraction layer intended for heterogeneous computing, based on OpenCL, and originally released by the Khronos group in 2014. Since then, there have been a number of implementations by different groups. Like Kokkos, it is also single source, and understands C++17. It does not mandate explicit memory transfers, but rather builds a DAG of kernel data dependencies, and transfers the data between host and offload device as needed. SYCL runs on a broad range of architectures, and in theory permits the selection of the execution devices at runtime. In practice, different accelerator backends require different compilers, such as openSYCL to target AMD GPUs, and different builds of llvm/dpc++ to target NVIDIA or Intel GPUs. ### OpenMP and OpenACC OpenMP is a very large and complex directive-based API that enables multi-platform shared-memory multiprocessing, with support for C, C++ and Fortran. It provides parallelism within a node and is heavily used for multicore and manycore CPUs in HPCs. In its recent versions, OpenMP has been extended to GPUs via the "target offloading" model, which offloads a kernel to NVIDIA, AMD, and Intel devices. OpenACC is a similar standard that was developed explicitly for accelerator devices like GPUs. It allows the compiler to make intelligent decisions on how to decompose problems, and describes what the compiler _should_ do, not what it _must_ do. This allows different compilers to interpret "should" very differently. ### std::execution::parallel std::execution::parallel (std::par) is an existing C++ standard feature that was introduced in C++17 to enable parallel processing of algorithms by defining execution policies. The available policies are serial execution, parallel execution using threads, and parallel execution using threads and vectorization. In 2020 NVIDIA introduced a compiler (nvc++) which enabled the execution of parallel execution policies on NVIDIA GPUs. nvc++ uses unified shared memory for data access, with data being migrated to the GPU on demand via page faults. It is not intended to be a replacement for CUDA, as it lacks many low-level features and optimizations, but rather a stepping stone or bridge between CPU-based serial code and code explicitly written for GPUs, dramatically lowering the entry bar for execution on accelerators. Intel also has a compiler (oneAPI:dpl) that mostly supports this standard. ### alpaka alpaka [3, 4, 5] is another single-source portability layer, where the backend is chosen at compile time. It uses a CUDA-like multidimensional set of work units, using a grid / block / thread / element hierarchy, mapping the abstraction model to the desired backend. It has a data-agnostic memory model, allocating memory using smart memory buffers, which take care of proper memory handling for specific backends. Alpaka uses the same API to allocate memory on the host and on the device. Alpaka also offers straightforward porting of CUDA kernels using its extension called "cupla", where only the includes and the syntax of the kernel calls need to be changed. Figure 1: Hardware support of portability layers. Dark green indicates full support, light green indicates partial support or that the project is still under development, and red indicates no support.
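To contrast with the framework- and directive-based layers above, here is a minimal, hypothetical std::par sketch (not taken from the testbeds): the same standard C++17 code runs serially, on CPU threads, or, when compiled with nvc++ and -stdpar=gpu, on an NVIDIA GPU via unified shared memory.

```cpp
#include <algorithm>
#include <cstddef>
#include <execution>
#include <numeric>
#include <vector>

int main() {
  const std::size_t n = 1 << 20;
  std::vector<float> x(n, 1.0f), y(n, 2.0f);

  // Parallel element-wise update; the execution policy is the only hint
  // the programmer gives about how the loop should be run.
  std::transform(std::execution::par_unseq,
                 x.begin(), x.end(), y.begin(), y.begin(),
                 [](float a, float b) { return b + 2.0f * a; });

  // Parallel reduction over the result.
  const float sum =
      std::reduce(std::execution::par_unseq, y.begin(), y.end(), 0.0f);
  return sum > 0.0f ? 0 : 1;
}
```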
## 3 Testbeds In order to evaluate the various portability layers, we have selected a number of representative testbeds from several HEP experiments, including ATLAS, CMS and DUNE. These testbeds are relatively small code bases that can be executed outside of the experiments' full frameworks in order to simplify the development process. Some of the testbeds had already been ported from the original serial code to run on GPUs using CUDA. We have used either the original serial CPU code or the CUDA versions to develop implementations for all of the different portability layers that we are exploring. ### Patatrack Patatrack is a standalone version [6] of the CMS heterogeneous pixel reconstruction [7]. The chain of reconstruction algorithms takes the raw data of the CMS pixel detector as an input, along with the beamspot parameters and necessary calibration data, and produces pixel tracks and vertices. The algorithms are divided into about 40 kernels, and were originally implemented in CUDA. The standalone setup includes a multithreaded testing framework mimicking relevant aspects of the CMS data processing framework, CMSSW [8, 9, 10, 11, 12]. Many of the kernels and memory copies are short, and therefore the application is very sensitive to overheads. The application attempts to maximize GPU utilization by processing events concurrently with CUDA streams, leveraging the asynchronous execution capabilities of the CMSSW framework and minimizing synchronization points, and by using a memory pool between the algorithms and the CUDA runtime API. ### Wire-Cell Toolkit Wire-Cell Toolkit (WCT) is a new standalone C++ software package for Liquid Argon Time Projection Chamber (LArTPC) simulation, signal processing, reconstruction and visualization, and is intended to be used by the Deep Underground Neutrino Experiment (DUNE). It is written in modern C++, following the data flow programming paradigm. It supports both single-threaded and multi-threaded execution on CPUs, with the choice determined by user configuration. WCT currently includes central elements for DUNE data analysis, such as signal and noise simulation, noise filtering and signal processing, and is currently deployed in production jobs for the MicroBooNE and ProtoDUNE experiments. For the portability evaluation, we chose the LArTPC signal simulation module in WCT, as it is computationally expensive and may benefit from acceleration on GPUs. The LArTPC signal simulation is composed of three major steps: "rasterization" to decompose data (individual ionization electron groups) into patches of various sizes, "scatter-add" to sum up the patches into a larger grid, and "convolution" to obtain the simulated detector signal. The rasterization step involves nested for loops that can be parallelized and offloaded to GPUs. Scatter-add requires atomic operations (a sketch is given below), while convolution's main computation is done through Fast Fourier Transforms (FFT). The three steps make up the majority of the DepoTransform routine, which is the main computational kernel for the LArTPC signal simulation. In serial execution on CPUs, rasterization is typically the most time-consuming step, followed by convolution.
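As an illustration of why the scatter-add step needs atomics, the following is a minimal sketch (with hypothetical names and layout, not WCT code) that accumulates patches into a shared grid; std::atomic stands in for the native atomic-add used in the GPU offloaded versions.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// A rasterized patch of ionization charge and where it lands in the grid.
struct Patch {
  std::size_t row0, col0;     // upper-left corner in the grid
  std::size_t rows, cols;     // patch dimensions
  std::vector<float> values;  // rows * cols charge values
};

// Sum a patch into the larger grid. When patches are processed in
// parallel, two patches may overlap the same cell, so each update must be
// atomic. Note: fetch_add on std::atomic<float> requires C++20; GPU
// backends use their native atomic-add instead.
void scatter_add(std::vector<std::atomic<float>>& grid,
                 std::size_t grid_cols, const Patch& p) {
  for (std::size_t r = 0; r < p.rows; ++r)
    for (std::size_t c = 0; c < p.cols; ++c) {
      const std::size_t cell = (p.row0 + r) * grid_cols + (p.col0 + c);
      grid[cell].fetch_add(p.values[r * p.cols + c]);
    }
}
```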
### FastCaloSim The ATLAS detector at the Large Hadron Collider (LHC) relies on large samples of simulated proton-proton collision events for detector modeling, upgrade studies and to deliver high-quality physics results. Standard simulations using Geant4 [13] for accurate and detailed modeling of physics processes and detector geometry are extremely CPU-intensive, and their use for all Monte Carlo-based studies is impractical due to the formidable size of the LHC dataset. In particular, simulating electromagnetic and hadronic shower development in the ATLAS calorimeters is extremely CPU-intensive, comprising more than 90% of the overall simulation time [14]. To reduce the significant processing time required to simulate the calorimeters, ATLAS developed FastCaloSim [15], which uses a simplified detector geometry and parameterizations of the shower development initiated by particles traversing the calorimeter volumes. During a simulation, the optimal parameterization is selected based on the particle type, energy deposit and location in the detector in order to best model the corresponding electromagnetic or hadronic shower. This parameterized simulation can reduce the CPU time by a factor of 10 to 25, depending on the process being modeled, relative to that of fully Geant4-based simulations. The main event loop of the FastCaloSim testbed is comprised of four parts: 1) workspace initialization, where a large array representing the calorimeter cells on the GPU is reset; 2) the main simulation, where the energy deposits of the particle hits are calculated; 3) stream compaction, where the hits are combined to determine the energy deposits in the calorimeter; and 4) copying the results from the GPU to the host. There are two different versions of the code: the first simulates the detector response one particle at a time, and the second groups multiple particles together before offloading the calculations to the GPU. Offloading a single particle at a time makes inefficient use of the GPU, as the calculations may not be numerically intensive, and can often be limited by the launch latency of the GPU. Grouping particles together results in much larger workloads for the GPU, improving GPU efficiency. Details can be found in [16]. ### p2r Propagation-to-r (p2r) [17] is a standalone mini-application which performs the core math of parallelized track reconstruction. The kernel aims at building charged-particle tracks in the radial direction under a magnetic field from detector hits, which involves propagating the track states and performing Kalman updates after the propagation. The kernels are implemented based on a more realistic application, called mkFit [18], which performs vectorized CPU track fitting. To simplify the track reconstruction problem, the p2r program processes a fixed number of events with the same number of tracks in each event. All track computations are implemented in a single GPU kernel. A fixed set of input track parameters is smeared randomly and then used for every track. ## 4 Metrics Choosing between the different portability layers is not an easy task. Merely looking at their computational performance, while important, is not sufficient, as there are many other characteristics which may affect their adoption by an experiment. In order to evaluate them, we have identified a number of metrics that provide both subjective and objective measures of their different characteristics and suitability for their end users. Below is a simplified list of the 14 metrics that will be evaluated using each of the testbeds; the full metrics can be found in Ref. [19]. The first 5 metrics focus on the porting and development experience, whereas the rest explore other advantages, disadvantages, or potential limitations associated with the portability layer. Depending on the scale of the testbed and the nature of the computation, the evaluation of each metric can vary. With a representative set of testbeds, the evaluation aims to provide valuable information for the different use cases of the HEP communities.

1. Ease of Learning
2. Code conversion
   * From CPU to GPU and between different APIs
3. Extent of modifications to existing code
   * Control of main, threading/execution model
4. Extent of modifications to the Data Model
5. Extent of modifications to the build system
6. Hardware Mapping
   * Current and promised future support of hardware
7. Feature Availability
8. Address needs of large and small workflows
9. Long term sustainability and code stability
   * Backward/forward compatibility of API and e.g. CUDA
10. Compilation time
11. Run time/Performance
12. Ease of Debugging
13. Aesthetics
14. Interoperability
    * Interaction with external libraries, thread pools, C++ standards

## 5 Performance Studies ### Patatrack The standalone Patatrack code was first ported from CUDA to Kokkos in [6], and has since been ported to HIP, and to alpaka [20] and SYCL [21] by other groups. When benchmarking the Kokkos performance, we compare the event processing throughput using both the host and device execution spaces. As shown in Table 1, the Kokkos host serial execution space is about 1.6x slower than the original CPU version, and when all 40 threads on a 20-core Intel Xeon Gold 6148 CPU are used, the Kokkos thread execution space is about 20x slower than the multithreaded CPU version. The multithreaded CPU version uses inter-event parallelism, where each event is processed in its own thread, whereas the Kokkos version attempts to parallelize _within_ a single event. Until very recently, Kokkos was not compatible with external use of thread pools such as TBB, and while it can now co-exist, the performance is still very poor. The CUDA version of the code is able to process data from multiple events concurrently, each in its own thread, with each thread owning its own CUDA stream. It also uses a memory pool to amortize the cost of raw CUDA memory allocations. For the closest comparison with the Kokkos version using the CUDA device parallel backend, we disable the CUDA memory pool and only allow a single concurrent event, at which point we see in Table 2 that the CUDA version is 37% faster than the Kokkos version. It proved infeasible to implement a comparable device memory pool in Kokkos. If we enable the memory pool, the performance gap increases to 6.2x. And if we allow the CUDA version to process multiple concurrent events via multithreading, this increases yet again, such that the CUDA version with 9 concurrent events is 16x faster than the Kokkos version. Increasing the number of concurrent events beyond 9 shows no improvement in throughput. The std::par port of Patatrack is complete; however, bugs in the nvc++ compiler prevent it from running. Similarly, OpenMP offload is still lacking some features that are used in the CUDA version, making the OpenMP port incomplete. We await the continued improvement of both these compilers in order to complete our benchmarks. ### Wire-Cell Toolkit The first full GPU implementation for the Wire-Cell Toolkit (WCT) test case, the signal simulation of the LArTPC detector, was done with Kokkos [22, 23], so we do not have a CUDA version as a baseline comparison. Since then, we have also ported the WCT test case to OpenMP and SYCL. Figure 2 compares the timing of the computational kernel DepoTransform in WCT using Kokkos, SYCL and OpenMP on three different architectures: an NVIDIA V100 GPU, an AMD Radeon Pro VII GPU and an AMD Ryzen 24-core CPU. The compilers used for these platforms for each programming model are tabulated in Table 3.
While there are some small variations in the timing, all three programming models achieve very similar performance after some tuning and performance optimization. ### FastCaloSim The original FastCaloSim serial CPU code was first ported to CUDA, and we have ported that version to Kokkos, SYCL, std::par, OpenMP and alpaka. We have used the CUDA implementation as a baseline to compare performance. A certain amount of code restructuring was necessary for the port to Kokkos, due to the lack of support for jagged arrays in Kokkos. These jagged data structures are flattened into 1D arrays or padded into regular 2D arrays, as sketched below.
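A minimal sketch of this flattening (illustrative only, not FastCaloSim code): the jagged structure becomes one contiguous buffer plus an offsets table, which maps naturally onto a 1D device array.

```cpp
#include <cstddef>
#include <vector>

// A jagged array flattened into one contiguous buffer plus offsets:
// row i occupies data[offsets[i] .. offsets[i+1]).
struct FlatJagged {
  std::vector<float> data;
  std::vector<std::size_t> offsets;  // size = number of rows + 1
};

FlatJagged flatten(const std::vector<std::vector<float>>& jagged) {
  FlatJagged flat;
  flat.offsets.reserve(jagged.size() + 1);
  flat.offsets.push_back(0);
  for (const auto& row : jagged) {
    flat.data.insert(flat.data.end(), row.begin(), row.end());
    flat.offsets.push_back(flat.data.size());
  }
  return flat;  // flat.data can back a 1D device array or Kokkos::View
}
```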
In general, the portability backends tended to perform similarly to the original native CUDA or HIP implementation for the main simulation kernel execution, as seen in Figures 3, 4 and 5. Overheads are often seen for the initialization, manipulation and transfer of the data when portability layers are used. The SYCL port has a substantially different code structure as compared with the CUDA and Kokkos ports, making comparisons of the individual kernel elements more challenging. In general, however, the main simulation throughput is similar to that of CUDA, and SYCL does not suffer from as many initialization overheads as Kokkos. The alpaka port is structurally very similar to Kokkos, and while the performance of the main kernel is comparable to the native CUDA or HIP implementations, we do see some unusual behavior for the workspace resetting, which is still not fully understood. It seems to be related to how alpaka does device synchronization. We also see some small overheads from memory transfers. One very unusual feature that was noted is that data transfers from the device to the host when using std::par are significantly degraded when the host CPU is an AMD EPYC (see Figure 6). The cause is thought to be related to how page faults are handled by the hardware, but the exact nature is still under investigation by NVIDIA. Another interesting observation is that the Thrust implementation of std::par is, in certain circumstances, faster than the original CUDA code. As well, when using the serial single-core CPU backend of std::par, the overall performance is 20% faster than the original serial CPU code. We have not been able to exercise the multicore backend of nvc++ due to compiler bugs.

Table 1: Comparison of the event processing throughput between the Kokkos version of Patatrack using Serial and Threads execution spaces and the CPU version implemented from the original CUDA version through a simple translation header. In all cases all the threads were pinned to a single CPU socket (Intel Xeon Gold 6148) that has 20 cores and 2 threads per core. Each test ran about 5 minutes, and CPU-heavy threads from a background process were used to fill all the 40 hardware threads of the socket. The work in the CPU version is parallelized by processing as many events concurrently as the number of threads the job uses, without any intra-event parallelization, whereas in the Kokkos version there is only one event in flight, and all parallelization is within the data of that event. For the Kokkos version with the Threads execution space, the maximum throughput from a scan from 1 to 20 threads is reported. The reported uncertainty corresponds to the sample standard deviation of 8 trials. From Ref. [6].

| Test case | Throughput (events/s) |
|---|---|
| CPU version, 1 thread | \(13.5\pm 0.2\) |
| Kokkos version, Serial execution space | \(8.5\pm 0.2\) |
| CPU version, 40 threads | \(539\pm 9\) |
| Kokkos version, Threads execution space, peak (18 threads) | \(28\pm 1\) |

Table 2: Comparison of the event processing throughput between the Kokkos version of Patatrack using the CUDA execution space and the original CUDA version. In all cases the CPU threads were pinned to a single CPU socket, and used one NVIDIA V100 GPU. Each test ran about 5 minutes, and the machine was free from other activity. The CUDA version can process data from multiple events concurrently using many CPU threads and CUDA streams, and uses a memory pool to amortize the cost of raw CUDA memory allocations. The maximum throughput from a scan from 1 to 20 concurrent events is reported for the CUDA version. In order to compare to the current state of the Kokkos version, the CUDA version was also tested with 1 concurrent event and with the memory pool disabled. The reported uncertainty corresponds to the sample standard deviation of 8 trials. From Ref. [6].

| Test case | Throughput (events/s) |
|---|---|
| CUDA version, peak (9 concurrent events and CPU threads) | \(1840\pm 20\) |
| CUDA version, 1 concurrent event | \(720\pm 20\) |
| CUDA version, 1 concurrent event, memory pool disabled | \(159\pm 1\) |
| Kokkos version, CUDA execution space | \(115.7\pm 0.3\) |

Table 3: Compilers used for WCT portability studies.

| Target | Model | Compiler | Version |
|---|---|---|---|
| NVIDIA GPU | Kokkos/3.3.01 | nvcc | 11.0.2 |
| NVIDIA GPU | SYCL | Intel/LLVM | nightly-20220425 |
| NVIDIA GPU | OpenMP | LLVM/Clang | 15 |
| AMD GPU | Kokkos/3.3.01 | rocm/hipcc | 4.5.2 |
| AMD GPU | SYCL | Intel/LLVM | nightly-20220425 |
| AMD GPU | OpenMP | LLVM/Clang | 15 |
| AMD CPU | Kokkos/3.3.01 | GCC | 9.3.0 |
| AMD CPU | SYCL | Intel OneAPI | 2022.02 |
| AMD CPU | OpenMP | LLVM/Clang | 15 |

Figure 2: Performance comparison of the Wire-Cell Toolkit DepoTransform kernel using Kokkos, SYCL and OpenMP on three different architectures. Timing was averaged over 20 runs in each case. Figure 3: Performance of FastCaloSim using Kokkos relative to CUDA.

### p2r p2r has been ported to alpaka, Kokkos, SYCL and std::par, and we have evaluated the performance of the GPU backends on an NVIDIA A100 and an AMD MI100 GPU, using the Joint Laboratory for System Evaluation (JLSE) system at Argonne National Laboratory. The kernel-only throughput achieved by using a portability layer is compared to that of the native programming model. On the A100 GPU, the alpaka and Kokkos performance is very close to the native CUDA version, whereas SYCL and std::par are approximately a factor of 10 and 2 slower, respectively.
We note, however, that both alpaka and Kokkos suffer from a \(\sim 40\%\) slowdown if the kernel launch parameters determined by the portability layer are used. The results shown in Figures 8 and 9 are obtained by explicitly choosing the same launch parameters and register-per-thread values as the native CUDA version. On an AMD MI100, alpaka achieved 23% better performance than the native HIP implementation. While it is possible that the portability layer could introduce better optimization than the native implementation, further profiling work is required to confirm the cause of the better performance observed in this case. The poor performance of the SYCL implementation on both tested GPUs is not fully understood. Initial profiling results show orders of magnitude more instructions being executed in the kernel, inducing much more memory traffic and hence latency.

Figure 4: Performance of FastCaloSim using alpaka, relative to native CUDA/HIP. Figure 5: Performance of FastCaloSim Simulation kernel for event-batched data using std::par. Figure 6: Performance of FastCaloSim data transfers with event batching using std::par. Figure 7: Performance of FastCaloSim Simulation kernel using OpenMP compared with CUDA. Figure 8: Performance comparison of p2r using CUDA, alpaka, Kokkos, SYCL and std::par on an NVIDIA A100 GPU. Figure 9: Performance comparison of p2r using HIP, alpaka, Kokkos and SYCL on an AMD MI100 GPU.

## 6 Portability Layer Evaluations Over the past three years, all four testbeds have been ported to each portability layer, except for the alpaka port of the Wire-Cell Toolkit, for which there was insufficient person power. The std::par port of Patatrack, while complete, has encountered compiler bugs, for which we are still awaiting resolution from NVIDIA. A brief overview of the evaluation metrics is shown in Tables 4 and 5, with some further details listed below.
\begin{table}
\begin{tabular}{|r|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|}
\hline
**Metric** & **Kokkos** & **alpaka** & **SYCL** & **std::par/nvc++** & **OpenMP** \\
\hline
Ease of Learning & Similar to C++ and CUDA; optimization more challenging & Very verbose API and sparse documentation make for a steep learning curve & Similar to C++ and CUDA. Lots of documentation. & Is standard C++ & C++ with extra pragmas. Sparse documentation/examples for offload. \\
\hline
Serial Code Conversion & Similar to CUDA, though different syntax. Specialized optimizations not straightforward & Work needed to wrap kernels in callable objects. Many typedefs in API benefit from a layer of template functions. & Similar to CUDA, though different syntax & Very simple from serial code. More work to port existing CUDA code. & Easy incremental porting mechanism from serial code (add pragmas, get offload working, tune). More work to port existing CUDA code. \\
\hline
Code Modification & Kokkos runtime needs explicit init/finalize, and can take and interpret command-line arguments & Can be used in an existing complicated application without changes elsewhere & Parallelize loops with command group handlers and parallel_for; sometimes necessary to orchestrate kernel calls explicitly by waiting on events & Memory accessible on the device must be allocated/freed in code compiled by nvc++ & Special memory allocation and transfer APIs; can operate on device and host in parallel \\
\hline
Data Model Modification & Views can be used as a smart pointer to data & Buffers used to wrap existing objects are tedious to use & Buffers can be instantiated to wrap memory compatible with the current EDM and custom data types & May need to copy data to make it visible to USM & In general simpler than CUDA \\
\hline
Build System Integration & Can choose at most one backend per execution space type; the choice must be made when configuring the Kokkos build & Extensive configurability via CMake and, depending on the platform/backend, additional Clang machinery & Needs a compiler wrapper; works with CMake and make & Not fully recognized by many build systems & Good integration with CMake and make \\
\hline
Hardware Mapping & Supports multiple host-parallel backends, and NVIDIA, AMD, and Intel GPUs & Supports multiple host-parallel backends and GPUs & Can build and run on any major vendor CPU and GPU & nvc++ supports CPU serial, CPU multicore, and NVIDIA GPUs; third-party compilers target other GPUs & Supports CPU serial, CPU multicore, and NVIDIA, AMD and Intel GPUs \\
\hline
\end{tabular}
\end{table}
Table 4: Table of metrics

We find that for relatively simple and long-running kernels, reaching the performance of the native CUDA versions is easy to achieve with Kokkos, but with complex and short kernels the optimization requires significant effort, and even then does not always reach the performance of native CUDA. There can be significant overheads from initializing Kokkos Views. We also find that concurrent use of Kokkos' API depends on the backend. For example, concurrency outside of Kokkos works with the CUDA and HIP backends, whereas the pthreads backend explicitly disallows more than one calling thread, and the serial backend serializes all API calls. Plentiful documentation and tutorials for Kokkos exist, as well as a very active Slack channel. The Kokkos developers' support for any kind of questions is excellent.
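As a concrete illustration of the points above, the following is a minimal, self-contained sketch of a Kokkos kernel (our own example, not taken from the testbed codes; the SAXPY computation and all names are illustrative). It shows the explicit initialization/finalization calls and the View-based memory management discussed above:

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);  // explicit runtime initialization
  {
    const int n = 1 << 20;
    // Views allocate memory in the default execution space's memory space
    Kokkos::View<double*> x("x", n);
    Kokkos::View<double*> y("y", n);
    Kokkos::deep_copy(x, 1.0);
    Kokkos::deep_copy(y, 2.0);

    const double a = 0.5;
    // The same parallel_for is compiled for whichever backend (CUDA, HIP,
    // OpenMP, serial, ...) Kokkos was configured with at build time.
    Kokkos::parallel_for("saxpy", n, KOKKOS_LAMBDA(const int i) {
      y(i) += a * x(i);
    });
    Kokkos::fence();  // kernels launch asynchronously; wait for completion
  }  // Views must go out of scope before finalize()
  Kokkos::finalize();  // explicit runtime finalization
  return 0;
}
```

With the CUDA backend, this entire translation unit must be compiled by a CUDA-aware compiler, as noted above.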
### SYCL

Like Kokkos, SYCL can currently target all available backends from the same source, though recompilation is required using different versions of the compiler (openSYCL for AMD, llvm/dpc++ for NVIDIA, oneAPI for Intel). It provides comparable performance to CUDA or HIP. Programmatically, we find it to be more verbose than CUDA, but similar to Kokkos for memory management when using buffers, with a similar learning curve.

\begin{table}
\begin{tabular}{|r|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|}
\hline
**Metric** & **Kokkos** & **alpaka** & **SYCL** & **std::par/nvc++** & **OpenMP** \\
\hline
Feature Availability & Concurrent kernels only with the CUDA backend, and then requiring CUDA-specific features. Concurrent calls to the Serial backend are safe, but not efficient. Unsupported: sort from device code; common API to vendor-optimized FFT libraries; RNGs not following Gaussian or uniform distributions, such as binomial & Similar to CUDA & Support for reductions and kernel functions & No low-level features & Scan and memset ops not yet supported on GPUs by most compilers. Some CUDA atomic ops also not yet supported \\
\hline
\end{tabular}
\end{table}
Table 5: Table of metrics (continued)
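To make the buffer-based memory management concrete, here is a minimal SYCL version of the illustrative SAXPY kernel used in the Kokkos sketch above (again our own example, not taken from the testbed codes):

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  const size_t n = 1 << 20;
  std::vector<double> x(n, 1.0), y(n, 2.0);
  const double a = 0.5;

  sycl::queue q;  // selects a default device (e.g., a GPU if available)
  {
    // Buffers wrap existing host memory; the runtime migrates data to the
    // device based on the accessor data dependencies declared below.
    sycl::buffer<double> bx(x.data(), sycl::range<1>(n));
    sycl::buffer<double> by(y.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor ax(bx, h, sycl::read_only);
      sycl::accessor ay(by, h, sycl::read_write);
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        ay[i] += a * ax[i];
      });
    });
  }  // buffer destructors wait for the kernel and copy results back to y
  return 0;
}
```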
Its ability to automatically migrate data to devices based on the data dependencies of kernel/buffer associations when using buffers can simplify code design. Its support of the data object types that can be offloaded is more stringent than that of CUDA. While the SYCL standard supports concurrent kernel execution on devices, this does not currently appear to be implemented by compilers, and callback functionality is likely to be deprecated. SYCL has good interoperability with external concurrency layers such as TBB, OpenMP and MPI, though it will not natively target host-parallel devices. It has mostly seamless integration with build systems like CMake. SYCL is strongly supported by Intel, who is pushing various features towards integration into the C++ standard. Intel offers many training classes for SYCL, and provides tools for migration from CUDA. There is a wide variety of online documentation available.

### OpenMP

We find that offloading a C++ kernel to GPUs via OpenMP is easy to implement and does not require major changes to the code. The architecture-agnostic compiler directives can, in principle, offload to multiple GPUs and FPGAs, and compilers are under active development by NVIDIA (nvc++), AMD (rocm-clang, aomp), Intel (icpx), LLVM Clang and GCC. However, extracting performance requires fine-tuning the data transfers, as the map clause implicitly transfers some variables. We also find that performance currently varies from compiler to compiler. Many specialized operations (e.g. atomics) are currently less performant than CUDA, and some operations like scan and memset are not supported on GPUs. Manually parallelizing nested loops almost always outperforms the collapse clause, as the latter will use more registers, which degrades performance. Lastly, it is also important to tune the number of threads per team and use suitable compiler flags, as they sometimes drastically improve the performance. Figure 2 shows the comparison of the Wire-Cell Toolkit's OpenMP port with Kokkos and SYCL, and Figure 7 compares the performance of FastCaloSim's OpenMP port with CUDA for various GPUs. Examples and documentation for OpenMP offload are somewhat sparse, especially for more advanced features. Debugging and using performance-tuning tools is challenging due to the extra OpenMP code infrastructure and the architecture-specific plugins that it loads at initialization, which cause issues with NVIDIA's compute-sanitizer and AMD's omnitrace profiler.
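The corresponding OpenMP offload version of the same illustrative kernel is a plain loop with directives added; the explicit map clauses and the num_teams/thread_limit values below stand in for the kind of manual tuning described above (the specific numbers are placeholders, not recommendations):

```cpp
#include <cstddef>

void saxpy(const double* x, double* y, std::size_t n, double a) {
  // Explicit map clauses avoid relying on implicit data transfers;
  // num_teams/thread_limit often need tuning per compiler and device.
  #pragma omp target teams distribute parallel for \
      map(to: x[0:n]) map(tofrom: y[0:n]) \
      num_teams(120) thread_limit(128)
  for (std::size_t i = 0; i < n; ++i)
    y[i] += a * x[i];
}
```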
### alpaka

In general, alpaka offers a lower-level API than other portability layers (e.g., Kokkos). As a result, application code written in alpaka tends to be rather verbose. Hence, in the case of applications with a large code base, it can be desirable to implement a shallow layer of template functions on top of the alpaka API in order to hide this complexity from the application code. The alpaka style of GPU code development is somewhat complex. The API is not always intuitive, which makes the learning process challenging. However, once developers are familiar with the API, porting existing kernel code (e.g., written in CUDA) to alpaka is quite simple, as the kernel body remains practically unchanged. As can be seen in Figure 4, the FastCaloSim application performance with the CUDA backend of alpaka is comparable with the native CUDA implementation. The slight performance degradation of the memory transfers can be explained by the fact that the alpaka numbers include one extra copy of the memory buffers on the host.

### std::par

NVIDIA's implementation of the ISO C++ standard for parallelism (nvc++) is not intended to be a direct replacement for CUDA. It lacks the ability to access many low-level features of GPU programming, such as synchronization, explicit launch parameter control, asynchronous operations, or explicit memory transfers. Yet because it utilizes standard C++, it provides a very low entry bar for developers, offering a simplified path for porting from serial CPU code to GPUs without the need to explicitly manage memory transfers. While, in theory, nvc++ is link compatible with gcc libraries, there are certain limitations. Any memory that will be offloaded to the device must be allocated in code compiled by nvc++, and there are issues with some STL containers. The linking of the final executable must also be performed by nvc++. The compiler is new enough that critical bugs are still being found, though they are often rapidly patched. Furthermore, it is not fully recognized by many build systems, requiring significant user intervention for more complex installations, especially when some packages require g++ and others nvc++. The compilation time with nvc++ tends to be significantly longer than with the other portability layers. nvc++ uses Thrust to implement parallelism. When there are well-matched Thrust algorithms for the parallelized regions, performance can match, or sometimes exceed, equivalent CUDA code, but this performance drops off when there is not a good match to Thrust. Unfortunately, there is no way to see the intermediate Thrust code that the compiler generates. The use of page faults and Unified Shared Memory to exchange data between device and host can also degrade performance as compared with an explicit memory transfer.
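For completeness, the same illustrative kernel expressed with C++17 parallel algorithms looks like ordinary standard C++; when built with nvc++ and its -stdpar option, the std::execution::par policy is mapped onto Thrust and the data is made visible to the device via Unified Shared Memory, as described above:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// With nvc++ -stdpar, vectors allocated in nvc++-compiled code and the
// transform below can be offloaded to the GPU with no explicit memory
// management.
void saxpy(std::vector<double>& x, std::vector<double>& y, double a) {
  std::transform(std::execution::par,
                 x.begin(), x.end(),   // input 1
                 y.begin(),            // input 2
                 y.begin(),            // output (in place)
                 [a](double xi, double yi) { return yi + a * xi; });
}
```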
## 7 Conclusions

Portable parallel programming layers have seen tremendous development over the past several years. The range of enabled backends has vastly increased, with most now being supported by the majority of APIs. The compilers are being actively developed, with missing features being addressed and performance profiles improved. There are strengths and weaknesses to each product, and careful choices should be made when choosing an API so that it works well with the original code and software environment, the build system, and the desired hardware. All APIs have a learning curve, though std::par is closest to normal C++, providing an easy transition path to GPU programming for little effort. In many cases, the performance of all portability layers can approach, and sometimes even exceed, that of "native" compilers, though performance tuning can come at the expense of portability. The developers of Kokkos, SYCL, and nvc++ are aware of the benefits that standards bring, and have all made proposals to the C++ standards committees for the inclusion of various programming paradigms into the C++23 and C++26 standards. We fervently hope that these proposals succeed, and that compiler developers and hardware manufacturers collaborate to integrate them in their next generation of products, as a standards-based approach will have the best chance of long-term survival.

## 8 Acknowledgments

This work was supported by the DOE HEP Center for Computational Excellence at Brookhaven National Laboratory, and Lawrence Berkeley National Laboratory under B&R KA2401045. This work was also done as part of the offline software research and development programme of the ATLAS and CMS Collaborations, and we thank the collaborations for their support and cooperation. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. We gratefully acknowledge the use of the computing resources provided by Brookhaven National Laboratory and by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory.
2310.17998
Closing the Gap Between the Upper Bound and the Lower Bound of Adam's Iteration Complexity
Recently, Arjevani et al. [1] established a lower bound of iteration complexity for the first-order optimization under an $L$-smooth condition and a bounded noise variance assumption. However, a thorough review of existing literature on Adam's convergence reveals a noticeable gap: none of them meet the above lower bound. In this paper, we close the gap by deriving a new convergence guarantee of Adam, with only an $L$-smooth condition and a bounded noise variance assumption. Our results remain valid across a broad spectrum of hyperparameters. Especially with properly chosen hyperparameters, we derive an upper bound of the iteration complexity of Adam and show that it meets the lower bound for first-order optimizers. To the best of our knowledge, this is the first to establish such a tight upper bound for Adam's convergence. Our proof utilizes novel techniques to handle the entanglement between momentum and adaptive learning rate and to convert the first-order term in the Descent Lemma to the gradient norm, which may be of independent interest.
Bohan Wang, Jingwen Fu, Huishuai Zhang, Nanning Zheng, Wei Chen
2023-10-27T09:16:58Z
http://arxiv.org/abs/2310.17998v1
# Closing the Gap Between the Upper Bound and the Lower Bound of Adam's Iteration Complexity

###### Abstract

Recently, Arjevani et al. [1] establish a lower bound of iteration complexity for first-order optimization under an \(L\)-smooth condition and a bounded noise variance assumption. However, a thorough review of existing literature on Adam's convergence reveals a noticeable gap: none of them meet the above lower bound. In this paper, we close the gap by deriving a new convergence guarantee of Adam, with only an \(L\)-smooth condition and a bounded noise variance assumption. Our results remain valid across a broad spectrum of hyperparameters. Especially with properly chosen hyperparameters, we derive an upper bound of the iteration complexity of Adam and show that it meets the lower bound for first-order optimizers. To the best of our knowledge, this is the first work to establish such a tight upper bound for Adam's convergence. Our proof utilizes novel techniques to handle the entanglement between momentum and adaptive learning rate and to convert the first-order term in the Descent Lemma to the gradient norm, which may be of independent interest.

## 1 Introduction

First-order optimizers, also known as gradient-based methods, make use of gradient (first-order derivative) information to find the minimum of a function. They have become a cornerstone of many machine learning algorithms due to their efficiency, as only gradient information is required, and their flexibility, as gradients can be easily computed for any function represented as a directed acyclic computational graph via auto-differentiation [2; 25]. Therefore, it is fundamental to theoretically understand the properties of these first-order methods.

Recently, Arjevani et al. [1] establish a lower bound on the iteration complexity of stochastic first-order methods. Formally, for a well-studied setting where the objective is \(L\)-smooth and a stochastic oracle can query the gradient unbiasedly with bounded variance (see Assumptions 1 and 2), any stochastic first-order algorithm requires at least \(\varepsilon^{-4}\) queries (in the worst case) to find an \(\varepsilon\)-stationary point, i.e., a point with gradient norm at most \(\varepsilon\). Arjevani et al. [1] further show that the above lower bound is tight, as it matches the existing upper bound of the iteration complexity of SGD [15].

On the other hand, among first-order optimizers, Adam [20] has become dominant in training state-of-the-art machine learning models [3; 18; 4; 11]. Compared to vanilla stochastic gradient descent (SGD), Adam consists of two more key components: (i) momentum to accumulate historical gradient information and (ii) an adaptive learning rate to rectify coordinate-wise step sizes. The pseudo-code of Adam is given as Algorithm 1. While the sophisticated design of Adam enables its empirical superiority, it brings great challenges for the theoretical analysis. After examining a series of theoretical works on the upper bound of the iteration complexity of Adam [33; 9; 10; 36; 16; 27; 34], we find that none of them match the lower bound for first-order optimizers: they not only consume more queries than the lower bound to reach an \(\varepsilon\)-stationary point but also require additional assumptions (see Section 3 for a detailed discussion).
This theoretical mismatch becomes even more unnatural given the great empirical advantage of Adam over SGD, which prompts us to ask:

_Is the gap between the upper and lower bounds for Adam a result of the inherent complexity induced by Adam's design, or could it be attributed to the proof techniques not being sharp enough?_

This paper answers the above question, validating the latter hypothesis, by establishing a new upper bound on the iteration complexity of Adam for a wide range of hyperparameters that cover typical choices. Specifically, our contribution can be summarized as follows:

* We examine existing works that analyze the iteration complexity of Adam, and find that none of them meets the lower bound of first-order optimization algorithms;
* We derive a new convergence guarantee of Adam assuming only the \(L\)-smooth condition and the bounded variance assumption (Theorem 1), which holds for a wide range of hyperparameters covering typical choices;
* With chosen hyperparameters, we further tighten Theorem 1 and show that the upper bound on the iteration complexity of Adam meets the lower bound, closing the gap (Theorem 2). Our upper bound is tighter than existing results by a logarithmic factor, in spite of weaker assumptions.

To the best of our knowledge, this work provides the first upper bound on the iteration complexity of Adam without additional assumptions other than the \(L\)-smooth condition and the bounded variance assumption. It is also the first upper bound matching the lower bound of first-order optimizers.

**Organization of this paper.** The rest of the paper is organized as follows: in Section 2, we present the notations and the setup of our analysis; in Section 3, we revisit existing works on the iteration complexity of Adam; in Section 4, we present a convergence analysis of Adam with general hyperparameters (Theorem 1); in Section 5, we tighten Theorem 1 with chosen hyperparameters and derive an upper bound of Adam's iteration complexity which meets the lower bound; in Section 6, we discuss the limitations of our results.

## 2 Preliminary

The Adam algorithm is restated in Algorithm 1 for convenient reference. Note that, compared to the original version of Adam in Kingma and Ba [20], the bias-correction terms are omitted to simplify the analysis, and our analysis can be immediately extended to the original version of Adam because the effect of the bias-correction terms decays exponentially. Also, in the original version of Adam, the adaptive learning rate is \(\frac{\eta}{\sqrt{\mathbf{\nu}_{t}}+\lambda\mathbbm{1}_{d}}\) instead of \(\frac{\eta}{\sqrt{\mathbf{\nu}_{t}}}\). However, our setting is more challenging, and our result can be easily extended to the original version of Adam, since the \(\lambda\) term makes the adaptive learning rate upper bounded and eases the analysis.
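For reference, with the bias correction omitted as noted above, the updates performed by Algorithm 1 at iteration \(t\) can be written compactly as

\[\mathbf{m}_{t}=\beta_{1}\mathbf{m}_{t-1}+(1-\beta_{1})\mathbf{g}_{t},\qquad\mathbf{\nu}_{t}=\beta_{2}\mathbf{\nu}_{t-1}+(1-\beta_{2})\mathbf{g}_{t}\odot\mathbf{g}_{t},\qquad\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{m}_{t},\]

where \(\mathbf{g}_{t}=\mathbf{O}_{f}(\mathbf{w}_{t},z_{t})\) is the stochastic gradient queried at iteration \(t\), and all operations are coordinate-wise.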
**Notations.** For \(a,b\in\mathbb{Z}^{\geq 0}\) and \(a\leq b\), denote \([a,b]=\{a,a+1,\cdots,b-1,b\}\). For any two vectors \(\mathbf{w},\mathbf{v}\in\mathbb{R}^{d}\), denote \(\mathbf{w}\odot\mathbf{v}\) as the Hadamard product (i.e., coordinate-wise multiplication) between \(\mathbf{w}\) and \(\mathbf{v}\). When analyzing Adam, we denote the true gradient at iteration \(t\) as \(\mathbf{G}_{t}=\nabla f(\mathbf{w}_{t})\), and the sigma algebra before iteration \(t\) as \(\mathcal{F}_{t}=\sigma(\mathbf{g}_{1},\cdots,\mathbf{g}_{t-1})\). We denote the conditional expectation as \(\mathbb{E}^{|\mathcal{F}_{t}}[*]=\mathbb{E}[*|\mathcal{F}_{t}]\). We also use asymptotic notations \(\mathbf{o}\), \(\mathcal{O}\), \(\Omega\), and \(\Theta\): \(h_{2}(x)=\mathbf{o}_{x\to x_{0}}(h_{1}(x))\) means that \(\lim_{x\to x_{0}}\frac{h_{2}(x)}{h_{1}(x)}=0\) (when the context is clear, we abbreviate \(x\to x_{0}\) and only write \(\mathbf{o}(h_{1}(x))\)); \(h_{2}(x)=\mathcal{O}(h_{1}(x))\) means that there exists a constant \(\gamma\) independent of \(x\) such that \(h_{2}(x)\leq\gamma h_{1}(x)\); \(h_{2}(x)=\Omega(h_{1}(x))\) means that \(h_{1}(x)=\mathcal{O}(h_{2}(x))\); and \(h_{2}(x)=\Theta(h_{1}(x))\) means that \(h_{2}(x)=\mathcal{O}(h_{1}(x))\) and \(h_{2}(x)=\Omega(h_{1}(x))\).

**Objective function.** In this paper, we consider solving the following optimization problem: \(\min_{\mathbf{w}\in\mathbb{R}^{d}}f(\mathbf{w})\). We make the following assumption on the objective function \(f\).

**Assumption 1** (On the objective function).: _We assume \(f\) to be non-negative. We further assume that \(f\) satisfies the \(L\)-smooth condition, i.e., \(f\) is differentiable, and the gradient of \(f\) is \(L\)-Lipschitz._

We denote the set of all objective functions satisfying Assumption 1 as \(\mathcal{F}(L)\).

**Stochastic oracle.** As \(f\) is differentiable, we can utilize the gradient of \(f\) (i.e., \(\nabla f\)) to solve the above optimization problem. However, \(\nabla f\) is usually expensive to compute. Instead, we query a stochastic estimate of \(\nabla f\) through a stochastic oracle \(\mathbf{O}\). Specifically, the stochastic oracle \(\mathbf{O}\) consists of a distribution \(\mathcal{P}\) over a measurable space \(\mathcal{Z}\) and a mapping \(\mathbf{O}_{f}:\mathbb{R}^{d}\times\mathcal{Z}\to\mathbb{R}^{d}\). We make the following assumption on \(\mathbf{O}\).

**Assumption 2** (On the stochastic oracle).: _We assume that \(\mathbf{O}\) is unbiased, i.e., \(\forall\mathbf{w}\in\mathbb{R}^{d}\), \(\mathbb{E}_{z\sim\mathcal{P}}\mathbf{O}_{f}(\mathbf{w},z)=\nabla f(\mathbf{w})\). We further assume \(\mathbf{O}\) has bounded variance, i.e., \(\forall\mathbf{w}\in\mathbb{R}^{d}\), \(\mathbb{E}_{z\sim\mathcal{P}}[\|\mathbf{O}_{f}(\mathbf{w},z)-\nabla f(\mathbf{w})\|^{2}]\leq\sigma^{2}\)._

We denote the set of all stochastic oracles satisfying Assumption 2 with variance bound \(\sigma^{2}\) as \(\mathfrak{O}(\sigma^{2})\).

**Algorithm.** Adam belongs to the family of first-order optimization algorithms, which is defined as follows:

**Definition 1** (First-order optimization algorithm).: _An algorithm \(\mathbf{A}\) is called a first-order optimization algorithm if it takes an input \(\mathbf{w}_{1}\) and a hyperparameter \(\theta\), and produces a sequence of parameters as follows: it first samples a random seed \(r\) from some distribution \(\mathcal{P}_{r}\)*, sets \(\mathbf{w}_{1}^{\mathbf{A}(\theta)}=\mathbf{w}_{1}\) and then updates the parameters as_

Footnote *: Such a random seed allows sampling from all iterations to generate the final output of the optimization algorithm. As an example, Algorithm 1 sets \(\mathcal{P}_{r}\) as a uniform distribution over \([T]\).

\[\mathbf{w}_{t+1}^{\mathbf{A}(\theta)}=\mathbf{A}_{\theta}^{t}(r,\mathbf{w}_{1}^{\mathbf{A}(\theta)},\mathbf{O}_{f}(\mathbf{w}_{1}^{\mathbf{A}(\theta)},z_{1}),\cdots,\mathbf{O}_{f}(\mathbf{w}_{t}^{\mathbf{A}(\theta)},z_{t})),\]

_where \(z_{1},z_{2},\cdots,z_{t}\) are sampled i.i.d. from \(\mathcal{P}\)._

**Iteration complexity.** Denote the set of all first-order optimization algorithms as \(\mathcal{A}_{\rm first}\).
We next introduce _iteration complexity_ to measure the convergence rate of optimization algorithms.

**Definition 2** (Iteration complexity).: _The iteration complexity of a first-order optimization algorithm \(\mathbf{A}\) is defined as_

\[\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})=\sup_{\mathbf{O}\in\mathfrak{O}(\sigma^{2})}\sup_{f\in\mathcal{F}(L)}\sup_{\mathbf{w}_{1}:f(\mathbf{w}_{1})=\Delta}\inf_{\theta}\{T:\mathbb{E}\|\nabla f(\mathbf{w}_{T}^{\mathbf{A}(\theta)})\|\leq\varepsilon\}.\]

_Furthermore, the iteration complexity of the family of first-order optimization algorithms \(\mathcal{A}_{\mathrm{first}}\) is_

\[\mathcal{C}_{\varepsilon}(\Delta,L,\sigma^{2})=\sup_{\mathbf{O}\in\mathfrak{O}(\sigma^{2})}\sup_{f\in\mathcal{F}(L)}\sup_{\mathbf{w}_{1}:f(\mathbf{w}_{1})=\Delta}\inf_{\mathbf{A}\in\mathcal{A}_{\mathrm{first}}}\inf_{\theta}\{T:\mathbb{E}\|\nabla f(\mathbf{w}_{T}^{\mathbf{A}(\theta)})\|\leq\varepsilon\}.\]

It should be noted that the iteration complexity of the family of first-order optimization algorithms is a lower bound of the iteration complexity of any specific first-order optimization algorithm, i.e., \(\forall\mathbf{A}\in\mathcal{A}_{\mathrm{first}}\), \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\geq\mathcal{C}_{\varepsilon}(\Delta,L,\sigma^{2})\).

## 3 Related works: none of the existing upper bounds match the lower bound

In this section, we examine existing works that study the iteration complexity of Adam, and defer a discussion of other related works to Appendix A. Specifically, we find that none of them match the lower bound for first-order algorithms provided in [1] (restated as follows).

**Proposition 1** (Theorem 3, [1]).: \(\forall L,\Delta,\sigma^{2}>0\)_, we have \(\mathcal{C}_{\varepsilon}(\Delta,L,\sigma^{2})=\Omega(\frac{1}{\varepsilon^{4}})\)._

Note that in the above bound, we omit the dependence of the lower bound over \(\Delta\), \(L\), and \(\sigma^{2}\), which is a standard practice in existing works (see Cutkosky and Mehta [8], Xie et al. [32], Faw et al. [13] as examples), because the dependence over the accuracy \(\varepsilon\) can be used to derive how many additional iterations are required for a smaller target accuracy and is thus of more interest. In this paper, when we say "match the lower bound", we always mean that the upper bound has the same order in \(\varepsilon\) as the lower bound.

Generally speaking, existing works on the iteration complexity of Adam can be divided into two categories: they either (i) assume that the gradient is universally bounded or (ii) make stronger assumptions on smoothness. Below we respectively explain how these two categories of works do not match the lower bound in [1].

The first line of works, including Zaheer et al. [33], De et al. [9], Defossez et al. [10], Zou et al. [36], Guo et al. [16], assumes that the gradient norm of \(f\) is universally bounded, i.e., \(\|\nabla f(\mathbf{w})\|\leq G\), \(\forall\mathbf{w}\in\mathbb{R}^{d}\).
In other words, what they consider is another notion of iteration complexity, defined as follows:

\[\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2},G)\triangleq\sup_{\mathbf{O}\in\mathfrak{O}(\sigma^{2})}\sup_{f\in\mathcal{F}(L),\|\nabla f\|\leq G}\sup_{\mathbf{w}_{1}:f(\mathbf{w}_{1})=\Delta}\inf_{\theta}\{T:\mathbb{E}\|\nabla f(\mathbf{w}_{T}^{\mathbf{A}(\theta)})\|\leq\varepsilon\}.\]

This line of works does not match the lower bound for two reasons. First, the upper bound they derive is \(\mathcal{O}(\frac{\log 1/\varepsilon}{\varepsilon^{4}})\), which carries an additional \(\log 1/\varepsilon\) factor over the lower bound. Secondly, the bound they derive is for \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2},G)\). Note that \(\mathcal{F}(L)\cap\{f:\|\nabla f\|\leq G\}\) is a proper subset of \(\mathcal{F}(L)\) for any \(G\); a simple example in \(\mathcal{F}(L)\) but without bounded gradient is the quadratic function \(f(x)=\|x\|^{2}\). Therefore, we have that

\[\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\geq\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2},G),\quad\forall G\geq 0, \tag{1}\]

and thus an upper bound on \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2},G)\) does not apply to \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\). Moreover, their upper bound of \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2},G)\) tends to \(\infty\) as \(G\to\infty\), which indicates that, following their analysis, the upper bound of \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\) would be infinity based on Eq. (1).

The second line of works includes [27; 34; 30], which additionally assume a mean-squared smoothness property besides Assumptions 1 and 2, i.e., \(\mathbb{E}_{z\sim\mathcal{P}}\|\mathbf{O}_{f}(\mathbf{w},z)-\mathbf{O}_{f}(\mathbf{v},z)\|^{2}\leq L\|\mathbf{w}-\mathbf{v}\|^{2}\). Denote \(\tilde{\mathfrak{O}}(\sigma^{2},L)\triangleq\{\mathbf{O}:\mathbb{E}_{z\sim\mathcal{P}}\|\mathbf{O}_{f}(\mathbf{w},z)-\mathbf{O}_{f}(\mathbf{v},z)\|^{2}\leq L\|\mathbf{w}-\mathbf{v}\|^{2},\forall\mathbf{w},\mathbf{v}\in\mathbb{R}^{d}\}\cap\mathfrak{O}(\sigma^{2})\). The iteration complexity that they consider is defined as follows:

\[\tilde{\mathcal{C}}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})=\sup_{\mathbf{O}\in\tilde{\mathfrak{O}}(\sigma^{2},L)}\sup_{f\in\mathcal{F}(L)}\sup_{\mathbf{w}_{1}:f(\mathbf{w}_{1})=\Delta}\inf_{\theta}\{T:\mathbb{E}\|\nabla f(\mathbf{w}_{T}^{\mathbf{A}(\theta)})\|\leq\varepsilon\}.\]

The rate derived in [27; 34; 30] is \(\mathcal{O}(\frac{\log 1/\varepsilon}{\varepsilon^{6}})\), obtained by minimizing the upper bounds in [27; 34; 30] with respect to the hyperparameter of the adaptive learning rate, \(\beta_{2}\). According to [1], the lower bound of the iteration complexity \(\tilde{\mathcal{C}}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\) is \(\Omega(\frac{1}{\varepsilon^{3}})\), which is smaller than the original lower bound \(\Omega(\frac{1}{\varepsilon^{4}})\), resulting in an even larger gap between the upper bound and the lower bound.
Recently, there is a concurrent work [21] which requires neither the bounded gradient assumption nor the mean-squared smoothness property, but poses a stronger assumption on the stochastic oracle: the set of stochastic oracles they consider is \(\tilde{\tilde{\mathfrak{O}}}=\{\mathbf{O}:\forall\mathbf{w}\in\mathbb{R}^{d},\ \mathbb{E}_{z\sim\mathcal{P}}\mathbf{O}_{f}(\mathbf{w},z)=\nabla f(\mathbf{w}),\ \mathbb{P}\left(\|\mathbf{O}_{f}(\mathbf{w},z)-\nabla f(\mathbf{w})\|^{2}\leq\sigma^{2}\right)=1\}\), i.e., oracles whose noise is bounded almost surely. \(\tilde{\tilde{\mathfrak{O}}}\) is a proper subset of \(\mathfrak{O}(\sigma^{2})\); a simple example of an oracle in \(\mathfrak{O}(\sigma^{2})\) but not in \(\tilde{\tilde{\mathfrak{O}}}\) is \(\mathbf{O}_{f}(\mathbf{w},z)=\nabla f(\mathbf{w})+z\), where \(z\) is a standard Gaussian variable. Therefore, their result does not provide a valid upper bound of \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\).

## 4 Convergence analysis of Adam with only Assumptions 1 and 2

As discussed in Section 3, existing works on analyzing Adam require additional assumptions besides Assumptions 1 and 2. In this section, we provide the first convergence analysis of Adam with only Assumptions 1 and 2, which naturally gives an upper bound on the iteration complexity \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})\). In fact, our analysis even holds when the stochastic oracle satisfies the following more general assumption.

**Assumption 3** (Coordinate-wise affine noise variance).: _We assume that \(\mathbf{O}\) is unbiased, i.e., \(\forall\mathbf{w}\in\mathbb{R}^{d}\), \(\mathbb{E}_{z\sim\mathcal{P}}\mathbf{O}_{f}(\mathbf{w},z)=\nabla f(\mathbf{w})\). We further assume \(\mathbf{O}\) has coordinate-wise affine variance, i.e., \(\forall\mathbf{w}\in\mathbb{R}^{d}\) and \(\forall i\in[d]\), \(\mathbb{E}_{z\sim\mathcal{P}}[|(\mathbf{O}_{f}(\mathbf{w},z))_{i}|^{2}]\leq\sigma_{0}^{2}+\sigma_{1}^{2}\partial_{i}f(\mathbf{w})^{2}\)._

One can easily observe that Assumption 3 is more general than Assumption 2, since Assumption 2 immediately implies Assumption 3 with \(\sigma_{0}=\sigma\) and \(\sigma_{1}=1\). We consider Assumption 3 not only because it is more general but also because it allows the noise to grow with the norm of the true gradient, which is usually the case in machine learning practice [14; 19]. Our analysis under Assumptions 1 and 3 is then given as follows.

**Theorem 1**.: _Let \(\mathbf{A}\) be Adam (Algorithm 1), and let \(\theta=(\eta,\beta_{1},\beta_{2})\) be the hyperparameters of \(\mathbf{A}\). Let Assumptions 1 and 3 hold. Then, if \(0\leq\beta_{1}\leq\sqrt{\beta_{2}}-8\sigma_{1}^{2}(1-\beta_{2})\beta_{2}^{-2}\) and \(\beta_{2}<1\), we have_

\[\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|\leq\sqrt{C_{2}+2C_{1}\sum_{i=1}^{d}\left(\ln\left(2(T+1)\sum_{i=1}^{d}\sqrt{\mathbf{\nu}_{0,i}+\sigma_{0}^{2}}+24d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}\ln\left(d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}\right)+\frac{12\sigma_{1}^{2}}{\sqrt{\beta_{2}}}C_{2}\right)\right)}\]
\[\times\sqrt{2(T+1)\sum_{i=1}^{d}\sqrt{\mathbf{\nu}_{0,i}+\sigma_{0}^{2}}+24d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}\ln\left(d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}\right)+\frac{12\sigma_{1}^{2}}{\sqrt{\beta_{2}}}C_{2}}
\tag{2}\]

_where \(\mathbf{\nu}_{0,i}\) is the \(i\)-th coordinate of \(\mathbf{\nu}_{0}\),_

\[C_{1}=\frac{32L\eta\left(1+\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{3}}{(1-\beta_{2})\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{3}}+\frac{16\beta_{1}^{2}\sigma_{0}(1-\beta_{1})}{\beta_{2}\sqrt{1-\beta_{2}}\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{3}}+\frac{64(1+\sigma_{1}^{2})\sigma_{1}^{2}L^{2}\eta^{2}d}{\beta_{2}^{2}\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{4}\sigma_{0}(1-\beta_{2})^{\frac{3}{2}}},\]
\[C_{2}=\frac{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\beta_{1}}\frac{8}{\eta}f(\mathbf{u}_{1})+\frac{32}{\beta_{2}\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{2}}\sum_{i=1}^{d}\mathbb{E}\frac{\mathbf{G}_{1,i}^{2}}{\sqrt{\mathbf{\nu}_{1,i}}}+2C_{1}\sum_{i=1}^{d}\left(\ln\left(\frac{1}{\sqrt{\beta_{2}\mathbf{\nu}_{0,i}}}\right)-T\ln\beta_{2}\right).\]

A proof sketch is given in Section 4.2, and the full proof is deferred to the Appendix. The right-hand side of Eq. (2) looks messy at first glance. We next explain Theorem 1 in detail and make the upper bound's dependence on the hyperparameters clear.

### 4.1 Discussion on Theorem 1

**Required assumptions and conditions.** As mentioned previously, Theorem 1 only requires Assumptions 1 and 3, and in particular holds under Assumptions 1 and 2, which aligns with the setting of the lower bound (Proposition 1). To the best of our knowledge, this is the first analysis of Adam without additional assumptions. As for the range of \(\beta_{1}\) and \(\beta_{2}\), one can immediately see that the condition \(\beta_{1}\leq\sqrt{\beta_{2}}-8\sigma_{1}^{2}(1-\beta_{2})\beta_{2}^{-2}\) degenerates to \(\beta_{1}\leq\sqrt{\beta_{2}}\) in the bounded gradient case (i.e., \(\sigma_{1}=0\)), the weakest condition required in the existing literature [36]. When \(\sigma_{1}\neq 0\), this condition is stronger than \(\beta_{1}\leq\sqrt{\beta_{2}}\). We point out that this is not due to technical limitations but instead agrees with existing counterexamples for Adam [26; 34]: when \(\sigma_{1}\neq 0\), there exists a counterexample satisfying Assumptions 1 and 3 and a pair \((\beta_{1},\beta_{2})\) with \(\beta_{1}<\sqrt{\beta_{2}}\) such that Adam with \((\beta_{1},\beta_{2})\) diverges on this counterexample.

**Dependence over \(\beta_{2}\), \(\eta\), and \(T\).** Here we consider the influence of \(\beta_{2}\), \(\eta\), and \(T\) while keeping \(\beta_{1}\) constant (we will discuss the effect of \(\beta_{1}\) in Section 6). With logarithmic factors ignored and coefficients hidden, \(C_{1}\), \(C_{2}\) and the right-hand side of Eq. (2) can be rewritten with asymptotic notations as

\[C_{1}=\tilde{\mathcal{O}}\left(\frac{1}{\sqrt{1-\beta_{2}}}+\frac{\eta^{2}}{\sqrt{(1-\beta_{2})^{3}}}\right),\qquad C_{2}=\tilde{\mathcal{O}}\left(\frac{1}{\sqrt{1-\beta_{2}}}+\frac{\eta^{2}}{\sqrt{(1-\beta_{2})^{3}}}+\frac{1}{\eta}+T\sqrt{1-\beta_{2}}+\frac{\eta^{2}}{\sqrt{1-\beta_{2}}}T\right),\]
\[\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|=\tilde{\mathcal{O}}\left(C_{1}+C_{2}+\sqrt{TC_{1}}+\sqrt{TC_{2}}\right),\]

where \(\tilde{\mathcal{O}}\) denotes \(\mathcal{O}\) with logarithmic terms ignored. Consequently, the dependence of Eq.
(2) over \(\beta_{2}\), \(\eta\) and \(T\) becomes

\[\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|=\tilde{\mathcal{O}}\left(\frac{1}{\sqrt{1-\beta_{2}}}+\frac{\eta^{2}}{\sqrt{(1-\beta_{2})^{3}}}+\frac{1}{\eta}+T\sqrt{1-\beta_{2}}+\frac{\eta^{2}}{\sqrt{1-\beta_{2}}}T\right)\]
\[+\tilde{\mathcal{O}}\left(\frac{\sqrt{T}}{\sqrt[4]{1-\beta_{2}}}+\frac{\eta\sqrt{T}}{\sqrt[4]{(1-\beta_{2})^{3}}}+\frac{\sqrt{T}}{\sqrt{\eta}}+T\sqrt[4]{1-\beta_{2}}+\frac{\eta}{\sqrt[4]{1-\beta_{2}}}T\right).\]

Here we consider two cases: (i) \(\beta_{2}\) and \(\eta\) are independent of \(T\), and (ii) \(\beta_{2}\) and \(\eta\) depend on \(T\). For case (i), based on the above equation, one can easily observe that the averaged gradient norm \(\frac{1}{T}\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|\) will converge to the threshold \(\mathcal{O}(\frac{\eta^{2}}{\sqrt{1-\beta_{2}}}+\sqrt[4]{1-\beta_{2}}+\frac{\eta}{\sqrt[4]{1-\beta_{2}}})\) at rate \(\mathcal{O}(1/\sqrt{T})\). This aligns with the observation in [27; 34] that Adam does not converge to a stationary point with a constant \(\beta_{2}\). For case (ii), in order to ensure convergence, i.e., \(\min_{t\in[T]}\mathbb{E}\|\mathbf{G}_{t}\|_{1}\to 0\) as \(T\to\infty\), a sufficient condition is that the right-hand side of the above equation is \(\mathbf{o}(T)\). Specifically, by choosing \(\eta=\Theta(T^{-a})\) and \(1-\beta_{2}=\Theta(T^{-b})\), we obtain that

\[\frac{1}{T}\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|=\tilde{\mathcal{O}}\left(T^{\frac{b}{2}-1}+T^{-2a+\frac{3b}{2}-1}+T^{a-1}+T^{-\frac{b}{2}}+T^{-2a+\frac{b}{2}}\right)\]
\[+\tilde{\mathcal{O}}\left(T^{-\frac{1}{2}+\frac{b}{4}}+T^{-\frac{1}{2}-a+\frac{3b}{4}}+T^{-\frac{1}{2}+\frac{a}{2}}+T^{-\frac{b}{4}}+T^{-a+\frac{b}{4}}\right).\]

By simple calculation, we obtain that the right-hand side of the above inequality is \(\mathbf{o}(1)\) as \(T\to\infty\) if and only if \(b>0\), \(1>a>0\) and \(b-a<1\). Moreover, the minimum of the right-hand side of the above inequality is \(\tilde{\mathcal{O}}(\frac{1}{T^{\frac{1}{4}}})\), which is achieved at \(a=\frac{1}{2}\) and \(b=1\). Such a minimum implies an upper bound of the iteration complexity that differs from the lower bound by at most logarithmic factors, as solving \(\tilde{\mathcal{O}}(\frac{1}{T^{\frac{1}{4}}})=\varepsilon\) gives \(T=\tilde{\mathcal{O}}(\frac{1}{\varepsilon^{4}})\). In Theorem 2, we will further remove the logarithmic factor by giving a refined proof for \(a=\frac{1}{2}\) and \(b=1\), and close the gap between the upper and lower bounds.
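Indeed, substituting \(a=\frac{1}{2}\) and \(b=1\) into the exponents above verifies this choice term by term:

\[T^{\frac{b}{2}-1}=T^{-2a+\frac{3b}{2}-1}=T^{a-1}=T^{-\frac{b}{2}}=T^{-2a+\frac{b}{2}}=T^{-\frac{1}{2}},\]
\[T^{-\frac{1}{2}+\frac{b}{4}}=T^{-\frac{1}{2}-a+\frac{3b}{4}}=T^{-\frac{1}{2}+\frac{a}{2}}=T^{-\frac{b}{4}}=T^{-a+\frac{b}{4}}=T^{-\frac{1}{4}},\]

so the dominant terms decay as \(T^{-\frac{1}{4}}\), matching the stated minimum.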
**Dependence over \(\lambda\).** Our analysis allows \(\lambda=0\) in the adaptive learning rate \(\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}+\lambda\mathbbm{1}_{d}}\). In contrast, some existing works [16; 21] require a non-zero \(\lambda\), and their iteration complexity has polynomial dependence over \(\frac{1}{\lambda}\), which is less desirable as \(\lambda\) can be as small as \(10^{-8}\) in practice (e.g., in PyTorch's default setting). Furthermore, compared to their setting, our setting is more challenging, as a non-zero \(\lambda\) immediately provides an upper bound on the adaptive learning rate.

### 4.2 Proof Sketch of Theorem 1

In this section, we demonstrate the proof idea of Theorem 1. Generally speaking, our proof is inspired by (i) the construction of the Lyapunov function for SGDM [22] and (ii) the construction of the auxiliary function and the conversion from the regret bound to the gradient bound for AdaGrad [31], but the adaptation of these techniques to Adam is highly non-trivial, as SGDM does not have an adaptive learning rate, and the adaptive learning rate of AdaGrad is monotonically decreasing. Below we sketch the proof by identifying three key challenges and providing our solutions to each.

**Challenge I: Disentangle the stochasticity in the stochastic gradient and the adaptive learning rate.** For simplicity, let us first consider the case where \(\beta_{1}=0\), i.e., where the momentum \(\mathbf{m}_{t}\) degenerates to the stochastic gradient \(\mathbf{g}_{t}\). According to the standard descent lemma, we have that

\[\begin{split}\mathbb{E}f(\mathbf{w}_{t+1})&\leq\mathbb{E}f(\mathbf{w}_{t})+\mathbb{E}\left[\left\langle\mathbf{G}_{t},\mathbf{w}_{t+1}-\mathbf{w}_{t}\right\rangle+\frac{L}{2}\left\|\mathbf{w}_{t+1}-\mathbf{w}_{t}\right\|^{2}\right]\\ &\leq\mathbb{E}f(\mathbf{w}_{t})+\underbrace{\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{g}_{t}\right\rangle\right]}_{\text{First Order}}+\underbrace{\frac{L}{2}\eta^{2}\mathbb{E}\left\|\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{m}_{t}\right\|^{2}}_{\text{Second Order}}\end{split} \tag{3}\]

The first challenge arises from bounding the "First Order" term above. To facilitate the understanding of the difficulty, we compare the "First Order" term of Adam to the corresponding "First Order" term of SGD, i.e., \(-\eta\mathbb{E}\langle\mathbf{G}_{t},\mathbf{g}_{t}\rangle\). By directly applying \(\mathbb{E}^{|\mathcal{F}_{t}}[\mathbf{g}_{t}]=\mathbf{G}_{t}\), we obtain that the "First Order" term of SGD equals \(-\eta\mathbb{E}\|\mathbf{G}_{t}\|^{2}\). However, as for Adam, we do not even know what \(\mathbb{E}^{|\mathcal{F}_{t}}[\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{g}_{t}]\) is, given that the stochasticity in \(\mathbf{g}_{t}\) and \(\mathbf{\nu}_{t}\) is entangled. A common practice is to use a _surrogate adaptive learning rate_ \(\widetilde{\mathbf{\nu}}_{t}\), measurable with respect to \(\mathcal{F}_{t}\), to approximate the real adaptive learning rate \(\mathbf{\nu}_{t}\). This leads to the following equation:

\[\underbrace{\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{g}_{t}\right\rangle\right]}_{\text{First Order}}=\underbrace{\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\eta\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{g}_{t}\right\rangle\right]}_{\text{First Order Main}}+\underbrace{\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t}}}-\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\right)\odot\mathbf{g}_{t}\right\rangle\right]}_{\text{Error}}.\]

One can immediately see that the "First Order Main" term equals \(\mathbb{E}[\langle\mathbf{G}_{t},-\eta\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\rangle]<0\), but now we need to handle the "Error" term. In the existing literature, such a term is mostly bypassed by applying the bounded gradient assumption [10; 36], which, however, we do not assume.
**Solution to Challenge I.** Inspired by recent advances in the analysis of AdaGrad [31], we consider the auxiliary function \(\xi_{t}=\mathbb{E}[\eta\langle\mathbf{G}_{t},-\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t+1}}}\odot\mathbf{G}_{t}\rangle]\), where we choose \(\widetilde{\mathbf{\nu}}_{t}=\beta_{2}\mathbf{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}\mathds{1}_{d}\). In the following lemma, we show that the error term can be controlled using \(\xi_{t}\), parallel to Lemma 4 of [31]. **Lemma 1** (Informal version of Lemma 7 with \(\beta_{1}=0\)).: _Let all conditions in Theorem 1 hold. Then,_ \[\begin{split}Error\leq\frac{5}{8}\mathbb{E}\left[\eta\left\langle\mathbf{G}_{t},-\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\rangle\right]+\mathcal{O}\left(\frac{1}{\sqrt{\beta_{2}}}\xi_{t-1}-\xi_{t}\right)+\text{Small Error}.\end{split} \tag{4}\] On the right-hand side of inequality (4), one can easily observe that the first term can be controlled by the "First Order Main" term, and the third term is as small as the "Second Order" term. However, the second term is problematic: in the analysis of AdaGrad [31], there is no \(1/\sqrt{\beta_{2}}\) factor, so the corresponding term telescopes, but this is no longer true due to the existence of the \(1/\sqrt{\beta_{2}}\) factor. We resolve this difficulty by looking at the sum of \(\frac{1}{\sqrt{\beta_{2}}}\xi_{t-1}-\xi_{t}\) over \(t\) from \(1\) to \(T\), which gives \(\mathcal{O}((1-\beta_{2})\sum_{t=1}^{T-1}\xi_{t})\). By further noticing that \(\widetilde{\mathbf{\nu}}_{t+1}\geq\beta_{2}\widetilde{\mathbf{\nu}}_{t}\), we have \[\sum_{t=1}^{T}\left(\frac{1}{\sqrt{\beta_{2}}}\xi_{t-1}-\xi_{t}\right)\leq\mathcal{O}\left((1-\beta_{2})\sum_{t=1}^{T-1}\mathbb{E}\left[\eta\left\langle\mathbf{G}_{t},-\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\rangle\right]\right).\] The right-hand-side term can thus be controlled by the "First Order Main" term when \(\beta_{2}\) is close to \(1\). **Remark 1**.: _Compared to the analysis of AdaGrad in [31], our proof technique is novel in two respects. First, our auxiliary function has an additional \((1-\beta_{2})\sigma_{0}^{2}\mathds{1}_{d}\) term, which is necessary for the analysis of Adam as it keeps \(\widetilde{\mathbf{\nu}}_{t}\) bounded away from \(0\) (AdaGrad does not need this, as \(\mathbf{\nu}_{t-1}\) of AdaGrad is itself lower bounded). Secondly, as discussed above, the "AdaGrad version" of the second term on the right-hand side of inequality (4) telescopes, so its sum can be bounded straightforwardly._ **Challenge II: Handle the mismatch between the stochastic gradient and the momentum.** In the analysis above, we assume \(\beta_{1}=0\). Additional challenges arise when we move to the case where \(\beta_{1}\neq 0\). Specifically, following the same routine, the "First Order Main" term now becomes \(\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\eta\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{m}_{t}\right\rangle\right]\). It is hard to even estimate whether such a term is negative or not, given that \(\mathbf{m}_{t}\) and \(\widetilde{\mathbf{\nu}}_{t}\) still have entangled stochasticity, and the conditional expectation of \(\mathbf{m}_{t}\) also differs from \(\mathbf{G}_{t}\), both due to the presence of historical gradients.
**Solution to Challenge II.** Inspired by the state-of-the-art analysis of SGDM [22], which leverages the potential function \(f(v_{t})\) with \(v_{t}=\frac{\mathbf{w}_{t}-\beta\mathbf{w}_{t-1}}{1-\beta}\), we propose to use the potential function \(f(\mathbf{u}_{t})\) with \(\mathbf{u}_{t}=\frac{\mathbf{w}_{t}-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\mathbf{w}_{t-1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\). Applying the descent lemma to \(f(\mathbf{u}_{t})\), we obtain that \[\mathbb{E}[f(\mathbf{u}_{t+1})]\leq\mathbb{E}f(\mathbf{u}_{t})+\underbrace{\mathbb{E}\left[\langle\nabla f(\mathbf{u}_{t}),\mathbf{u}_{t+1}-\mathbf{u}_{t}\rangle\right]}_{\text{First Order}}+\underbrace{\frac{L}{2}\mathbb{E}\left\|\mathbf{u}_{t+1}-\mathbf{u}_{t}\right\|^{2}}_{\text{Second Order}}. \tag{5}\] We again focus on the "First Order" term, which can be written as \[\mathbb{E}\left[\langle\nabla f(\mathbf{u}_{t}),\mathbf{u}_{t+1}-\mathbf{u}_{t}\rangle\right]= \mathbb{E}\left[\left\langle\nabla f(\mathbf{u}_{t}),\frac{\mathbf{w}_{t+1}-\mathbf{w}_{t}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\frac{\mathbf{w}_{t}-\mathbf{w}_{t-1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right\rangle\right]\] \[\overset{(*)}{\approx} \mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{m}_{t}+\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{\beta_{1}}{\sqrt{\beta_{2}\mathbf{\nu}_{t-1}}}\odot\mathbf{m}_{t-1}\right\rangle\right]\] \[\overset{(\circ)}{\approx} \mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{m}_{t}+\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{\beta_{1}}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{m}_{t-1}\right\rangle\right]\] \[= \mathbb{E}\left[\left\langle\mathbf{G}_{t},-\frac{\eta(1-\beta_{1})}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{g}_{t}\right\rangle\right]=\mathbb{E}\left[\left\langle\mathbf{G}_{t},-\frac{\eta(1-\beta_{1})}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\rangle\right].\] Here the approximate equation \((*)\) is due to Assumption 1 and the fact that \(\mathbf{w}_{t}\) is close to \(\mathbf{u}_{t}\), and the approximate equation \((\circ)\) is due to Lemma 1 and \(\widetilde{\mathbf{\nu}}_{t}=\beta_{2}\mathbf{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}\mathds{1}_{d}\approx\beta_{2}\mathbf{\nu}_{t-1}\) (of course, these are informal statements; please refer to Appendix C for the detailed proof). With the above methodology, we arrive at the following lemma. **Lemma 2** (Informal Version of Lemma 8).: _Let all conditions in Theorem 1 hold. Then,_ \[\mathbb{E}f(\mathbf{u}_{t+1})\leq\mathbb{E}f(\mathbf{u}_{t})-\Omega\left(\mathbb{E}\left[\eta\left\langle\mathbf{G}_{t},\frac{1}{\sqrt{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\rangle\right]\right)+\mathcal{O}\left(\frac{1}{\sqrt{\beta_{2}}}\xi_{t-1}-\xi_{t}\right)+\text{Small Error}.\] Summing the above lemma over \(t\) from \(1\) to \(T\), we obtain \[\sum_{t=1}^{T}\mathbb{E}\left[\left\|\frac{1}{\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\|^{2}\right]\leq\mathcal{O}(1)+\sum_{i=1}^{d}\mathcal{O}\left(\mathbb{E}\ln\left(\frac{\mathbf{\nu}_{T,i}}{\mathbf{\nu}_{0,i}}\right)-T\ln\beta_{2}\right). \tag{6}\] We then encounter the third challenge. **Challenge III: Convert Eq.
(6) to a bound on the gradient norm.** Although we have derived a regret bound, i.e., a bound on \(\sum_{t=1}^{T}\mathbb{E}[\|\frac{1}{\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\|^{2}]\), we need to convert it into a bound on \(\mathbb{E}[\|\mathbf{G}_{t}\|^{2}]\). In existing works [10, 16] which assume bounded gradients, such a conversion is straightforward because (their version of) \(\widetilde{\mathbf{\nu}}_{t}\) is upper bounded. However, we do not assume bounded gradients, and \(\widetilde{\mathbf{\nu}}_{t}\) can be arbitrarily large, making \(\mathbb{E}[\|\frac{1}{\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\|^{2}]\) arbitrarily smaller than \(\mathbb{E}[\|\mathbf{G}_{t}\|^{2}]\). **Solution to Challenge III.** As this part involves coordinate-wise analysis, we define \(\mathbf{g}_{t,i}\), \(\mathbf{G}_{t,i}\), \(\mathbf{\nu}_{t,i}\), and \(\widetilde{\mathbf{\nu}}_{t,i}\) respectively as the \(i\)-th coordinate of \(\mathbf{g}_{t}\), \(\mathbf{G}_{t}\), \(\mathbf{\nu}_{t}\), and \(\widetilde{\mathbf{\nu}}_{t}\). To begin with, note that due to Cauchy's inequality and Hölder's inequality, \[\left(\mathbb{E}\sum_{t=1}^{T}\|\mathbf{G}_{t}\|\right)^{2}\leq\left(\sum_{t=1}^{T}\mathbb{E}\left[\left\|\frac{1}{\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\right\|^{2}\right]\right)\left(\sum_{t=1}^{T}\mathbb{E}\left[\left\|\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}\right\|^{2}\right]\right). \tag{7}\] Therefore, we only need to derive an upper bound on \(\sum_{t=1}^{T}\mathbb{E}[\|\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}\|^{2}]\), which is achieved by the following divide-and-conquer methodology. Firstly, when \(|\mathbf{G}_{t,i}|\geq\frac{\sigma_{0}}{\sigma_{1}}\), we can show \(2\mathbb{E}^{|\mathcal{F}_{t}}|\mathbf{g}_{t,i}|^{2}\geq 2|\mathbf{G}_{t,i}|^{2}\geq\mathbb{E}^{|\mathcal{F}_{t}}|\mathbf{g}_{t,i}|^{2}\). Then, through a direct calculation, we obtain that \[\mathbb{E}\left[\frac{|\mathbf{G}_{t,i}|^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{t,i}}}\mathbf{1}_{|\mathbf{G}_{t,i}|\geq\frac{\sigma_{0}}{\sigma_{1}}}\right]\geq\frac{\sqrt{\beta_{2}}}{3(1-\beta_{2})\sigma_{1}^{2}}\mathbb{E}\left[\left(\sqrt{\widetilde{\mathbf{\nu}}_{t+1,i}}-\sqrt{\beta_{2}\widetilde{\mathbf{\nu}}_{t,i}}\right)\mathbf{1}_{|\mathbf{G}_{t,i}|\geq\frac{\sigma_{0}}{\sigma_{1}}}\right],\] and thus \[\sum_{t=1}^{T}\mathbb{E}\left[\frac{|\mathbf{G}_{t,i}|^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{t,i}}}\right]\geq\frac{\sqrt{\beta_{2}}}{3(1-\beta_{2})\sigma_{1}^{2}}\sum_{t=1}^{T}\mathbb{E}\left[\left(\sqrt{\widetilde{\mathbf{\nu}}_{t+1,i}}-\sqrt{\beta_{2}\widetilde{\mathbf{\nu}}_{t,i}}\right)\mathbf{1}_{|\mathbf{G}_{t,i}|\geq\frac{\sigma_{0}}{\sigma_{1}}}\right].\] Secondly, when \(|\mathbf{G}_{t,i}|<\frac{\sigma_{0}}{\sigma_{1}}\), define \(\{\bar{\mathbf{\nu}}_{t,i}\}_{t=0}^{\infty}\) as \(\bar{\mathbf{\nu}}_{0,i}=\mathbf{\nu}_{0,i}\), \(\bar{\mathbf{\nu}}_{t,i}=\bar{\mathbf{\nu}}_{t-1,i}+|\mathbf{g}_{t,i}|^{2}\mathbf{1}_{|\mathbf{G}_{t,i}|<\frac{\sigma_{0}}{\sigma_{1}}}\).
One can easily observe that \(\bar{\mathbf{\nu}}_{t,i}\leq\mathbf{\nu}_{t,i}\), and thus \[\sum_{t=1}^{T}\mathbb{E}\left[\left(\sqrt{\widetilde{\mathbf{\nu}}_{t+1,i}}-\sqrt{\beta_{2}\widetilde{\mathbf{\nu}}_{t,i}}\right)\mathbf{1}_{|\mathbf{G}_{t,i}|<\frac{\sigma_{0}}{\sigma_{1}}}\right]\] \[\leq \sum_{t=1}^{T}\mathbb{E}\left(\sqrt{\beta_{2}\bar{\mathbf{\nu}}_{t,i}+(1-\beta_{2})\sigma_{0}^{2}}-\sqrt{\beta_{2}(\beta_{2}\bar{\mathbf{\nu}}_{t-1,i}+(1-\beta_{2})\sigma_{0}^{2})}\right)\] \[= \mathbb{E}\sqrt{\beta_{2}\bar{\mathbf{\nu}}_{T,i}+(1-\beta_{2})\sigma_{0}^{2}}+(1-\sqrt{\beta_{2}})\sum_{t=1}^{T-1}\mathbb{E}\sqrt{\beta_{2}\bar{\mathbf{\nu}}_{t,i}+(1-\beta_{2})\sigma_{0}^{2}}-\mathbb{E}\sqrt{\beta_{2}(\beta_{2}\bar{\mathbf{\nu}}_{0,i}+(1-\beta_{2})\sigma_{0}^{2})}.\] Putting the above two estimations together, we derive that \[(1-\sqrt{\beta_{2}})\sum_{t=1}^{T+1}\mathbb{E}\sqrt{\widetilde{\mathbf{\nu}}_{t,i}}\leq\frac{3(1-\beta_{2})\sigma_{1}^{2}}{\sqrt{\beta_{2}}}\sum_{t=2}^{T}\mathbb{E}\left[\frac{|\mathbf{G}_{t,i}|^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{t,i}}}\right]+(1-\sqrt{\beta_{2}})(T+1)\sqrt{\sigma_{0}^{2}+\mathbf{\nu}_{0,i}}.\] The above methodology can be summarized as the following lemma. **Lemma 3**.: _Let all conditions in Theorem 1 hold. Then,_ \[\sum_{t=1}^{T+1}\sum_{i=1}^{d}\mathbb{E}\sqrt{\widetilde{\mathbf{\nu}}_{t,i}}\leq 2(T+1)\sum_{i=1}^{d}\sqrt{\mathbf{\nu}_{0,i}+\sigma_{0}^{2}}+24d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}\ln d\frac{\sigma_{1}^{2}C_{1}}{\sqrt{\beta_{2}}}+C_{2}.\] Based on Lemma 3, we can derive the estimation of \(\sum_{t=1}^{T}\mathbb{E}[\|\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}\|^{2}]\), since \(\widetilde{\mathbf{\nu}}_{t}\) is close to \(\mathbf{\nu}_{t}\). The proof is then completed by combining this with the estimation of \(\sum_{t=1}^{T}\mathbb{E}[\|\frac{1}{\sqrt[4]{\widetilde{\mathbf{\nu}}_{t}}}\odot\mathbf{G}_{t}\|^{2}]\) (Eq. (6)) via Eq. (7). ## 5 Gap-closing upper bound on the iteration complexity of Adam In this section, based on a refined proof of Stage II of Theorem 1 (see Appendix C) under the specific case \(\eta=\Theta(1/\sqrt{T})\) and \(\beta_{2}=1-\Theta(1/T)\), we show that the logarithmic factor in Theorem 1 can be removed and the lower bound can be achieved. Specifically, we have the following theorem. **Theorem 2**.: _Let Assumption 1 and Assumption 2 hold. Then, select the hyperparameters of Adam as \(\eta=\frac{a}{\sqrt{T}}\), \(\beta_{2}=1-\frac{b}{T}\) and \(\beta_{1}=c\sqrt{\beta_{2}}\), where \(a,b>0\) and \(0\leq c<1\) are independent of \(T\).
Then, let \(\mathbf{w}_{\tau}\) be the output of Adam in Algorithm 1, and we have_ \[\mathbb{E}\|\nabla f(\mathbf{w}_{\tau})\|\leq \sqrt{2\sum_{i=1}^{d}\sqrt{\mathbf{\nu}_{0,i}+3b\sigma_{0}^{2}}+\frac{4D_{2}\sigma_{1}^{2}b}{\sqrt{T}}+\frac{256\sigma_{1}^{2}b}{(1-c)^{2}T}\sum_{i=1}^{d}\mathbb{E}\frac{\mathbf{G}_{1,i}^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{1,i}}}+\frac{16D_{1}\sigma_{1}^{2}b}{\sqrt{T}}\ln\left(e+\frac{4\bar{D}\sigma_{1}^{2}b}{\sqrt{T}}\right)}\] \[\times\sqrt{\frac{2D_{1}}{\sqrt{T}}\sum_{i=1}^{d}\ln\left(2\sum_{i=1}^{d}\sqrt{\mathbf{\nu}_{0,i}+3b\sigma_{0}^{2}}+\frac{4D_{2}\sigma_{1}^{2}b}{\sqrt{T}}+\frac{256\sigma_{1}^{2}b}{(1-c)^{2}T}\sum_{i=1}^{d}\mathbb{E}\frac{\mathbf{G}_{1,i}^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{1,i}}}+\frac{16D_{1}\sigma_{1}^{2}b}{\sqrt{T}}\ln\left(e+\frac{4\bar{D}\sigma_{1}^{2}b}{\sqrt{T}}\right)\right)+\frac{64}{(1-c)^{2}T}\sum_{i=1}^{d}\mathbb{E}\frac{\mathbf{G}_{1,i}^{2}}{\sqrt{\widetilde{\mathbf{\nu}}_{1,i}}}+\frac{D_{2}}{\sqrt{T}}},\] _where_ \[D_{1}\triangleq\frac{32La}{b}\frac{(1+c)^{3}}{(1-c)^{3}}+\frac{32\sigma_{0}}{\sqrt{b}(1-c)^{3}}+\frac{(1+\sigma_{1}^{2})\sigma_{1}^{2}L^{2}da^{2}}{(1-c)^{4}\sigma_{0}\sqrt{b^{3}}},\quad D_{2}\triangleq\frac{8}{a}f(\mathbf{u}_{1})+D_{1}\left(bd-\sum_{i=1}^{d}\ln\mathbf{\nu}_{0,i}\right).\] _As a result, let \(\mathbf{A}\) be Adam in Algorithm 1; we have \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})=\mathcal{O}(\frac{1}{\varepsilon^{4}})\)._ The proof of Theorem 2 is based on a refined solution of Challenge II in the proof of Theorem 1 under the specific hyperparameter settings, and we defer the concrete proof to Appendix D. Below we discuss Theorem 2, comparing it with practice, with Theorem 1 and existing convergence rates of Adam, and with the convergence rate of AdaGrad. **Alignment with the practical hyperparameter choice.** The hyperparameter setting in Theorem 2 indicates that to achieve the lower bound of the iteration complexity, we need to select a small \(\eta\) and a close-to-\(1\) \(\beta_{2}\), with a weaker requirement on \(\beta_{1}\). This agrees with the hyperparameter setting in deep learning libraries, for example, \(\eta=10^{-3}\), \(\beta_{2}=0.999\), and \(\beta_{1}=0.9\) in PyTorch. **Comparison with Theorem 1 and existing works.** To the best of our knowledge, Theorem 2 is the first to derive the iteration complexity \(\mathcal{O}(\frac{1}{\varepsilon^{4}})\). Previously, the state-of-the-art iteration complexity was \(\mathcal{O}(\frac{\log 1/\varepsilon}{\varepsilon^{4}})\) [10], where they additionally assume bounded gradients. Theorem 2 is also tighter than Theorem 1 (while Theorem 1 holds for more general hyperparameter settings). As discussed in Section 4.1, if applying the hyperparameter setting of Theorem 2 (i.e., \(\eta=\frac{a}{\sqrt{T}}\), \(\beta_{2}=1-\frac{b}{T}\) and \(\beta_{1}=c\sqrt{\beta_{2}}\)) to Theorem 1, we will obtain that \(\mathbb{E}\|\nabla f(\mathbf{w}_{\tau})\|\leq\mathcal{O}(\mathrm{poly}(\log T)/\sqrt[4]{T})\) and \(\mathcal{C}_{\varepsilon}(\mathbf{A},\Delta,L,\sigma^{2})=\mathcal{O}(\frac{\log 1/\varepsilon}{\varepsilon^{4}})\), which is worse than the upper bound in Theorem 2 and the lower bound in Proposition 1 by a logarithmic factor. **Comparison with AdaGrad.** AdaGrad [12] is another popular adaptive optimizer. Under Assumptions 1 and 2, the state-of-the-art iteration complexity of AdaGrad is \(\mathcal{O}(\frac{\log 1/\varepsilon}{\varepsilon^{4}})\) [13], which is worse than that of Adam by a logarithmic factor.
Here we show that such a gap may not be due to limitations of the analysis; it can be explained by analogizing the relationship between AdaGrad and momentum-free Adam to that between SGD with a diminishing learning rate and SGD with a constant learning rate. To start with, the update rule of AdaGrad is given as \[\mathbf{\nu}_{t}=\mathbf{\nu}_{t-1}+\mathbf{g}_{t}^{\odot 2},\quad\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{g}_{t}. \tag{8}\] We first show that in Algorithm 1, if we allow the hyperparameters to be dynamic, i.e., \[\mathbf{\nu}_{t}=\beta_{2,t}\mathbf{\nu}_{t-1}+(1-\beta_{2,t})\mathbf{g}_{t}^{\odot 2},\quad\mathbf{m}_{t}=\beta_{1,t}\mathbf{m}_{t-1}+(1-\beta_{1,t})\mathbf{g}_{t},\quad\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\odot\mathbf{m}_{t}, \tag{9}\] then Adam is equivalent to AdaGrad by setting \(\eta_{t}=\frac{\eta}{\sqrt{t}}\), \(\beta_{1,t}=0\), and \(\beta_{2,t}=1-\frac{1}{t}\). Specifically, by setting \(\mathbf{\mu}_{t}=t\mathbf{\nu}_{t}\) in Eq. (9), one can verify that Eq. (9) is equivalent to Eq. (8) (with \(\mathbf{\nu}_{t}\) replaced by \(\mathbf{\mu}_{t}\) in Eq. (8)); see the numerical sketch below. Comparing the above hyperparameter setting with that in Theorem 2, we see that the above hyperparameter setting can be obtained by changing \(T\) to \(t\) and setting \(c=0\) in Theorem 2. This is similar to the relationship between SGD with diminishing learning rate \(\Theta(1/\sqrt{t})\) and SGD with constant learning rate \(\Theta(1/\sqrt{T})\). Recall that the iteration complexity of SGD with diminishing learning rate \(\Theta(1/\sqrt{t})\) also has an additional logarithmic factor compared to SGD with constant learning rate, which may explain the gap between AdaGrad and Adam. ## 6 Limitations Although our work provides the first result closing the gap between the upper and lower bounds of the iteration complexity of Adam, there are several limitations, listed as follows: **Dependence over the dimension \(d\).** The bounds in Theorem 1 and Theorem 2 are monotonically increasing with respect to \(d\). This is undesired, since the upper bound of the iteration complexity of SGD is invariant with respect to \(d\). Nevertheless, removing such a dependence on \(d\) is technically hard, since we need to deal with every coordinate separately due to the coordinate-wise learning rate, while the descent lemma does not hold for a single coordinate but combines all coordinates together. To the best of our knowledge, all existing works on the convergence of Adam suffer from the same problem. We leave removing the dependence on \(d\) as important future work. **No better result with momentum.** It can be observed that in Theorem 1 and Theorem 2, the tightest bound is achieved when \(\beta_{1}=0\) (i.e., no momentum is applied). This contradicts the common wisdom that momentum helps to accelerate. Although the benefit of momentum is not very clear even for the simpler optimizer SGD with momentum, we view this as a limitation of our work and defer proving the benefit of momentum in Adam to future work. Also, our result does not imply that setting \(\beta_{1}\) is less critical than setting \(\beta_{2}\). The primary objective of this paper is to characterize the dependence on \(\varepsilon\), and the importance of setting \(\beta_{1}\) might be justified in other ways or characterizations. To help readers gain a deeper understanding of this issue, we include experiments illustrating the dependence of performance on \(\beta_{1}\) in Appendix E.
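As a minimal numerical illustration of the correspondence between AdaGrad and momentum-free Adam discussed in Section 5 (our own sketch, not part of the paper's formal results; it feeds a fixed synthetic gradient sequence into both update rules rather than differentiating an actual loss), the following script checks that Adam with \(\eta_{t}=\eta/\sqrt{t}\), \(\beta_{1,t}=0\), and \(\beta_{2,t}=1-1/t\) reproduces the AdaGrad iterates of Eq. (8) exactly, via the substitution \(\mathbf{\mu}_{t}=t\mathbf{\nu}_{t}\).

```python
# Sketch (ours): Adam with the dynamic schedule eta_t = eta / sqrt(t),
# beta_{1,t} = 0, beta_{2,t} = 1 - 1/t matches AdaGrad, Eq. (8), because
# mu_t = t * nu_t satisfies mu_t = mu_{t-1} + g_t^2 and
# eta_t / sqrt(nu_t) = eta / sqrt(mu_t).
import random

eta, T = 0.1, 50
random.seed(0)
grads = [random.gauss(0.0, 1.0) for _ in range(T)]  # synthetic gradient stream

# AdaGrad, Eq. (8)
w_ada, mu, ada_iterates = 1.0, 0.0, []
for g in grads:
    mu += g * g
    w_ada -= eta / mu ** 0.5 * g
    ada_iterates.append(w_ada)

# Adam with dynamic hyperparameters, Eq. (9)
w_adam, nu, adam_iterates = 1.0, 0.0, []
for t, g in enumerate(grads, start=1):
    beta2_t = 1.0 - 1.0 / t
    nu = beta2_t * nu + (1.0 - beta2_t) * g * g  # equals mu_t / t
    w_adam -= (eta / t ** 0.5) / nu ** 0.5 * g   # beta_1 = 0, so m_t = g_t
    adam_iterates.append(w_adam)

assert all(abs(x - y) < 1e-9 for x, y in zip(ada_iterates, adam_iterates))
print("Adam with the dynamic schedule reproduces AdaGrad.")
```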
## Acknowledgments and Disclosure of Funding This work was funded by the CAS Project for Young Scientists in Basic Research under Grant No. YSBR-034 and the Innovation Funding of ICT, CAS under Grant No.E000000.
2305.13948
Decoupled Kullback-Leibler Divergence Loss
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss that consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels. From our analysis of the DKL loss, we have identified two areas for improvement. Firstly, we address the limitation of DKL in scenarios like knowledge distillation by breaking its asymmetry property in training optimization. This modification ensures that the wMSE component is always effective during training, providing extra constructive cues. Secondly, we introduce global information into DKL for intra-class consistency regularization. With these two enhancements, we derive the Improved Kullback-Leibler (IKL) Divergence loss and evaluate its effectiveness by conducting experiments on CIFAR-10/100 and ImageNet datasets, focusing on adversarial training and knowledge distillation tasks. The proposed approach achieves new state-of-the-art performance on both tasks, demonstrating its substantial practical merits. Code and models will be available soon at https://github.com/jiequancui/DKL.
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang
2023-05-23T11:17:45Z
http://arxiv.org/abs/2305.13948v1
# Decoupled Kullback-Leibler Divergence Loss ###### Abstract In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss that consists of 1) a weighted Mean Square Error (\(w\)MSE) loss and 2) a Cross-Entropy loss incorporating soft labels. From our analysis of the DKL loss, we have identified two areas for improvement. Firstly, we address the limitation of DKL in scenarios like knowledge distillation by breaking its asymmetry property in training optimization. This modification ensures that the \(w\)MSE component is always effective during training, providing extra constructive cues. Secondly, we introduce global information into DKL for intra-class consistency regularization. With these two enhancements, we derive the Improved Kullback-Leibler (IKL) Divergence loss and evaluate its effectiveness by conducting experiments on CIFAR-10/100 and ImageNet datasets, focusing on adversarial training and knowledge distillation tasks. The proposed approach achieves new state-of-the-art performance on both tasks, demonstrating the substantial practical merits. Code and models will be available soon at [https://github.com/jiequancui/DKL](https://github.com/jiequancui/DKL). ## 1 Introduction Loss functions are a critical component of training deep models. Cross-Entropy loss is particularly important in image classification tasks [1; 2; 3; 4; 5], while Mean Square Error (MSE) loss is commonly used in regression tasks [6; 7; 8]. Contrastive loss [9; 10; 11; 12; 13; 14; 15] has emerged as a popular objective for representation learning. The selection of an appropriate loss function can exert a substantial influence on a model's performance. Therefore, the development of effective loss functions [16; 17; 18; 19; 20; 21; 22; 23] remains a critical research topic in the fields of computer vision and machine learning. Kullback-Leibler (KL) Divergence quantifies the degree of dissimilarity between a probability distribution and a reference distribution. As one of the most frequently used loss functions, it finds application in various scenarios, such as adversarial training [24; 25; 26; 27], knowledge distillation [28; 29; 18], incremental learning [30; 31], and robustness on out-of-distribution data [32]. Although many of these studies incorporate KL Divergence loss as part of their algorithms, they may not thoroughly investigate the underlying mechanisms of the loss function. To address this issue, our paper aims to elucidate the working mechanism of KL Divergence during training optimization. Our study focuses on the Kullback-Leibler (KL) Divergence loss from the perspective of gradient optimization. We provide theoretical proof that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss, which comprises a weighted Mean Square Error (\(w\)MSE) loss and a Cross-Entropy loss with soft labels. We have identified potential issues with the DKL loss. Specifically, its gradient optimization is asymmetric with respect to inputs, which can lead to the weighted MSE (\(w\)MSE) component being ignored in certain scenarios, such as knowledge distillation. Fortunately, it is convenient to address this issue with the formulation of DKL by breaking the asymmetry property. Moreover, global information is used to regularize the training process as a holistic categorical distribution prior. Combining DKL with these two points, we derive the Improved Kullback-Leibler (IKL) Divergence loss. Fig.
1 presents a clear visual comparison of the KL, DKL, and IKL loss functions. To demonstrate the effectiveness of our proposed IKL loss, we evaluate it in adversarial training and knowledge distillation tasks. Our experimental results on CIFAR-10/100 and ImageNet show that the IKL loss achieves new state-of-the-art performance on both tasks. In summary, the main contributions of our work are: * Our study provides insights into the Kullback-Leibler (KL) Divergence loss by analyzing its gradient optimization properties. In doing so, we reveal that it is mathematically equivalent to a combination of a weighted Mean Square Error (\(w\)MSE) loss and a Cross-Entropy loss with soft labels. * After analyzing the Decoupled Kullback-Leibler (DKL) Divergence loss, we propose two modifications for enhancement: addressing its asymmetry property and incorporating global information. The derived Improved Kullback-Leibler (IKL) Divergence loss demonstrates improved performance. * By utilizing the IKL loss for adversarial training and knowledge distillation, we obtain state-of-the-art results for both tasks on CIFAR-10/100 and ImageNet. ## 2 Related Work Adversarial Robustness. Since the identification of adversarial examples by Szegedy et al. [33], the security of deep neural networks (DNNs) has gained significant attention, and ensuring the reliability of DNNs has become a prominent topic in the deep learning community. Numerous algorithms have been developed to defend against adversarial attacks. However, as highlighted by Athalye et al. [34], methods relying on obfuscated gradients can create a deceptive sense of security and remain vulnerable to strong attacks such as auto-attack [35]. Adversarial training [36], being the most effective method, stands out due to its consistently high performance. Adversarial training incorporates adversarial examples into the training process. Madry et al. [36] propose the adoption of the universal first-order adversary, specifically the PGD attack, in adversarial training. Zhang et al. [24] enhance model robustness by utilizing the Kullback-Leibler (KL) Divergence loss based on their theoretical analysis. Wu et al. [25] introduce adversarial weight perturbation to explicitly regulate the flatness of the weight loss landscape. Cui et al. [26] leverage guidance from naturally-trained models to regularize the decision boundary in adversarial training. Additionally, various other techniques [27] focusing on optimization or training aspects have also been developed. Figure 1: **The proposed DKL and IKL losses.** \(\mathcal{M}\) and \(\mathcal{N}\) can be the same model or two separate models, determined by application scenarios. Similarly, \(x_{m}\), \(x_{n}\in\mathcal{X}\) can also be the same image or two different images. \(o_{m}\), \(o_{n}\) are logits outputs, from which the probability vectors can be obtained by applying the _Softmax_ activation. Black arrows represent the forward process while colored arrows indicate the gradient backpropagation flows driven by the corresponding loss functions in the same color. “\(wMSE\)” is a weighted MSE loss. “\(\bar{w}MSE\)” incorporates global information. In recent years, several works have explored the use of data augmentation techniques to improve adversarial training. Gowal et al. [37] have shown that synthesized images using generative models can enhance adversarial training and improve robustness against adversarial attacks. Wang et al.
[38] have demonstrated that stronger robustness can be achieved by utilizing better generative models such as the popular diffusion model [39], resulting in new state-of-the-art adversarial robustness. Additionally, Addepalli et al. [40] have made it feasible to incorporate general augmentation techniques for image classification, such as AutoAugment [41] and CutMix [42], into adversarial training. We have explored the mechanism of the KL loss for adversarial robustness in this paper. The effectiveness of the proposed IKL loss is tested in both settings, with and without synthesized data. Knowledge distillation. The concept of Knowledge Distillation (KD) was first introduced by Hinton et al. [28]. It involves extracting "dark knowledge" from accurate teacher models to guide the learning process of student models, which often have lower capacity than their teachers. This is achieved by utilizing the Kullback-Leibler Divergence (KL) loss to regularize the output probabilities of student models, aligning them with those of their teacher models when given the same inputs. This simple yet effective technique significantly improves the generalization ability of smaller models and finds extensive applications in various domains. Since the initial success of KD [28], several advanced methods, including logits-based [43; 44; 45; 46; 47; 18] and features-based approaches [48; 49; 50; 51; 29; 52; 53; 54; 55; 56; 57], have been introduced. Logits-based methods extract only the logits output from teacher models. These methods are more general than features-based methods, as there is no requirement to know the teacher model architecture and only the logits output for each input is needed. Several advanced methods have been proposed, including mutual learning methods like DML [47], which train students and teachers simultaneously. Another approach, DKD [18], decomposes KD into target class knowledge distillation and non-target class knowledge distillation. Features-based methods explore taking advantage of intermediate layer features compared with logits-based methods. This kind of method usually requires knowing the architecture of teacher models. With such extra priors and feature information, features-based methods are expected to achieve higher performance, which can come with additional computation or storage costs. Works [50; 53; 48] directly transfer the representations of teacher models to student models. ReviewKD [29] distills knowledge from the integrated features of multiple layers in the teacher model. This paper decouples the KL loss into a new formulation, _i.e._, DKL, and addresses the limitation of the KL loss for application scenarios like knowledge distillation. With the improved version of DKL, _i.e._, the IKL loss, our models even surpass all previous features-based methods. ## 3 Method In this section, we detail the preliminary and our motivation in Sec. 3.1, and then discuss our Improved Kullback-Leibler (IKL) Divergence loss in Sec. 3.2. ### Preliminary and Motivation Revisiting Kullback-Leibler (KL) Divergence Loss. Kullback-Leibler (KL) Divergence measures the differences between two probability distributions. For distributions \(P\) and \(Q\) of a continuous random variable, it is defined to be the integral: \[D_{KL}(P||Q)=\int_{-\infty}^{+\infty}p(x)*\log\frac{p(x)}{q(x)}dx, \tag{1}\] where \(p\) and \(q\) denote the probability densities of \(P\) and \(Q\). KL loss is one of the most commonly used objectives in deep learning.
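As a concrete illustration (ours, not from the paper), the discrete counterpart of Eq. (1), which is the form applied to classifier outputs in Eq. (2) below, can be computed directly from two probability vectors:

```python
# A tiny illustration (ours): discrete KL divergence between two
# probability vectors, the form applied to Softmax outputs in this paper.
import torch

p = torch.tensor([0.7, 0.2, 0.1])
q = torch.tensor([0.5, 0.3, 0.2])
kl = (p * (p / q).log()).sum()
print(float(kl))  # ~0.0851 nats; 0 iff p == q, and asymmetric: KL(p||q) != KL(q||p)
```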
In this paper, we study the mechanism of the KL loss and test our Improved Kullback-Leibler (IKL) Divergence loss on adversarial training and knowledge distillation tasks. For adversarial training, to enhance model robustness, the KL loss regularizes the output probabilities of adversarial examples to be the same as those of their corresponding clean images. Knowledge distillation algorithms adopt the KL loss to let a student model mimic the behavior of a teacher model. With the knowledge transferred from the teacher, the student is expected to improve performance. Preliminaries. We consider image classification models that output predicted probability vectors with the _Softmax_ activation. Assume \(o_{i}\in\mathcal{R}^{C}\) is the logits output of one deep model with an image \(x_{i}\in\mathcal{X}\) as input, where \(C\) is the number of classes in the task. \(s_{i}\in\mathcal{R}^{C}\) is the predicted probability vector and \(s_{i}=\textit{Softmax}(o_{i})\). \(o_{i}^{j}\) and \(s_{i}^{j}\) are the values for the \(j\)-th class in \(o_{i}\) and \(s_{i}\) respectively. The KL loss is applied to make \(s_{m}\) and \(s_{n}\) similar in many scenarios, leading to the following objective, \[\mathcal{L}_{KL}(x_{m},x_{n})=\sum_{j=1}^{C}s_{m}^{j}*\log\frac{s_{m}^{j}}{s_{n}^{j}}. \tag{2}\] For instance, in adversarial training, \(x_{m}\) is a natural image, and \(x_{n}\) is the corresponding adversarial example of \(x_{m}\). In knowledge distillation, \(x_{m}\) and \(x_{n}\) indicate the same image and are fed into the teacher and student models separately. It is worth noting that \(s_{m}\) is untraceable because the teacher model is well-trained in advance and fixed in the distillation process. Motivation. Previous works [28; 18; 24; 26] incorporate the KL loss into their algorithms without exploring its inherent working mechanism. The objective of this paper is to uncover the driving force behind training optimization through an examination of the KL loss function. With the back-propagation rule, the gradients are as follows, \[\frac{\partial\mathcal{L}_{KL}}{\partial o_{m}^{j}} = \sum_{k=1}^{C}((\Delta m_{j,k}-\Delta n_{j,k})*(s_{m}^{k}*s_{m}^{j})), \tag{3}\] \[\frac{\partial\mathcal{L}_{KL}}{\partial o_{n}^{j}} = s_{m}^{j}*(s_{n}^{j}-1)+s_{n}^{j}*(1-s_{m}^{j}), \tag{4}\] where \(\Delta m_{j,k}=o_{m}^{j}-o_{m}^{k}\), and \(\Delta n_{j,k}=o_{n}^{j}-o_{n}^{k}\). Taking advantage of the gradient information, we introduce a novel formulation, the Decoupled Kullback-Leibler (DKL) Divergence loss, presented in Remark 1. The DKL loss is equivalent to the KL loss and proves to be a more analytically tractable alternative for further exploration and study. **Remark 1**: From the perspective of gradient optimization, the Kullback-Leibler (KL) Divergence loss is equivalent to the following Decoupled Kullback-Leibler (DKL) Divergence loss when \(\alpha=1\) and \(\beta=1\). \[\mathcal{L}_{DKL}(x_{m},x_{n})\!=\!\underbrace{\frac{\alpha}{4}\sum_{j=1}^{C}\sum_{k=1}^{C}((\Delta m_{j,k}-\mathcal{S}(\Delta n_{j,k}))^{2}*\mathcal{S}(w_{m}^{j,k}))}_{\textbf{weighted MSE ($w$MSE)}}\!-\!\underbrace{\beta\sum_{j=1}^{C}\mathcal{S}(s_{m}^{j})*\log s_{n}^{j}}_{\textbf{Cross-Entropy}}, \tag{5}\] where \(\mathcal{S}(\cdot)\) means the _stop gradients_ operation and \(w_{m}^{j,k}=s_{m}^{j}*s_{m}^{k}\). _Proof_ See Appendix. As demonstrated by Remark 1 and Eqs. (3) and (4), we can conclude the following key properties of KL and DKL. * DKL loss is equivalent to KL loss in terms of gradient optimization.
Thus, the KL loss can be decoupled into a weighted Mean Square Error (\(w\)MSE) loss and a Cross-Entropy loss incorporating soft labels. * Optimization is asymmetric for \(o_{m}\) and \(o_{n}\). The \(w\)MSE and Cross-Entropy losses in (5) are complementary and work together collaboratively. The asymmetry property can cause the \(w\)MSE to be neglected when \(o_{m}\) is untraceable, as in the knowledge distillation scenario discussed in Sec. 3.2. * The \(w_{m}^{j,k}\) in Eq. (5) is conditioned on the prediction of \(x_{m}\). Nevertheless, sample-wise predictions may be subject to significant variance, which may result in unstable training and challenging optimization problems. ### Improved Kullback-Leibler (IKL) Divergence Loss Based on the analysis in Sec. 3.1, we propose an Improved Kullback-Leibler (IKL) Divergence loss, \[\mathcal{L}_{IKL}(x_{m},x_{n})=\underbrace{\frac{\alpha}{4}\sum_{j=1}^{C}\sum_{k=1}^{C}((\Delta m_{j,k}-\Delta n_{j,k})^{2}*\mathcal{S}(\bar{w}_{y}^{j,k}))}_{\text{global weighted MSE ($\bar{w}$MSE)}}\underbrace{-\beta\sum_{j=1}^{C}\mathcal{S}(s_{m}^{j})*\log s_{n}^{j}}_{\text{Cross-Entropy}}, \tag{6}\] where \(y\) is the ground-truth label for \(x_{m}\) and \(\bar{w}_{y}\in\mathcal{R}^{C\times C}\) is the weights for class \(y\). Compared with DKL in Eq. (5), we make the following improvements: 1) **breaking the asymmetry property**; 2) **introducing global information**. The respective details are presented as follows. Breaking the asymmetry property. As shown in Eq. (5), the weighted MSE encourages \(o_{n}\) to be similar to \(o_{m}\) via second-order information, _i.e._, logit differences between any two classes. The cross-entropy loss guarantees that \(s_{n}\) can have the same predicted scores as \(s_{m}\). The two loss terms work together to make \(o_{n}\) and \(o_{m}\) similar both absolutely and relatively. Discarding either of them can lead to performance degradation. However, because of the asymmetry property of KL/DKL, an unexpected case may occur when \(s_{m}\) is detached from the gradient back-propagation, which is formulated as: \[\mathcal{L}_{DKL}(x_{m},x_{n})=\underbrace{\frac{\alpha}{4}\sum_{j=1}^{C}\sum_{k=1}^{C}((\mathcal{S}(\Delta m_{j,k})-\mathcal{S}(\Delta n_{j,k}))^{2}*\mathcal{S}(w_{m}^{j,k}))}_{\text{weighted MSE ($w$MSE)}}\underbrace{-\beta\sum_{j=1}^{C}\mathcal{S}(s_{m}^{j})*\log s_{n}^{j}}_{\text{Cross-Entropy}}, \tag{7}\] where \(\mathcal{S}(\cdot)\) means the _stop gradients_ operation and \(w_{m}^{j,k}=s_{m}^{j}*s_{m}^{k}\). As indicated by Eq. (7), the weighted MSE loss will have no effect on training optimization, since all components of \(w\)MSE are detached from gradient propagation, which can potentially hurt model performance. Knowledge distillation matches this case because the teacher model is fixed during distillation training. We address this issue by breaking the asymmetry property of KL/DKL, _i.e._, enabling the gradients through \(\Delta n_{j,k}\). The updated formulation becomes, \[\mathcal{L}_{DKL}(x_{m},x_{n})=\underbrace{\frac{\alpha}{4}\sum_{j=1}^{C}\sum_{k=1}^{C}((\mathcal{S}(\Delta m_{j,k})-\Delta n_{j,k})^{2}*\mathcal{S}(w_{m}^{j,k}))}_{\text{weighted MSE ($w$MSE)}}\underbrace{-\beta\sum_{j=1}^{C}\mathcal{S}(s_{m}^{j})*\log s_{n}^{j}}_{\text{Cross-Entropy}}, \tag{8}\] where \(\mathcal{S}(\cdot)\) means the _stop gradients_ operation and \(w_{m}^{j,k}=s_{m}^{j}*s_{m}^{k}\).
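To make the stop-gradient mechanics above concrete, here is a minimal PyTorch sketch (our illustration, not the authors' released implementation), with \(\alpha=\beta=1\) and `detach()` standing in for the stop-gradient operator \(\mathcal{S}(\cdot)\). It numerically checks Remark 1, i.e., that the KL loss of Eq. (2) and the DKL loss of Eq. (5) yield identical gradients with respect to both \(o_{m}\) and \(o_{n}\); the `break_asymmetry` flag switches to the variant of Eq. (8), in which the \(w\)MSE term also backpropagates through \(o_{n}\).

```python
# Sketch (ours) of Eqs. (2), (5), and (8); single logit vectors for clarity.
import torch

def kl_loss(o_m, o_n):
    s_m, s_n = o_m.softmax(-1), o_n.softmax(-1)
    return (s_m * (s_m.log() - s_n.log())).sum()          # Eq. (2)

def dkl_loss(o_m, o_n, alpha=1.0, beta=1.0, break_asymmetry=False):
    s_m, s_n = o_m.softmax(-1), o_n.softmax(-1)
    dm = o_m.unsqueeze(-1) - o_m.unsqueeze(-2)            # Delta m_{j,k}
    dn = o_n.unsqueeze(-1) - o_n.unsqueeze(-2)            # Delta n_{j,k}
    w = (s_m.unsqueeze(-1) * s_m.unsqueeze(-2)).detach()  # S(w_m^{j,k})
    if break_asymmetry:                                   # Eq. (8)
        wmse = 0.25 * ((dm.detach() - dn) ** 2 * w).sum()
    else:                                                 # Eq. (5)
        wmse = 0.25 * ((dm - dn.detach()) ** 2 * w).sum()
    ce = -(s_m.detach() * s_n.log()).sum()                # soft-label Cross-Entropy
    return alpha * wmse + beta * ce

o_m = torch.randn(5, dtype=torch.double, requires_grad=True)
o_n = torch.randn(5, dtype=torch.double, requires_grad=True)
g_kl = torch.autograd.grad(kl_loss(o_m, o_n), (o_m, o_n))
g_dkl = torch.autograd.grad(dkl_loss(o_m, o_n), (o_m, o_n))
assert all(torch.allclose(a, b) for a, b in zip(g_kl, g_dkl))
print("KL and DKL (Eq. (5)) gradients match.")
```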
Introducing global information. The _weights_ for the weighted MSE of DKL in Eq. (5) are sample-wise and depend on the prediction \(s_{m}\), \[w_{m}^{j,k}=s_{m}^{j}*s_{m}^{k}. \tag{9}\] However, sample-wise _weights_ can be biased due to individual prediction variance. We thus adopt class-wise _weights_ for the IKL loss shown in Eq. (6), \[\bar{w}_{y}^{j,k}=\bar{s}_{y}^{j}*\bar{s}_{y}^{k}, \tag{10}\] where \(y\) is the ground-truth label of \(x_{m}\) and \(\bar{s}_{y}=\frac{1}{|\mathcal{X}_{y}|}\sum_{x_{i}\in\mathcal{X}_{y}}s_{i}\). The global information injected by \(\bar{w}_{y}^{j,k}\) can act as a regularization to enhance intra-class consistency and mitigate biases that may arise from sample noise. To this end, we derive the IKL loss in Eq. (6) by incorporating these two designs. A case study. We empirically examine each component of IKL on CIFAR-100 with the adversarial training task. Ablation experimental results and their setting descriptions are listed in Table 1. In the implementation, we use improved TRADES [24] as our baseline, which is combined with AWP [25] and uses an increasing epsilon schedule [40]. The comparison between (a) and (b) shows that DKL can achieve comparable performance, confirming its equivalence to KL. The comparisons among (b), (d), and (e) validate the effectiveness of the "GI" mechanism. We also confirm the importance of "BA" with the knowledge distillation task in Sec. 4.2. ## 4 Experiments To verify the effectiveness of the proposed IKL loss, we conduct experiments on CIFAR-10, CIFAR-100, and ImageNet for adversarial training (Sec. 4.1) and knowledge distillation (Sec. 4.2). ### Adversarial Robustness **Experimental settings.** We use an improved version of TRADES [24] as our baseline, which incorporates AWP [25] and adopts an increasing epsilon schedule [40]. An SGD optimizer with a momentum of 0.9 is used. We use the cosine learning rate strategy with an initial learning rate of 0.2 and train models for 200 epochs. The batch size is 128, the weight decay is 5e-4, and the perturbation size \(\epsilon\) is set to 8/255. Following previous work [24; 26], standard data augmentation, including random crops with 4 pixels of padding and random horizontal flips, is performed for data preprocessing. Under the setting of training with generated data, we strictly follow the training configurations in [38] for fair comparisons. Our implementations are based on their open-sourced code. We only replace the KL loss with our IKL loss. **Datasets and evaluation.** CIFAR-10 and CIFAR-100 are the two most popular benchmarks in the adversarial community. The CIFAR-10 dataset consists of 60,000 32\(\times\)32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. The more challenging CIFAR-100 has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. Following previous work [25; 26], we report the clean accuracy on natural images and adversarial robustness under auto-attack [35] with epsilon 8/255. **Comparison methods.** To compare with previous methods, we categorize them into two groups according to the different types of data preprocessing: * Methods [25; 26; 58] with basic augmentation, _i.e_., random crops and random horizontal flip. * Methods [38; 59; 37] with augmentation based on generative models or AutoAug [41], CutMix [60].
**Comparisons with state-of-the-art on CIFAR-100.** On CIFAR-100, with the basic augmentations setting, we compare with AWP, LBGAT, LAS-AT, and ACAT. The experimental results are summarized in Table 2. Our WRN-34-10 models trained with IKL loss do a better trade-off between natural accuracy and adversarial robustness. With \(\alpha=20\) and \(\beta=3\), the model achieves 66.51% top-1 accuracy on natural images while 31.45% robustness under auto-attack. We follow [38] to take advantage of synthesized images generated by the popular diffusion models [39]. With 1M generated images, our model achieves 68.99% top-1 natural accuracy and 35.89% robustness, surpassing [38] by 0.93% and 0.24% respectively. With 50M generated images, we create new state-of-the-art with WideResNet-28-10, achieving **73.85%** top-1 natural accuracy and **39.18%** adversarial robustness under auto-attack. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Index** & **GI** & **BA** & Clean & AA & **Descriptions** \\ \hline (a) & Na & Na & 62.87 & 30.29 & baseline with KL loss. \\ \hline (b) & ✗ & ✗ & 62.54 & 30.20 & DKL, equivalent to KL loss \\ (c) & ✗ & ✗ & 62.69 & 30.42 & (b) with BA \\ (d) & ✗ & ✗ & 66.67 & 29.10 & (c) with \(w^{j,k}_{j}=0.01\) \\ (e) & ✗ & ✗ & 66.51 & 31.45 & (c) with GI, _i.e_., IKL \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation study on “GI” and “BA” with DKL loss. “GI” is “Global Information”, and “BA” indicates “Breaking Asymmetry”. **Comparison with state-of-the-art on CIFAR-10.** Experimental results on CIFAR-10 are listed in Table 4, with the basic augmentation setting, our model achieves 84.70% top-1 accuracy on natural images and 57.13% robustness, outperforming previous state-of-the-art by 0.96% on robustness. With extra generated data, we improve the state-of-the-art by 0.44%, achieving **67.75%** robustness. ### Knowledge Distillation **Datasets and evaluation.** Following previous work [29; 49], we conduct experiments on CIFAR-100 [65] and ImageNet [66] to show the advantages of IKL on knowledge distillation. ImageNet [66] is the most challenging dataset for classification, which consists of 1.2 million images for training and 50K images for validation over 1,000 classes. For evaluation, we report top-1 accuracy on CIFAR-100 and ImageNet validation. The training speed of different methods is also discussed. **Experimental settings.** We follow the experimental settings in [18] by Zhao et al. Our implementation for knowledge distillation is based on their open-sourced code. Specifically, on CIFAR-100, we train all models for 240 epochs with a learning rate that decayed by 0.1 at the 150th, 180th, and 210th epoch. 
We initialize the learning rate to 0.01 for MobileNet and \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Method & Architecture & Augmentation Type & Clean & AA \\ \hline \multirow{8}{*}{**CIFAR-100** (\(\ell_{\infty}\), \(\epsilon=8/255\)) (\(\ell_{\infty}\), \(\epsilon=8/255\)) & WRN-34-10 & Basic & 60.38 & 28.86 \\ & LBGAT [26] & WRN-34-10 & Basic & 60.64 & 29.33 \\ & LAS-AT [27] & WRN-34-10 & Basic & 64.89 & 30.77 \\ & ACAT [40] & WRN-34-10 & Basic & 65.75 & 30.23 \\ & \multirow{2}{*}{**IKL-AT**} & WRN-34-10 & Basic & **66.51** & **31.45** \\ & & WRN-34-10 & Basic & 64.08 & **31.67** \\ \cline{2-6} & \multirow{2}{*}{[61]} & \multirow{2}{*}{WRN-34-10} & \multirow{2}{*}{1M Generated Data} & 65.90 & 31.20 \\ & & & & 1M Generated Data & 62.08 & 31.40 \\ & & & & 1M Generated Data & 62.41 & 32.06 \\ & \multirow{2}{*}{[38]} & \multirow{2}{*}{WRN-28-10} & \multirow{2}{*}{1M Generated Data} & 68.06 & 35.65 \\ & & & & 1M Generated Data & 72.58 & 38.83 \\ & & & & 1M Generated Data & **68.99** & **35.89** \\ & \multirow{2}{*}{**IKL-AT**} & WRN-28-10 & \multirow{2}{*}{50M Generated Data} & **73.85** & **39.18** \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracy (%) of clean images and Robustness (%) under AutoAttack on CIFAR-100. We highlight our results in **bold** whenever the value represents an improvement relative to the strongest baseline under the same training settings, and we underline them whenever the value achieves a new SOTA result under the threat model. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{ \begin{tabular}{c} Distillation \\ Manner \\ \end{tabular} } & \multirow{2}{*}{Teacher} & \multirow{2}{*}{Extra Parameters} & ResNet34 & ResNet50 \\ & & & 73.31 & 76.16 \\ & & & ResNet18 & MobileNet \\ & & & 69.75 & 68.87 \\ \hline \multirow{4}{*}{Features} & AT [51] & ✗ & 70.69 & 69.56 \\ & OFD [50] & ✔ & 70.81 & 71.25 \\ & CRD [49] & ✔ & 71.17 & 71.37 \\ & ReviewKD [29] & ✔ & 71.61 & 0.319 s/iter & 72.56 \\ \hline \multirow{4}{*}{Logits} & DKD [18] & ✗ & 71.70 & 72.05 \\ & KD [28] & ✗ & 71.03 & 70.50 \\ & **IKL-KD** & ✗ & **71.91** & **0.197 s/iter** & **72.84** & **0.252 s/iter** \\ \cline{1-1} & \(\Delta\) & & +0.88 & +2.34 \\ \hline \hline \end{tabular} \end{table} Table 3: **Top-1 accuracy (%) on the ImageNet validation and training speed (sec / iteration) comparisons. \(\Delta\) represents the performance improvement over the classical KD. Training speed is calculated on 4 Nvidia GeForce 3090 GPUs with a batch of 512 224x224 images. We underline the values that achieve new SOTA results. All results are the average over three trials.** ShuffleNet, and 0.05 for other models. The batch size is 64 for all models. We train all models three times and report the mean accuracy. On ImageNet, we use the standard training that trains the model for 100 epochs and decays the learning rate for every 30 epochs. We initialize the learning rate to 0.2 and set the batch size to 512. For both CIFAR-100 and ImageNet, we consider the distillation among the architectures having the same unit structures, like ResNet56 and ResNet20, VGGNet13 and VGGNet8. On the other hand, we also explore the distillation among architectures made up of different unit structures, like WideResNet and ShuffleNet, VggNet and MobileNet-V2. **Comparison methods.** According to the information extracted from the teacher model in distillation training, knowledge distillation methods can be divided into two categories: * Features-based methods [48, 49, 29, 50]. 
This kind of method makes use of features from different layers of the teacher model, which can need extra parameters and high training computational costs. * Logits-based methods [28, 18]. This kind of method only makes use of the logits output of the teacher model, which does not require knowing the architectures of the teacher model and thus is more general in practice. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Method & Architecture & Augmentation Type & Clean & AA \\ \hline \multirow{8}{*}{**CIFAR-10** (\(\ell_{\infty}\), \(\epsilon=8/255\)) & WRN-34-20 & Basic & 85.34 & 53.42 \\ & LBGAT [26] & WRN-34-20 & Basic & 88.70 & 53.57 \\ & AWP [25] & WRN-34-10 & Basic & 85.36 & 56.17 \\ & LAS-AT [27] & WRN-34-10 & Basic & 87.74 & 55.52 \\ & ACAT [40] & WRN-34-10 & Basic & 82.41 & 55.36 \\ & **IKL-AT** & WRN-34-10 & Basic & 85.31 & **57.13** \\ \cline{2-6} & [61] & WRN-34-10 & 10M Generated Data & 87.00 & 60.60 \\ & [62] & WRN-28-10 & 1M Generated Data & 87.33 & 60.73 \\ & [37] & WRN-28-10 & 100M Generated Data & 87.50 & 63.38 \\ & [38] & WRN-28-10 & 1M Generated Data & 91.12 & 63.35 \\ & [38] & WRN-28-10 & 50M Generated Data & 92.27 & 67.17 \\ & & WRN-28-10 & 20M Generated Data & 92.44 & 67.31 \\ & & WRN-28-10 & 1M Generated Data & 90.75 & **63.54** \\ & & WRN-28-10 & 20M Generated Data & 92.16 & **67.75** \\ \hline \hline \end{tabular} \end{table} Table 4: Test accuracy (%) of clean images and robustness (%) under AutoAttack on CIFAR-10. We highlight our results in **bold** whenever the value represents an improvement relative to the strongest baseline under the same training settings, and we underline them whenever the value achieves a new SOTA result under the threat model. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{3}{*}{ \begin{tabular}{c} Distillation \\ Manner \\ \end{tabular} } & Teacher & ResNet56 & ResNet110 & ResNet32\(\times\)4 & WRN-40-2 & WRN-40-2 & VGG13 \\ & & 72.34 & 74.31 & 79.42 & 75.61 & 75.61 & 74.64 \\ & Student & ResNet20 & ResNet32 & ResNet8\(\times\)4 & WRN-16-2 & WRN-40-1 & VGG8 \\ & 69.06 & 71.14 & 72.50 & 73.26 & 71.98 & 70.36 \\ \hline \multirow{4}{*}{Features} & FitNet [48] & 69.21 & 71.06 & 73.50 & 73.58 & 72.24 & 71.02 \\ & RKD [64] & 69.61 & 71.82 & 71.90 & 73.35 & 72.22 & 71.48 \\ & CRD [49] & 71.16 & 73.48 & 75.51 & 75.48 & 74.14 & 73.94 \\ & OPD [50] & 70.98 & 73.23 & 74.95 & 75.24 & 74.33 & 73.95 \\ & ReviewKD [29] & 71.89 & 73.89 & 75.63 & 76.12 & **75.09** & 74.84 \\ \hline \multirow{4}{*}{Logits} & DKD [18] & **71.97** & 74.11 & 76.32 & 76.24 & 74.81 & 74.68 \\ & KD [28] & 70.66 & 73.08 & 73.33 & 74.92 & 73.54 & 72.98 \\ \cline{1-1} & **IKL-KD** & 71.44 & **74.26** & **76.59** & **76.45** & 74.98 & **74.98** \\ \cline{1-1} & \(\Delta\) & +0.78 & +1.16 & +3.26 & +1.53 & +1.44 & +2.00 \\ \hline \hline \end{tabular} \end{table} Table 5: **Top-1 accuracy (%) on the CIFAR-100 validation. Teachers and students are in the same architectures. And \(\Delta\) represents the performance improvement over the classical KD. We underline the values that achieve new SOTA results. All results are the average over 3 trials.** **Comparison with state-of-the-art on CIFAR-100.** Experimental results on CIFAR-100 are summarized in Table 5 and Table 7 (in Appendix). Table 5 lists the comparisons with previous methods under the setting that the architectures of the teacher and student have the same unit structures. Models trained by IKL-KD can achieve comparable or better performance in all considered settings. 
Specifically, we achieve the best performance in 4 out of 6 training settings. Table 7 in the Appendix shows the comparisons with previous methods under the setting that the architectures of the teacher and student have different unit structures. **Comparison with state-of-the-art on ImageNet.** We empirically show the comparisons with other methods on ImageNet in Table 3. With a ResNet34 teacher, our ResNet18 achieves **71.91%** top-1 accuracy. With a ResNet50 teacher, our MobileNet achieves **72.84%** top-1 accuracy. Models trained by IKL-KD surpass all previous methods while saving **38%** and **52%** computation costs for ResNet34-ResNet18 and ResNet50-MobileNet distillation training respectively, when compared with ReviewKD [29]. ### Ablation Studies **Hyper-parameters of \(\alpha\) and \(\beta\).** With IKL, the two components can be manipulated independently. We empirically study the effects of the hyper-parameters \(\alpha\) and \(\beta\) on CIFAR-100 for adversarial robustness. Robustness under APGD-CE, APGD-T, and AA [35] is reported in Table 6. In particular, only samples that cannot be attacked by APGD-CE will be tested under the APGD-T attack. Reasonable \(\alpha\) and \(\beta\) should be chosen for the best trade-off between natural accuracy and adversarial robustness. **Visualization with T-SNE.** We randomly sample 20 classes in CIFAR-100. The numbers in the pictures are class indexes. For each sampled class, we collect the feature representations of natural images and adversarial examples with the validation set. The visualization by T-SNE is shown in Fig. 2. Compared with TRADES trained with the KL loss, features from IKL-AT models are more compact and separable. ## 5 Conclusion and Limitation In this paper, we have investigated the mechanism of the Kullback-Leibler (KL) Divergence loss in terms of gradient optimization. Based on our analysis, we decouple the KL loss into a weighted Mean Square Error (\(w\)MSE) loss and a Cross-Entropy loss with soft labels. The new formulation is named the Decoupled Kullback-Leibler (DKL) Divergence loss. To address the spotted issues of DKL, we make two improvements that break the asymmetry property in optimization and incorporate global information, deriving the Improved Kullback-Leibler (IKL) Divergence loss. Experimental results on CIFAR-10/100 and ImageNet show that we create new state-of-the-art results on adversarial training and knowledge distillation tasks, indicating the effectiveness of our IKL loss. The KL loss has various applications; we consider it future work to showcase the potential of IKL in other scenarios. Table 6: Ablation study of hyper-parameters \(\alpha\) and \(\beta\) in IKL. Figure 2: Visualizations by T-SNE with randomly selected 20 classes in CIFAR-100.
2303.11209
Constraints on the Binarity of the WN3/O3 Class of Wolf-Rayet Stars
The WN3/O3 Wolf-Rayet (WR) stars were discovered as part of our survey for WRs in the Magellanic Clouds. The WN3/O3s show the emission lines of a high-excitation WN star and the absorption lines of a hot O-type star, but our prior work has shown that the absorption spectrum is intrinsic to the WR star. Their place in the evolution of massive stars remains unclear. Here we investigate the possibility that they are the products of binary evolution. Although these are not WN3+O3 V binaries, they could still harbor unseen companions. To address this possibility, we have conducted a multi-year radial velocity study of six of the nine known WN3/O3s. Our study finds no evidence of statistically significant radial velocity variations, and allows us to set stringent upper limits on the mass of any hypothetical companion star: for probable orbital inclinations, any companion with a period less than 100 days must have a mass less than 2 solar masses. For periods less than 10 days, any companion would have to have a mass less than 1 solar mass. We argue that scenarios where any such companion is a compact object are unlikely. The absorption lines indicate a normal projected rotational velocity, making it unlikely that these stars evolved with the aid of a companion star that has since merged. The modest rotation also suggests that these stars are not the result of homogenous evolution. Thus it is likely that these stars are a normal but short-lived stage in the evolution of massive stars.
Philip Massey, Kathryn F. Neugent, Nidia I. Morrell
2023-03-20T15:41:18Z
http://arxiv.org/abs/2303.11209v2
# Constraints on the Binarity of the WN3/O3 Class of Wolf-Rayet Stars ###### Abstract The WN3/O3 Wolf-Rayet (WR) stars were discovered as part of our survey for WRs in the Magellanic Clouds. The WN3/O3s show the emission lines of a high-excitation WN star and the absorption lines of a hot O-type star, but our prior work has shown that the absorption spectrum is intrinsic to the WR star. Their place in the evolution of massive stars remains unclear. Here we investigate the possibility that they are the products of binary evolution. Although these are not WN3+O3 V binaries, they could still harbor unseen companions. To address this possibility, we have conducted a multi-year radial velocity study of six of the nine known WN3/O3s. Our study finds no evidence of statistically significant radial velocity variations, and allows us to set stringent upper limits on the mass of any hypothetical companion star: for probable orbital inclinations, any companion with a period less than 100 days must have a mass \(<\)2\(M_{\odot}\). For periods less than 10 days, any companion would have to have a mass \(<\)1\(M_{\odot}\). We argue that scenarios where any such companion is a compact object are unlikely. The absorption lines indicate a normal projected rotational velocity, making it unlikely that these stars evolved with the aid of a companion star that has since merged. The modest rotation also suggests that these stars are not the result of homogenous evolution. Thus it is likely that these stars are a normal but short-lived stage in the evolution of massive stars. ## 1 Introduction Wolf-Rayet stars are the evolved descendants of massive O-type stars. Their spectra are dominated by broad, strong emission lines formed in dense stellar winds. WN-type WRs predominately show lines of nitrogen and helium, the products of the CNO-cycle of hydrogen-burning, while WC-type WRs show carbon and oxygen, the products of helium-burning. WN-type WRs show little or no hydrogen; the WCs show none. Although we understand that WRs have been stripped of their outer layers, revealing the products of nuclear burning, we do not know the relative importance of stellar winds and binary interactions in this process. In order to better understand the formation mechanism for these massive stars, we conducted a survey to identify a complete sample of WRs in the Magellanic Clouds (Massey et al., 2014, 2015, 2017; Neugent et al., 2018). As part of this work, we discovered a new class of these objects, dubbed the WN3/O3 stars (Massey et al., 2014; Neugent et al., 2018, 2017), further adding to the mysteries of massive star evolution. In this paper we report on a multi-year project to determine the binarity of the WN3/O3s, placing limits on the masses of any companions, in order to better understand how they formed. As summarized in Neugent et al. (2017) and Neugent et al. (2018), the WN3/O3s show an optical emission-line spectrum typical of a high-excitation, nitrogen-rich WN3 star, along with an absorption spectrum typical of an O3 V star. We were immediately able to exclude the possibility that these were WN3+O3 V binaries as their absolute magnitudes were \(M_{V}=-2\) to \(-3\), in accord with what is expected for a WN3 star alone (see, e.g., Breysacher, 1986; van der Hucht, 2001) and much fainter than that of an O3 V (\(M_{V}=-5\) to \(-6\); see, e.g., Conti et al., 1988; Massey et al., 2005). In all, nine such stars were discovered in the Large Magellanic Cloud (LMC).
UV spectra of three of these, obtained and described by Neugent et al. (2017) using the Hubble Space Telescope, further demonstrated that the absorption was not coming from a normal O-type star, as there was no sign of the C iv\(\lambda\)1550 resonance wind line which would have otherwise been the strongest UV feature. Neugent et al. (2017) conducted a comprehensive spectral analysis, demonstrating that the emission and absorption likely arose from a single source for each star, and that a binary explanation was not necessary to explain the spectral features. The analysis showed that physically these stars have high effective temperatures (100,000\(\pm\)5,000 K), a bit hotter than the typical 90,000 K of most other WN3-4s. Their bolometric luminosities (\(\log L/L_{\odot}=5.6\pm 0.3\)) are consistent with most LMC WN3-WN4 stars (Hainich et al., 2014). Their surface abundances are consistent with CNO equilibrium, as are most other WNs, with the only peculiarity that they are still relatively hydrogen-rich, with a He/H number ratio of 1\(\pm\)0.2, rather than the usual \(>10\). The other difference of note is that the mass-loss rates are very low for WRs, with \(\log\dot{M}\sim-5.9\pm 0.2\) for all nine stars, rather than the typical \(-5.0\) dex for WNs of similar luminosities (see Figure 19 in Neugent et al., 2017). We describe the WN3/O3s as a new class of WRs, as there are no previously known WRs with similar characteristics. Absorption lines in the spectra of WRs have usually proven to be the signatures of OB-type companions, although there are exceptions. For instance, the "slash" stars ("O2-3/WN5-6") show both WR-like emission as well as O-type absorption lines. The most luminous and massive stars in R136, NGC 3603, and M33's NGC 604 are all such objects; these stars are so close to their Eddington limits that their winds are optically thick, as first suggested by de Koter et al. (1997) and Massey & Hunter (1998). _Thus the O2-3/WN5-6 show absorption lines because of their high luminosities, which is not the case for the WN3/O3 stars._ In the SMC, most of the WNs show absorption features (Massey & Duffy, 2001; Massey et al., 2003), but most have been shown to be binaries (Foellmi et al., 2003; Shenar et al., 2016; Hainich et al., 2015). However, modeling by Hainich et al. (2015) of archival optical data suggests that in AB1 and AB12 the absorption may be intrinsic. Additional work with better data is underway to determine if these stars are indeed low-metallicity analogs of the LMC's WN3/O3s, but we note here that AB1 and 12 have \(M_{V}\)'s of \(-4.6\) and \(-4.0\), respectively (Massey et al., 2003), implying a visual luminosity 3-8\(\times\) greater than that of the LMC WN3/O3s. Finally, we note that the Galactic "WN3+abs" star HD 9974 has never been shown to be a binary. No radial velocity variations were found by Massey & Conti (1981), who argued that the absorption was intrinsic to the WR. Modeling by Marchenko et al. (2004) showed that the emission and absorption likely arose in the same object, and suggested it was the result of homogenous evolution, in which stars evolve fully (or mostly) mixed due to high initial rotation speeds. Where WN3/O3s fall in the evolution of massive stars is unclear (Neugent et al., 2017). Their spatial distribution is the same as that of other WRs in the LMC (Neugent et al., 2017, 2018), and thus we can infer that they formed out of the same metallicity as other LMC WNs. 
There are 142 "true" WRs known in the LMC (excluding the O2-3If/WN5-6 stars; see Neugent et al., 2018 and Massey et al., 2021), so the WN3/O3s represent 6% of the LMC WR population, or 8% of the LMC's WNs. This suggests that these are not so rare that they require special circumstances for their formation. One possibility is that these WN3/O3s are in a relatively short-lived transitional phase between O-type stars and hydrogen-poor WNs, and that they will develop denser winds as they lose their hydrogen envelopes. Another possibility is that these stars are the results of homogenous evolution, as had been suggested for HD 9974. However Neugent et al. (2017) argue against this possibility, noting that the projected rotational velocities of all of the WN3/O3s are a modest 120-150 km s\({}^{-1}\) rather than the high rotational speeds one requires for homogenous evolution. While one or two WN3/O3s might be viewed at an unfavorable inclination, it is highly unlikely that is true for the entire sample. Although mass loss could have slowed the rotation of these stars, this is inconsistent with their low mass-loss rates1. Footnote 1: We note that the absorption lines in HD 9974 have projected rotational velocity of 150-200 km s\({}^{-1}\) according to Massey & Conti (1981), arguing against a homogenous evolution explanation, unless the star is seen at an unfavorable inclination. Is it an additional example of a WN3/O3? The UV spectrum shows a “weak” C iv\(\lambda\)1550 line according to Marchenko et al. (2004). The Marchenko et al. (2004) study of HD 9974 reported an absolute magnitude \(M_{V}=-3.7\), twice as bright as our LMC WN3/O3s using their adopted 4.3 kpc kinematic distance, but Rate & Crowther (2020) find a Gaia distance of 2.9 kpc. That would bring HD 9974’s absolute visual magnitude \(M_{V}\) to \(-2.9\), similar to what we found for our WN3/O3s. Since the mass-loss and other physical parameters were computed using a luminosity derived from an incorrect distance, we conclude that the analysis needs to be redone in order to answer the nature of HD 9974. A third option is that these WN3/O3s are the products of binary evolution. Although we can exclude the possibility that they have massive, luminous companions at present, we need to consider whether they have a lower-mass companion. Neugent et al. (2017) argues that it is unlikely that such a putative companion is a neutron star or black hole, as none of the WN3/O3s is an X-ray source. The lack of X-rays rules out these being close binaries with compact companions. However, a compact companion might remain bound but be in a wide orbit (and hence not generating X-rays) after it stripped off the outer layers of the WN3/O3 precursor. Neugent et al. (2017) note that such a situation would occur only through a narrow range of initial conditions, and the lifetime of the remaining star would be quite short as shown by Pols (1994). Another binary scenario would be for a main-sequence star to merge with an early-type WN star, enriching its surface with hydrogen, and forming the WN3/O3. However, Neugent et al. (2017) suggest that in such a scenario one would again expect the remaining object to be a rapid rotator, which--thanks to the absorption line profiles--we can rule out. Regardless of these arguments, we have long planned to carry out a radial velocity study of the WN3/O3s to either find companions to the WN3/O3s or set stringent limits on their existence. In Neugent et al. 
(2018) we gave a brief progress report, and confidently asserted that our study would be completed by the following Magellanic Cloud season. Unfortunately, weather-related delays and the global COVID-19 pandemic delayed our work until now. In Section 2 we describe our observations and reductions. In Section 3 we describe our procedure for measuring the radial velocities and present the data from our multi-year study. In Section 4 we describe what these measurements mean in terms of setting limits on any companion, and in Section 5 we summarize and discuss the implications of our findings2. Footnote 2: A tenth star, LMCe055-1, has properties somewhat similar to the WN3/O3s. Classified as a WN4/O4, it is faint, and our preliminary modeling shows that the emission and _most_ of the absorption arises in a single object. A faint He i\(\lambda\)4471 feature, however, comes from a companion object, and the star eclipses. We will present the results of our analysis of our comprehensive photometry and spectral modeling in a subsequent paper. ## 2 Observations and Reductions Although our initial plan was to obtain multiple spectra of each of the nine WN3/O3s, as observing seasons came and went, we chose instead to concentrate on fewer stars but obtain more spectra. In the end, we obtained enough spectra for six of the WN3/O3s to adequately look for the presence of lower mass companions. We list these stars in Table 1. We include basic information repeated from Neugent et al. (2018), as well as listing the number of spectra we obtained. As we show below, for measuring the radial velocities of the weak absorption lines adequately, we needed to use only the spectra with SNRs\(>\)100, and we quote that number as well. Our discovery spectra were all taken with the Las Campanas Magellan Echellette (MagE) spectrograph (Marshall et al., 2008), and we continued to use this instrument for our follow-up radial velocity measurements owing to its excellent throughput and good spectral resolution. For the data taken in 2014 and 2015, the instrument was mounted on the Clay 6.5-meter Magellan telescope, after which the instrument was moved to the Baade 6.5-meter Magellan telescope. The 1\({}^{\prime\prime}\) wide slit was used, resulting in a spectral resolving power \(R=4100\). Wavelength coverage was from the atmospheric cutoff in the near-UV (\(\sim\)3200A) to 1\(\mu\)m. Data were taken either as fill-in on other observing projects, or on dedicated one- or two-night observing runs. The slit was oriented to the parallactic angle, except occasionally for LMC172-1, where we needed to keep a nearby star off the slit. (This companion is 2 mag brighter optically, and is located 3\(\farcs\)2 to the west.) The challenges of flat-fielding an echellette with such a wide wavelength range are severe, especially since our goal was to obtain spectra with signal-to-noise ratios (SNRs) of 100 or higher. Massey et al. (2012) found they could achieve SNRs \(>\)350 with MagE by _not_ flat-fielding their data, but rather by dithering along the 10\({}^{\prime\prime}\) long slit. We did not need SNRs that high for this project, and so we did not dither, but instead relied upon the intrinsic uniformity of the CCD, following a suggestion by Ian Thompson (Massey & Hanson, 2013). We did use well-exposed dome-flat exposures to flat-field in the red to remove the fringing. Bias frames were also obtained and used to subtract from the data, although there is negligible bias structure. 
After each set of observations of a star, a 3 sec long Th-Ar lamp exposure was made to provide wavelength calibration before moving the telescope to the next object. Several spectrophotometric standards were observed each night to provide flux calibration, useful for combining the spectral orders. Reductions were carried out using a combination of standard iraf routines and special scripts written by Jack Baldwin. Further reduction details can be found in Massey et al. (2012). In Table 2 we list the details of the observations of each star. Our exposures ranged from a single 10 min exposure to an hour (3\(\times\)20 min). The observations with short exposure times were made before we began velocity monitoring; i.e., they were primarily our "discovery" data, and proved not to have sufficiently high SNRs to be used for our radial velocity study. They are included in the Journal of Observations only as they have been mentioned in earlier works. As discussed below, we found that the cross-correlations were more reliable with spectra of SNR values (per 3-pixel spectral resolution element) of 100 or greater. Those spectra are indicated with a letter designation in Table 2 that will be used to identify the radial velocity cross-correlation results in the next section. A few spectra that are listed with poorer SNRs in Table 2 were obtained with poor seeing and/or through clouds, both relatively rare occurrences on Las Campanas during Magellanic Cloud observing seasons. As argued by Neugent et al. (2018), our choice of MagE proved optimal for this project. Residuals from the wavelength solutions were typically 0.05-0.06A (3 km s\({}^{-1}\) in the blue). Although better wavelength calibration could be achieved with a higher dispersion instrument, such as MIKE (the Magellan Inamori Kyocera Echelle), using it rather than MagE would have provided no greater accuracy. The absorption lines have widths of 120-150 km s\({}^{-1}\) and are thus well sampled with our MagE 3-pixel spectral resolution (\(R=4100\), or 73 km s\({}^{-1}\)). Even the narrowest emission line, N v\(\lambda\)4946, has a width of 300 km s\({}^{-1}\). Thus the WN3/O3 spectral features are well sampled with MagE, and for a given exposure time we achieve a 2.5\(\times\) larger SNR per spectral resolution element than we would have with MIKE. ## 3 Radial Velocity Analysis ### Methodology The difficulty of measuring stellar radial velocities is dependent on spectral type. For most stars of type F and later, cross-correlation of a star's spectrum with a suitable radial velocity template is a standard technique, utilizing many dozens of spectral features (Tonry & Davis, 1979), with precisions now reaching tens of cm s\({}^{-1}\) when employed to find extra-solar planets (see, e.g., Wright, 2018; Zhao et al., 2022). Main-sequence stars of earlier types have fewer, and broader, lines, and traditionally lines are measured one-by-one and the results averaged (see, e.g., Niemela & Gamen, 2004; Morrell et al., 2014), although sometimes also by cross-correlation techniques (e.g., Gies et al., 2008). Determining the radial velocities of WR stars is particularly challenging, as the emission lines are all formed in an accelerating stellar wind. This has two consequences: (a) lines of different ionization levels will have different velocities depending upon their location in the wind, and (b) most lines are incredibly broad, with widths of thousands of km s\({}^{-1}\), making precise measurements difficult. 
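To make the cross-correlation measurement just described concrete, the sketch below (our illustration, not the iraf fxcor pipeline used later in this section; the function name and grid size are ours) estimates the relative velocity between two continuum-normalized spectra by correlating them on a logarithmic wavelength grid, where a constant shift in \(\log\lambda\) corresponds to a constant velocity shift:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def relative_velocity(wave, flux_a, flux_b, n_log=4096):
    """Velocity of spectrum B relative to A (km/s) by cross-correlation.

    Both spectra are assumed continuum-normalized; subtracting 1.0 keeps
    the line-free continuum from contributing to the correlation.
    """
    log_w = np.linspace(np.log(wave[0]), np.log(wave[-1]), n_log)
    a = np.interp(log_w, np.log(wave), flux_a) - 1.0
    b = np.interp(log_w, np.log(wave), flux_b) - 1.0
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-(n_log - 1), n_log)
    k = int(np.argmax(cc))
    # refine the peak with a parabola through the three nearest points
    p = np.polyfit(lags[k - 1:k + 2], cc[k - 1:k + 2], 2)
    best_lag = -p[1] / (2.0 * p[0])
    dlog = (log_w[-1] - log_w[0]) / (n_log - 1)
    return -best_lag * dlog * C_KMS  # positive if B is redshifted w.r.t. A

# toy check: a Gaussian "absorption line" shifted by 30 km/s
wave = np.linspace(4050.0, 4150.0, 2000)
line = lambda w0: 1.0 - 0.3 * np.exp(-0.5 * ((wave - w0) / 0.8) ** 2)
print(relative_velocity(wave, line(4100.0), line(4100.0 * (1 + 30.0 / C_KMS))))
```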
A variety of techniques have been used over the years, including Gaussian fitting and intensity centroids (e.g., Niemela et al., 2002) and centroids of higher order (e.g., Massey, 1980). However, we are not so much interested in the actual radial velocity of our WN3/O3s. Rather, our goal is to determine if our spectra show statistically significant radial velocity variations over time, and, if not, to place upper limits on any radial velocity variability. This will allow us to place upper limits on the masses of any undetected companions. A particularly powerful technique for such work was pioneered by Neugent & Massey (2014) to establish the relative binary frequency of WRs in M31 and M33. They utilized a cross-correlation method using selected spectral regions. Cross-correlation against a WR "standard" is unlikely to work well, as line profiles usually differ significantly from star to star. Instead, Neugent & Massey (2014) cross-correlated each spectrum of a given star against each of the other spectra of the same star. Furthermore, since the density of spectral features is low compared to that of late-type "normal" stars, they did this cross-correlation on selected spectral regions, isolating one or two features since the line-free continuum adds only noise. The cross-correlation produces a measure of the velocity shift between one spectrum and another for each of these regions. The dispersion in these shifts from spectral region to spectral region for a particular cross-correlation pair tells us the measuring uncertainty. A comparison of this "internal error" (\(I\)) with the average velocity shift from pair to pair (the "external error," \(E\)), corrected for the number of lines measured, forms the basis for concluding if the data indicate the star is a binary. As an example, consider the case where the dispersion in radial velocity measurements of ten spectral lines on each spectrum is, on average, 10 km s\({}^{-1}\) (\(I\)). If the star is a binary with an orbital semi-amplitude \(K=100\) km s\({}^{-1}\) (i.e., a full amplitude of 200 km s\({}^{-1}\)), and one has 9 spectra taken at random orbital phases, then a Monte Carlo simulation shows that the dispersion in the averages of the resulting 36 unique cross-correlation pairs (A-B, A-C, A-D,...B-C, B-D, B-E,...G-H, G-I, H-I) will be 70 km s\({}^{-1}\). To compare this to the internal error, we have to adjust the ratio by the square root of the number of lines that were measured in producing these errors, resulting in an \(E/I\) value of 22. We would reasonably conclude the star is a binary. If instead \(K=10\) km s\({}^{-1}\), the average dispersion from pair to pair will be 7.5 km s\({}^{-1}\), and \(E/I\)=2.6, again suggestive of binary motion, although if the measurement had been based upon fewer lines, we would be less convinced. With only four lines, we would obtain an \(E/I\) of 2.1. In other words, the larger the value of \(E/I\), the more likely the pair-to-pair averages represent actual changes. Typically, values of \(E/I\) greater than 2 are taken as evidence of statistically significant variations. This rule-of-thumb was not based on statistics, but seems to trace back to an empirical determination by Abt & Smith (1969). (See also Popper 1974.) However, as various authors have pointed out (see e.g., Conti et al. 1977), the actual statistical probability corresponding to a particular \(E/I\) value depends upon both the number of lines and the number of spectra. 
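The worked example above is straightforward to reproduce numerically. The sketch below (ours, under the assumptions stated in the text: a circular orbit sampled at random phases with Gaussian measuring errors; all names are ours) simulates the per-line velocities, forms all unique cross-correlation pairs, and evaluates \(E/I\); it yields values close to the \(\approx\)22 and \(\approx\)2.6 quoted above for \(K=100\) and \(K=10\) km s\({}^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_EI(K, n_spectra=9, n_lines=10, sigma_line=10.0, n_trials=2000):
    """Average E/I for a circular orbit of semi-amplitude K (km/s)."""
    ratios = []
    for _ in range(n_trials):
        v_true = K * np.sin(2.0 * np.pi * rng.random(n_spectra))
        # per-line velocities: true velocity plus measuring error
        v_lines = v_true[:, None] + rng.normal(0.0, sigma_line, (n_spectra, n_lines))
        i_idx, j_idx = np.triu_indices(n_spectra, k=1)  # 36 unique pairs for 9 spectra
        diffs = v_lines[i_idx] - v_lines[j_idx]         # (n_pairs, n_lines) shifts
        I = diffs.std(axis=1, ddof=1).mean()            # internal error per pair
        E = diffs.mean(axis=1).std(ddof=1)              # dispersion of the pair averages
        ratios.append((E / I) * np.sqrt(n_lines))       # correct for the number of lines
    return float(np.mean(ratios))

print(simulate_EI(K=100.0))  # ~22: clearly a binary
print(simulate_EI(K=10.0))   # ~2.5: marginally significant
print(simulate_EI(K=0.0))    # ~1: no variability beyond the internal error
```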
This classic \(E/I\) test is a simplified version of the general analysis of variance (ANOVA) test, which can be used to compute the F ratio and which, unlike the \(E/I\) computation, takes the degrees of freedom into account. This F ratio tests the hypothesis that there is no variability. Values with probabilities less than 1% likely mean that there are statistically significant velocity variations. Using \(E/I\geq 2\) as a measure of binarity is statistically very conservative and likely to miss real binaries, as \(E/I=2\) corresponds to probabilities of less than \(0.5\%\) for six spectra with eight lines per spectrum, and less than \(0.01\%\) for twelve spectra with eight lines per spectrum (Conti et al. 1977).3 Footnote 3: We note that by using the relative velocities, rather than the absolute velocities, we can determine probabilities using the one-way ANOVA test. Conti et al. (1977) and Garmany et al. (1980) describe using two-way ANOVA computations in the case of spectral lines having formed in regions of differing outflows, a situation that they demonstrate is true for the absorption lines of O-type stars. Sadly, these findings have typically been overlooked in recent analyses of the binary frequency of O-type stars. Consider three additional examples. As noted above, for a \(K=10\) km s\({}^{-1}\) binary with an average dispersion of 10 km s\({}^{-1}\) from measuring four spectral lines on 9 spectra, our \(E/I\) test would yield 2.1, just barely above the nominal cutoff. An ANOVA test yields an F value of 2.9, corresponding to a probability of \(0.001\%\), well below our \(1\%\) criterion: we would say that the data definitely support the star being a binary. A marginal case would be a \(K=5\) km s\({}^{-1}\) binary with the same 10 km s\({}^{-1}\) measuring uncertainty and the same amount of data. In that case, F=1.62, corresponding to a probability of \(3\%\). We would consider such data "suggestive" but inconclusive. A system where \(K=1\) km s\({}^{-1}\) would certainly be undetected with a 10 km s\({}^{-1}\) measuring uncertainty: F=0.6, corresponding to a probability of \(94\%\), saying that any spectrum-to-spectrum variations are lost within the internal error. As our good friend and colleague, the late Virpi Niemela would say, one can never rule out any star as being a binary, but what we could say in such a case is that the data do not support binarity. However, our analysis would be able to place limits on any binary motion. Having described in detail the \(E/I\) statistic and the ANOVA test, we will now apply these methodologies to the measurements of our spectra. This will allow us to determine if our data indicate binarity, and if not, to determine what the upper limits are on the masses of any undetected companions. 
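As a minimal illustration of the one-way ANOVA on relative velocities (a sketch of ours, using scipy for brevity; the analysis below reports using statsmodels and bioinfokit), each spectrum contributes one group of per-line velocities, and the F-test asks whether the group means differ by more than the internal scatter allows:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

# Simulated campaign: 9 spectra, 4 lines each, K = 10 km/s, 10 km/s errors.
K, sigma, n_spectra, n_lines = 10.0, 10.0, 9, 4
v_true = K * np.sin(2.0 * np.pi * rng.random(n_spectra))
groups = [v + rng.normal(0.0, sigma, n_lines) for v in v_true]

# One-way ANOVA: tests the hypothesis of no spectrum-to-spectrum variability.
F, p = f_oneway(*groups)
print(f"F = {F:.2f}, p = {p:.4f}")  # p < 0.01 flags significant variations
```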
For the absorption, we chose the 4075-4125A region, containing the H\(\delta\)/He ii\(\lambda 4100\) blend, and the 4315-4365A region, containing the H\(\gamma\)/He ii\(\lambda 4339\) blend. These are the strongest and best defined of the absorption features, as the higher-order Balmer/Pickering lines have poorer SNRs, while the H\(\beta\)/He ii\(\lambda 4859\) and H\(\alpha\)/He ii\(\lambda 6560\) have strong emission components. The He ii odd-n Pickering lines (e.g., \(\lambda\) 4200, 4542, 5411) were simply too weak and broad to provide reliable velocities. We illustrate the four regions chosen in Figure 1. Footnote 4: Note that although lines like the He ii\(\lambda\)4200 and \(\lambda\)4542 stand out well in this very high SNR spectrum, they resulted in poorer results in the spectra with SNRs of only 100. Before cross-correlation, the spectra were prepared by trimming (3785-6850A) in order to ease the normalization process. The spectra were then normalized within iraf using a fifth-order cubic spline with iterative rejection of 2\(\sigma\) high and low points. After division, the normalized spectra were shifted in intensity by subtracting 1.0 to minimize the continuum contribution to the cross-correlations. The iraf routine fxcor was then used for the correlations, using a 21-point wide parabolic fit. The measurements are given in Table 3. The values are given in terms of the relative velocity between pairs of spectra for each wavelength region, after accounting for the (very slight) differences in heliocentric corrections. The designations for each pair were defined in Table 2. Thus the first entry row for star LMC079-1 shows the velocity of spectrum A (taken on 2017 Feb 08, at an HJD of 2457792.606) minus the velocity obtained from spectrum B (taken on 2017 Dec 31, at an HJD of 2458118.659) for all four regions, followed by the mean of these four values, and the standard deviation of those four values, all in km s\({}^{-1}\). At the end of the measurements for each star we give the standard deviation \(\sigma_{\rm pairs}\) for each of the four wavelength regions, their mean, as well as the standard deviation of the means. In Table 4 we list the \(E/I\) values for each of our stars, as well as the results of our ANOVA analysis. The internal error \(I\) was computed as the average of the standard deviations of the measurements (i.e., the average of the last column in Table 3), while the \(E/I\) values are 2\(\times\) the \(\sigma_{\rm pairs}\) of the means (taken from Table 3) divided by \(I\). The F-ratio and corresponding probabilities were computed using the Python statsmodels.stats.anova.anova_lm function, and confirmed with the bioinfokit.analys anova_stat routine. Figure 1: A section of one of our highest SNR spectra. The regions used in our cross-correlations are shown in red. ## 4 Limits on Binarity of the WN3/O3s The analysis of our spectra given in Table 4 does not show any evidence of radial velocity variations for any of the six WN3/O3s in our sample. Of course, low-amplitude velocity variations could be hidden by our measuring uncertainties. In this section we will consider what limits we can place on binarity for the stars in our sample. The internal errors \(I\) in Table 4 are roughly 10-14 km s\({}^{-1}\). We conducted multiple simulations to see what these meant in terms of what orbital semi-amplitude \(K\) could be hidden in our data. 
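Simulations of this kind can be sketched as follows (our illustration; the campaign size and the per-region error of 8 km s\({}^{-1}\), corresponding to a pair-based internal error of roughly 11 km s\({}^{-1}\), are assumptions, and the rates printed depend on the adopted error model): for each trial semi-amplitude \(K\), generate a campaign at random phases, run the one-way ANOVA, and record how often the 1% threshold is crossed.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

def detection_rate(K, n_spectra=8, n_regions=4, sigma=8.0,
                   n_trials=2000, alpha=0.01):
    """Fraction of simulated campaigns in which a circular orbit of
    semi-amplitude K (km/s) is flagged at the alpha significance level.
    sigma is the per-region measuring error; the pair-based internal
    error is then roughly sigma * sqrt(2)."""
    hits = 0
    for _ in range(n_trials):
        v_true = K * np.sin(2.0 * np.pi * rng.random(n_spectra))
        groups = [v + rng.normal(0.0, sigma, n_regions) for v in v_true]
        _, p = f_oneway(*groups)
        hits += p < alpha
    return hits / n_trials

for K in (1.0, 5.0, 10.0, 20.0):
    print(f"K = {K:5.1f} km/s -> detected in {detection_rate(K):.0%} of trials")
```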
We find that with 28 cross-correlation pairs (corresponding to 8 spectra), and 4 measurements, a circular orbit with \(K\geq 10\) would be reliably detected as having statistically significant (\(<\)1%) radial velocity variations as long as the period was shorter than the span of time over which our observations were made. Thus we assume that if there were a companion in any of these systems, its orbital motion is less than this. What does this mean in terms of a mass for such a hypothetical, unseen companion? We will use \(K=10\) km s\({}^{-1}\) to compute the mass function \(f\) as a function of period, and then use this to set limits on the mass of the companion star. For this, we must know the mass of the WN3/O3 star. Thanks to the presence of the absorption lines, Neugent et al. (2017) were able to determine surface gravities for the WN3/O3s; combined with the other physical properties they derived, these provided mass estimates good to 20%. Values for the stars in our sample ranged from 9\(M_{\odot}\) (LMC199-1) to 19\(M_{\odot}\) (LMC277-2). Thus we will assume that the mass of the WN3/O3 component is 14\(\pm\)5\(M_{\odot}\). The mass function \(f\) is related to the masses and orbital parameters as \[f=\frac{M_{c}^{3}\sin^{3}i}{(M_{\rm WN3/O3}+M_{c})^{2}}=\frac{PK^{3}}{2\pi G} (1-e^{2})^{3/2},\] where \(M_{c}\) is the mass of the unseen companion, \(P\) is the period, and \(e\) is the orbital eccentricity. In solar units, and with \(P\) in days and \(K\) in km s\({}^{-1}\), \[\frac{M_{c}^{3}\sin^{3}i}{(M_{\rm WN3/O3}+M_{c})^{2}}=1.03\times 10^{-7}PK^{3}( 1-e^{2})^{3/2}.\] In Figure 2 we show the maximum mass of any companion star as a function of period, based on the assumption that the largest orbital semi-amplitude \(K\) that could be hidden in our data is 10 km s\({}^{-1}\). For purposes of illustration we include three orbital inclinations \(i\), and three eccentricities \(e\). (Note that if the orbit is elliptical, then our limit that \(K<10\) km s\({}^{-1}\) is not rigorous, but remains a good approximation.) We expect that the orientations of the orbital planes will be random. The probability of an orbital inclination being between \(i\) and \(i+di\) will simply be proportional to the area subtended on a unit sphere, \(2\pi\sin i\,di\); i.e., high inclination values are favored over low, and the expectation value for the inclination \(<i>\) is simply given by \[<i>=\frac{2\pi\int_{0}^{\pi/2}i\sin i\,di}{2\pi\int_{0}^{\pi/2}\sin i\,di}=1 \ {\rm rad}.\] This means that the \(i=60^{\circ}\) (middle set) of curves in each panel is a good representation of the most probable situation5. Footnote 5: This is nicely explained in [http://keatonb.github.io/archivers/uniforminclination](http://keatonb.github.io/archivers/uniforminclination), which also notes that \(\cos i\) is uniformly distributed for isotropic inclination angles, a very useful result for Monte Carlo simulations. See also Section 4.4 in Harwit (2006). Thus, any putative companion is likely to have a mass \(\lesssim 2M_{\odot}\) for periods less than 100 days. Furthermore, a companion in a close orbit (period \(<\)10 days) would likely be solar or sub-solar in mass. We cannot rule out the possibility of extremely long periods (and thus higher masses) or unfavorable inclinations. ## 5 Summary and Discussion We have conducted a radial velocity study of six of the nine known WN3/O3 stars, obtaining 6-8 high signal-to-noise spectra of each over a 3-5 yr period. Our analysis shows no evidence of radial velocity variations. 
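The mass limits just quoted follow from numerically inverting the mass function above; here is a minimal sketch (ours; the function names are ours, while \(K=10\) km s\({}^{-1}\), the \(14M_{\odot}\) WN3/O3 mass, and the most probable inclination \(i=60^{\circ}\) are taken from the text):

```python
import numpy as np
from scipy.optimize import brentq

def companion_mass(P_days, K=10.0, M_wr=14.0, incl_deg=60.0, e=0.0):
    """Largest companion mass (solar masses) consistent with K, from
    M_c^3 sin^3(i) / (M_wr + M_c)^2 = 1.03e-7 * P * K^3 * (1 - e^2)^1.5."""
    f = 1.03e-7 * P_days * K**3 * (1.0 - e**2) ** 1.5
    sin3i = np.sin(np.radians(incl_deg)) ** 3
    g = lambda m: m**3 * sin3i / (M_wr + m) ** 2 - f
    return brentq(g, 1e-6, 1e3)  # bracket the single positive root

print(companion_mass(10.0))   # ~0.7 Msun for P = 10 d
print(companion_mass(100.0))  # ~1.6 Msun for P = 100 d
```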
Any binary motion would have to have an orbital semi-amplitude of \(K\lesssim 10\) km s\({}^{-1}\) to remain undetected in our data. This requires that the mass of any unseen companion would likely be less than \(2M_{\odot}\) for periods of 100 days or less, and less than \(1M_{\odot}\) for periods of 10 days or less. Of course, in any individual case we cannot rule out the possibility of an unfavorable inclination resulting in a higher mass companion going undetected, but that is unlikely to be the case for the entire sample. The limits on the mass of a companion and the lack of X-ray emission allow for the possibility of a neutron star companion in a wide orbit. Nor can we rule out the presence of a non-compact companion of solar mass. However, much the same can be said of any WR star that lacks both X-ray emission and radial velocity variations. We do note that the formation time for a solar-mass star is many times the age of a WR star, so such a hypothetical object would have to be in a T Tauri pre-main-sequence (PMS) stage. However, the initial mass ratio \(q\) of such a WR plus \(1M_{\odot}\) PMS system would have to be \(<\)0.1. No such systems are known to exist, and extrapolation suggests that such systems, if they were to exist, should be relatively rare (Moe & Di Stefano, 2017). Figure 2: The maximum allowable mass for any companion is shown as a function of period based upon the maximum orbital semi-amplitude allowed by our data (K\(<\)10 km s\({}^{-1}\)). The three panels cover the range of masses determined for the WN3/O3 stars in our sample by the analysis by Neugent et al. (2017). For each panel, we have computed 9 curves, corresponding to orbital inclinations \(i\) of \(90^{\circ}\) (edge-on), \(60^{\circ}\), and \(30^{\circ}\), and eccentricities \(e\) of 0.0 (circular orbit, shown in black), 0.3 (shown in red), and 0.5 (shown in blue). The \(i=60^{\circ}\), \(e=0.5\) curve is coincident with the \(i=90^{\circ}\), \(e=0.0\) curve. Binary enthusiasts note that although only 40% of WRs are found in massive binaries, the non-binary WRs may have been stripped by companions that have since merged. However, as Neugent et al. (2017) argue, such a scenario should result in rapid rotation. While this could be easily overlooked for most WRs, whose emission line widths are dominated by the stellar wind velocities, the WN3/O3s have absorption lines whose widths give a good indication of the projected rotation rates, which are typical of normal O-type dwarfs, 120-150 km s\({}^{-1}\), as discussed earlier. As discussed earlier, homogenous evolution also seems to be ruled out by the lack of rapid rotation. Given that the binary fraction of WRs is 40%, should we be concerned that _none_ of the six WN3/O3s shows evidence of a companion? We think not: the companions in most of the known WR systems are luminous O-type stars. Such a companion would dominate the spectral energy distribution, swamping intrinsic absorption from the WN3/O3 component. Such a system might reveal itself to careful study: a WN3/O3+O9 V system, for instance, would likely be classified as something like a WN3+O7 V, with the WN3/O3 contributing all of the He ii absorption. No obvious candidates are known in the LMC (see Table 3 in Neugent et al., 2018) or in the Milky Way (see van der Hucht, 2001), but perhaps the bright WN3+O7 SMC-AB6 or WN2+O6 V SMC-AB7 are examples of such composites6. Footnote 6: We also note that three other LMC stars have been identified by Neugent et al. (2018) as potential. 
WN3/O3 candidates, BAT99 15a, Bat99 72, and BAT99 74. In conclusion, our multiyear study fails to find evidence of binarity for any of the WN3/O3 stars. We have presented arguments as to why past binarity is an unlikely explanation for their origins. That leaves us with the possibility that the WN3/O3s are a normal, short-lived transitional phase in the evolution of massive stars, where the stars have not shed all their hydrogen and their mass-loss rates are so low that their winds are still optically thin enough that we can see absorption lines. Additional work is in progress, looking for other examples of this new class of WR in order to better understand their place in the evolution of massive stars. ## Acknowledgments Lowell Observatory sits at the base of mountains sacred to tribes throughout the region. We honor their past, present, and future generations, who have lived here for millennia and will forever call this place home. The observations presented here were obtained over years at Las Campanas Observatory, and we are grateful to the excellent technical and logistical support we have always received there. We also acknowledge long-term support by both the Carnegie and Arizona Time Allocation Committees. Partial support for this work was provided by the National Science Foundation through AST-83116 awarded to P.M. In addition, support for K.F.N. was provided from NASA through the NASA Hubble Fellowship grant HST-HF2-51516 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. We are grateful to Drs. Michael Meyer and Trevor Dorn-Wallenstein for useful correspondence, and to an anonymous referee for suggestions that led to improvements in the paper. \begin{table} \begin{tabular}{c r r r r r r r} \hline \hline \multicolumn{1}{c}{ Star} & \multicolumn{1}{c}{\(\alpha_{2000}\)} & \multicolumn{1}{c}{\(\delta_{2000}\)} & \multicolumn{1}{c}{\(V\)} & \multicolumn{1}{c}{\(B-V\)} & \multicolumn{1}{c}{\(M_{V}\)} & \multicolumn{1}{c}{\#Obs} & \multicolumn{1}{c}{\#SNR\(>\)100} \\ \hline LMC079-1 & 05 07 13.33 & \(-\)70 33 33.9 & 16.31 & \(-\)0.25 & \(-\)2.6 & 13 & 9 \\ LMC170-2 & 05 29 18.19 & \(-\)69 19 43.2 & 16.13 & \(-\)0.17 & \(-\)2.8 & 10 & 6 \\ LMC172-1 & 05 35 00.90 & \(-\)69 21 20.2 & 15.95 & \(-\)0.12 & \(-\)3.0 & 12 & 8 \\ LMC199-1 & 05 28 27.12 & \(-\)69 06 36.2 & 16.65 & \(-\)0.22 & \(-\)2.3 & 10 & 8 \\ LMC277-2 & 05 04 32.64 & \(-\)68 00 59.4 & 15.83 & \(-\)0.16 & \(-\)3.1 & 9 & 8 \\ LMCe159-1 & 05 24 56.87 & \(-\)66 26 44.4 & 16.34 & \(-\)0.23 & \(-\)2.6 & 9 & 8 \\ \hline \end{tabular} \end{table} Table 1: WN3/O3 Stars in this Radial Velocity Study \begin{table} \begin{tabular}{c c c c c} \hline \hline Date & HJD & Exp. 
time (s) & SNR [FOOTNOTE:]Footnote : footnotemark: [ENDFOOTNOTE] & Designation \\ \hline \hline LMC079-1 & & & & \\ \hline 2013 Oct 18 & 2456583.869 & 1\(\times\) 600 & 50 & \\ 2013 Dec 14 & 2456640.671 & 1\(\times\) 600 & 60 & \\ 2015 Jan 09 & 2457031.668 & 1\(\times\)1200 & 80 & \\ 2017 Feb 07 & 2457791.675 & 3\(\times\) 500 & 60 & \\ 2017 Feb 08 & 2457792.606 & 3\(\times\) 550 & 100 & A \\ 2017 Dec 31 & 2458118.659 & 3\(\times\) 900 & 130 & B \\ 2018 Jan 01 & 2458119.675 & 3\(\times\) 900 & 120 & C \\ 2018 Jan 06 & 2458124.661 & 3\(\times\) 900 & 100 & D \\ 2018 Feb 05 & 2458154.678 & 3\(\times\) 900 & 130 & E \\ 2018 Nov 25 & 2458447.765 & 3\(\times\) 900 & 140 & F \\ 2020 Nov 26 & 2459179.684 & 3\(\times\) 900 & 170 & G \\ 2021 Dec 21 & 2459569.789 & 3\(\times\) 900 & 130 & H \\ 2022 Oct 02 & 2459854.738 & 3\(\times\) 900 & 175 & I \\ \hline LMC277-2 & & & & \\ \hline 2013 Oct 18 & 2456583.844 & 1 \(\times\) 600 & 75 & \\ 2013 Dec 14 & 2456640.662 & 1 \(\times\) 600 & 70 & \\ 2014 Sep 03 & 2457031.700 & 1 \(\times\) 600 & 65 & \\ 2015 Jan 09 & 2457031.670 & 3\(\times\)1200 & 120 & A \\ 2017 Feb 07 & 2457791.651 & 3\(\times\) 600 & 65 & \\ 2017 Feb 08 & 2457792.643 & 3\(\times\) 600 & 105 & B \\ 2017 Dec 31 & 2458118.626 & 3\(\times\) 900 & 120 & C \\ 2018 Jan 01 & 2458119.602 & 3\(\times\) 900 & 125 & D \\ 2018 Jan 06 & 2458124.696 & 3\(\times\) 900 & 120 & E \\ 2018 Nov 14 & 2458436.712 & 3\(\times\) 900 & 145 & F \\ \hline LMC172-1 & & & & \\ \hline 2013 Oct 16 & 2456581.723 & 1\(\times\) 600 & 75 & \\ 2013 Dec 14 & 2456640.685 & 1\(\times\) 600 & 75 & \\ 2015 Jan 09 & 2457031.813 & 1\(\times\)1200 & 60 & \\ 2017 Feb 07 & 2457791.626 & 3\(\times\)600 & 95 & \\ 2017 Feb 08 & 2457792.667 & 3\(\times\) 600 & 120 & A \\ 2018 Jan 06 & 2458124.730 & 3\(\times\) 900 & 120 & B \\ 2018 Feb 04 & 2458153.668 & 3\(\times\) 900 & 165 & C \\ 2018 Nov 18 & 2458440.797 & 3\(\times\) 900 & 170 & D \\ 2018 Nov 26 & 2458448.631 & 3\(\times\) 900 & 130 & E \\ 2020 Nov 26 & 2459179.638 & 3\(\times\) 900 & 195 & F \\ 2022 Oct 02 & 24598854.790 & 3\(\times\) 900 & 190 & G \\ 2022 Oct 31 & 2459883.823 & 3\(\times\)1200 & 160 & H \\ \hline \end{tabular} \end{table} Table 2: Journal of Observations **Table 2**_(continued)_ \begin{tabular}{c c c c c} \hline \hline Date & HJD & Exp. 
time (s) & SNR\({}^{\mbox{\scriptsize\boldmath${U}$}}\) & Designation \\ \hline 199-1 & & & & \\ \hline 2013 Dec 14 & 2456640.637 & 1\(\times\) 600 & 50 & \\ 2015 Jan 09 & 2457031.829 & 1\(\times\)1500 & 80 & \\ 2018 Jan 06 & 2458124.770 & 3\(\times\)1200 & 125 & A \\ 2018 Feb 04 & 2458153.707 & 3\(\times\)1200 & 130 & B \\ 2018 Nov 14 & 2458436.673 & 3\(\times\)1200 & 130 & C \\ 2018 Nov 25 & 2458447.715 & 3\(\times\)1200 & 150 & D \\ 2020 Jan 15 & 2458863.734 & 3\(\times\) 900 & 120 & E \\ 2020 Nov 25 & 2459178.750 & 3\(\times\)1200 & 180 & F \\ 2020 Nov 26 & 2459179.589 & 3\(\times\)1200 & 165 & G \\ 2020 Dec 09 & 2459192.699 & 3\(\times\)1200 & 130 & H \\ \hline 277-2 & & & & \\ \hline 2013 Dec 14 & 2456640.662 & 1\(\times\) 600 & 80 & \\ 2015 Jan 09 & 2457031.686 & 1\(\times\)1200 & 105 & A \\ 2018 Jan 01 & 2458119.636 & 3\(\times\) 900 & 140 & B \\ 2018 Jan 06 & 2458124.588 & 3\(\times\) 900 & 140 & C \\ 2018 Jan 06 & 2458124.840 & 3\(\times\) 900 & 140 & D \\ 2018 Nov 18 & 2458440.686 & 3\(\times\) 900 & 175 & E \\ 2018 Nov 25 & 2458447.596 & 3\(\times\) 900 & 155 & F \\ 2020 Jan 15 & 2458863.700 & 3\(\times\) 900 & 175 & G \\ 2020 Dec 09 & 2459192.739 & 3\(\times\) 900 & 155 & H \\ \hline LMCe159-1 & & & & \\ \hline 2015 Jan 09 & 2457031.760 & 1\(\times\)1200 & 75 & \\ 2018 Jan 06 & 2458124.626 & 3\(\times\) 900 & 130 & A \\ 2018 Feb 04 & 2458153.752 & 3\(\times\)1200 & 130 & B \\ 2018 Nov 14 & 2458436.633 & 3\(\times\) 900 & 125 & C \\ 2018 Nov 18 & 2458440.831 & 3\(\times\) 900 & 180 & D \\ 2018 Nov 25 & 2458447.676 & 3\(\times\) 900 & 165 & E \\ 2018 Nov 26 & 2458448.665 & 3\(\times\) 900 & 140 & F \\ 2020 Jan 15 & 2458863.826 & 3\(\times\) 900 & 160 & G \\ 2020 Dec 09 & 2459192.773 & 3\(\times\) 900 & 160 & H \\ \hline \end{tabular} \({}^{a}\)Signal-to-noise ratio per 3-pixel spectral resolution element measured over the region 4210-4330A. \({}^{b}\)Signal-to-noise ratio per 3-pixel spectral resolution element measured over the region 4210-4330A. **Table 3**. Radial Velocity Measurements \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{Radial Velocities (km s\({}^{-1}\))} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. 
\\ Pair \(a\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline \hline LMC079-1 & & & & & & \\ \hline A-B & 9.9 & 9.8 & 11.5 & 18.4 & 12.4 & 4.0 \\ A-C & 12.6 & 11.1 & 4.6 & 8.6 & 9.2 & 3.5 \\ A-D & 16.8 & 5.1 & 11.3 & 11.2 & 11.1 & 4.8 \\ A-E & 13.0 & 1.5 & 0.3 & 14.4 & 7.3 & 7.4 \\ A-F & 11.5 & -13.8 & 1.1 & 11.8 & 2.7 & 12.1 \\ A-G & 6.2 & 2.1 & 8.6 & 5.5 & 5.6 & 2.7 \\ A-H & 16.5 & 19.6 & 2.6 & -15.8 & 5.7 & 16.1 \\ A-I & 13.8 & -4.1 & 15.7 & 11.6 & 9.3 & 9.1 \\ B-C & 2.9 & -0.4 & -8.5 & -17.5 & -5.9 & 9.1 \\ B-D & 6.5 & -3.4 & 0.8 & -12.3 & -2.1 & 7.9 \\ B-E & 1.9 & -7.7 & -13.5 & -6.9 & -6.6 & 6.4 \\ B-F & 1.0 & -25.0 & -6.2 & -10.9 & -10.3 & 11.0 \\ B-G & -2.3 & -8.3 & -2.4 & -14.9 & -7.0 & 6.0 \\ B-H & 5.8 & 9.5 & -8.8 & -31.1 & -6.1 & 18.4 \\ B-I & 4.0 & -14.3 & 7.7 & -9.7 & -3.1 & 10.6 \\ C-D & 3.7 & -4.4 & 7.9 & 0.6 & 2.0 & 5.2 \\ C-E & -0.7 & -7.9 & -6.3 & 1.7 & -3.3 & 4.5 \\ C-F & -1.7 & -22.2 & 2.1 & 1.9 & -5.0 & 11.6 \\ C-G & -5.7 & -7.9 & 4.5 & -1.8 & -2.7 & 5.4 \\ C-H & 2.7 & 9.9 & 2.3 & -27.0 & -3.0 & 16.4 \\ C-I & 1.3 & -11.9 & 13.0 & 2.4 & 1.2 & 10.2 \\ D-E & -5.0 & -4.2 & -14.3 & 1.8 & -5.4 & 6.6 \\ D-F & -4.0 & -20.1 & -7.7 & 4.4 & -6.9 & 10.2 \\ D-G & -8.4 & -3.3 & -2.3 & -3.1 & -4.3 & 2.8 \\ D-H & 0.4 & 14.4 & -10.6 & -21.0 & -4.2 & 15.2 \\ D-I & -1.5 & -9.6 & 4.5 & 2.5 & -1.0 & 6.3 \\ E-F & 0.2 & -17.7 & 7.6 & -0.1 & -2.5 & 10.8 \\ E-G & -5.0 & 1.0 & 10.2 & -9.7 & -0.9 & 8.6 \\ E-H & 5.3 & 16.1 & 4.8 & -28.6 & -0.6 & 19.4 \\ E-I & 1.7 & -6.5 & 17.6 & 0.3 & 3.3 & 10.2 \\ F-G & -4.0 & 15.5 & 2.7 & -5.0 & 2.3 & 9.4 \\ F-H & 4.7 & 31.1 & -1.4 & -26.5 & 2.0 & 23.6 \\ F-I & 3.0 & 12.1 & 10.3 & 3.1 & 7.1 & 4.8 \\ G-H & 8.9 & 15.7 & -4.8 & -20.3 & -0.1 & 15.9 \\ G-I & 7.5 & -4.3 & 7.4 & 6.1 & 4.2 & 5.7 \\ H-I & -1.1 & -19.4 & 16.5 & 24.2 & 5.1 & 19.5 \\ \(\sigma_{\rm pairs}\) & 6.4 & 13.2 & 8.4 & 13.8 & 5.7 & \(\cdots\) \\ \hline \end{tabular} **Table 3** continued on next page **Table 3** _(continued)_ \begin{tabular}{c c c c c c c} \hline \hline & & \multicolumn{6}{c}{Radial Velocities (km s\({}^{-1}\) )} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. \\ Pair \(a\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline LMC277-2 & & & & & & \\ \hline A-B & -13.6 & -20.3 & 6.2 & 34.3 & 1.7 & 24.5 \\ A-C & -12.4 & -8.4 & 4.2 & 34.3 & 4.4 & 21.1 \\ A-D & -10.8 & -14.8 & -2.9 & 33.2 & 1.2 & 21.9 \\ A-E & -13.5 & 0.3 & -12.6 & 35.7 & 2.5 & 23.0 \\ A-F & -14.3 & -25.5 & -2.1 & 31.2 & -2.7 & 24.5 \\ B-C & 1.0 & 14.3 & 4.8 & 3.1 & 5.8 & 5.9 \\ B-D & 4.2 & 6.1 & -4.6 & 2.5 & 2.0 & 4.7 \\ B-E & -1.6 & 22.4 & -12.3 & 4.7 & 3.3 & 14.5 \\ B-F & 1.9 & -5.0 & 1.4 & 1.5 & -0.1 & 3.3 \\ C-D & 2.4 & -8.5 & 0.1 & -0.1 & -1.5 & 4.8 \\ C-E & -1.5 & 9.1 & -11.6 & 0.0 & -1.0 & 8.5 \\ C-F & 0.8 & -20.9 & 2.5 & -3.5 & -5.3 & 10.7 \\ D-E & -2.9 & 18.4 & -7.2 & -1.4 & 1.7 & 11.4 \\ D-F & -1.6 & -13.5 & 1.7 & -3.3 & -4.2 & 6.6 \\ E-F & 2.7 & -28.8 & 17.8 & -3.3 & -2.9 & 19.4 \\ \(\sigma_{\rm pairs}\) & 6.9 & 16.2 & 8.1 & 16.7 & 3.2 & \\ \hline \end{tabular} **Table 3**_continued on next page_ **Table 3** _(continued)_ \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{Radual Velocities (km s\({}^{-1}\))} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. 
\\ Pair \(a\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline \hline LMC172-1 & & & & & & \\ \hline A-B & 0.2 & 10.1 & 16.6 & 10.0 & 9.3 & 6.8 \\ A-C & 4.3 & 13.0 & 4.4 & 22.9 & 11.2 & 8.8 \\ A-D & -3.7 & -24.2 & 11.0 & 5.5 & -2.8 & 15.5 \\ A-E & -5.3 & -6.7 & 9.4 & 3.8 & 0.3 & 7.6 \\ A-F & 1.7 & 1.8 & -10.9 & -8.2 & -3.9 & 6.6 \\ A-G & 6.3 & -1.0 & 6.0 & 30.3 & 10.4 & 13.7 \\ A-H & 1.6 & -17.9 & 18.1 & 23.2 & 6.2 & 18.5 \\ B-C & 3.0 & 3.5 & -13.1 & 9.3 & 0.7 & 9.6 \\ B-D & -4.8 & -32.0 & -9.8 & -5.4 & -13.0 & 12.9 \\ B-E & -6.0 & -16.7 & 10.6 & -13.0 & -6.3 & 12.1 \\ B-F & 1.5 & -7.1 & -17.1 & -28.3 & -12.8 & 12.9 \\ B-G & 6.2 & -11.9 & 6.4 & 20.0 & 5.2 & 13.1 \\ B-H & 2.8 & -25.6 & 11.7 & 7.9 & -0.8 & 16.9 \\ C-D & -9.3 & -37.7 & 4.6 & -22.2 & -16.2 & 18.0 \\ C-E & -10.1 & -20.3 & 10.8 & -16.6 & -9.1 & 13.9 \\ C-F & -2.1 & -10.9 & -9.3 & -34.8 & -14.3 & 14.2 \\ C-G & 2.5 & -16.4 & 7.5 & 13.3 & 1.7 & 12.9 \\ C-H & -0.6 & -31.7 & 19.3 & -0.9 & -3.5 & 21.1 \\ D-E & -1.3 & 17.7 & 1.4 & 1.7 & 4.9 & 8.7 \\ D-F & 6.7 & 27.5 & -14.6 & -4.6 & 3.8 & 18.1 \\ D-G & 11.2 & 21.8 & -9.7 & 34.0 & 14.3 & 18.5 \\ D-H & 7.3 & 8.3 & 10.7 & 23.4 & 12.4 & 7.5 \\ E-F & 8.0 & 9.2 & -19.1 & -14.4 & -4.1 & 14.8 \\ E-G & 12.2 & 4.2 & -8.5 & 33.9 & 10.5 & 17.8 \\ E-H & 7.5 & -11.4 & 3.1 & 13.0 & 3.1 & 10.5 \\ F-G & 4.3 & -5.9 & 8.4 & 42.9 & 12.4 & 21.2 \\ F-H & 0.4 & -21.3 & 24.5 & 30.0 & 8.4 & 23.6 \\ G-H & -3.9 & -15.2 & 17.2 & -15.3 & -4.3 & 15.3 \\ \(\sigma_{\rm pairs}\) & 5.7 & 17.0 & 12.2 & 20.4 & 8.8 & \\ \hline \end{tabular} **Table 3** _continued on next page_ **Table 3**_continued_ \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{Radial Velocities (km s\({}^{-1}\))} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. 
\\ Pair \({}^{\mbox{\small\it 4}}\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline 199-1 & & & & & & \\ \hline A-B & -9.7 & -8.7 & 18.4 & -7.4 & -1.9 & 13.5 \\ A-C & -4.2 & -16.5 & -8.8 & 11.0 & -4.6 & 11.6 \\ A-D & -9.9 & -23.3 & -4.3 & -15.8 & -13.3 & 8.2 \\ A-E & 7.2 & -28.4 & -7.3 & 19.2 & -2.3 & 20.5 \\ A-F & 1.5 & -2.0 & -8.4 & -10.2 & -4.8 & 5.5 \\ A-G & -0.4 & -6.6 & -7.5 & -28.1 & -10.6 & 12.0 \\ A-H & -5.8 & -1.7 & -5.1 & -4.1 & -4.1 & 1.8 \\ B-C & 5.5 & -9.0 & -15.7 & 21.5 & 0.6 & 16.5 \\ B-D & -1.2 & -14.3 & -12.3 & -0.4 & -7.0 & 7.3 \\ B-E & 16.7 & -20.3 & -10.0 & 26.5 & 3.2 & 22.0 \\ B-F & 11.3 & 7.1 & -13.8 & -4.9 & -0.1 & 11.5 \\ B-G & 9.8 & 2.8 & -22.3 & -15.0 & -6.2 & 15.0 \\ B-H & 4.1 & 6.2 & -15.1 & 4.9 & 0.0 & 10.1 \\ C-D & -7.4 & -4.6 & 0.2 & -20.5 & -8.1 & 8.8 \\ C-E & 11.2 & -10.7 & 1.8 & 11.0 & 3.3 & 10.3 \\ C-F & 5.0 & 14.9 & -8.3 & -17.1 & -1.4 & 14.2 \\ C-G & 4.9 & 11.3 & -2.6 & -31.3 & -4.4 & 18.8 \\ C-H & -1.6 & 14.6 & -1.0 & -8.3 & 0.9 & 9.7 \\ D-E & 18.9 & -3.4 & 1.8 & 32.6 & 12.5 & 16.5 \\ D-F & 12.9 & 21.0 & -0.9 & 0.9 & 8.5 & 10.3 \\ D-G & 12.3 & 15.4 & -5.6 & -11.4 & 2.7 & 13.2 \\ D-H & 5.6 & 19.7 & -4.0 & 8.6 & 7.5 & 9.8 \\ E-F & -5.5 & 27.5 & -9.2 & -52.5 & -9.9 & 32.8 \\ E-G & -6.7 & 23.0 & -14.5 & -48.9 & -11.8 & 29.6 \\ E-H & -12.7 & 25.3 & -4.4 & -20.6 & -3.1 & 20.1 \\ F-G & -0.8 & -4.0 & -9.3 & -14.5 & -7.2 & 6.0 \\ F-H & -6.8 & -0.2 & 9.4 & 15.1 & 4.4 & 9.7 \\ G-H & -5.6 & 2.9 & 3.6 & 21.6 & 5.6 & 11.5 \\ \(\sigma_{\rm pairs}\) & 8.6 & 15.2 & 8.2 & 21.2 & 6.4 & \\ \hline \end{tabular} **Table 3**_continued_ \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{Radial Velocities (km s\({}^{-1}\))} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. 
\\ Pair \(a\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline \hline 277-2 & & & & & & \\ \hline A-B & -11.2 & -17.7 & -12.4 & -1.8 & -10.8 & 6.6 \\ A-C & -12.4 & -10.5 & -7.9 & -12.4 & -10.8 & 2.1 \\ A-D & -3.9 & -20.1 & -8.8 & 0.7 & -8.0 & 8.9 \\ A-E & -5.2 & -25.0 & -6.0 & -8.6 & -11.2 & 9.3 \\ A-F & -5.5 & -35.9 & -10.6 & -13.4 & -16.4 & 13.4 \\ A-G & -18.0 & -45.5 & -3.0 & 15.3 & -12.8 & 25.7 \\ A-H & -6.2 & -19.5 & 6.7 & 0.4 & -4.6 & 11.2 \\ B-C & -2.7 & 7.5 & 6.4 & -8.3 & 0.7 & 7.5 \\ B-D & 6.3 & -1.3 & 4.9 & 3.8 & 3.4 & 3.3 \\ B-E & 4.9 & -9.0 & 2.5 & -4.9 & -1.6 & 6.5 \\ B-F & 5.4 & -22.7 & -0.5 & -8.7 & -6.6 & 12.2 \\ B-G & -7.1 & -27.3 & 7.4 & 21.4 & -1.4 & 20.8 \\ B-H & 3.5 & -0.7 & 17.4 & 3.3 & 5.9 & 7.9 \\ C-D & 8.5 & -8.7 & -1.3 & 12.2 & 2.7 & 9.5 \\ C-E & 7.4 & -16.8 & -3.9 & 4.4 & -2.2 & 10.8 \\ C-F & 7.6 & -28.9 & -6.9 & -0.1 & -7.1 & 15.7 \\ C-G & -4.2 & -36.3 & 1.0 & 29.8 & -2.4 & 27.1 \\ C-H & 6.2 & -8.1 & 11.8 & 10.7 & 5.2 & 9.1 \\ D-E & -1.5 & -7.7 & -2.6 & -7.9 & -4.9 & 3.3 \\ D-F & -1.1 & -21.2 & -4.1 & -12.1 & -9.7 & 9.0 \\ D-G & -13.9 & -27.3 & 1.8 & 18.0 & -5.3 & 19.6 \\ D-H & -2.7 & 0.4 & 13.5 & -0.6 & 2.6 & 7.4 \\ E-F & -0.1 & -10.6 & -3.0 & -3.1 & -4.2 & 4.5 \\ E-G & -11.7 & -16.7 & 4.3 & 27.2 & 0.8 & 19.7 \\ E-H & -0.8 & 6.5 & 14.8 & 6.4 & 6.7 & 6.4 \\ F-G & -11.3 & -6.4 & 7.9 & 31.5 & 5.4 & 19.2 \\ F-H & -0.8 & 20.1 & 18.1 & 10.6 & 12.0 & 9.5 \\ G-H & 11.5 & 27.3 & 9.8 & -19.1 & 7.4 & 19.3 \\ \(\sigma_{\rm pairs}\) & 7.7 & 16.5 & 8.5 & 13.5 & 7.1 & \\ \hline \end{tabular} **Table 3** _continued on next page_ **Table 3**_(continued)_ \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{Radial Velocities (km s\({}^{-1}\) )} \\ \cline{2-7} Cross & N v & N v+He ii & H\(\delta\)/He ii & H\(\gamma\)/He ii & Mean & Std. Dev. \\ Pair \(a\) & \(\lambda\)4946 & \(\lambda\)4603-4686 & \(\lambda\)4100 & \(\lambda\)4339 & (regions) & (regions) \\ \hline LMCe159-1 & & & & & & \\ \hline A-B & 1.2 & -10.8 & 5.4 & -3.5 & -1.9 & 6.9 \\ A-C & -10.1 & -12.5 & 5.9 & 2.8 & -3.5 & 9.2 \\ A-D & 5.6 & -16.5 & -5.8 & -5.7 & -5.6 & 9.0 \\ A-E & 5.6 & 5.1 & -10.1 & -10.9 & -2.6 & 9.2 \\ A-F & -6.3 & -2.7 & -7.8 & -4.0 & -5.2 & 2.3 \\ A-G & 3.2 & -1.6 & 0.0 & 19.1 & 5.2 & 9.5 \\ A-H & 4.4 & 5.9 & 0.3 & 10.7 & 5.3 & 4.3 \\ B-C & -9.4 & -2.7 & 2.5 & 5.9 & -0.9 & 6.7 \\ B-D & 5.0 & -7.1 & -6.7 & -2.0 & -2.7 & 5.6 \\ B-E & 6.3 & 10.5 & -16.2 & -10.9 & -2.6 & 13.0 \\ B-F & -6.0 & 5.7 & -15.2 & -5.3 & -5.2 & 8.5 \\ B-G & 2.7 & 11.1 & -9.3 & 24.3 & 7.2 & 14.1 \\ B-H & 4.0 & 18.0 & -2.5 & 12.9 & 8.1 & 9.1 \\ C-D & 15.1 & -1.7 & -10.5 & -6.3 & -0.9 & 11.2 \\ C-E & 16.7 & 12.5 & -20.7 & -13.0 & -1.1 & 18.5 \\ C-F & 3.1 & 9.6 & -15.8 & -9.6 & -3.2 & 11.6 \\ C-G & 13.7 & 16.5 & -7.2 & 26.0 & 12.3 & 14.0 \\ C-H & 14.7 & 18.6 & -8.7 & 7.1 & 7.9 & 12.1 \\ D-E & 1.2 & 16.9 & -5.4 & -7.0 & 1.4 & 10.9 \\ D-F & -11.4 & 12.9 & -5.5 & -4.0 & -2.0 & 10.4 \\ D-G & -1.8 & 18.2 & 2.8 & 28.0 & 11.8 & 13.8 \\ D-H & -0.8 & 22.0 & 3.0 & 16.6 & 10.2 & 10.9 \\ E-F & -13.4 & -6.6 & -1.7 & 5.4 & -4.1 & 7.9 \\ E-G & -3.2 & -2.0 & 10.2 & 35.6 & 10.1 & 18.0 \\ E-H & -1.6 & 2.6 & 9.2 & 21.4 & 7.9 & 10.0 \\ F-G & 10.3 & 2.6 & 10.9 & 28.9 & 13.2 & 11.1 \\ F-H & 11.9 & 7.8 & 8.3 & 15.9 & 11.0 & 3.8 \\ G-H & 1.5 & 3.3 & -0.6 & -11.3 & -1.8 & 6.5 \\ \(\sigma_{\rm pairs}\) & 8.2 & 10.3 & 8.5 & 14.6 & 6.3 & \\ \hline \end{tabular} \({}^{a}\)Identification of pair spectra are given by the "Designation" column in Table 2.
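For readers who wish to reproduce the summary statistics, the per-pair rows of Table 3 map directly onto the \(E/I\) computation of Section 3. A minimal sketch (ours), seeded with the first three LMC079-1 rows for illustration; the full set of 36 pairs is needed for a meaningful external error:

```python
import numpy as np

# Per-pair, per-region relative velocities (km/s): the first rows of
# Table 3 for LMC079-1 (N v 4946, N v+He ii, Hdelta/He ii, Hgamma/He ii).
pairs = np.array([
    [ 9.9,  9.8, 11.5, 18.4],  # A-B
    [12.6, 11.1,  4.6,  8.6],  # A-C
    [16.8,  5.1, 11.3, 11.2],  # A-D
])

pair_means = pairs.mean(axis=1)                # "Mean (regions)" column
internal_I = pairs.std(axis=1, ddof=1).mean()  # average of "Std. Dev." column
external_E = 2.0 * pair_means.std(ddof=1)      # 2 x std. dev. of the means
print(f"I = {internal_I:.1f} km/s, E/I = {external_E / internal_I:.2f}")
```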
2310.11331
TOB-SVD: Total-Order Broadcast with Single-Vote Decisions in the Sleepy Model
Over the past years, distributed consensus research has extended its focus towards addressing challenges in large-scale, permissionless systems, such as blockchains. This shift is characterized by the need to accommodate dynamic participation, contrasting the traditional approach of a static set of continuously online participants. Works like Bitcoin and the sleepy model have set the stage for this evolving framework. Notable contributions from Momose and Ren (CCS 2022) and subsequent works have introduced Total-Order Broadcast protocols leveraging Graded Agreement primitives and supporting dynamic participation. However, these approaches often require multiple phases of voting per decision, creating a potential bottleneck for real-world large-scale systems. Addressing this, our paper introduces TOB-SVD, a novel Total-Order Broadcast protocol in the sleepy model, which is resilient to up to 1/2 of adversarial participants. TOB-SVD requires only a single phase of voting per decision in the best case and achieves lower expected latency compared to existing approaches offering the same optimal adversarial resilience. This work paves the way to more practical Total-Order Broadcast protocols to be implemented in real-world systems where a large number of participants are involved simultaneously and their participation level might fluctuate over time.
Francesco D'Amato, Roberto Saltini, Thanh-Hai Tran, Luca Zanolini
2023-10-17T15:11:14Z
http://arxiv.org/abs/2310.11331v2
# Streamlining Sleepy Consensus: Total-Order Broadcast with Single-Vote Decisions in the Sleepy Model ###### Abstract Over the past years, distributed consensus research has shifted its focus towards addressing challenges in large-scale, permissionless systems, such as blockchains. This shift is characterized by the need to accommodate dynamic participation, contrasting the traditional approach of a static set of continuously online participants. Works like Bitcoin and the Sleepy Model have set the stage for this developing framework. Notable contributions from Momose and Ren (CCS 2022) and subsequent works have introduced Total-Order Broadcast protocols leveraging Graded Agreement primitives and supporting dynamic participation, though often requiring multiple rounds of voting per decision - a potential bottleneck for real-world large-scale systems. Addressing this, our paper presents a novel Total-Order Broadcast protocol in the Sleepy Model resilient to up to \(1/2\) adversarial participants, requiring just a single round of voting per decision. This work paves the way to more practical Total-Order Broadcast protocols to be implemented in real-world systems where a large number of participants are involved simultaneously and their participation level might fluctuate over time. ## 1 Introduction Distributed consensus research has been reshaped in recent years to tackle the challenges posed by large-scale, permissionless systems, such as blockchains. Unlike traditional consensus-solving methods where participants are assumed to be continuously online and actively engaged in the protocol, this new paradigm captures fluctuating participation levels among network participants. This approach was implicitly introduced by Bitcoin's protocol [12], and was later refined and formalized by Pass and Shi in their "Sleepy Model" [13]. We refer to consensus (or, in our context, Total-Order Broadcast) protocols supporting dynamic participation as _dynamically available_. Momose and Ren's research [11] laid the foundation for dynamically available Total-Order Broadcast protocols (MR), sparking a series of subsequent studies [9, 10, 8, 2]. Notably, the protocols stemming from these works share a common structural theme: they all leverage a Graded Agreement (GA) primitive, albeit with different implementations and properties. For example, the Total-Order Broadcast protocol proposed by Momose and Ren [11] utilizes a Graded Agreement protocol resilient to \(1/2\) adversarial participants. However, their TOB protocol is rather impractical due to a high latency of \(16\Delta\). In contrast, the subsequent study by Malkhi, Momose, and Ren [9] (MMR) proposes two protocols (\(1/3\)MMR and \(1/4\)MMR) which eliminate the need for stable participation and improve the latency to \(3\Delta\) and \(2\Delta\), respectively, but lower the adversarial tolerance to \(1/3\) and \(1/4\), respectively. A later refinement by the same authors [10] (MMR2) reverts to tolerating minority corruption, while maintaining a comparable latency of \(4\Delta\). A concurrent and independent work [8] (GL) also uses a Graded Agreement primitive (called Commit-Adopt in their work) and achieves \(1/2\) adversarial resilience, with latency \(6\Delta\). The three currently existing dynamically available protocols with optimal adversarial resilience and latency constant in the security parameter [11, 10, 8] all share the drawback of requiring multiple rounds of voting for each decision. 
In practice, these protocols all operate in views, and within each view a block is proposed and a decision is made. To do so, two [8], three [10] and five [11] instances of Graded Agreement are invoked within a view, each including four, three and two rounds of voting, respectively. This hinders their practicality in real-world systems where a large number of participants are involved simultaneously, as is the case in the Ethereum blockchain, because voting rounds can be quite slow and expensive in such systems (for example due to the need to perform _vote aggregation_). We extend this line of work, with the aim of improving the practicality of dynamically available protocols, in particular for systems with a large number of participants. We introduce a Total-Order Broadcast protocol that tolerates up to \(1/2\) adversarial participants, has comparable latency to the state-of-the-art in this respect, the MMR2 protocol [10] (slightly better in the average-case and slightly worse in the best-case, as shown in Table 1), _and_ only necessitates a single instance of Graded Agreement, and a single voting round, per view. In comparison, the MMR2 protocol [10] requires three GA instances per view, each with two voting rounds. Moreover, unlike MMR2 [10] and GL [8], our protocol is friendly to vote aggregation, because in each voting round all participants are expected to send the same votes. However, these simplifications reintroduce the need for a minor stable participation assumption, requiring participants to be awake for at least \(2\Delta\) in order to contribute to the security of the protocol. We believe this trade-off to be entirely reasonable, because completely doing away with stable participation requires the unrealistic assumption that, upon going online, participants immediately receive all the messages they should have received while offline. A more realistic approach is for participants to execute a _recovery protocol_ upon going online, as prescribed by [11, 10] precisely for this reason, in which case some stable participation is anyway required. The remainder of this work is structured as follows. Section 2 presents the system model and essential definitions. Section 3 gives a technical outline of our protocols and results. In Section 4, we revisit the sleepy model, expanding upon the notation first introduced by Malkhi, Momose, and Ren [10]. In Section 5, we introduce two Graded Agreement protocols. Specifically, Section 5.1 describes a GA with two grades, serving as a foundation for a subsequent GA protocol with three grades, described in Section 5.2. This Graded Agreement with three grades is then used in our Total-Order Broadcast protocol presented in Section 6. Related works are discussed in Section 7. A Total-Order Broadcast that builds upon our Graded Agreement protocol with two grades can be found in Appendix A. In Appendix B we enhance our TOB protocol introduced in Section 6 by making it resilient to bounded periods of asynchrony. We do so by building upon our previous works [4, 2]. Finally, conclusions are drawn in Section 8. ## 2 Model and definitions System model. We consider a system of \(n\) _validators_ \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) that communicate with each other. The validators interact by exchanging messages over a synchronous network with delay bound \(\Delta>0\). We assume that validators have synchronized clocks and that every validator is identified by a unique cryptographic identity and the public keys are common knowledge. 
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
 & Fig 4 & Fig 6 & MR [11] & MMR2 [10] & GL [8] & 1/3 MMR [9] & 1/4 MMR [9] & Goldfish [3] & PS [13] \\ \hline
Adversarial resilience & 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 1/3 & 1/4 & 1/2 & 1/2 \\ \hline
Best-case latency & \(6\Delta\) & \(4\Delta\) & \(16\Delta\) & \(4\Delta\) & \(6\Delta\) & \(3\Delta\) & \(2\Delta\) & \(O(\kappa)^{*}\) & \(O(\kappa)\) \\ \hline
Average-case latency & \(8\Delta\) & \(6.5\Delta\) & \(24\Delta\) & \(9\Delta\) & \(11\Delta\) & \(4\Delta\) & \(3\Delta\) & \(O(\kappa)^{*}\) & \(O(\kappa)\) \\ \hline
Block-time & \(4\Delta\) & \(5\Delta\) & \(16\Delta\) & \(10\Delta\) & \(10\Delta\) & \(2\Delta\) & \(2\Delta\) & \(3\Delta\) & \(\Delta\) \\ \hline
Voting rounds per block & 1 & 2 & 10 & 9 & 9 & 2 & 1 & 1 & 0 \\ \hline
\end{tabular}
\end{table}

Table 1: We compare all existing dynamically available TOB protocols we are aware of. Let the _confirmation time_ of a transaction be the time between its submission and the decision of a log containing it. We call _best-case latency_ the minimum confirmation time of a transaction, where the minimum is over all submission times; equivalently, it is the minimum time between a proposal and its decision. We call _average-case latency_ the expected confirmation time of a transaction with a random submission time; equivalently, it is the sum of the best-case latency and half of the time between adjacent proposals, the _block time_. \({}^{*}\) For Goldfish [3], we show the latency in conditions of low participation, when the optimistic fast confirmation rule does not apply.

A protocol for \(\mathcal{V}\) consists of a collection of programs with instructions for all validators.

Failures. A validator that follows its protocol during an execution is called _honest_. On the other hand, a faulty validator may deviate arbitrarily from its specification, e.g., when corrupted by an adversary; we refer to such validators as _adversarial_ or _Byzantine_, interchangeably.

Adversarial power. The adversary can permanently corrupt honest validators throughout the execution in a _mildly adaptive_ way. To be more specific, if the adversary corrupts an honest validator \(v_{i}\) at time \(t\), then \(v_{i}\) becomes adversarial only at time \(t+\Delta\). In other words, we assume that the adversary does not have the capability to corrupt honest validators immediately. Instead, there is a delay, represented by \(\Delta\), before it can do so. This delay is essential for the effective operation of a VRF-based leader election, as detailed in Section 2.1. The adversary can also _fully adaptively_ either _put validators to sleep_, preventing them from participating in the protocol, or _wake them up_, enabling them to participate in the protocol. Validators subjected to the sleeping state are referred to as _asleep_, while those actively participating in the protocol are designated _awake_. In other words, we are considering (a variant of) the sleepy model [13], which we fully specify in Section 4.

### 2.1 Graded Agreement and Total-Order Broadcast

Logs. A _log_ is defined as a finite sequence of _blocks_ \(b_{i}\), represented as \(\Lambda=[b_{1},b_{2},\ldots,b_{k}]\). Here, a block represents a batch of transactions and contains a reference to its parent block. In this work, we assume that there exists an external pool of transactions. Honest validators retrieve transactions from this pool and validate them using a specified validity predicate before batching them into blocks.
Given two logs, \(\Lambda\) and \(\Lambda^{\prime}\), the notation \(\Lambda\preceq\Lambda^{\prime}\) indicates that \(\Lambda\) serves as a prefix to \(\Lambda^{\prime}\), implying \(\Lambda^{\prime}\) extends \(\Lambda\). Two logs are called _compatible_ if one acts as a prefix for the other. Conversely, if neither log is a prefix of the other, they are deemed to be in conflict.

Graded Agreement. We define a generic Graded Agreement (GA) primitive with \(k\geq 2\) grades. In such a primitive, each validator has an input log \(\Lambda\), and validators can output _logs_ with grade \(0\leq g\leq k-1\), which we denote with the pair \((\Lambda,g)\). We refer to the phase where a log \(\Lambda\) is input into the primitive as the _input_ phase, and the phase during which the output is retrieved from the Graded Agreement primitive as the _output_ phase. For this reason, we often refer to a log input by a validator \(v_{i}\) into the GA as _the input of validator \(v_{i}\)_. Such a primitive can have different output phases (potentially up to \(k\)) but only one input phase. A validator \(v_{i}\) that is awake in the output phase for grade \(g\) _may attempt_ to output a log with grade \(g\), i.e., to run an output procedure, possibly resulting in outputting some log with grade \(g\). If validator \(v_{i}\) attempts to do so, we say that \(v_{i}\) _participates in the output phase for grade \(g\)_. The criteria which validators use to decide whether to participate in output phases are specific to each implementation of the Graded Agreement primitive, with the caveat that _honest validators that are always awake participate in every output phase_, so that outputs can at a minimum be guaranteed when participation is _stable_. We require the Graded Agreement primitive to satisfy the following properties:

**Consistency:** If an honest validator outputs \((\Lambda,g)\) for \(g>0\), then no honest validator outputs \((\Lambda^{\prime},g)\) for \(\Lambda^{\prime}\) conflicting with \(\Lambda\).

**Graded Delivery:** If an honest validator outputs \((\Lambda,g)\) for \(g>0\), any honest validator that participates in the output phase for grade \(g-1\) outputs \((\Lambda,g-1)\).

**Validity:** If all honest validators awake at time \(t=0\) input an extension of a log \(\Lambda\), then all honest validators participating in the output phase for a grade \(g\) output \((\Lambda,g)\).

**Integrity:** If no honest validator inputs a descendant of \(\Lambda\), then no honest validator outputs \((\Lambda,*)\).

**Uniqueness:** An honest validator does not output \((\Lambda,g)\) and \((\Lambda^{\prime},g)\) for \(\Lambda\) conflicting with \(\Lambda^{\prime}\).

Note that, for \(g>0\), consistency already implies uniqueness. It is only a separate property for outputs of grade \(0\).

Total-Order Broadcast. A Byzantine Total-Order Broadcast (TOB) protocol ensures that all the honest validators _deliver_ the same log \(\Lambda\). A protocol for Byzantine Total-Order Broadcast satisfies the following properties:

**Safety:** If two honest validators deliver logs \(\Lambda_{1}\) and \(\Lambda_{2}\), then \(\Lambda_{1}\) and \(\Lambda_{2}\) are compatible.

**Liveness:** For every valid transaction there exists a log \(\Lambda\) containing it and a time \(t\) such that all honest validators awake for sufficiently long after \(t\) deliver \(\Lambda\).
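To make these definitions concrete, the following minimal Python sketch captures the prefix relation on logs and the shape of the GA interface; every name in it (`Log`, `is_prefix`, `GradedAgreement`, `try_output`) is our own illustrative choice, not part of the protocols presented in this paper. The helpers `is_prefix` and `compatible` are reused by the later sketches below.

```python
from typing import Optional, Tuple

Block = str              # a block stands in for a batch of transactions
Log = Tuple[Block, ...]  # a log is a finite sequence of blocks

def is_prefix(a: Log, b: Log) -> bool:
    """Lambda <= Lambda': `a` is a prefix of `b`."""
    return b[:len(a)] == a

def compatible(a: Log, b: Log) -> bool:
    """Two logs are compatible iff one is a prefix of the other."""
    return is_prefix(a, b) or is_prefix(b, a)

class GradedAgreement:
    """Illustrative interface of a GA with k grades 0, ..., k-1."""

    def __init__(self, k: int):
        assert k >= 2
        self.k = k

    def input(self, log: Log) -> None:
        """Input phase: broadcast this validator's input log."""
        raise NotImplementedError

    def try_output(self, grade: int) -> Optional[Log]:
        """Output phase for `grade`: a log, or None if no quorum is observed."""
        assert 0 <= grade < self.k
        raise NotImplementedError
```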
Leader election. Our Total-Order Broadcast protocols proceed in views and employ a VRF-based leader election primitive. Each validator has an associated VRF value for each view. Whenever a _proposal_ has to be made to extend the current log, validators broadcast one together with their VRF value for the current view, and priority is given to proposals with a higher VRF value. Since this is a standard tool for leader election in dynamically available protocols [11, 9, 10, 3, 8], and not the focus of this work, we use the VRF primitive informally.

As mentioned at the beginning of this section, such a leader election requires us to consider a mildly adaptive corruption model, where the adversary has a delay of time \(\Delta\) between scheduling a corruption and executing it. This appears to be necessary also in other dynamically available protocols which use this strategy. To see why, consider the usual leader election where VRF values are broadcast at time \(t\) and a leader is chosen at time \(t+\Delta\) based on the highest VRF value so far observed. Between time \(t\) and \(t+\Delta\), an adaptive adversary can observe the highest VRF value and corrupt its sender, then have it deliver an equivocating proposal only to a subset of the honest validators. This way, some subset of the honest validators only knows of a single proposal with the highest VRF value, and some other subset knows of two such proposals. We cannot then ensure that all honest validators vote for the same proposal, which is typically required by liveness arguments.

## 3 Technical outline

Graded Agreement under dynamic participation. Momose and Ren [11] introduced the first quorum-based GA protocol working in the sleepy model [13], achieving resilience to minority corruptions. Tolerating dynamic participation necessitates doing away with absolute quorums (e.g., \(1/2\) of the validators), and instead defining them based on the participation level. However, participants can have different _perceived participation levels_, making agreement challenging. To overcome this, Momose and Ren introduce the novel _time-shifted quorum_ technique. Letting the support level for a value be the number of inputs received with that value, and the perceived participation level be the number of validators from which an input has been received, their protocol works as follows at a high level:

1. (\(t=0\)) Inputs are broadcast.
2. (\(t=\Delta\)) A support level for a value \(b\) is determined and stored in \(\overline{\mathcal{E}}(b)\).
3. (\(t=2\Delta\)) Validators send a vote for \(b\) if the _current_ support level \(\mathcal{E}(b)\) is greater than half of the perceived participation level, \(\mathcal{M}_{1}\).
4. (\(t=3\Delta\)) Update the perceived participation level \(\mathcal{M}_{1}\), output \((b,1)\) if \(\overline{\mathcal{E}}(b)>\mathcal{M}_{1}/2\) and \((b,0)\) if the number of votes for \(b\) is greater than half of all received votes.

The key insight is that, if an honest validator \(v_{i}\) outputs \((b,1)\) by seeing \(\overline{\mathcal{E}}(b)^{i}>\mathcal{M}_{1}^{i}/2\), then an honest validator \(v_{j}\) awake at time \(2\Delta\) would have \(\mathcal{E}(b)^{j},\mathcal{M}_{1}^{j}\) such that \(\mathcal{E}(b)^{j}\geq\overline{\mathcal{E}}(b)^{i}>\mathcal{M}_{1}^{i}/2\geq \mathcal{M}_{1}^{j}/2\), and thus would vote for \(b\). The first inequality holds because any input seen by \(v_{i}\) at time \(\Delta\), and thus counted in \(\overline{\mathcal{E}}(b)^{i}\), is forwarded and received by \(v_{j}\) by time \(2\Delta\), and so counted in \(\mathcal{E}(b)^{j}\) as well.
The converse applies to \(\mathcal{M}_{1}^{j}/2\) and \(\mathcal{M}_{1}^{i}/2\), i.e., \(\mathcal{M}_{1}^{j}/2\) is determined time \(\Delta\) before \(\mathcal{M}_{1}^{i}/2\), justifying the last inequality. In other words, an honest validator outputting \((b,1)\) implies that all honest validators awake at time \(2\Delta\) vote for \(b\), and thus also that all honest validators output \((b,0)\).

Notably, this protocol counts _all inputs_, including equivocations, when determining \(\mathcal{E}(b)\), though it does not do so when determining \(\overline{\mathcal{E}}(b)\). This is crucial in the time-shifted quorum argument, because then all inputs that count for \(\overline{\mathcal{E}}(b)\) are guaranteed to count for \(\mathcal{E}(b)\) as well. A compromise of this approach is that it prevents this GA primitive from satisfying the _uniqueness_ property, meaning that honest validators can output multiple conflicting values with grade \(0\). The same limitation applies to the variant presented in [10]. Implementing a Total-Order Broadcast based on these primitives, as shown in these works, introduces significant complexity to the protocols.

Graded Agreement with uniqueness. Our first Graded Agreement protocol, detailed in Section 5.1, is a variant of the GA of Momose and Ren, designed to also satisfy uniqueness. To each validator we associate \(V\), the set of all received inputs _excluding equivocations_, and \(S\), the set of all validators from which an input has been received. Moreover, we let \(V^{t}\) and \(S^{t}\) denote \(V\) and \(S\) at time \(t\), while \(V^{t}_{\Lambda}\) denotes the inputs in \(V^{t}\) extending \(\Lambda\), i.e., the inputs supporting \(\Lambda\). The protocol then is as follows:

1. \((t=0)\) Inputs are broadcast.
2. \((t=\Delta)\) \(V^{\Delta}\) is stored.
3. \((t=2\Delta)\) Output \((\Lambda,0)\) if \(|V_{\Lambda}|>|S|/2\).
4. \((t=3\Delta)\) Output \((\Lambda,1)\) if \(|V^{\Delta}_{\Lambda}\cap V_{\Lambda}|>|S|/2\).

The protocol still relies on the key ideas of the time-shifted quorum technique. A notable difference is that grade \(0\) outputs in the work of Momose and Ren [11] are computed at time \(3\Delta\) using _votes_, whereas our GA protocol does not have any message other than inputs, and it computes grade \(0\) outputs at time \(2\Delta\). That said, votes in their GA are cast (almost) in the same way as grade \(0\) outputs are computed in our protocol, and they are essentially used to propagate the information forward until the end of the protocol. Another difference is the use of \(|V^{\Delta}_{\Lambda}\cap V_{\Lambda}|\) when determining outputs of grade \(1\), which is related to the treatment of equivocations. Let us initially ignore that, and pretend that we output \((\Lambda,1)\) when \(|V^{\Delta}_{\Lambda}|>|S^{3\Delta}|/2\), i.e., exactly as \(\overline{\mathcal{E}}(b)\) was used in the GA protocol of Momose and Ren [11]. Were we to disallow equivocations, we would get that the inequalities \(|V^{2\Delta,j}|\geq|V^{\Delta,i}|>|S^{3\Delta,i}|/2\geq|S^{2\Delta,j}|/2\) hold for validators \(v_{i}\) and \(v_{j}\), when \(v_{i}\) outputs \((\Lambda,1)\), analogously to \(\mathcal{E}(b)^{j}\geq\overline{\mathcal{E}}(b)^{i}>\mathcal{M}^{i}_{1}/2\geq \mathcal{M}^{j}_{1}/2\) in the previous GA. This would immediately give us that \(v_{j}\) outputs \((\Lambda,0)\), i.e., graded delivery.
However, the inclusion \(V^{\Delta,i}\subseteq V^{2\Delta,j}\) is not guaranteed once equivocations are allowed, because validator \(v_{j}\) might discard some of the inputs in \(V^{\Delta,i}\) between time \(\Delta\) and time \(2\Delta\), if such inputs turned out to be equivocations. To ensure that the supporting inputs counted in the output phase for grade \(0\) always include those counted in the output phase for grade \(1\), we use \(V^{\Delta}\cap V^{3\Delta}\) instead of just \(V^{\Delta}\). Crucially, any validator \(v_{k}\) which is seen as an equivocator by validator \(v_{j}\) at time \(2\Delta\) will also be seen as an equivocator by validator \(v_{i}\) at time \(3\Delta\), since \(v_{j}\) would forward the equivocating inputs of \(v_{k}\) upon receiving them. In particular, inputs from \(v_{k}\) will _not_ be contained in \(V^{3\Delta,i}\), and thus also not in \(V^{\Delta,i}\cap V^{3\Delta,i}\). This ensures that \(V^{\Delta,i}\cap V^{3\Delta,i}\subseteq V^{2\Delta,j}\), since any input in \(V^{\Delta,i}\setminus V^{2\Delta,j}\) is an equivocation, which will then also be absent from \(V^{3\Delta,i}\). In other words, we separate the initial determination of possible supporting inputs from the final determination of which inputs should be treated as equivocations and removed, and we _apply the time-shifted quorum technique to the set of equivocators_ as well: like the perceived participation level, the set of equivocators _increases_ when going from the output phase for grade \(0\) to the output phase for grade \(1\).

Graded Agreement with three grades. Building upon our Graded Agreement protocol with two grades, we extend it to a Graded Agreement with three grades by applying the time-shifted quorum technique _twice_. The first application happens during time \([2\Delta,4\Delta]\), and ensures the graded delivery property between grades \(0\) and \(1\), exactly in the same way as in our Graded Agreement with two grades. This application is _nested_ inside the second one, which ensures the graded delivery property between grades \(1\) and \(2\), and happens during time \([\Delta,5\Delta]\). Overall, the relevant inclusions are \(V^{\Delta}\cap V^{5\Delta}\subseteq V^{2\Delta}\cap V^{4\Delta}\subseteq V^{3\Delta}\), and \(S^{3\Delta}\subseteq S^{4\Delta}\subseteq S^{5\Delta}\) (each set can belong to a different validator). The protocol proceeds as follows (see the sketch after this list):

1. \((t=0)\) Broadcast an input log \(\Lambda\).
2. \((t=\Delta)\) \(V^{\Delta}\) is stored.
3. \((t=2\Delta)\) \(V^{2\Delta}\) is stored.
4. \((t=3\Delta)\) Output \((\Lambda,0)\) if \(|V_{\Lambda}|>|S|/2\).
5. \((t=4\Delta)\) Output \((\Lambda,1)\) if \(|V_{\Lambda}^{2\Delta}\cap V_{\Lambda}|>|S|/2\).
6. \((t=5\Delta)\) Output \((\Lambda,2)\) if \(|V_{\Lambda}^{\Delta}\cap V_{\Lambda}|>|S|/2\).
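The sketch below restates the three output rules of the listing above in Python, reusing `Log` and `is_prefix` from the earlier sketch. It is a simplification under our own naming assumptions: in the actual protocol the three checks run at times \(3\Delta\), \(4\Delta\) and \(5\Delta\), each against the \(V\) and \(S\) held at that moment.

```python
def supporters(V: dict, log: Log) -> set:
    """Senders in V whose (unique, non-equivocating) input extends `log`."""
    return {sender for sender, (lam, _t) in V.items() if is_prefix(log, lam)}

def output_grades(log: Log, V_1, V_2, V_now, S_now) -> set:
    """Grades with which `log` would be output, following the listing above.

    V_1 and V_2 are the snapshots of V stored at times Δ and 2Δ, while V_now
    and S_now stand for the V and S held at the respective output time. In
    the actual protocol the three checks run at times 3Δ, 4Δ and 5Δ, each
    against its own current V and S; collapsing them keeps the sketch short.
    """
    half = len(S_now) / 2
    grades = set()
    if len(supporters(V_now, log)) > half:                          # t = 3Δ
        grades.add(0)
    if len(supporters(V_2, log) & supporters(V_now, log)) > half:   # t = 4Δ
        grades.add(1)
    if len(supporters(V_1, log) & supporters(V_now, log)) > half:   # t = 5Δ
        grades.add(2)
    return grades
```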
Total-Order Broadcast. Total-Order Broadcast protocols can be built using Graded Agreement (GA) primitives, with some protocols invoking GA multiple times for a single decision [11, 9, 10]. At its core, TOB aims to guarantee the total-order delivery of messages among honest participants, even in the face of challenges like dynamic participation. However, existing TOB protocols [11, 9, 10, 8] have grappled with challenges related to latency [11], resilience [9], and scalability [8]. Multiple invocations of GA for a single decision might exacerbate scalability and latency issues, especially when considering implementation in blockchain networks with hundreds of thousands of participants weighing in on every decision.

Notably, Malkhi, Momose, and Ren [9] developed a TOB protocol (1/4MMR) built upon a GA with grades 0, 1, and 2, which streamlined the process with just a single round of voting. Their design, however, had an adversarial resiliency limited to 1/4. Building upon these insights, our work introduces a GA-based TOB protocol, which, akin to 1/4MMR, necessitates only a single GA invocation for each decision, but enhances adversarial resiliency to 1/2. We believe that our refinements improve the feasibility of deploying such protocols in practical, large-scale networks where many participants might be involved in every decision-making process. Specifically, our TOB protocol proceeds over a series of views, each spanning a duration of \(4\Delta\). Every view \(v\) initiates a Graded Agreement \(GA_{v}\) with grades 0, 1, and 2, which extends into view \(v+1\) and overlaps there with the following \(GA_{v+1}\). This structure implies that a single view does not encapsulate a full cycle of a Graded Agreement; instead, a \(GA_{v}\) initiated in view \(v\) concludes its operations only in the succeeding view \(v+1\). At a high level, our protocol operates through the following phases, depicted in Figure 1 (and sketched in code at the end of this section).

1. **Proposal Phase:** This phase corresponds to the grade 0 output phase of \(GA_{v-1}\). Validators propose a log extending their _candidate_ log, the grade 0 output from \(GA_{v-1}\).
2. **Voting Phase:** This phase corresponds to both the input stage of \(GA_{v}\) and the grade 1 output phase of \(GA_{v-1}\). Validators with a grade 1 output treat it as a _lock_. To maintain safety, they only input to \(GA_{v}\) either the lock or a proposal that extends it.
3. **Decision Phase:** This phase corresponds to the grade 2 output phase of \(GA_{v-1}\). Validators _decide_ their grade 2 output.

Figure 1: In the middle, views \(v-1\), \(v\), and \(v+1\) of our Total-Order Broadcast protocol, each with its three phases. At the top and bottom, respectively, \(GA_{v-1}\) and \(GA_{v}\) – the Graded Agreement protocols that are run as part of the TOB protocol. Arrows indicate that outputs of a GA are used by a parallel phase of the TOB and/or the next GA: outputs of grade 0 of \(GA_{v-1}\) are extended by proposals of view \(v\), outputs of grade 1 of \(GA_{v-1}\) are extended by votes in the TOB, which _exactly correspond_ to the inputs to \(GA_{v}\), while outputs of grade 2 of \(GA_{v-1}\) are decided in view \(v\).

Safety and liveness follow from graded delivery, uniqueness, and validity of our Graded Agreement protocol. By construction, decided logs are those output with grade 2, locks are logs output with grade 1, and candidate logs are those output with grade 0. Graded delivery ensures that, if an honest validator outputs a log \((\Lambda,g)\), all honest validators participating in the output phase for grade \(g-1\) output \((\Lambda,g-1)\), and uniqueness moreover guarantees that they then do not output any conflicting log with grade \(g-1\). In other words, graded delivery _together with uniqueness_ ensures the _decide-lock-candidate_ relationship, i.e., that decided logs are extended by locks, and locks by candidate logs. Therefore, both proposals from honest leaders and decided logs are voted (input to the next GA) by all honest participants, crucial respectively for _liveness_ and _safety_. Once we have agreement on decided logs and honest proposals, both the liveness and safety arguments are completed by leveraging the validity property. In short, if all honest validators vote for logs extending \(\Lambda\), i.e., input them to the next GA, then validity ensures that all honest validators output \(\Lambda\) with grades 1 and 2 in the next GA. The latter means that they all decide \(\Lambda\), guaranteeing liveness. The former means that all honest validators again vote for an extension of \(\Lambda\), and by induction this also happens in all subsequent views, guaranteeing safety.
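Putting the phases on a timeline (using the view timing \(t_{v}=4\Delta v\) specified in Section 6), a view interleaves with the two overlapping GA instances roughly as in the following illustrative sketch; the function and strings are our own shorthand, not the protocol of Figure 4.

```python
DELTA = 1.0  # the network delay bound, as an abstract unit of time

def view_timeline(v: int) -> list:
    """Timeline of view v, with t_v = 4Δ·v as specified in Section 6."""
    t = 4 * DELTA * v
    return [
        (t,             f"propose: grade-0 output of GA_{v-1} is the candidate; leader extends it"),
        (t + DELTA,     f"vote: grade-1 output of GA_{v-1} is the lock; input lock/extension to GA_{v}"),
        (t + 2 * DELTA, f"decide: grade-2 output of GA_{v-1}; also snapshot V^Δ of the ongoing GA_{v}"),
        (t + 3 * DELTA, f"snapshot V^(2Δ) of the ongoing GA_{v}"),
    ]
```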
## 4 Sleepy model

In the original formulation of the sleepy model [13], the adversary can at any time determine the state, either _awake_ or _asleep_, of any honest validator. Validators in the asleep state do not participate in the protocol; messages intended for them during this state are enqueued, and delivered in the time step after the validator is reawakened. Additionally, adversarial validators remain always awake. The sleepy model, in fact, does not allow for fluctuating participation among Byzantine validators. The reason is a problem called _costless simulation_ [5]: adversarial validators can exploit both past and future activity to compromise, e.g., consensus. Upon waking, a Byzantine validator might mimic continuous past engagement, generating messages to retroactively alter outcomes (_backward simulation_). Additionally, these validators can conduct _forward simulations_ by sharing their secret keys with allied adversarial validators, enabling them to act as their proxies, creating an illusion of persistent activity.

Variants of the sleepy model, and subsequent consensus protocols working in those models, have been put forth [9, 10, 3, 4, 2], which empower the adversary with the capability to grow the set of corrupted validators over time. Given this expansion, these models can, at a minimum, address backward simulation. This particular approach, which is the one that we adopt in this paper, is referred to as the _growing adversary_ model [10]. Malkhi, Momose, and Ren [10] provide a formalization of this model using specific parameters. We adopt (a slight variant of) this formalization for the remaining part of our work. Given any time \(t\geq 0\):

1. \(H_{t}\) denotes the set of honest validators awake at time \(t\).
2. \(B_{t}\) represents the set of Byzantine validators awake at time \(t\).
3. For any time interval \([t_{1},t_{2}]\), we define \[B_{t_{1},t_{2}}=\bigcup_{t\in[t_{1},t_{2}]}B_{t}\] to be the set of adversarial validators awake during the span of this interval. Let \(f_{t_{1},t_{2}}\) represent the cardinality of \(B_{t_{1},t_{2}}\).
4. For the interval \([t_{1},t_{2}]\) and a time \(t_{3}\), we introduce \[H_{t_{1},t_{2},t_{3}}=\Big(\bigcap_{t\in[t_{1},t_{2}]}H_{t}\Big)\cap\overline{B_{t_{3}}}.\] This set contains the honest validators that were awake throughout \([t_{1},t_{2}]\) and remain honest up to time \(t_{3}\). Let \(h_{t_{1},t_{2},t_{3}}\) represent the cardinality of \(H_{t_{1},t_{2},t_{3}}\).

Let \(T_{f}\), \(T_{b}\), \(T_{s}\), \(T_{c}\) be non-negative integers representing specific time constants. Furthermore, let \(\rho\geq 1\) denote a predetermined ratio of honest to adversarial participants.
A system is compliant with the \((T_{f},T_{b},T_{s},T_{c},\rho)\)-_sleepy model_ if and only if, for every time instance \(t\geq 0\), the following condition is satisfied:

\[h_{t-T_{s},t,t+T_{c}}>\rho\cdot f_{t-T_{f},t+T_{b}} \tag{1}\]

In other words, the validators that are awake and honest _throughout_ the period \([t-T_{s},t]\), _and still not corrupted_ by time \(t+T_{c}\), outnumber the Byzantine validators awake _at any point_ in the period \([t-T_{f},t+T_{b}]\) by a factor of \(\rho\). Given a choice of parameters \((T_{f},T_{b},T_{s},T_{c},\rho)\), we say that a Total-Order Broadcast protocol is _dynamically available_ if it is a Total-Order Broadcast in the \((T_{f},T_{b},T_{s},T_{c},\rho)\)-sleepy model.

The \((T_{f},T_{b},T_{s},T_{c},\rho)\)-sleepy model is inspired by the \((T_{f},T_{b},\alpha)\)-sleepy model introduced by Malkhi, Momose, and Ren [10]. In their formulation, as in ours, \(T_{f}\) designates the duration for which forward simulation is considered, while \(T_{b}\) delineates the analogous period for backward simulation. In other words, Byzantine validators are counted for an extra time \(T_{f}\) forward and \(T_{b}\) backward with respect to \(t\). Our model is additionally augmented with \(T_{s}\) in order to encapsulate a _stable participation_ requirement. This is emphasized by considering only those honest validators that remain awake throughout a time span of \(T_{s}\). Moreover, it is augmented with \(T_{c}\), to express the requirement that not too many honest validators be corrupted within time \(T_{c}\) after performing some action, for example to prevent them from equivocating soon after voting.

As in other protocols that are based on _virtual resources_ [1], like for example proof-of-stake protocols, we always take \(T_{f}=\infty\), because the adversary can always transfer its private keys to other adversarial validators prior to entering a sleep state, enabling these keys to be used _indefinitely_. In the rest of this work, we therefore assume \(B_{t}\) to be monotonically non-decreasing. Accordingly, we write \(B_{t}\) and \(f_{t}\) instead of \(B_{t^{\prime},t}\) and \(f_{t^{\prime},t}\), for any \(t^{\prime}\leq t\).

In this work we present a Total-Order Broadcast protocol with latency \(6\Delta\), which works in the \((\infty,5\Delta,2\Delta,5\Delta,1)\)-sleepy model and requires only a single round of voting per decision, and a Total-Order Broadcast protocol with latency \(4\Delta\), which works in the \((\infty,3\Delta,2\Delta,3\Delta,1)\)-sleepy model and requires two rounds of voting per decision.
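The compliance condition (1) is mechanical to evaluate given full knowledge of the awake and corruption schedules. The following sketch is our own illustration, with discretized integer time and \(T_{f}=\infty\) as assumed above; for example, for the \((\infty,5\Delta,2\Delta,5\Delta,1)\)-sleepy model with \(\Delta=1\) one would call `compliant_at(t, H, B, Tb=5, Ts=2, Tc=5)` for every \(t\).

```python
def compliant_at(t, H, B, Tb, Ts, Tc, rho=1.0):
    """Check condition (1) at a single integer time t, taking T_f = ∞.

    With T_f = ∞ the adversarial set is monotone, so B(t) can be read as
    "all validators corrupted by time t"; H(t) is the set of honest
    validators awake at t. Both are supplied as callables on integers.
    """
    assert t >= Ts, "illustration only: we avoid negative times"
    stable_honest = set.intersection(*[set(H(s)) for s in range(t - Ts, t + 1)])
    h = len(stable_honest - set(B(t + Tc)))  # awake throughout [t-Ts, t], honest up to t+Tc
    f = len(set(B(t + Tb)))                  # adversaries counted Tb into the future
    return h > rho * f
```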
## 5 Graded Agreement protocols

In this section, we present two distinct Graded Agreement protocols. The first is a Graded Agreement with \(k=2\), drawing inspiration from the original Graded Agreement protocol introduced by Momose and Ren [11]. The second, a Graded Agreement with \(k=3\), plays a central role in subsequent sections of this paper, serving as the building block of our Total-Order Broadcast protocol, which requires only one round of votes.

### 5.1 Graded Agreement with two grades

We begin by presenting the Graded Agreement protocol with two grades, \(0\) and \(1\). The protocol lasts \(3\Delta\) time, and requires \(h_{0,0,3\Delta}>f_{3\Delta}\), i.e., that the validators which are _initially awake and honest throughout the execution_ outnumber all adversarial validators. Therefore, it in particular works in the \((\infty,3\Delta,0,3\Delta,1)\)-sleepy model.

Honest validators which are awake during the output phase for grade \(1\), at time \(t=3\Delta\), only participate in it _if they were also awake at time \(t=\Delta\)_. Once we use this Graded Agreement primitive as a building block for our Total-Order Broadcast protocol (Figure 6, in Appendix A), this translates into the _stability requirement_ \(T_{s}=2\Delta\), leading to the use of the \((\infty,3\Delta,2\Delta,3\Delta,1)\)-sleepy model instead.

Equivocations. We refer to multiple different inputs from the same validator as _equivocations_, and to any pair of such inputs as _equivocation evidence_ for its sender. Validators only ever accept and forward up to two inputs per validator.

Validator state. At all times, an honest validator keeps only two local variables, \(V\) and \(E\). First, it keeps a mapping \(V:\{1,\ldots,n\}\mapsto(\mathcal{L}\times[0,3\Delta])\cup\{\bot\}\), where \(\mathcal{L}\) is the space of all possible valid logs. The mapping associates to a validator \(v_{i}\) the pair \(V(i)=(\Lambda_{i},t_{i})\) if a unique input log \(\Lambda_{i}\) has been received from \(v_{i}\) at time \(t_{i}\), or \(V(i)=\bot\) if either none or more than one input log has been received from \(v_{i}\). In other words, \(V\) keeps track of non-equivocating inputs together with their reception time. We write \(v_{i}\in V\) if \(V(i)\neq\bot\), and write \(V_{\Lambda}\) for the set of all descendants of \(\Lambda\) recorded in \(V\), paired with their sender, i.e., \(V_{\Lambda}=\{(\Lambda^{\prime},v_{i}):v_{i}\in V,V(i)=(\Lambda^{\prime},*), \Lambda\preceq\Lambda^{\prime}\}\). Moreover, an honest validator keeps a mapping \(E:\{1,\ldots,n\}\mapsto\mathcal{L}^{2}\cup\{\bot\}\), containing a record of equivocators and equivocation evidence, i.e., \(E(i)=\bot\) if \(v_{i}\) is not known to have equivocated, and otherwise \(E(i)=(\Lambda_{i},\Lambda^{\prime}_{i})\), where \((\Lambda_{i},\Lambda^{\prime}_{i})\) is equivocation evidence for validator \(v_{i}\). As for \(V\), we write \(v_{i}\in E\) if \(E(i)\neq\bot\). A validator can compute from \(V\) and \(E\) the set \(S=\{v_{i}\in\mathcal{V}:v_{i}\in V\lor v_{i}\in E\}\) of all input senders, i.e., of all validators from which at least one input has been received. When we want to emphasize the time \(t\) at which we consider these variables, we write \(V^{t},E^{t}\) and \(S^{t}\). If we also want to emphasize the validator \(v_{i}\) whose sets we consider, we write \(V^{t,i},E^{t,i}\) and \(S^{t,i}\). Note that a validator \(v_{i}\) which was awake at time \(t\) can access \(V^{t,i}\) at any later time \(t^{\prime}\) through \(V^{t^{\prime},i}\), by simply considering only entries in \(V^{t^{\prime},i}\) with reception time \(\leq t\).

Message handling. If an input \(\Lambda\) is received from a validator \(v_{i}\) at time \(t\), we have three possibilities for how to handle it. If \(v_{i}\not\in V\), i.e., we have not received an input from \(v_{i}\) yet, we record the input by setting \(V(i)=(\Lambda,t)\), and forward it. If \(v_{i}\in V\) and \(V(i)=(\Lambda^{\prime},t^{\prime})\) for some \(\Lambda^{\prime}\neq\Lambda\), i.e., we are first learning about an equivocation from \(v_{i}\), we record the equivocation by setting \(V(i)=\bot\) and \(E(i)=(\Lambda,\Lambda^{\prime})\). Moreover, we also forward the input, to make sure other validators also learn about the equivocation. Finally, the input is ignored if \(v_{i}\in E\), i.e., we already know \(v_{i}\) is an equivocator.
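The validator state and the three message-handling rules translate almost directly into code. The following is our own minimal sketch (dictionaries play the role of the mappings \(V\) and \(E\), and `Log` and `is_prefix` are the helpers from Section 2's sketch); it is not the authors' implementation of Figure 2.

```python
class ValidatorState:
    """The local variables V and E of Section 5.1, with S derived from them."""

    def __init__(self):
        self.V = {}  # sender -> (log, reception time): unique inputs only
        self.E = {}  # sender -> (log, log'): equivocation evidence

    @property
    def S(self):
        """All validators from which at least one input has been received."""
        return set(self.V) | set(self.E)

    def handle_input(self, sender, log: Log, t: float, forward) -> None:
        """Apply the three message-handling rules; `forward` rebroadcasts."""
        if sender in self.E:                    # known equivocator: ignore
            return
        if sender not in self.V:                # first input: record and forward
            self.V[sender] = (log, t)
            forward(sender, log)
        elif self.V[sender][0] != log:          # newly discovered equivocation
            self.E[sender] = (self.V[sender][0], log)
            del self.V[sender]
            forward(sender, log)                # spread the evidence

    def support(self, base: Log, upto: float = None) -> set:
        """Senders of inputs extending `base` (the set V_Λ), optionally
        restricted to inputs received by time `upto` (a snapshot V^t)."""
        return {s for s, (lam, t) in self.V.items()
                if is_prefix(base, lam) and (upto is None or t <= upto)}
```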
Figure 2: Graded Agreement protocol with two grades – protocol for validator \(v_{i}\).

**Theorem 1**.: _The protocol implemented in Figure 2 implements Graded Agreement with \(k=2\)._

Proof.: For the _consistency_ property we want to show that no two honest validators \(v_{i}\) and \(v_{j}\) output conflicting logs \(\Lambda\) and \(\Lambda^{\prime}\) with grade \(1\). Without loss of generality, say that \(|V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}|\leq|V^ {\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|\). We first show that the sets of senders of inputs in \(V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}\) and \(V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}\) are disjoint, and moreover that they are both contained in \(S^{3\Delta,j}\). Let \(m=(\Lambda^{\prime\prime},v_{k})\) for some \(\Lambda\preceq\Lambda^{\prime\prime}\) and some validator \(v_{k}\), and say \(m\in V^{\Delta,i}\cap V^{3\Delta,i}\). Since \(m\in V^{\Delta,i}\), validator \(v_{i}\) forwards it at time \(\Delta\) and \(v_{j}\) receives it by time \(2\Delta\). Therefore, \(m\) is the only pair with sender \(v_{k}\) that \(V^{3\Delta,j}\) can contain, because receiving any other such pair would give \(v_{j}\) equivocation evidence for \(v_{k}\). Since \(m\) is for \(\Lambda^{\prime\prime}\), extending \(\Lambda\) and thus conflicting with \(\Lambda^{\prime}\), \(V^{3\Delta,j}_{\Lambda^{\prime}}\) does not include any input from \(v_{k}\). Therefore, the senders of \(V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}\) and \(V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}\) are disjoint. Moreover, the senders of inputs in \(V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}\) are by definition contained in \(S^{3\Delta,j}\). Finally, the senders of inputs in \(V^{\Delta,i}_{\Lambda}\) are also all contained in \(S^{3\Delta,j}\), because \(v_{i}\) forwards them at time \(\Delta\) and, by time \(2\Delta\), \(v_{j}\) accepts either them or equivocation evidence for their senders. It then follows that \(|S^{3\Delta,j}|\geq|V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|+|V^{ \Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}|\). Then, \(|V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{\prime}}|\leq|V^ {\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|\) implies \(|S^{3\Delta,j}|\geq 2|V^{\Delta,j}_{\Lambda^{\prime}}\cap V^{3\Delta,j}_{\Lambda^{ \prime}}|\), so \(v_{j}\) does not output \((\Lambda^{\prime},1)\).

For the _graded delivery_ property we show that an honest validator \(v_{i}\) outputting \((\Lambda,1)\) implies that any honest validator \(v_{j}\) participating in the output phase for grade \(0\) outputs \((\Lambda,0)\). At time \(\Delta\), validator \(v_{i}\) forwards all inputs in \(V^{\Delta,i}\). At time \(2\Delta\), validator \(v_{j}\) has equivocation evidence for the sender of any input in \(V^{\Delta,i}\setminus V^{2\Delta,j}\), since otherwise such an input would also be contained in \(V^{2\Delta,j}\). This equivocation evidence is forwarded by \(v_{j}\) and received by \(v_{i}\) by time \(3\Delta\), so the senders of inputs in \(V^{\Delta,i}\setminus V^{2\Delta,j}\) are all considered as equivocators by \(v_{i}\) then. This implies that \(V^{\Delta,i}\setminus V^{2\Delta,j}\) and \(V^{3\Delta,i}\) are disjoint, and thus so are \(V^{\Delta,i}\cap V^{3\Delta,i}\) and \(V^{\Delta,i}\setminus V^{2\Delta,j}\), so that \(V^{\Delta,i}\cap V^{3\Delta,i}\subseteq V^{2\Delta,j}\).
Moreover, \(S^{2\Delta,j}\subseteq S^{3\Delta,i}\), seeing that \(v_{j}\) forwards at least an input for each sender in \(S^{2\Delta,j}\) by time \(2\Delta\), and by time \(3\Delta\) validator \(v_{i}\) receives either the forwarded inputs or equivocation evidence for each such sender. Validator \(v_{i}\) outputs \((\Lambda,1)\) if \(|V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|>|S^{3\Delta,i}|/2\), in which case we also have \(|V^{2\Delta,j}_{\Lambda}|\geq|V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|>|S^{3\Delta,i}|/2\geq|S^{2\Delta,j}|/2\). Thus, validator \(v_{j}\) outputs \((\Lambda,0)\) if validator \(v_{i}\) outputs \((\Lambda,1)\).

For the _validity_ property, consider an honest validator \(v_{i}\in H_{\Delta}\cap H_{3\Delta}\) participating in the output phase for grade 1, i.e., \(v_{i}\) awake at \(t=\Delta\) and \(t=3\Delta\). Suppose that all validators in \(H_{0}\cap\overline{B_{3\Delta}}\) (initially awake and honest throughout the protocol) input an extension of \(\Lambda\). Since such validators are honest throughout the protocol, they never equivocate, so \(V^{\Delta,i}_{\Lambda}\) and \(V^{3\Delta,i}_{\Lambda}\) contain all of these inputs. Therefore, \(|V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|\geq|H_{0}\cap\overline{B_{3\Delta}}|=h_{0,0,3\Delta}\). \(H_{0}\cup B_{3\Delta}\) contains all validators which might ever send an input during the whole protocol, so \(|S^{3\Delta,i}|\leq|H_{0}\cup B_{3\Delta}|\leq|H_{0}\cap\overline{B_{3\Delta}}|+|B_{3\Delta}|=h_{0,0,3\Delta}+f_{3\Delta}\). Since by assumption \(h_{0,0,3\Delta}>f_{3\Delta}\), we have that \(h_{0,0,3\Delta}>|S^{3\Delta,i}|/2\), and thus \(|V^{\Delta,i}_{\Lambda}\cap V^{3\Delta,i}_{\Lambda}|\geq h_{0,0,3\Delta}>|S^{3\Delta,i}|/2\). It follows that \(v_{i}\) outputs \((\Lambda,1)\). The argument for grade 0 is nearly identical, and we omit it.

_Integrity_ follows from the fact that outputting \(\Lambda\) with any grade requires a majority of unique input senders to have input a descendant of \(\Lambda\) _and to not have equivocated_. Moreover, the condition \(|H_{0}|>f_{3\Delta}\) (implied by \(h_{0,0,3\Delta}>f_{3\Delta}\)) ensures that any majority must contain at least a validator from \(H_{0}\). Consider an honest validator \(v_{i}\) which outputs \(\Lambda\) with grade 0, after observing \(|V^{2\Delta,i}_{\Lambda}|>|S^{2\Delta,i}|/2\) at time \(2\Delta\), though no validator in \(H_{0}\) has input a log \(\Lambda^{\prime}\succeq\Lambda\) at time \(t=0\). If any such input for a log extending \(\Lambda\) is received by \(v_{i}\) from a validator in \(H_{0}\), this input is recognized as an equivocation, and it is therefore not contained in \(V^{2\Delta,i}\). Since \(V^{2\Delta,i}_{\Lambda}\) does not contain any input from validators in \(H_{0}\), but all such validators are counted as senders in \(S^{2\Delta,i}\), we have \(|S^{2\Delta,i}|\geq|V^{2\Delta,i}_{\Lambda}|+|H_{0}|\). Note that \(2|H_{0}|>|H_{0}|+f_{3\Delta}\geq|S^{2\Delta,i}|\), so \(|H_{0}|>|S^{2\Delta,i}|/2\). Then, \(|V^{2\Delta,i}_{\Lambda}|>|S^{2\Delta,i}|/2\) and \(|S^{2\Delta,i}|\geq|V^{2\Delta,i}_{\Lambda}|+|H_{0}|\) together imply \(|S^{2\Delta,i}|>|S^{2\Delta,i}|/2+|H_{0}|>|S^{2\Delta,i}|/2+|S^{2\Delta,i}|/2\), i.e., \(|S^{2\Delta,i}|>|S^{2\Delta,i}|\), a contradiction. The argument for grade 1 is almost identical, and we omit it.

Finally, similarly to integrity, _uniqueness_ follows from the fact that outputting \(\Lambda\) with any grade requires a majority of unique input senders to have input a descendant of \(\Lambda\), _without counting inputs from equivocators_.
This ensures that the sets of inputs which a validator counts in support of conflicting logs do not intersect, and so that a validator cannot see a majority for two conflicting logs. We only go through the argument for grade 0, because uniqueness for grade 1 is already implied by consistency. Note first that there is a natural injection of \(V^{2\Delta,i}_{\Lambda}\) into \(S^{2\Delta,i}\), since at most one input per validator is considered in the former, due to removing inputs from equivocators. Moreover, note that \(V^{2\Delta,i}_{\Lambda^{\prime}}\) and \(V^{2\Delta,i}_{\Lambda}\) are disjoint for conflicting \(\Lambda,\Lambda^{\prime}\), so \(|V^{2\Delta,i}_{\Lambda^{\prime}}|+|V^{2\Delta,i}_{\Lambda}|\leq|S^{2\Delta,i}|\). Therefore, \(|V^{2\Delta,i}_{\Lambda}|>|S^{2\Delta,i}|/2\) implies \(|V^{2\Delta,i}_{\Lambda^{\prime}}|\leq|S^{2\Delta,i}|/2\) for any conflicting \(\Lambda^{\prime}\).

### 5.2 Graded Agreement with three grades

We now present a Graded Agreement protocol with three grades 0, 1, and 2, which can be seen as an extension of the previous one. This protocol lasts \(5\Delta\) time instead of \(3\Delta\), and requires \(h_{0,0,5\Delta}>f_{5\Delta}\), similarly to the previous GA, but with \(5\Delta\) instead of \(3\Delta\) both for \(T_{c}\) and \(T_{b}\). Therefore, it in particular works in the \((\infty,5\Delta,0,5\Delta,1)\)-sleepy model. During the output phase for grade 1, honest validators that are awake at time \(4\Delta\) only participate if they were also awake at time \(2\Delta\). In the output phase for grade 2, validators awake at time \(5\Delta\) only participate if they were also awake at time \(\Delta\). This is because outputting a log with either grade 1 or 2 requires having _previously_ stored supporting inputs for them, as per the time-shifted quorum technique.

**Theorem 2**.: _The protocol implemented in Figure 3 implements Graded Agreement with \(k=3\)._

The proofs of all the properties of the Graded Agreement with three grades are similar to those for the Graded Agreement with two grades (Theorem 1). For this reason, we only discuss the _graded delivery_ property, which is where the key idea of the protocol is utilized, i.e., a nested application of the time-shifted quorum technique. Let \(v_{i}\), \(v_{j}\), and \(v_{k}\) be three honest validators participating in the output phase for grade 0, grade 1, and grade 2, respectively. Time \([2\Delta,4\Delta]\) in this GA functions exactly like time \([\Delta,3\Delta]\) in the Graded Agreement with two grades: at first, inputs \(V^{2\Delta}\) are stored for later use, then comes the output phase for grade 0, where a log \(\Lambda\) is output with grade 0 if \(|V^{3\Delta}_{\Lambda}|>|S|/2\), and finally comes the output phase for grade 1, where \(\Lambda\) is output with grade 1 if \(|V^{2\Delta}_{\Lambda}\cap V^{4\Delta}_{\Lambda}|>|S|/2\). In other words, for grade 0 and grade 1 we have a first application of the time-shifted quorum technique. For the same reasons as in the Graded Agreement with two grades, we then have that \(V^{2\Delta,j}\cap V^{4\Delta,j}\subseteq V^{3\Delta,i}\) and \(S^{3\Delta,i}\subseteq S^{4\Delta,j}\), which guarantees the graded delivery property from grade 1 to grade 0. This application of the time-shifted quorum technique is _nested inside another such application_, which guarantees the graded delivery property from grade 2 to grade 1.
Firstly, the participation level for grade 2 outputs, i.e., \(S^{5\Delta}\), is determined time \(\Delta\) _after_ that for grade 1 outputs, \(S^{4\Delta}\), which ensures the correct inclusion, i.e., \(S^{4\Delta,j}\subseteq S^{5\Delta,k}\). Conversely, \(V^{\Delta}\), the initial supporting inputs for grade 2 outputs, are determined time \(\Delta\) _before_ \(V^{2\Delta}\), those for grade 1 outputs. Finally, the sets of equivocating senders, whose inputs are discarded from the supporting inputs, are determined in the same order as the participation level, to ensure that any sender which is considered an equivocator in the output phase for grade 1 is also considered one in the output phase for grade 2. Together, these last two points guarantee that \(V^{\Delta,k}_{\Lambda}\cap V^{5\Delta,k}_{\Lambda}\subseteq V^{2\Delta,j}_{ \Lambda}\cap V^{4\Delta,j}_{\Lambda}\), so that \(|V^{\Delta,k}_{\Lambda}\cap V^{5\Delta,k}_{\Lambda}|>|S^{5\Delta,k}|/2\) implies \(|V^{2\Delta,j}_{\Lambda}\cap V^{4\Delta,j}_{\Lambda}|\geq|V^{\Delta,k}_{ \Lambda}\cap V^{5\Delta,k}_{\Lambda}|>|S^{5\Delta,k}|/2\geq|S^{4\Delta,j}|/2\), i.e., validator \(v_{k}\) outputting \((\Lambda,2)\) implies validator \(v_{j}\) outputting \((\Lambda,1)\).

Figure 3: Graded Agreement with three grades – protocol for validator \(v_{i}\).

## 6 Total-Order Broadcast with one vote per decision

In this section, we introduce a Total-Order Broadcast protocol derived from the Graded Agreement with three grades discussed earlier (Figure 3). Our approach takes inspiration from the \(\frac{1}{4}\)-resilient Total-Order Broadcast presented by Malkhi, Momose, and Ren [9], which also uses a Graded Agreement with three grades to reduce the GA invocations to _one per view_. In their case, they trade off adversarial resilience for this, while in our case only the time taken for each GA invocation increases from \(3\Delta\) to \(5\Delta\). Since each GA invocation corresponds to a round of voting, having a single invocation per view translates to enhanced communication efficiency.

The protocol, which is presented in Figure 4, proceeds in views of \(4\Delta\) time each. We let \(t_{v}=4\Delta v\) be the beginning of view \(v\). To each view \(v\) corresponds a Graded Agreement \(GA_{v}\), which runs in the time interval \([t_{v}+\Delta,t_{v}+6\Delta]=[t_{v}+\Delta,t_{v+1}+2\Delta]\), i.e., \(GA_{v}\) takes up some of view \(v+1\) as well. Moreover, the GA invocations are not perfectly sequential, as \(GA_{v}\) and \(GA_{v+1}\) overlap during time \([t_{v+1}+\Delta,t_{v+1}+2\Delta]\).

At the beginning of each view \(v\) there is a _proposal phase_ corresponding to the output phase for grade 0 of \(GA_{v-1}\). The leader of view \(v\) proposes a log \(\Lambda\) extending its grade 0 output (a _candidate_), if it has one. Next, at time \(t_{v}+\Delta\), comes a _voting phase_, corresponding both to the input phase of \(GA_{v}\) and to the output phase for grade 1 of \(GA_{v-1}\). Validators which have a grade 1 output treat it as a _lock_, and to preserve safety they only input to \(GA_{v}\) either the lock itself or a proposal which extends it. This is followed by a _decision phase_ at time \(t_{v}+2\Delta\), where logs proposed (at best) _in the previous view \(v-1\)_ can be decided. In particular, this phase corresponds to the output phase for grade 2 of \(GA_{v-1}\), and such grade 2 outputs are decided.
This is safe, because graded delivery of Graded Agreement ensures that all honest validators (among those participating in the output phase for grade 1) output with grade 1 a decided log, thus locking on it and therefore inputting to \(GA_{v}\) a log extending it. During the decision phase, validators also store in \(V\) all inputs received until that point in time for \(GA_{v}\), as part of the ongoing \(GA_{v}\). Finally, no further action is taken at time \(t_{v}+3\Delta\) other than what is required by \(GA_{v}\) itself, i.e., to store in \(V^{2\Delta}\) all inputs received until that point. Whenever an action requires a \(GA\) output which has not been computed, i.e., when the validator chooses not to participate in the output phase because it was not awake when previously required, the action is skipped. In particular, no decision is taken at time \(t_{v}+2\Delta\) and no input is broadcast at time \(t_{v}+\Delta\) when the required outputs are not available.

**Theorem 3**.: _The protocol implemented in Figure 4 implements Total-Order Broadcast._

In order to prove this theorem, we first ensure that the properties of our Graded Agreement primitive (Figure 3) hold when employing it within the Total-Order Broadcast protocol (Figure 4). Recall that the latter works in the \((\infty,5\Delta,2\Delta,5\Delta,1)\)-sleepy model, i.e., when \(h_{t-2\Delta,t,t+5\Delta}>f_{t+5\Delta}\) holds for every time \(t\geq 0\). On the other hand, the Graded Agreement protocol itself (Figure 3) works in the \((\infty,5\Delta,0,5\Delta,1)\)-sleepy model, requiring \(h_{t,t,t+5\Delta}>f_{t+5\Delta}\), without any stability assumption (\(T_{s}=0\)). When invoking it as part of our Total-Order Broadcast, the only change is in the input phase, since honest validators use the outputs of \(GA_{v-1}\) to determine the inputs to \(GA_{v}\). In particular, validators in \(H_{t_{v}+\Delta}\) input something to \(GA_{v}\) if and only if they have a _lock_ \(L_{v-1}\), i.e., a log which they output with grade \(1\) in \(GA_{v-1}\). For that to be the case, they must have participated in the output phase for grade \(1\) of \(GA_{v-1}\), which requires them to also have been awake at time \(t_{v-1}+3\Delta=t_{v}-\Delta\). The additional stability requirement of \(T_{s}=2\Delta\) in the Total-Order Broadcast model takes care of this, ensuring that we only consider those honest validators which are allowed to participate in the input phase of a \(GA\), i.e., validators in \(H_{t-2\Delta,t}\), for \(t=t_{v}+\Delta\).

**Lemma 1**.: _If all honest validators participating in the output phase for grade \(1\) of \(GA_{v-1}\) output \((\Lambda,1)\), then, for any view \(v^{\prime}\geq v-1\), all honest validators participating in the output phase for grade \(1\) of \(GA_{v^{\prime}}\) output \((\Lambda,1)\)._

Proof.: By the uniqueness property of the Graded Agreement, any honest validator \(v_{i}\) that outputs \(\Lambda\) with grade \(1\) in \(GA_{v-1}\) does not output any log conflicting with \(\Lambda\) with grade \(1\). This means that the lock \(L_{v-1}\) of \(v_{i}\) extends \(\Lambda\). Therefore, every honest validator that inputs something to \(GA_{v}\) inputs a log extending its lock, and thus also \(\Lambda\).
Since the honest validators participating in the output phase for grade \(1\) of \(GA_{v-1}\) exactly correspond to those that input something to \(GA_{v}\), we can apply the validity property of \(GA_{v}\) and conclude that all honest validators that participate in the output phase for grade \(1\) of \(GA_{v}\) output \((\Lambda,1)\). By induction, this then holds for all views \(v^{\prime}\geq v-1\).

Figure 4: Total-Order Broadcast protocol with one round of voting per decision – protocol for validator \(v_{i}\).

**Theorem 4** (Safety).: _The protocol implemented in Figure 4 satisfies safety._

Proof.: Suppose an honest validator \(v_{i}\) decides log \(\Lambda\) at time \(t_{v}+2\Delta\) by outputting \((\Lambda,2)\) in \(GA_{v-1}\). By the graded delivery property of Graded Agreement, any honest validator participating in the output phase for grade \(1\) of \(GA_{v-1}\) outputs \((\Lambda,1)\). By Lemma 1, for any \(v^{\prime}\geq v-1\), honest validators participating in the output phase for grade \(1\) of \(GA_{v^{\prime}}\) output \((\Lambda,1)\). Now, suppose that another honest validator \(v_{j}\) decides a conflicting \(\Lambda^{\prime}\) and, without loss of generality, let us assume that \(v_{j}\) does so during view \(v^{\prime\prime}\geq v\). Again, by the graded delivery property, every honest validator which participates in the output phase for grade \(1\) of \(GA_{v^{\prime\prime}-1}\) outputs \((\Lambda^{\prime},1)\). Since \(v^{\prime\prime}-1\geq v-1\), we have shown that any such validator also outputs \((\Lambda,1)\), contradicting the uniqueness property of Graded Agreement.

For the following, we define a _good leader for view \(v\)_ to be a validator in \(H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}\), holding the highest VRF value for view \(v\) among validators \(H_{t_{v}}\cup B_{t_{v}+\Delta}\), i.e., among all validators from which a proposal for view \(v\) might be received by time \(t_{v}+\Delta\). Note that a good leader always proposes something, since it is in \(H_{t_{v}}\), i.e., honest and awake at the proposal time \(t_{v}\).

**Lemma 2**.: _Any view has a good leader with probability greater than \(\frac{1}{2}\)._

Proof.: Observe that \(|H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}|\geq|H_{t_{v}-2\Delta,t_{v}}\cap \overline{B_{t_{v}+5\Delta}}|>f_{t_{v}+5\Delta}\geq f_{t_{v}+\Delta}=|B_{t_{v} +\Delta}|\). Thus, \(2|H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}|>|B_{t_{v}+\Delta}|+|H_{t_{v}} \cap\overline{B_{t_{v}+\Delta}}|\geq|H_{t_{v}}\cup B_{t_{v}+\Delta}|\). Equivalently, \(|H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}|>\frac{1}{2}|H_{t_{v}}\cup B_{t_{v} +\Delta}|\). View \(v\) has a good leader whenever a validator in \(H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}\) has the highest VRF value for view \(v\) out of all validators in \(H_{t_{v}}\cup B_{t_{v}+\Delta}\). The adversary is mildly adaptive, so corruptions which happen by time \(t_{v}+\Delta\) must have been scheduled by time \(t_{v}\). In particular, the adversary has to determine \(B_{t_{v}+\Delta}\), and thus \(H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}\), _before_ observing any of the VRF values of validators in \(H_{t_{v}}\). Therefore, view \(v\) has a good leader with probability \(\frac{|H_{t_{v}}\cap\overline{B_{t_{v}+\Delta}}|}{|H_{t_{v}}\cup B_{t_{v}+ \Delta}|}>\frac{1}{2}\).
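For illustration, the voting-phase choice between the lock and a received proposal can be sketched as follows, reusing `Log` and `is_prefix` from the earlier sketches. This is our own sketch, not the pseudocode of Figure 4; in particular, skipping senders with more than one proposal is an assumption we make here, and VRF values are taken as already verified.

```python
def select_input(proposals: dict, lock: Log) -> Log:
    """What to input to GA_v at time t_v + Δ, as a sketch.

    `proposals` maps sender -> list of (vrf_value, log) received by t_v + Δ.
    Senders with more than one proposal are skipped as equivocating (an
    assumption of this sketch), and the lock itself is the fallback input.
    """
    best_vrf, best_log = None, lock
    for sender, received in proposals.items():
        if len(received) != 1:
            continue
        vrf, log = received[0]
        if is_prefix(lock, log) and (best_vrf is None or vrf > best_vrf):
            best_vrf, best_log = vrf, log
    return best_log
```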
**Lemma 3**.: _If view \(v\) has a good leader \(v_{\ell}\) and \(v_{\ell}\) proposes a log \(\Lambda\), then all honest validators participating in the output phase for grade \(1\) of \(GA_{v-1}\) input \(\Lambda\) to \(GA_{v}\)._

Proof.: Consider any such honest validator \(v_{i}\) and its lock \(L_{v-1}\), which \(v_{i}\) outputs with grade \(1\) in \(GA_{v-1}\). As the leader \(v_{\ell}\) is honest and awake at time \(t_{v}\), by the graded delivery property of Graded Agreement, \(v_{\ell}\) outputs \((L_{v-1},0)\) in \(GA_{v-1}\), and does not output any conflicting log with grade \(0\) by the uniqueness property. This means that the proposal \(\Lambda\) made by leader \(v_{\ell}\) extends \(L_{v-1}\). The proposal is received by validator \(v_{i}\) by time \(t_{v}+\Delta\), and no other proposal from \(v_{\ell}\) is received by \(v_{i}\) at that point, because the leader is still honest at time \(t_{v}+\Delta\), since \(v_{\ell}\not\in B_{t_{v}+\Delta}\) by definition of a good leader. Moreover, no other proposal received by \(v_{i}\) at this point has a higher VRF value, since a good leader for view \(v\) has the highest VRF value among all validators from which a proposal might have been received \((H_{t_{v}}\cup B_{t_{v}+\Delta})\). Therefore, validator \(v_{i}\) inputs \(\Lambda\) to \(GA_{v}\).

**Theorem 5** (Reorg resilience).: _The protocol implemented in Figure 4 satisfies reorg resilience, i.e., proposals from good leaders are decided by all honest validators which are eventually awake for at least \(8\Delta\)._

Proof.: Suppose view \(v\) has a good leader, which proposes log \(\Lambda\). Then, by Lemma 3, all honest validators which participate in the output phase for grade \(1\) of \(GA_{v-1}\) input \(\Lambda\) to \(GA_{v}\). By the validity property of Graded Agreement, all validators which participate in the output phase for grade \(1\) of \(GA_{v}\) output \((\Lambda,1)\). By Lemma 1, this also holds in \(GA_{v^{\prime}}\) for all \(v^{\prime}\geq v\). Since all such validators output \((\Lambda,1)\), they also input an extension of \(\Lambda\) to \(GA_{v^{\prime}+1}\), for all \(v^{\prime}\geq v\). Again by the validity property of Graded Agreement, any honest validator which participates in the output phase for grade \(2\) of one such \(GA_{v^{\prime}+1}\), i.e., any honest validator awake both at \(t=t_{v^{\prime}+2}-2\Delta\) and at \(t=t_{v^{\prime}+2}+2\Delta\), decides \(\Lambda\). Therefore, \(\Lambda\) is decided by any honest validator that is eventually continuously awake for an interval of time of at least \(8\Delta\).

**Theorem 6** (Liveness).: _The protocol implemented in Figure 4 satisfies liveness._

Proof.: By Lemma 2, a view \(v\) has a good leader with probability greater than \(\frac{1}{2}\). When that is the case, as a result of Theorem 5, the proposal is decided by all validators which are eventually awake for long enough, i.e., for at least \(8\Delta\).

## 7 Related work

The initial formulation of a model that supports dynamic participation among network participants within a distributed system was introduced by Pass and Shi through their "Sleepy Model of Consensus" [13]. The authors formalize a model for consensus that accommodates participants transitioning between being active, or _awake_, and inactive, or _asleep_. An execution of a consensus protocol in the sleepy model progresses in rounds. In each round, participants can either be awake, participating in the protocol, or asleep, abstaining from participation.
Furthermore, it is required that in every round, the majority of the online participants are honest. Addressing the latency issues associated with longest-chain protocols like Bitcoin [12], Momose and Ren [11] present a Byzantine Total-Order Broadcast protocol that not only supports changing participation levels but also maintains a constant latency. They innovatively extended the traditional Byzantine Fault Tolerance (BFT) approach from a static quorum size to a dynamic one, based on the current level of participation, and built their protocol upon a weaker form of Graded Agreement. Their protocol, however, makes progress only in periods of "stable participation", i.e., periods during which the participants do not change too wildly.

A significant advancement in addressing variable participation was also made by Malkhi, Momose, and Ren [10]. Their work introduces a Byzantine Total-Order Broadcast protocol within the sleepy model that boasts a best-case \(4\Delta\) latency, building upon a novel Graded Agreement protocol. Leveraging a novel view-based construction, their protocol, unlike the protocol of Momose and Ren, does not require stable participation in order to make progress. Notably, previous efforts by Malkhi, Momose, and Ren [9] (as also partially detailed in Appendix A of [10]) have shown how to devise Byzantine Total-Order Broadcast protocols under fluctuating participation. Their protocols achieve \(3\Delta\) latency with \(1/3\) fault tolerance, and \(2\Delta\) latency with \(1/4\) fault tolerance, respectively.

Gafni and Losa [8] introduce two consensus protocols that work seamlessly within the sleepy model. Such protocols can tolerate the corruption of a minority of the participants. The first protocol achieves deterministic safety and probabilistic liveness with constant expected latency, while the second offers both deterministic safety and liveness.

## 8 Conclusions

Our main contribution in this work has been the introduction of a novel Total-Order Broadcast protocol that supports dynamic participation and that works in (a variant of) the Sleepy Model. Our protocol has resilience against up to \(1/2\) adversarial participants and significantly reduces complexity by necessitating only a single voting round per decision. However, this simplification comes with a cost. Specifically, a minor stable participation assumption becomes crucial, requiring participants to be online for a minimum duration of \(2\Delta\). While this might seem like a step back, we argue that this trade-off is well-justified. Attempting to entirely remove the need for stable participation would require trade-offs that are not feasible in practical situations, especially considering the time participants need to allocate for gathering messages upon rejoining.
2310.07553
Simulation of Hadronic Interactions with Deep Generative Models
Accurate simulation of detector responses to hadrons is paramount for all physics programs at the Large Hadron Collider (LHC). Central to this simulation is the modeling of hadronic interactions. Unfortunately, the absence of first-principle theoretical guidance has made this a formidable challenge. The state-of-the-art simulation tool, \textsc{Geant4}, currently relies on phenomenology-inspired parametric models. Each model is designed to simulate hadronic interactions within specific energy ranges and for particular types of hadrons. Despite dedicated tuning efforts, these models sometimes fail to describe the data in certain physics processes accurately. Furthermore, fine-tuning these models with new measurements is laborious. Our research endeavors to leverage generative models to simulate hadronic interactions. While our ultimate goal is to train a generative model using experimental data, we have taken a crucial step by training conditional normalizing flow models with \textsc{Geant4} simulation data. Our work marks a significant stride toward developing a fully differentiable and data-driven model for hadronic interactions in High Energy and Nuclear Physics.
Tuan Minh Pham, Xiangyang Ju
2023-10-11T14:57:03Z
http://arxiv.org/abs/2310.07553v1
# Simulation of Hadronic Interactions with Deep Generative Models ###### Abstract Accurate simulation of detector responses to hadrons is paramount for all physics programs at the Large Hadron Collider (LHC). Central to this simulation is the modeling of hadronic interactions. Unfortunately, the absence of first-principle theoretical guidance has made this a formidable challenge. The state-of-the-art simulation tool, Geant4, currently relies on phenomenology-inspired parametric models. Each model is designed to simulate hadronic interactions within specific energy ranges and for particular types of hadrons. Despite dedicated tuning efforts, these models sometimes fail to describe the data in certain physics processes accurately. Furthermore, fine-tuning these models with new measurements is laborious. Our research endeavors to leverage generative models to simulate hadronic interactions. While our ultimate goal is to train a generative model using experimental data, we have taken a crucial step by training conditional normalizing flow models with Geant4 simulation data. Our work marks a significant stride toward developing a fully differentiable and data-driven model for hadronic interactions in High Energy and Nuclear Physics.

## 1 Introduction

Hadronic interactions exhibit a well-defined description in high-energy domains, typically in the hundreds of GeV range, where perturbative Quantum Chromodynamics (QCD) proves effective. However, in lower energy regimes, the perturbative theory loses its applicability. To address this challenge, various phenomenological models have been devised to characterize hadronic interactions. These models are designed for specific kinematic regions and are tailored to a limited number of hadron flavors. To provide a comprehensive description across a wide energy spectrum, spanning from MeV to hundreds of GeV, these models must be integrated into a transport code. This integration is precisely achieved in the Geant4 framework [1]. It serves as a unified platform where these diverse models are combined to simulate hadronic interactions across an extensive energy range. The Geant4 framework plays a vital role in simulating detector effects for a broad spectrum of applications, including the Large Hadron Collider experiments and beyond. The Geant4 simulation, often referred to as "full simulation", constitutes the most computationally intensive aspect of collider physics programs. As detectors are progressively enhanced to achieve finer resolution, the computational demands of full simulation are poised to escalate significantly. To address this challenge, a concerted effort has emerged toward the development of fast simulation techniques employing deep generative models [2; 3; 4; 5; 6; 7; 8; 9; 10], even though much improvement has also been made within the Geant4 framework [11]. Recent studies have shown that Normalizing Flows [12] can achieve state-of-the-art precision while dramatically reducing generation time, yielding computational gains of orders of magnitude compared to full simulation [13; 14]. However, these generative models are tailored to high-level detector-specific features, making them unsuitable for generalized use across different particle detectors. Our ultimate objective is to develop a generative model that learns the non-perturbative hadronic interactions from experimental data, such as those published in Refs. [15; 16].
## 2 Methods

### Dataset

The data is generated by the Geant4 [1] toolkit with a physics list that includes the Bertini Cascade and the Fritiof model (a.k.a. FTFP_BERT_ATL). The Bertini model is applicable for incident energies below 10 GeV in most use cases, while the Fritiof model covers incident energies from 9 GeV to 100 TeV. For simplicity, we use a uniform energy transition for the FTFP_BERT_ATL physics list. We chose the \(\pi^{-}p\) interactions for our studies, whose total cross sections are measured across a large range of kinetic energies, as shown in Fig. 1. There are peaks in the cross sections connected with the \(\Delta\)-isobar production in the \(s\)-channel, \(\pi^{-}+p\rightarrow\Delta^{0}\). The main decay mode of a \(\Delta^{0}\) is \(\Delta^{0}\rightarrow\pi^{-}+p\). Thus, we focus on simulating events with two outgoing hadrons.

Figure 1: Total and elastic cross section of \(\pi^{-}p\) interactions as a function of the incident \(\pi^{-}\) kinetic energy in the lab frame, taken from the PDG database [17].

To span a broad spectrum of incident \(\pi^{-}\) energies, we selected 29 data points between 100 MeV and 8000 MeV, spaced approximately 200 MeV apart, for training. Additionally, we employed 14 distinct energy levels within the range of 200 MeV to 6500 MeV for testing. For each energy point, we generated 100,000 events with randomly sampled momentum directions. In summary, we produced 290,000 training events and 140,000 testing events. In the center-of-mass (COM) frame, the two outgoing particles are produced back-to-back, distributed uniformly in the azimuthal angle. Thanks to energy conservation, our generative model only needs to predict the properties of one of these particles. Figure 2 shows the transverse momentum and pseudorapidity of the leading particle in events featuring two outgoing particles. The kinetic energies of the leading particles change dramatically when the incident kinetic energy changes, attributed to multiple physics processes. These dependencies gradually smooth out when the pion energy exceeds 10 GeV, at which point only the Fritiof model is employed. Accurately modeling the correlations between the leading particle energy and the incident kinetic energy poses a formidable challenge.

### Normalizing Flow and Training

A normalizing flow (NF) uses an invertible function \(f\) (also known as a bijector) to transform a simple initial density \(\pi(\vec{z})\) into the target density distribution \(p(\vec{x})\), and an autoregressive density estimator models any joint density \(p(\vec{x})\) as a product of one-dimensional conditional distributions. We use a simple invertible function \(f\) in the MAF: \(x_{i}=f(z_{i})=z_{i}\exp(\alpha_{i})+\beta_{i}\), where \(\alpha_{i}\) and \(\beta_{i}\) are parameterized by neural networks, often MultiLayer Perceptrons (MLPs). The Adam optimizer then optimizes the learnable weights of the neural network by minimizing the negative log-likelihood. A normalizing flow can be extended to a conditional normalizing flow by concatenating the conditional vector \(\vec{c}\) with the input vector \(\vec{x}\) and using the combined vector to estimate the target density distribution.

Figure 2: Comparison of the transverse momentum \(p_{\mathrm{T}}\) and pseudorapidity \(\eta\) distributions of the leading final-state particle for different kinetic energies of the incoming \(\pi^{-}\). Final-state particles are sorted by their \(p_{\mathrm{T}}\).
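To make this construction concrete, the following is a minimal sketch of such an MAF stack in TensorFlow Probability. The reduced block count, hidden sizes, and all variable names are illustrative, and the conditioning on the incident pion energy (done in the paper by concatenating the conditional vector) is omitted for brevity.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

def make_maf_flow(event_dims=4, num_blocks=5, hidden=128):
    # Each block: an affine MAF (x_i = z_i * exp(alpha_i) + beta_i) followed
    # by a permutation to reduce sensitivity to the variable ordering.
    bijectors = []
    for _ in range(num_blocks):
        bijectors.append(tfb.MaskedAutoregressiveFlow(
            shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
                params=2, event_shape=[event_dims],
                hidden_units=[hidden, hidden], activation="relu")))
        bijectors.append(tfb.Permute(permutation=list(reversed(range(event_dims)))))
    bijectors.append(tfb.Tanh())  # final bijector keeps samples in [-1, 1]
    # Chain applies its list right-to-left, so reversing preserves our order.
    return tfd.TransformedDistribution(
        distribution=tfd.Sample(tfd.Normal(loc=0.0, scale=1.0),
                                sample_shape=[event_dims]),
        bijector=tfb.Chain(list(reversed(bijectors))))

flow = make_maf_flow()
x = tf.random.uniform([256, 4], -0.9, 0.9)   # stand-in for scaled four-vectors
loss = -tf.reduce_mean(flow.log_prob(x))     # negative log-likelihood for Adam
```

In the full model described below, 30 such blocks are stacked and the incident energy enters as a conditional input.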
Our study employs a specific variant of normalizing flows called the Masked Autoregressive Flow (MAF) [18]. MAF constructs multi-dimensional distributions by sequentially modeling each dimension based on previously modeled ones. While this autoregressive approach enhances modeling accuracy, it can introduce sampling bottlenecks and sensitivity to the input vector's order. To mitigate the ordering impact, we introduced a permutation bijection in each MAF block. Additionally, in the final block, we appended a tanh bijector layer to ensure that the outputs fall within the target distribution's range, namely [-1, 1]. Our final normalizing flow model uses the incident pion energy as the conditional variable and employs Gaussian distributions as the base distributions. Its target distributions are the four-vector components of the leading outgoing particle (\(p_{x}\), \(p_{y}\), \(p_{z}\), and \(E\)). The normalizing flow consists of 30 Masked Autoregressive Flow (MAF) blocks, with each MAF block using two-layer MLPs with a layer size of 128. The implementation is based on TensorFlow [19] (TF) and TF Probability. The model undergoes training for over 2000 epochs with a learning rate that decreases from \(10^{-4}\) to \(10^{-6}\) following a polynomial schedule.

## 3 Results

We evaluate the performance of the trained normalizing flows using the Wasserstein distance (WD) [20, 21]. This metric is calculated for each variable, comparing the NF-generated events with events simulated using Geant4. Smaller WD values indicate better performance. Figure 3 presents the WD for various pion energies in both the training and testing datasets. The WD spectrum demonstrates that the NF performs exceptionally well within the intermediate range of pion incident energies. However, the NF model struggles to extrapolate the distributions to the lower and higher bounds, which could be attributed to fewer sampled incident energies in these regions.

Figure 3: Wasserstein distance between the NF-generated events and the Geant4-simulated events for different pion energies for the training and testing datasets.

Figure 4 compares the distributions generated by the normalizing flow (NF) with Geant4-simulated ones for incident pion energies that were not part of the training data. It is important to note that the model is specifically trained to predict the four-vector components (\(p_{x},p_{y},p_{z}\), and \(E\)) of the particle. To accurately predict the transverse momentum (\(p_{\rm T}\)), the model must capture the correlations between \(p_{x}\) and \(p_{y}\). For incident energies of \(k_{\pi}=1.4\) GeV and 1.2 GeV, we observe a close agreement between the NF-generated and the Geant4-generated events. However, it becomes evident that more incident energies are necessary for further enhancing the model's performance, particularly in the lower and higher energy regions. In our initial exploration of the data generated at \(k_{\pi}>9\) GeV, we noticed that the kinematic distributions exhibit a more gradual and continuous variation when only one model, namely the Fritiof model, is used. Consequently, the normalizing flow can conditionally simulate the data with significantly improved agreement with the true distribution in this region. This improvement is evidenced by the Wasserstein distance observed in Fig. 5.

Figure 4: Comparison of energy \(E\), transverse momentum \(p_{\rm T}\), pseudorapidity \(\eta\), \(p_{z}\), and four-vector mass of the leading outgoing particle between Geant4-simulated events ("Truth", blue lines) and Normalizing Flow-generated events ("Prediction", black lines) for incident pion kinetic energies of 200 MeV, 1.4 GeV, 2.2 GeV, and 6.5 GeV, listed from top to bottom. We use the NF model to generate the same number of events ten times with different seeds. The mean and standard deviations ("Prediction STD") of the ten distributions are shown.
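As a concrete illustration of this figure of merit, the one-dimensional WD between generated and reference samples of a single variable can be computed with SciPy; the Gaussian arrays below are stand-ins for actual event samples.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Stand-in samples: replace with Geant4-simulated and NF-generated values
# of one kinematic variable (e.g., pT) at a fixed incident energy.
rng = np.random.default_rng(0)
truth = rng.normal(loc=0.30, scale=0.10, size=100_000)
generated = rng.normal(loc=0.31, scale=0.11, size=100_000)

wd = wasserstein_distance(truth, generated)  # smaller is better
print(f"1D Wasserstein distance: {wd:.4f}")
```

Repeating such a computation per variable and per incident energy yields WD spectra of the kind shown in Fig. 3.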
## 4 Conclusions

Non-perturbative hadronic interactions pose significant physics challenges and formidable computational challenges. Without first-principle theoretical guidance, the primary approach for simulating such interactions is to derive knowledge from experimental measurements, as seen in the implementation of numerous parametric models within the Geant4 framework. This study represents a promising avenue in which a conditional normalizing flow (NF) is employed to learn and simulate non-perturbative hadronic interactions based on simulated events. The conditional NF can predict outgoing particle properties with reasonable accuracy in specific energy regions for incident pions. Nonetheless, there is still more work ahead to achieve a comprehensive simulation of hadronic interactions. The primary challenge lies in generating a variable number of outgoing particles and discrete particle types. The multiplicity of outgoing particles is not solely determined by the incident energy but also by the underlying physics processes. It could be intriguing to develop a physics-informed generative model to address this complexity. Another significant challenge is constructing a single model covering the entire energy spectrum. In Geant4 simulations, particularly with the FTFP_BERT_ATL physics list, different models are applied to describe hadronic interactions for incident energies below 9 GeV and those above 10 GeV, with an artifact of mixed models for energies in between. However, it is worth exploring whether a single generative model can simulate hadronic interactions consistently across the entire energy spectrum. Answering this question will require learning from real experimental data rather than relying solely on data generated by Geant4.
2308.12472
Nonlinear Wave Transformation over Steep Breakwaters
Wave shoaling of water waves over mild bottom slopes is well described by linearized theories. However, the analytical treatment of nonlinear wave shoaling subject to rapidly varying bottoms has proven to be elusive in the past decades. As the spatial evolution of the exceedance probability of irregular waves is affected by second-order effects in steepness, the nonlinear shoaling coefficient throughout a symmetrical and steep breakwater is investigated through a stochastic framework. By inverting the effect of slope on normalized wave height distribution, it is possible to obtain a closed-form slope dependence of the nonlinear shoaling coefficient compatible with experiments over steep breakwaters.
Saulo Mendes
2023-08-23T23:55:27Z
http://arxiv.org/abs/2308.12472v1
# Nonlinear Wave Transformation over Steep Breakwaters ###### Abstract Wave shoaling of water waves over mild bottom slopes is well described by linearized theories. However, the analytical treatment of nonlinear wave shoaling subject to rapidly varying bottoms has proven to be elusive in the past decades. As the spatial evolution of the exceedance probability of irregular waves is affected by second-order effects in steepness, the nonlinear shoaling coefficient throughout a symmetrical and steep breakwater is investigated through a stochastic framework. By inverting the effect of slope on normalized wave height distribution, it is possible to obtain a closed-form slope dependence of the nonlinear shoaling coefficient compatible with experiments over steep breakwaters.

## I Introduction

Wave characteristics are modified when propagating nearshore until they eventually break. Their transformation due to shoaling is one of the most fundamental coastal processes and has been known since the works of Green [1] and Burnside [2]. Above all, the transformation of wave heights is an essential and required factor for many coastal and ocean engineering applications. On the other hand, precise knowledge of the steepness evolution over a shoal is also fundamental, as it affects the solutions of perturbative wave theories. In fact, primary water wave variables such as the group speed [3], dispersion relation [4], pressure underneath waves [5], energy flux and total energy [6; 7], as well as direct consequences of wave conservation principles such as set-down and set-up [8], wave run-up [9], wave breaking location [10], and long-shore and rip currents [11; 12], are all affected by the growth in steepness. For waves approaching the coast, these primary variables can only be predicted accurately if a closed form for the shoaling coefficient is known. Although the integral properties of coastal processes were generalized through the theory of radiation stress [13; 14], the study of the amplification of wave height due to a shoal has commonly been reduced to the conservation of the energy flux in the absence of refraction, reflection or any form of dissipation since Burnside [2]. Shoaling models include but are not limited to linear theory [15], the higher-order nonlinear theory of Stokes [3], as well as cnoidal [16] or Cokelet and Longuet-Higgins [17] theories. Although several nonlinear models exist for the deformation and transformation of waves encountering bathymetric changes, these theories do not provide simple closed-form expressions in terms of initial steepness, relative water depth or bottom slope [18; 19; 20], justifying the existence of several empirical models [21]. Naturally, linear wave theory significantly underpredicts shoaling coefficients for high values of the Ursell number, while nonlinear theories and empirical formulae thereof do not capture the effect of the slope magnitude \(|\nabla h|\) [21]. Nevertheless, Iwagaki and Sakai [22] computed the effect of the shoaling slope magnitude on the surface elevation of cnoidal waves under the assumption that \(k_{p}h<\pi/10\) and \(|\nabla h|<1/10\). However, they did not clarify how to compute the wave height from it. Furthermore, the nonlinear correction to the surface elevation is proportional to \(|\nabla h|^{-1}\), which, instead of recovering the Green [1] law in the adiabatic case (\(|\nabla h|\ll 1/10\)), predicts that wave shoaling at steep slopes recovers this law, in clear contradiction to experimental observation [19].
In addition, the Iwagaki and Sakai [22] model was not formulated for the region atop a shoal or over the de-shoal, so it cannot be readily extended to a breakwater. Indeed, it is widely agreed that no general theory or closed formula assessing the effect of the slope magnitude exists for the nonlinear wave transformation, in particular for steep slopes [23; 24; 25; 26; 19]. For instance, theoretical and empirical models for the breaker height have been tested and seem to work well for mild slopes (\(|\nabla h|<1/15\)) only [27]. In addition to the wave transformation of ordinary waves nearshore, the study of rogue waves in this zone has grown exponentially in recent years. The threat to ocean vessels and structures posed by rogue waves has recently been discovered to also be important in transitional depths subject to shoaling [28; 29; 30; 31; 32; 33]. The latter phenomenon has been theoretically examined and attributed to, among other factors, the evolution of the steepness [34; 35; 36]. However, in order to become predictive, current theories of second-order rogue wave shoaling depend on precise modelling of the wave transformation over arbitrary bathymetry. So far, a one-way street has been established between deterministic fundamental properties and the stochastic analysis of hydrodynamics: for a given water wave solution, a unique probability distribution can be obtained. In the present work, the stochastic analysis of irregular wave motion is shown to be synchronized with the fundamental property of wave transformation in such a way that it is possible to compute the nonlinear shoaling coefficient with more precision than through deterministic (fundamental) methods. A new theory for the nonlinear and small amplitude wave transformation is developed for steep arbitrary slopes over a shoal or de-shoal, and is thus applicable to beaches and breakwaters. In particular, it is shown how knowledge of the wave statistics of seas with high rogue wave occurrence is relevant for the computation of the slope-dependent nonlinear shoaling coefficient. Good agreement is found when comparing the theory with the experiments of Raustol [37]. ## II Nonlinear Shoaling: state of the art Whichever water wave solution is applied to the problem of shoaling, the theoretical evolution of either wave height or wave steepness is invariably performed through the conservation of the energy flux; see Rattanapitikon and Kanokrattananukul [21] and Le Mehaute and Wang [23] for a review.
To compute the energy flux with the mean water level as _datum_, I adopt the formulation of Longuet-Higgins [6] and Klopman [38] and restrict it to small amplitude waves (\(\zeta\ll h\)): \[F=\left\langle\int_{-h}^{0}\left[p+\frac{1}{2}\rho(u^{2}+w^{2})+\rho g(z-\langle\zeta\rangle)\right]u\,dz\right\rangle\,, \tag{1}\] which can be shown to be equivalent to the formulation with the mean energy level as _datum_ subtracted by the mean water level counterpart [38; 39], \[F\equiv\left\langle\int_{-h}^{0}\left[\rho g\zeta^{\star}+\frac{1}{2}\rho(u^{2}+w^{2})\right]u\,dz\right\rangle-\rho g\langle\zeta\rangle c_{p}h\,, \tag{2}\] where \(p\) is the water column pressure in the presence of waves, \((u,w)\) are the horizontal and vertical components of the velocity vector, \(c_{p}\) is the phase velocity, \(h\) the water depth, \(\zeta^{\star}\) the surface elevation corrected by the depth function within \(\partial\Phi/\partial t\), and \(\langle\zeta\rangle\) the mean water level obtained from the momentum balance [13], computed by a time average operator \(\langle\cdot\rangle\). ### Adiabatic Shoaling: Gently Sloping Beaches Typically, gently sloping beaches (an adiabatic process) are assumed and no slope magnitude effect is considered for shoaling [40], even when the regime of cnoidal waves is reached [41; 42; 43; 44; 45]. Such an approximation became common practice due to the ease of treating shoaling in the absence of reflection or wave deformation [23]. In this section, I briefly review how the shoaling coefficient is computed through the conservation of energy flux, so that the verification of classical formulae for the adiabatic case can be readily generalized to a finite and arbitrary slope magnitude. For linear waves, the surface elevation and velocity components for waves traveling over a mild slope are written as [46; 5; 15]: \[\Phi = \frac{a\omega}{k}\frac{\cosh\theta}{\sinh\Lambda}\sin\phi\ ;\ \zeta^{\star}=a\frac{\cosh\theta}{\cosh\Lambda}\cos\phi\ ;\] \[u = a\omega\frac{\cosh\theta}{\sinh\Lambda}\cos\phi\ ;\ w=a\omega\frac{\sinh\theta}{\sinh\Lambda}\sin\phi\,, \tag{3}\] with notation \(\theta=k(z+h)\), \(\Lambda=kh\) and \(\phi=kx-\omega t\) for regular waves. Thus, the energy flux with the mean energy level as _datum_ (i.e. neglecting the change in mean water level) under an adiabatic shoaling process reads: \[F_{0}^{(1)} = \rho g\int_{-h}^{0}\left\langle\zeta^{\star}u+\frac{u}{2g}(u^{2}+w^{2})\right\rangle\,dz\quad. \tag{4}\] Because the term \(u(u^{2}+w^{2})\) entails odd powers of trigonometric functions such as \(\cos^{3}\phi\) and \(\sin^{2}\phi\cos\phi\), the periodicity of these functions implies \(\langle u(u^{2}+w^{2})\rangle=0\).
Therefore, I arrive at: \[F_{0}^{(1)} = \rho g\int_{-h}^{0}\left\langle\zeta^{\star}u\right\rangle\,dz \tag{5}\] \[= \rho ga^{2}\cdot\frac{2\omega}{\sinh\left(2\Lambda\right)}\int_{-h}^{0}\cosh^{2}\theta\,\langle\cos^{2}\phi\rangle\,dz\quad,\] \[= \rho ga^{2}\cdot\frac{\omega}{\sinh\left(2\Lambda\right)}\int_{-h}^{0}\cosh^{2}\theta\,dz\quad,\] \[= \rho ga^{2}\cdot\frac{\omega}{\sinh\left(2\Lambda\right)}\left[\frac{h}{2}+\frac{\sinh\left(2\Lambda\right)}{4k}\right]\quad,\] \[= \rho ga^{2}\cdot\frac{\omega}{\sinh\left(2\Lambda\right)}\cdot\frac{\sinh\left(2\Lambda\right)}{4k}\left[1+\frac{2\Lambda}{\sinh\left(2\Lambda\right)}\right]\,,\] \[= \left(\frac{1}{2}\rho ga^{2}\right)\cdot\frac{\omega}{2k}\left[1+\frac{2\Lambda}{\sinh\left(2\Lambda\right)}\right]\,,\] \[= E^{(1)}\cdot\frac{c_{p}}{2}\left[1+\frac{2\Lambda}{\sinh\left(2\Lambda\right)}\right]\equiv E^{(1)}\cdot c_{g}\quad,\] which is the well-known textbook form for the energy flux. Therefore, the shoaling coefficient for the wave height reads [47; 10]: \[\nabla\cdot(E^{(1)}c_{g}\hat{x})=0\ \therefore\ K_{s}=\frac{H}{H_{0}}=\sqrt{\frac{c_{g0}}{c_{g}}}\,. \tag{6}\] As the wavelength decreases due to shoaling, the transformation of the regular wave steepness in linear theory can be computed [10; 18]: \[K_{\varepsilon}=\frac{\varepsilon}{\varepsilon_{0}}\equiv\frac{H}{H_{0}}\frac{\lambda_{0}}{\lambda}=\frac{1}{\tanh kh}\left[\frac{2\cosh^{2}kh}{2kh+\sinh 2kh}\right]^{1/2}\quad. \tag{7}\] The derivation of eq. (5) is notably simpler than the one provided by Longuet-Higgins [6]. The advantage of the above direct approach is that it does not depend on relationships between integral properties such as momentum flux, energy flux, mass flux and so on. For waves that become steeper over a steep slope, these relationships have to be carefully revised, while the computation in eq. (5) concerns only its own dependence on the inhomogeneity of the wave evolution. #### ii.1.1 Second-order Waves Travelling over a _Shoal_ Second-order regular waves have the potential [46]: \[\Phi=\frac{a\omega}{k}\frac{\cosh\theta}{\sinh\Lambda}\sin\phi+\left(\frac{3ka}{8}\right)\frac{a\omega}{k}\frac{\cosh\left(2\theta\right)}{\sinh^{4}\Lambda}\sin\left(2\phi\right)\,. \tag{8}\] In addition, the pressure field induced by a progressive wave field is [5]: \[p = \rho g(\zeta^{\star}-z)=\rho\frac{\partial\Phi}{\partial t}-\rho gz\quad,\] \[\rho\frac{\partial\Phi}{\partial t} = \rho ga\left[\frac{\cosh\theta}{\cosh\Lambda}\,\cos\phi+\left(\frac{3ka}{4}\right)\frac{\cosh\left(2\theta\right)}{\cosh\Lambda\sinh^{3}\Lambda}\,\cos\left(2\phi\right)\right]\quad. \tag{9}\] Similarly, one can easily obtain the second-order horizontal velocity component: \[u=a\omega\left\{\frac{\cosh\theta}{\sinh\Lambda}\cos\phi+\left(\frac{3ka}{4}\right)\frac{\cosh\left(2\theta\right)}{\sinh^{4}\Lambda}\cos\left(2\phi\right)\right\}\,. \tag{10}\] Hence, denoting \((\delta u,\delta\zeta^{\star})\) as the second-order additional terms to the linear theory, I can write: \[F_{0}^{(2)}=\rho g\int_{-h}^{0}\left\langle\zeta^{\star}\cdot u+\delta\zeta^{\star}\cdot u+\delta u\cdot\zeta^{\star}+\delta u\cdot\delta\zeta^{\star}\right\rangle\,dz\,. \tag{11}\] The terms \((\delta u\cdot\zeta^{\star},\delta\zeta^{\star}\cdot u)\) have major integrands \(\cos\phi\cos\left(2\phi\right)\) and therefore their averages vanish.
Accordingly, the computation leads to: \[F_{0}^{(2)} = F_{0}^{(1)}+\rho g\int_{-h}^{0}\left\langle\delta u\cdot\delta\zeta^{\star}\right\rangle\,dz\quad, \tag{12}\] \[= F_{0}^{(1)}+\rho ga^{2}\cdot\frac{2\omega}{\sinh\left(2\Lambda\right)}\cdot\frac{1}{\sinh^{6}\Lambda}\times\left(\frac{3ka}{4}\right)^{2}\int_{-h}^{0}\cosh^{2}\left(2\theta\right)\langle\cos^{2}\left(2\phi\right)\rangle\,dz\quad,\] \[= E^{(1)}\cdot c_{g}\left[1+\left(\frac{3ka}{4}\right)^{2}\frac{1}{\sinh^{6}\Lambda}\times\left(1+\frac{\sinh\left(4\Lambda\right)-2\sinh\left(2\Lambda\right)}{4\Lambda+2\sinh\left(2\Lambda\right)}\right)\right]\neq E^{(2)}c_{g}\,. \] Hence, the energy flux cannot be understood as the product of the group velocity and the exact energy at each higher-order perturbation in steepness. This is of course a well-known fact; see for instance Longuet-Higgins [6] for a general analysis of finite amplitude waves. As pointed out by Longuet-Higgins [6], the form \(F^{(2)}=E^{(2)}c_{g}\) can be recovered only in deep or intermediate waters, because the term containing \(\sinh^{-6}\Lambda\) is very small and the second-order correction to the energy will not exceed the linear term by more than \(0.5\%\), see eq. 3.10 of Mendes _et al._ [35]. In figure 1, the relative growth of the second-order correction to the energy flux is compared to the increase in the energy itself as computed in Mendes _et al._ [35], illustrating that the energy flux decomposition between energy and group velocity of second-order waves is only possible in deep water. Furthermore, in wave processes where the system is driven out of equilibrium, the inability to write the energy flux in the same manner as in linear theory is even more apparent [48]. At higher orders, Jonsson and Arneborg [39] showed that the energy flux of wave-current interactions can even be written in four different ways, whose form can in no way be traced back to that of the linear theory of free waves.

Figure 1: Contour plot of the function \(r_{F}^{(2)}=F^{(2)}/E^{(2)}c_{g}-1\), describing the difference in the growth of energy and its flux due to the steepening of second-order waves at an arbitrary depth.

### Shoaling Process over Arbitrary Finite Slopes

Now, suppose the water depth change is not adiabatic and derivatives of \(h(x)\) can no longer be neglected. The slope-dependent part \(\Delta u\) of the horizontal velocity component (\(u\to u+\Delta u\)) is written as: \[\Delta u = \frac{a\omega}{k}\Bigg\{\sin\phi\,\frac{\partial}{\partial x}\left[\frac{\cosh\theta}{\sinh\Lambda}\right]+\left(\frac{3ka}{8}\right)\sin\left(2\phi\right)\frac{\partial}{\partial x}\left[\frac{\cosh\left(2\theta\right)}{\sinh^{4}\Lambda}\right]\Bigg\}\quad,\] \[= \frac{a\omega\nabla h}{\sinh^{2}\Lambda}\left\{\mathscr{H}_{1}\sin\phi+\left(\frac{3ka}{4}\right)\frac{\mathscr{H}_{2}\sin\left(2\phi\right)}{\sinh^{4}\Lambda}\right\}\,, \tag{13}\] with hyperbolic coefficients: \[\mathscr{H}_{1} = \sinh\theta\sinh\Lambda-\cosh\theta\cosh\Lambda\,,\] \[\mathscr{H}_{2} = \sinh\left(2\theta\right)\sinh^{2}\Lambda-\cosh\left(2\theta\right)\cosh\left(2\Lambda\right)\,. \tag{14}\]
The corresponding energy flux is found: \[F_{0,\,\nabla h}^{(2)} = \rho g\int_{-h}^{0}\left\langle\zeta^{\star}\cdot(u+\Delta u)\right\rangle dz+\rho g\int_{-h}^{0}\left\langle\frac{(u+\Delta u)}{2g}\Big[(u+\Delta u)^{2}+w^{2}\Big]\right\rangle\,dz\quad. \tag{15}\] All the pure velocity terms vanish, because the contributions of \(\langle\sin^{2n+1}\phi\,\cos^{2m+1}\phi\rangle\), \(\langle\sin^{2n}\phi\,\cos^{2m+1}\phi\rangle\), and \(\langle\sin^{2n+1}\phi\,\cos^{2m}\phi\rangle\) vanish for all \((m,n)\in\mathbb{N}^{*}\). The remaining term \(\langle\zeta^{\star}\Delta u\rangle\) has as integrand \(\sin\left(m\phi\right)\cos\left(n\phi\right)\) and also vanishes. Then, I find that \(F_{0,\,\nabla h}^{(2)}=F_{0}^{(2)}\), and no slope dependence can be extracted from the energy flux by means of its definition in eq. (2) with the mean energy level as _datum_. In other words, the only term containing the slope dependence is the actual set-down flux, although more terms have to be added in the most general case [39]. This exercise highlights the unfeasibility of seeking a slope-dependent correction to the conservation of the energy flux. Such a task is quite challenging, inasmuch as the author is unaware of any work that properly formulates the energy flux for a given arbitrary bathymetry. Regardless of the feasibility of this endeavor (energy flux generalization), I will later propose a theoretical shortcut of minimal algebraic work based on extreme wave statistics to effectively solve the problem of nonlinear wave transformation over arbitrary bathymetry.

### Observed Shoaling of Irregular Waves over Breakwaters

When translating features of regular waves to those of irregular waves, the shoaling coefficients of significant wave heights and wavelengths are a good approximation for the regular counterpart [49]. Hence, one must use the "significant" wave number (of the 1/3 tallest waves) \(k_{1/3}=2\pi/\lambda_{1/3}\) within eq. (7). Considering empirical relations between peak, energy and 1/3 periods for a broad-banded JONSWAP spectrum [50; 51], the transformation to peak wavelengths typically follows \(\lambda_{p}/\lambda_{1/3}\sim 1.2\). Only then can a proper comparison between the linear theory for irregular wave shoaling and the observations be performed. The experiments in Raustol [37] provide a wide range of pre-shoal sea states travelling past a symmetrical breakwater. In figure 2, its main results on the evolution of relative water depth are presented. A table containing all physical details of these experimental runs can be found in Trulsen _et al._ [31]. Central to the problem discussed in this work is the evolution of steepness over the breakwater, see for instance figure 3.

Figure 2: Transformation of relative water depth \(k_{p}h\) over a breakwater according to observation in Raustol [37] (dots) and numerical fit thereof (solid lines) from Mendes _et al._ [35]. Shoaling (\(0\leqslant x\leqslant 1.6\)) and de-shoaling zones (\(3.2\leqslant x\leqslant 4.8\)) are marked with dashed vertical lines.

The steepness along the shoaling zone is for the most part slightly underpredicted by linear theory, but this deviation grows as \(k_{p}h\) is lowered in the vicinity of and at the plateau following the shoal. Moreover, linear theory strongly overpredicts the steepness for most of the de-shoaling zone. As reviewed in the previous sections, no theoretical or empirical formula is presently able to predict the nonlinear wave transformation for steep slopes.
More importantly, there is also no physical explanation in terms of energy flux conservation for the drop in steepness as compared to linear wave theory in the de-shoaling zone. Nonetheless, since Shuto's shoaling theory is acclaimed as the most accurate model for mild slopes [21; 25], it is instructive to analyze its predictions for the sake of gaining insight. Using laboratory data for mild slopes, Kweon and Goda [20] reformulated the piecewise theory of Shuto [43] to obtain the relation: \[K_{\varepsilon,G}^{*}=K_{\varepsilon}+\frac{3\coth\left(k_{1/3}h\right)}{2000}\left(\frac{\lambda_{1/3,0}}{h}\right)^{2.87}\left(\frac{H_{s0}}{\lambda_{1/3,0}}\right)^{1.27}\,, \tag{16}\] where \(K_{\varepsilon,G}^{*}\) denotes the nonlinear shoaling coefficient, \(H_{s0}\) the significant wave height and \(\lambda_{1/3,0}\) the significant wavelength in deep water. Converting the wavelength to its spectral peak counterpart [50], two new forms can be found (dimensionless and otherwise): \[K_{\varepsilon,G}^{*}-K_{\varepsilon} \approx \frac{\coth\left(k_{p}h\right)}{438}\frac{H_{s0}^{1.27}T_{p0}^{3.2}}{h^{2.87}} \approx \frac{\pi\coth\left(k_{p}h\right)}{50\sqrt{2}}\frac{\varepsilon_{0}}{(k_{p0}h_{0})^{3}}\left(\frac{h_{0}}{h}\right)^{3}\,, \tag{17}\] where \(\varepsilon=(\sqrt{2}/\pi)k_{p}H_{s}\) is the steepness measure originated in the non-homogeneous spectral theory of Mendes _et al._ [35], Mendes and Kasparian [52]. While the observed nonlinear shoaling coefficient in the experiments of Trulsen _et al._ [31], Raustol [37] lies in the range \(1.07\leqslant K_{\varepsilon}^{*}\leqslant 2\), the correction to linear theory according to eq. (17) is much smaller (\(K_{\varepsilon,G}^{*}-K_{\varepsilon}\lesssim 0.03\)) due to the relatively deep waters prior to the shoal (\(k_{p0}h_{0}\geqslant 1.8\)). As seen in figure 3, the correction predicted by Kweon and Goda [20], and by extension Shuto's theory, cannot describe the observed departure from linear theory over a breakwater. Furthermore, it provides no qualitative improvement over the de-shoaling zone, although the model was not originally formulated for this region. In the next section, I demonstrate how to compute a nonlinear shoaling coefficient compatible with observations, effective both for the maximum atop the shoal and for the de-shoaling zone.

## III Stochastic formulation for the wave transformation

As unveiled in the previous section and reviewed in the literature [25], the best available shoaling theory [43] and its empirical parameterization [20] stumble on the regime of steep slopes under a wide range of pre-shoal conditions. This deficiency might stem from the manner in which the conservation of energy flux itself is formulated, and not necessarily from the water wave solution. Be that as it may, it is evident that no available theoretical or approximate closed-form shoaling coefficient is able to portray the observed laboratory experiments of steep beaches and breakwaters. In this section, I attempt to relate statistical tools for the description of irregular waves with their nonlinear slope-dependent shoaling coefficient. Since Mendes and Kasparian [53] parameterized the effect of the slope magnitude on the probability of rogue waves travelling over a breakwater, I shall solve the inverse problem of describing this slope-dependent departure from non-Gaussian seas as a departure in the shoaling coefficient from its linear theory expectation.
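Before developing the stochastic formulation, the comparison above can be made concrete. The sketch below solves the linear dispersion relation numerically, evaluates the linear steepness shoaling coefficient of eq. (7), and adds the Kweon-Goda correction of eq. (17); the numerical values are only loosely inspired by the experimental setup (a pre-shoal \(k_{p0}h_{0}\) close to 1.8) and are not taken from [37].

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h, tol=1e-12):
    """Solve the linear dispersion relation w^2 = g k tanh(kh) for k (Newton)."""
    w = 2 * np.pi / T
    k = w**2 / g                       # deep-water initial guess
    for _ in range(100):
        f = g * k * np.tanh(k * h) - w**2
        df = g * (np.tanh(k * h) + k * h / np.cosh(k * h)**2)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k

def K_eps(k, h):
    """Linear steepness shoaling coefficient of eq. (7)."""
    kh = k * h
    return np.sqrt(2 * np.cosh(kh)**2 / (2 * kh + np.sinh(2 * kh))) / np.tanh(kh)

def kweon_goda(eps0, k, h, k0, h0):
    """Dimensionless Kweon-Goda correction of eq. (17)."""
    return (np.pi / (50 * np.sqrt(2))) / np.tanh(k * h) * eps0 / (k0 * h0)**3 * (h0 / h)**3

# Illustrative pre-shoal conditions; depths and period are assumptions, not data.
T, h0, eps0 = 1.1, 0.53, 0.03
k0 = wavenumber(T, h0)
h = 0.11                               # illustrative depth atop the breakwater
k = wavenumber(T, h)
print(K_eps(k, h), K_eps(k, h) + kweon_goda(eps0, k, h, k0, h0))
```

The smallness of the correction term for \(k_{p0}h_{0}\approx 1.8\) is apparent from the \((k_{p0}h_{0})^{-3}\) factor alone, in line with the discussion above.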
Figure 3: Evolution of the mean irregular wave steepness according to linear theory (dashed curve) compared with observations (dots) [37] and its numerical fit (solid curve) [35]. ### Non-homogeneous Wave Statistics Let me review the main theoretical aspects of inhomogeneous shoaling wave fields belonging to the works of Mendes _et al._[35] and Mendes and Kasparian [53]. As a remark, the model does not consider the effects of refraction, breaking, and reflection. This is justified by the low amount of reflection and absence of breaking in the experiments of Trulsen _et al._[31], see Zhang and Benoit [32] for a review. Perturbative up to \(m\)-th order in steepness, the generalized velocity potential \(\Phi\) and surface elevation \(\zeta\) solutions are given: \[\Phi=\sum_{m}\Omega_{m}\frac{\cosh\left(m\varphi\right)\sin\left(m\phi\right)} {mk}\ ;\ \zeta=\sum_{m}\tilde{\Omega}_{m}\cos\left(m\phi\right). \tag{18}\] For waves of second-order in steepness one has: \[\Omega_{1} = \frac{a\omega}{\sinh kh}\quad;\quad\Omega_{2}=\frac{3ka^{2}\omega }{4\sinh^{4}kh}\ ;\] \[\tilde{\Omega}_{1} = a\quad;\quad\tilde{\Omega}_{2}=\frac{ka^{2}}{4}\left[\frac{3- \tanh^{2}\left(kh\right)}{\tanh^{3}\left(kh\right)}\right]\,. \tag{19}\] When this group of waves is affected by a change in bathymetry, the Khintchine [54] theorem is modified such that the following ratio no longer equals unity: \[\Gamma(x):=\frac{\mathbb{E}[\zeta^{2}]}{\mathscr{E}}=\frac{\mathbb{E}[\zeta^ {2}(x,t)](x)}{\mathscr{E}(x)}\quad, \tag{20}\] where \(\mathbb{E}[\zeta^{2}]\) is the ensemble average of the square of the surface elevation and \(\mathscr{E}(x)\) the spectral energy density, i.e. gravity and water density are factored out. The energy density of small amplitude waves travelling over a steep slope satisfying \(|\nabla h|\geqslant 1/20\) up to second order in steepness (\(m\leqslant 2\)) reads: \[\mathscr{E} = \sum_{m}\frac{\tilde{\Omega}_{m}^{2}}{4}+\sum_{m}\frac{\Omega_{m }^{2}}{4g}\int_{-h}^{0}\cosh\left(2m\varphi\right)dz\, \tag{21}\] \[= \frac{1}{4}\sum_{m}\left[\tilde{\Omega}_{m}^{2}+\Omega_{m}^{2} \cdot\frac{\sinh\left(2mk_{p}h\right)}{2mgk_{p}}\right]\,,\] \[= \sum_{i}\frac{a_{i}^{2}}{2}\left[1+\frac{\pi^{2}\varepsilon^{2} \mathfrak{S}^{2}}{16}\left(\frac{\tilde{\chi}_{1}+\chi_{1}}{2}\right)\right]\,\] where \(\mathfrak{S}=ka/\pi\varepsilon\) is the wave vertical asymmetry measuring the mean ratio between wave crest and wave heights, and with trigonometric coefficients: \[\tilde{\chi}_{1}=\left[\frac{3-\tanh^{2}\left(k_{p}h\right)}{\tanh^{3}\left( k_{p}h\right)}\right]^{2}\ ;\ \chi_{1}=\frac{9\cosh(2k_{p}h)}{\sinh^{6}(k_{p}h)}\quad. \tag{22}\] On the other hand, assuming a weakly stationary process of waves travelling over a shoal while inhomogeneous in space, one may approximate the ensemble average by the time average and the variance of the surface elevation now reads: \[\langle\zeta^{2}\rangle_{t} = \frac{1}{T}\int_{0}^{T}\left[\sum_{m}\tilde{\Omega}_{m}\,\cos \left(m\phi\right)\right]\left[\sum_{n}\tilde{\Omega}_{n}\,\cos\left(n\phi \right)\right]dt, \tag{23}\] \[= \sum_{m}\frac{\tilde{\Omega}_{m}^{2}}{2}=\sum_{i}\frac{a_{i}^{2} }{2}\left[1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{16}\tilde{\chi}_{1} \right]\quad.\] If waves are linear (\(\varepsilon\to 0\)), the process is ergodic, stationary and homogeneous by means of \(\mathbb{E}[\zeta^{2}]=\langle\zeta^{2}\rangle_{t}=\mathscr{E}=\sum_{i}a_{i}^{2}/2\) and thus \(\Gamma=1\). 
Then, it can be shown that the narrow-banded distribution of wave heights up to second order in steepness will be transformed to account for \(\Gamma\): \[\mathcal{R}_{\alpha,\Gamma}(H>\alpha H_{s})=e^{-2\alpha^{2}/\Gamma}\quad, \tag{24}\] with the analytical expression for the second-order inhomogeneity correction \(\Gamma\) reading: \[\Gamma=\frac{1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{16}\,\tilde{\chi}_{1}}{1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{32}\,(\tilde{\chi}_{1}+\chi_{1})}\quad. \tag{25}\] Recently, this theory has been successfully generalized to an arbitrary bathymetry shape of mean slope magnitude \(|\nabla h|\) [53; 55]: \[\Gamma_{\nabla h}=\frac{1+\frac{\pi^{2}\mathfrak{S}^{2}\varepsilon^{2}}{16}\,\tilde{\chi}_{1}}{1+\frac{\pi^{2}\mathfrak{S}^{2}\varepsilon^{2}}{32}\,(\tilde{\chi}_{1}+\chi_{1})+\mathscr{E}_{p2}}\,, \tag{26}\] where \(\mathscr{E}_{p2}\) is the change in potential energy due to a finite mean water level \(\langle\zeta\rangle\neq 0\) in the presence of waves as compared to the still water level: \[\mathscr{E}_{p2}=\frac{6\pi^{2}\varepsilon^{2}}{5\mathfrak{S}^{4}(k_{p}h)^{2}}\,\tilde{\nabla}h\Big(1+\tilde{\nabla}h\Big)\,,\qquad\tilde{\nabla}h\equiv\frac{\pi\nabla h}{k_{p0}h_{0}}\,. \tag{27}\] ### Inversion of the Slope Effect The maximum drop in the mean water level due to shoaling appears near the top of the breakwater and quickly vanishes further atop [56]. In the case of small amplitude waves and in the absence of wave breaking, the decrease of the set-down will still be manifested atop the shoal of a breakwater, because the latter induces a piling-up in the de-shoaling zone [57]. As such, the effect of \(\mathscr{E}_{p2}\) quickly grows near the peak of the mean water level drop in the shoaling zone, while the wave transformation can still be well described by linear theory. On the other hand, within fractions of a wavelength atop the shoal, the nonlinear wave transformation kicks in as the mean water level quickly returns to the pre-shoal level. From a statistical perspective, the increase in exceedance probability encoded in \(\mathscr{E}_{p2}\) due to the mean water level can be understood as being carried over and transferred to the nonlinear shoaling of the wave steepness. Suppose I may write \(K_{\varepsilon}^{*}=K_{\varepsilon}(1+\mathcal{F}_{\nabla h})\) for the nonlinear modification of the linear shoaling coefficient. Then, the non-homogeneous spectral correction \(\Gamma\) depends on the slope under the regime of linear shoaling over the ramp and is equivalent to the regime of nonlinear shoaling atop the breakwater: \[\Gamma_{\nabla h}\approx\frac{1+\frac{\pi^{2}}{16}\mathfrak{S}^{2}\varepsilon^{2}(1+\mathcal{F}_{\nabla h})^{2}\,\tilde{\chi}_{1}}{1+\frac{\pi^{2}}{32}\mathfrak{S}^{2}\varepsilon^{2}(1+\mathcal{F}_{\nabla h})^{2}\,(\tilde{\chi}_{1}+\chi_{1})}\,. \tag{28}\] Let the definition \(\Omega_{\varepsilon}^{2}=(\pi^{2}/16)\,\mathfrak{S}^{2}\varepsilon^{2}\) be employed.
Hence, one finds the modification to the nonlinear shoaling coefficient through: \[\frac{1+\Omega_{\varepsilon}^{2}\tilde{\chi}_{1}}{1+\Omega_{\varepsilon}^{2}\frac{(\tilde{\chi}_{1}+\chi_{1})}{2}+\mathscr{E}_{p2}}\approx\frac{1+\Omega_{\varepsilon}^{2}(1+\mathcal{F}_{\nabla h})^{2}\tilde{\chi}_{1}}{1+\Omega_{\varepsilon}^{2}(1+\mathcal{F}_{\nabla h})^{2}\frac{(\tilde{\chi}_{1}+\chi_{1})}{2}}\,. \tag{29}\] Solving the equation for \(\mathcal{F}_{\nabla h}\), I obtain: \[\mathcal{F}_{\nabla h}\approx\sqrt{\frac{\Omega_{\varepsilon}^{2}\left(\tilde{\chi}_{1}-\chi_{1}\right)-2\tilde{\mathscr{E}}_{p2}}{\Omega_{\varepsilon}^{2}\left(\tilde{\chi}_{1}-\chi_{1}\right)+2\tilde{\mathscr{E}}_{p2}\Omega_{\varepsilon}^{2}\tilde{\chi}_{1}}}-1\quad. \tag{30}\] Since \(\tilde{\mathscr{E}}_{p2}\) represents only a small fraction of either the potential or kinetic spectral energies, a Taylor expansion up to first order in \(\tilde{\mathscr{E}}_{p2}/\left(\tilde{\chi}_{1}-\chi_{1}\right)\) may be performed, finding: \[\mathcal{F}_{\nabla h} = \sqrt{1-\frac{2\tilde{\mathscr{E}}_{p2}(1+\Omega_{\varepsilon}^{2}\tilde{\chi}_{1})}{\Omega_{\varepsilon}^{2}\left(\tilde{\chi}_{1}-\chi_{1}\right)}}-1\, \tag{31}\] \[\approx -\frac{\mathscr{E}_{p2}(1+\Omega_{\varepsilon}^{2}\tilde{\chi}_{1})}{\Omega_{\varepsilon}^{2}\left(\tilde{\chi}_{1}-\chi_{1}\right)}\quad.\] Qualitatively, the slope-dependence of the nonlinear shoaling coefficient is proportional to the normalized variance of the surface elevation and to the change in potential energy (due to the set-down), while being inversely proportional to the difference between the second-order harmonic corrections. Within the second-order approximation, the difference \(\tilde{\chi}_{1}-\chi_{1}\) is always finite. These two harmonic corrections are bound to be equal in the limit of \(k_{p}h\to 0\), but in this limit, finite-amplitude wave theory has to be taken into account, precluding this difference from vanishing. Therefore, although the slope-dependent correction to linear shoaling theory will quickly grow in shallower water, it does not diverge if I no longer assume waves of small amplitude (see figure 4). Indeed, the model is out of its scope for \(k_{p}h\leqslant(3\pi\varepsilon)^{1/3}\) due to the limitations imposed upon the Ursell number by the second-order theory [5]. As such, this divergence will be balanced by wave dissipation. Slope magnitude aside, if waves are linear their mean steepness is small and therefore the normalized variance of the surface elevation recovers unity, and both the set-down and the second-order correction to the energy are negligible. Under these conditions, the linear shoaling coefficient is an accurate portrayal of wave transformation in deep water regardless of how steep waves are, see figure 4. Likewise, if the wave steepness is very low, linear shoaling will remain valid even near shallow water. The full formula for the nonlinear shoaling coefficient reads: \[\frac{K_{\varepsilon}^{*}}{K_{\varepsilon}}\approx 1-\frac{96\left(1+\frac{\pi^{2}\mathfrak{S}^{2}\varepsilon^{2}}{16}\tilde{\chi}_{1}\right)}{5\mathfrak{S}^{6}(k_{p}h)^{2}\left(\tilde{\chi}_{1}-\chi_{1}\right)}\cdot\frac{\pi\nabla h}{k_{p0}h_{0}}\left(1+\frac{\pi\nabla h}{k_{p0}h_{0}}\right)\,. \tag{32}\] In figure 5, contour plots describe how the nonlinear correction to the shoaling coefficient varies with the mean wave steepness, the depth gradient for both shoaling and de-shoaling zones, as well as the relative water depth.
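For concreteness, eq. (32) together with the harmonic corrections of eq. (22) can be evaluated directly. A minimal sketch follows; the parameter values are illustrative only, and the sign convention assumed for \(\nabla h\) (negative where the depth decreases along the propagation direction) is chosen so that \(K_{\varepsilon}^{*}>K_{\varepsilon}\) in the shoaling zone.

```python
import numpy as np

def chi_tilde(kh):
    """First second-order harmonic correction of eq. (22)."""
    t = np.tanh(kh)
    return ((3 - t**2) / t**3)**2

def chi(kh):
    """Second harmonic correction of eq. (22)."""
    return 9 * np.cosh(2 * kh) / np.sinh(kh)**6

def shoaling_ratio(eps, S, kh, slope, k0h0):
    """Ratio K*_eps / K_eps of eq. (32); all arguments are dimensionless."""
    ct, c = chi_tilde(kh), chi(kh)
    grad = np.pi * slope / k0h0
    numerator = 96 * (1 + (np.pi**2 * S**2 * eps**2 / 16) * ct)
    denominator = 5 * S**6 * kh**2 * (ct - c)
    return 1 - (numerator / denominator) * grad * (1 + grad)

# Illustrative values only; the result is very sensitive to eps, S, and kh.
for slope in (-0.25, +0.25):   # shoaling vs. de-shoaling side under our convention
    print(slope, shoaling_ratio(eps=0.04, S=1.0, kh=0.5, slope=slope, k0h0=1.8))
```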
The more nonlinear the waves are prior to the shoal, the larger the deviation from linear shoaling will be at steep slopes. While continuous shoaling will lead to dissipation until it reaches a saturation of nonlinear shoaling, the saturation of the effect of slope in the shoaling zone [53] will be translated into a saturation of the increase of the nonlinear shoaling coefficient even without dissipation. For the de-shoaling zone, the decrease in shoaling coefficient is associated with a higher mean water level and is not expected to saturate with steeper slopes. Both in the transition between a set-down and a piling-up near the start of the de-shoaling zone (\(2.8<x<3.4\)) and in the remaining part of the de-shoal (\(3.4<x<4.8\)), the stochastic model provides the best description of the observations, albeit in some cases the magnitude of the decrease in steepness compared to linear theory is larger than in the experimental data. The occasional and small overprediction by the present model of the wave transformation in the shoaling zone, and its underprediction over the de-shoaling zone, are likely due to an occasional overestimation of the set-down and piling-up magnitudes. The latter highlights limitations in the slope parameterization of Mendes and Kasparian [53].

Figure 6: New model prediction for the shoaling coefficient of significant steepness measured against observations in Raustol [37] and the linear theory modelling thereof.

Further scrutiny of the deviations from linear theory is applied in figure 7 through the evaluation of absolute differences: the stochastic theory outperforms the linear theory atop the shoal and in the de-shoaling zone, where the nonlinear effects are most pronounced, by almost an order of magnitude. However, in the region where nonlinear effects are still developing, the two models provide similar deviations from observation. Indeed, the stochastic model has a typical absolute average difference of 3% from observation through the entire breakwater evolution, whereas the linear theory averages 8%. Their difference is not only in magnitude: figure 8 shows that the stochastic model is consistent in all regimes, while the linear theory rapidly departs from observations as the peak in excess kurtosis in the breakwater plateau increases. Clearly, this trend is mirrored in the comparison of error as a function of the plateau relative depth \(k_{p}h\). When considering the combined regions of the plateau and de-shoal of the breakwater, the stochastic model deviation increases only slightly, while the linear counterpart rises to \(11\%\). Hence, the comparison throughout multiple regions of the steep breakwater between experiments, stochastic nonlinear theory and linear theory suggests that the slope magnitude parameterization [53] returns an accurate prediction for the wave transformation, with an ever larger disparity between the models as nonlinear effects grow in intensity in shallower waters. Interestingly, a closer look beyond numerical accuracy shows that eq. (32) qualitatively agrees with the estimate from eq. (17) by assigning no role to the mean steepness in deep water, because \(\varepsilon\ll(kh)^{3}\). This behaviour in deep water must be recovered by any shoaling theory, as deep water waves cannot feel the bottom regardless of the slope magnitude. Remarkably, both the stochastic theory in eq. (32) and the numerical fit of Kweon and Goda [20] for the nonlinear correction are inversely proportional to \((kh)^{3}\).
This similarity points to an unveiled convergence of the portrayal of the physical system by both deterministic and stochastic approaches.

Figure 7: Absolute difference between experimental and theoretical steepness.

Figure 8: Normalized absolute difference between experimental and theoretical steepness as a function of maximum excess kurtosis atop the shoal and the relative water depth.

## IV Conclusions

A stochastic model for the evolution of steepness in terms of a nonlinear shoaling coefficient has proven excellent in describing observations of well-known experiments over a steep symmetrical breakwater. The formulation proves to be much simpler and more effective than existing theoretical and empirical models based on the conservation of energy flux. While the new model fills the gap created by the underprediction of the steepness growth of shoaling waves by linear theory, it also explains why linear theory overpredicts the steepness evolution over de-shoaling zones. The main novelty in this study is the approach: the principle of continuity of the stochastic and statistical behavior of the water waves implies a process that can be understood as a transfer of the nonlinearity due to the change in mean water level at the end of the shoaling zone to the shoaling coefficient atop the breakwater. Especially for out-of-equilibrium systems, this symbiosis suggests that it may be possible to compute fundamental properties of nonlinear waves through stochastic means that are otherwise elusive to deterministic methods, such as the conservation of integral properties. Because the new theory deals with small amplitude waves, wave breaking cannot be considered. Thus, future work has to address a generalized shoaling coefficient from small up to finite and breaking amplitude waves. Nevertheless, the model can be applied to a beach as well, and the departure of the wave transformation from linear theory will be captured in the region near the peak of the set-down.

**Declaration of Interests**. The author reports no conflict of interest.
2310.08660
Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach
In this work, we consider the problem of network parameter optimization for rate maximization. We frame this as a joint optimization problem of power control, beam forming, and interference cancellation. We consider the setting where multiple Base Stations (BSs) communicate with multiple user equipment (UEs). Because of the exponential computational complexity of brute force search, we instead solve this nonconvex optimization problem using deep reinforcement learning (RL) techniques. Modern communication systems are notorious for their difficulty in exactly modeling their behavior. This limits us in using RL-based algorithms as interaction with the environment is needed for the agent to explore and learn efficiently. Further, it is ill-advised to deploy the algorithm in the real world for exploration and learning because of the high cost of failure. In contrast to the previous RL-based solutions proposed, such as deep-Q network (DQN) based control, we suggest an offline model-based approach. We specifically consider discrete batch-constrained deep Q-learning (BCQ) and show that performance similar to DQN can be achieved with only a fraction of the data without exploring. This maximizes sample efficiency and minimizes risk in deploying a new algorithm to commercial networks. We provide the entire project resource, including code and data, at the following link: https://github.com/Heasung-Kim/ safe-rl-deployment-for-5g.
Heasung Kim, Sravan Kumar Ankireddy
2023-10-12T18:36:36Z
http://arxiv.org/abs/2310.08660v2
# Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach ###### Abstract In this work, we consider the problem of network parameter optimization for rate maximization. We frame this as a joint optimization problem of power control, beam forming, and interference cancellation. We consider the setting where multiple Base Stations (BSs) communicate with multiple user equipment (UEs). Because of the exponential computational complexity of brute force search, we instead solve this non-convex optimization problem using deep reinforcement learning (RL) techniques. Modern communication systems are notorious for their difficulty in exactly modeling their behavior. This limits us in using RL-based algorithms as interaction with the environment is needed for the agent to explore and learn efficiently. Further, it is ill-advised to deploy the algorithm in the real world for exploration and learning because of the high cost of failure. In contrast to the previous RL-based solutions proposed, such as deep-Q network (DQN) based control, we suggest an offline model-based approach. We specifically consider discrete batch-constrained deep Q-learning (BCQ) and show that performance similar to DQN can be achieved with only a fraction of the data without exploring. This maximizes sample efficiency and minimizes risk in deploying a new algorithm to commercial networks. We provide the entire project resource, including code and data, at the following link: [https://github.com/Heasung-Kim/safe-rl-deployment-for-5g](https://github.com/Heasung-Kim/safe-rl-deployment-for-5g). Wireless communications, coordinated multi-point ## I Introduction The goal of this project is to develop a reinforcement learning-based policy for the control problem that aims to maximize transmission rate and minimize interference in a 5G network. Multiple base stations (BSs) design wireless signals to maximize the information transmission rate over the wireless channel, while each signal should not interfere with the signals from the other BSs. For many large-scale control problems in wireless communications, reinforcement learning algorithms have been applied widely to obtain significant improvements. Combined with deep neural networks, it is possible to model and approximate many continuous state and action spaces. However, modifying the service policy of a commercial network or trying a new service policy is very risky. These RL-based algorithms cannot be directly deployed because of the initial degradation in user service, which might hurt the customer relations of the service provider. Hence, to train the RL agent without directly deploying in real-world scenarios, we need an accurate environment model. Because of the large number of components involved, especially with 5G systems becoming more prevalent, accurately modeling the environment and the movements of users is intractable. To solve this practical problem, we aim to develop a policy that learns from a finite set of data, where interaction with the environment is severely limited. In contrast, some of the previous solutions to this problem of network parameter optimization are based on the assumption of having access to an ideal simulator and a large number of interactions with the environment [1]. The authors here also assume that arbitrary radio waves (precoders) can be designed and sent for the exploration phase of a Q-learning approach.
This is not feasible in practical settings; hence, we do not make this assumption in our work, which prevents our agent from exploring. We aim to develop a more practical algorithm using offline model-based reinforcement learning that can increase sample efficiency when exploration carries a hefty cost. ## II Background and Related Work In this section, we provide a brief background on wireless networks and reinforcement learning. We look at some of the relevant works that use RL-based approaches to solve wireless networking problems. **Wireless Networks.** We consider the problem of maximizing the Signal-to-Interference-plus-Noise Ratio (SINR) of a multi-user cellular communication network. We are particularly interested in three key parameters of the setting. The first is the choice of the joint beamforming vector \(\mathcal{F}\), which should increase the effective power for the intended user and decrease the interference for the other users. The second is the power \(\mathcal{P}\) allotted to each user. While increasing the power of a particular user can result in an improved communication link, it also increases the interference for other users and hence needs to be varied carefully. Finally, we are interested in achieving a minimum target SINR of \(\gamma_{\text{target}}\) while maximizing the SINR and choosing the best power control and beamforming vectors. We would like to jointly optimize all three parameters, formulated as a non-convex optimization problem, to achieve the best possible SINR. The authors in [2] discuss the problem of joint power control under the constraint of non-cooperative beamforming, _i.e._, the two base stations (BSs) do not exchange any channel state information (CSI) to optimize for interference cancellation, but a limited number of slow-fading parameters are exchanged to improve the SINR. The authors derive bounds on the achievable SINR and provide a scheme to improve the SINR in the presence of matched-filter (MF) based beamforming; this approach is general enough to be applicable to all beamforming techniques. In [3], the authors propose an iterative algorithm to jointly update the power control and beamforming weights. A major drawback, however, is that it does not account for the highly dynamic nature of the channel, with scattering and shadowing, which are especially critical when dealing with millimeter-wave settings in 5G and 6G. In the standards [4], the prevalent practice is to use an almost blank subframe (ABS) to deal with the co-channel inter-cell interference problem. But this works well only for a fixed beamforming pattern. Because of the highly dynamic nature of channels in mmWave communications, the ABS method is of limited use. Thus, as is evident from the limitations of existing algorithms, we need an algorithm that can quickly adapt to the dynamic nature of the channels. Reinforcement learning is a natural choice for an online learning-based algorithm for parameter control. In [5], the authors look at beamforming vector selection in a non-line-of-sight (NLOS) setting using a deep Q network. The sum-rate of the UEs is maximized under transmission power and minimum quality of service (QoS) constraints. A CNN-based approach was taken to estimate the Q-function. Finally, the estimated Q-function was used to employ deep Q-learning for online control of the power allocation. Apart from power control, RL-based approaches have been used to solve other problems in the wireless domain.
In [6], the authors develop a DQN-based approach to adaptively learn and find the policy that maximizes the long-term number of successful transmissions. From these works, it is evident that, if used effectively, RL algorithms can provide a huge advantage over traditional algorithms in wireless communications in scenarios where analytical closed-form solutions are impossible to design. In the next section, we provide the necessary background and a review of the RL algorithms that are relevant to the joint power control problem considered in this project. **Reinforcement Learning.** RL is a machine learning technique that learns the optimal set of actions to be taken to maximize the expected reward by interacting with the environment. The following are some of the key terms in RL: 1. _Agent:_ The learning algorithm that tries to maximize the expected reward. 2. _State:_ One of the possible scenarios that the agent can encounter during its interaction with the environment, denoted as \(s\in\mathcal{S}\), where \(\mathcal{S}\) is the collection of all possible states. 3. _Action:_ One of the possible ways in which the agent can interact with the environment, denoted as \(a\in\mathcal{A}\), where \(\mathcal{A}\) is the collection of all possible actions. 4. _Reward:_ The quantified return \(r\) received by the agent for taking action \(a\). 5. _Policy:_ The probabilistic mapping from state \(s\) to action \(a\), denoted by \(\pi(a|s)\). 6. _Discount factor:_ The penalization term that controls the importance of future rewards, denoted by \(\gamma\). 7. _Episode:_ The set of all states that are part of the experience, from the starting state to the end of the interaction. 8. _Terminal State:_ A special state where the episode ends or the reward is 0. 9. _Value function:_ The overall expected reward starting from state \(s\), assuming policy \(\pi\) is followed, given by \[V(s)=\mathbb{E}\left[\sum_{t=0}^{T-1}\gamma^{t}R_{t}\,\middle|\,s_{0}=s,\pi\right]\] (1) 10. _Q-function:_ The overall expected reward starting from state \(s\) and action \(a\), assuming policy \(\pi\) is followed for future actions, given by \[Q(s,a)=\mathbb{E}\left[\sum_{t=0}^{T-1}\gamma^{t}R_{t}\,\middle|\,s_{0}=s,a_{0}=a,\pi\right]\] (2) RL algorithms for solving MDP problems are derived from tabular methods. Given the state transition model, the dynamic programming method of finding a solution using the contraction operator can be adopted. However, practical problems rarely come with a perfect model. When a transition model is not given, model-free methods that can learn without knowledge of the model can be used to find a solution. One of the most widely used model-free tabular reinforcement learning algorithms is Q-learning [7]. Q-learning approximates the action-value function using a table, a finite discrete memory that stores action-values for all possible state-action pairs, and periodically updates the approximated action-value table using the Bellman operator. However, the MDP we are dealing with in this paper has a large state space and a multidimensional action space that contains all the cases we can choose from for power control and beam design. This requires tabular learning agents to store a large number of state-action pairs, which is not practical in terms of memory. It is also computationally expensive to retrieve a desired value from such tabular storage.
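For concreteness, a minimal sketch of the tabular Q-learning update just described (state/action counts and step sizes here are hypothetical, purely for illustration):

```python
import numpy as np

n_states, n_actions = 100, 16        # hypothetical sizes
alpha, gamma = 0.1, 0.99             # step size and discount factor

Q = np.zeros((n_states, n_actions))  # the action-value table

def q_update(s, a, r, s_next):
    """One Bellman-operator update of Q(s, a) from a transition (s, a, r, s')."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# Memory grows as n_states * n_actions; for the joint power/beamforming
# problem both factors explode, motivating function approximation below.
```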
In such scenarios, it is often more efficient to use a function approximator to model the action-value function, which is especially beneficial for high-dimensional optimization problems. Artificial neural networks are a popular choice of function approximator, and multi-layered neural networks are widely used to approximate high-dimensional state-action spaces because of their large representational capacity [8]. The approximation and update of action-value functions using deep neural networks are discussed in depth in [9]. The weights constituting the neural network are trained through online updates based on the Bellman equation. The authors of that paper employ various techniques to circumvent the issues encountered in practice when using function approximators. One of the strategies is the _replay memory_ method, where samples generated during the learning process are not discarded but instead stored in a buffer of limited size. Some of them are selected in each training step to form a batch, and the learning agent updates the weights based on that batch. Furthermore, the authors proposed the use of two neural network architectures that approximate an identical action-value function, reducing the variance of the action-value targets used for updating the weights by freezing one of the networks for a certain number of steps. This resulted in performance beyond the expert human level in several Atari games. Function approximation has been applied to policy-based reinforcement learning as well as action-value-based reinforcement learning. Using the policy gradient theorem [10], it is possible to obtain the gradient of the total return, the sum of the rewards that an agent can obtain through interaction with an environment, with respect to the policy of the agent, in order to maximize the return. The authors of [11] adopted the actor-critic method, which combines the policy gradient-based method and the action-value-based method, and combined it with function approximation using artificial neural networks. They design an actor representing the policy and a critic representing the action-value using two different neural networks. The actor network is trained with the policy gradient technique; the critic network represents an action-value function and is updated in the direction of minimizing the MSE loss based on the Bellman equation. This structure showed high performance in autonomous driving, and in particular, it was shown that reinforcement learning techniques using artificial neural networks can achieve good performance even in continuous control. We define the action as the various parameters that determine the performance of the wireless network, and the state as a collection of various indicators of the wireless network and of the user terminals connected to it. From this point of view, the network can be interpreted as the environment in the usual reinforcement learning sense. This problem has additional difficulties beyond the curse of dimensionality of the spaces we need to consider. Interaction between the learning agent and the environment carries a large cost of failure.
This is because, in a wireless network system, if a new agent tries various unverified behaviors during its exploration, it can seriously impair the quality of commercial networks. To avoid performance degradation in commercial networks due to the deployment of untrained reinforcement learning agents, we could consider developing simulators to represent the commercial wireless networks, but building a simulator that mimics a huge network also incurs a significant cost due to the numerous sensitive factors that must be considered, such as user behavior patterns, signal strength, and the channel environment. Hence, we can instead use previously collected data and employ learning algorithms that do not require additional interactions with the environment. This is the reason we chose batch reinforcement learning. Batch reinforcement learning is a reinforcement learning approach that trains agents using only a fixed batch of data. In [12], the authors proposed batch reinforcement learning using neural network-based function approximators and the action-value and actor-critic approaches. To overcome the limitations of learning from limited batches, they introduced a generative model that generates actions likely to appear in the already obtained batch. They also proposed a perturbation model, whose function is to take an action as input and perturb it within a certain interval. Such a process helps an agent trained on a limited batch to learn a behavioral pattern that interpolates between imitation learning and Q-learning. These components, combined with double Q-learning [13], constitute the algorithm named BCQ. We aim to train an agent to select the optimal parameters for a wireless network based on limited data by adopting the BCQ approach. It is expected that the performance of the wireless network can be improved by training the agent on the data already obtained, without installing an unfinished algorithm in the commercial network. ### _Our contributions_ * We propose an algorithm that attempts a model-based rollout policy at the action decision phase, based on prior knowledge of the transition model, while exploiting a batch-constrained RL scheme. We call the proposed approach BCMQ, which stands for Batch-Constrained Model-based rollout policy Q-learning. * We implemented a wireless system in the _gym_ environment class format, as sketched below. This system represents an environment where multiple base stations and multiple UEs can cooperate with each other to mitigate interference. This implementation allows us to fairly compare the performance of various reinforcement learning algorithms with that of our proposed algorithm. Through such a performance comparison, we show that the performance of the existing state-of-the-art approaches can be approximated even with the limited data given. * We compared the performance of our proposed algorithm with the coordinated multi-point strategy using reinforcement learning, which is being intensively studied in wireless communication. We experimentally show that the performance of existing approaches using DQN can be surpassed without exploration. To the best of our knowledge, our approach is the first 5G network parameter optimization technique using a batch-constrained RL approach. It is expected that our approach can be usefully applied in commercial network environments where interaction with the environment is expensive.
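The contributions above mention a _gym_-format implementation of the wireless system; the full version is in the linked repository. The skeleton below is only a hypothetical sketch of such an interface (class name, dimensions, and helper names are illustrative, not the authors' code):

```python
import numpy as np

class MultiCellInterferenceEnv:
    """Hypothetical gym-style skeleton: multiple BSs jointly serving UEs.

    State: transmit powers, codebook indices, and UE coordinates.
    Action: per-UE increase/decrease of power level and codebook index.
    """

    def __init__(self, n_ue=2):
        self.n_ue = n_ue
        self.n_actions = 2 ** (2 * n_ue)  # +/- power, +/- codebook per UE

    def reset(self):
        # Drop UEs uniformly at random around the BSs (radius 150 m) and
        # reset powers/codebook indices; the channel follows from geometry.
        self.ue_xy = np.random.uniform(-150, 150, size=(self.n_ue, 2))
        self.power = np.ones(self.n_ue)
        self.codebook_idx = np.zeros(self.n_ue, dtype=int)
        return self._observe()

    def step(self, action):
        # Decode the discrete action into per-UE adjustments, redraw the
        # channel, and return the clipped sum-SINR reward (both omitted here).
        reward = 0.0  # placeholder for the clipped sum-SINR computation
        return self._observe(), reward, False, {}

    def _observe(self):
        return np.concatenate([self.power, self.codebook_idx, self.ue_xy.ravel()])
```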
## III System Model and Problem Formulation ### _System Model_ In this work, we consider the setting of multiple base stations communicating with multiple UEs, causing interference to each other, as shown in Fig. 1. We assume a centralized controller that can not only control the transmit power of each BS but also coordinate with other BSs to control the interfering transmit power as well. This is a realistic setting, with the key difficulty that the transmit power controlling the quality of the connection to one served user is also the cause of quality degradation for another user, since it acts as interference. We consider a multi-antenna setup where each BS has \(M\) antennas and every UE has a single antenna. The signal received by a UE from the \(l^{\textrm{th}}\) BS is given by \[y_{l}=h_{l,l}^{*}f_{l}x_{l}+\sum_{b\neq l}h_{l,b}^{*}f_{b}x_{b}+n_{l}, \tag{3}\] where \(x_{b},x_{l}\) are the transmit signals corresponding to the \(b^{\textrm{th}}\) and \(l^{\textrm{th}}\) BS respectively, satisfying the power constraint \(\mathbb{E}[|x_{l}|^{2}]=P_{TX,l}\). The \(M\times 1\) beamforming vectors \(f_{b},f_{l}\) are chosen by the BSs in a coordinated manner to reduce the interference, based on the \(M\times 1\) channels experienced by each user, \(h_{l,l},h_{l,b}\) respectively. Finally, \(n_{l}\) is the additive white Gaussian noise at the receiver, distributed as \(\mathcal{N}(0,\sigma^{2})\). ### _Problem formulation and terminology_ We formulate the problem in an RL setting where the agent is not allowed to interact with the environment and can only learn from a finite set of samples, _i.e.,_ the number of samples included in the usable dataset is \(N_{q}<\infty\). In other words, only \(N_{q}\) queries can be accessed. This assumption is crucial for two key reasons. The first is the high cost associated with failure in commercial networks, which prevents us from deploying a sub-optimal algorithm and hoping to improve via exploration. The second is the fact that, because of the high cost associated with data collection in real networks, we want to be as efficient as possible with the data given to us, unlike settings where a perfect simulator is available. We now define and revisit some of the key terms and their usage in the context of our problem formulation. * **Signal-to-Interference-plus-Noise Ratio (SINR).** To measure the effective quality of our connection with a UE, we take the ratio of the strength of the signal of interest to the sum of all unwanted signals, _i.e.,_ interference, plus noise. This is a reliable proxy for the sum-rate capacity experienced by the user, and our goal is to find the optimal parameters that maximize it. The SINR for user \(k\) in time slot \(t\) is calculated, based on Eq. 3, as \[\textrm{SINR}_{t,u=k}=\frac{P_{t,u=k}|\textrm{h}_{t,u=k,b=k}^{\textrm{H}}\textbf{v}_{t,u=k}|^{2}}{\sum_{j\neq k}P_{t,u=j}|\textrm{h}_{t,u=k,b=j}^{\textrm{H}}\textbf{v}_{t,u=j}|^{2}+\sigma_{N}^{2}}\] (4) where \(P_{t,u=k}\) indicates the transmission power for the UE with index \(k\) in time slot \(t\). The multiple-input-single-output (MISO) channel vector between the \(b=k\)-th BS and the UE with index \(u=k\) at time slot \(t\) is denoted by \(\textrm{h}_{t,u=k,b=k}^{\textrm{H}}\). The noise power density is \(\sigma_{N}^{2}\). Note that the signals from the undesired BSs, which are not connected with the UE \(k\), are considered interfering signals that degrade the quality of the communication.
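For concreteness, the following is a small numerical sketch of the SINR computation in (4) on synthetic channels (all dimensions, values, and names are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_ue = 4, 2                       # antennas per BS, number of UEs
sigma_n2 = 1e-3                      # noise power

# h[k, b]: synthetic M x 1 channel from BS b to UE k;
# v[b]: unit-norm beamforming vector chosen by BS b; P[b]: transmit power.
h = rng.standard_normal((n_ue, n_ue, M)) + 1j * rng.standard_normal((n_ue, n_ue, M))
v = rng.standard_normal((n_ue, M)) + 1j * rng.standard_normal((n_ue, M))
v /= np.linalg.norm(v, axis=1, keepdims=True)
P = np.ones(n_ue)

def sinr(k):
    """Desired power over interference plus noise for UE k, as in (4)."""
    desired = P[k] * np.abs(h[k, k].conj() @ v[k]) ** 2
    interference = sum(P[j] * np.abs(h[k, j].conj() @ v[j]) ** 2
                       for j in range(n_ue) if j != k)
    return desired / (interference + sigma_n2)

print([float(10 * np.log10(sinr(k))) for k in range(n_ue)])  # in dB
```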
For the desired signal in the numerator of the definition (4), \(\textrm{h}_{t,u,b}^{\textrm{H}}\) has the same index \(k\) for both the UE and the BS. * **State.** We define our state as a collection of relevant information about the BS-UE connections. In this setting, we consider the transmit power and the beamforming vector, represented by the index of the codebook, both of which are affected by the actions taken. For the setting under consideration, we consider a continuous state space. We also include the geographical coordinates of the users as an observable part of the state. While the actions taken by the RL agent do not affect the coordinates of the user, these coordinates are necessary for generating the channel realization. * **Action.** We define our action as the change of the relevant parameters, _i.e.,_ the transmit power and the beamforming vector. For the experiments performed, we restrict ourselves to a discrete action space. 1 Each action taken will result in either an increase or a decrease of transmit power, in multiples of pre-defined discrete steps. Similarly, the action will also result in an increase or a decrease of the index of the beamforming vector. For example, consider a simple two-user case, _i.e.,_ \(N_{\textrm{UE}}=2\). For each user, both the increase and decrease of the transmit power and the increase and decrease of the codebook index must be determined. That is, a discrete action space of size \(2^{2}\times 2^{2}=16\) is required, which makes exhaustive search computationally infeasible in real time. When determining the communication configuration of one user, the communication status of the other users must be carefully considered, and the states of all users must be controlled simultaneously so that multiple BSs can cooperate, which makes exhaustive search even more prohibitive. Footnote 1: Note that our network environment is applicable to both continuous and discrete actions, but for a direct performance comparison with existing baseline algorithms, we focus on the discrete action space. * **Reward.** We define our reward as the cumulative SINR experienced by all the UEs in the system. This reflects our goal of maximizing the sum-rate capacity of all the users. The reward is tricky in the sense that an action that results in an increased reward for a particular UE, _i.e.,_ the SINR of that UE, might result in a reduced SINR for other users. That is, this problem can be interpreted as a trade-off problem in which the powers of the desired signal and the interference signal for each UE must be well balanced. Hence, taking the cumulative sum of the rewards across all users is a reasonable representation of the system performance. Additionally, because of the extremely high variance of the channel conditions experienced by the users, the SINRs can often take very low or very high values, which can make learning the action-value function difficult. To circumvent this, we clip the reward to remain within a certain range, \([\textrm{SINR}_{min},\textrm{SINR}_{max}]\). In order to be more representative of the underlying wireless channel, average performance is measured over a large number of samples. Fig. 1: A user served by one BS experiences interference from the other BSs. A centralized _agent_ performs a joint optimization over the transmit power and beamforming selection to minimize the interference and maximize the sum-rate of the users.
We finally define the reward as \[R_{t}=\operatorname{clip}\left(\sum_{k=1}^{N_{\text{UE}}}\text{SINR}_{t,u=k},\ \text{SINR}_{min},\ \text{SINR}_{max}\right).\] * **Dynamics.** For the RL algorithm to be meaningful, we first need to formulate our problem in a sequential manner. In our problem, at any time step \(t\) the agent can take an action \(a_{t}\) that affects the choice of transmit power and beamforming vectors in the next time step \(t+1\). Because of this, the reward observed at time step \(t+1\), \(R_{t+1}\), depends on the action \(a_{t}\), which in turn affects the action at time \(t+1\), \(a_{t+1}\). Hence the transmit power and the beamforming vector vary sequentially until the end of the episode. Thus, the problem can be formulated as an MDP, and the future states are decided by the current actions taken by the RL agent. On a side note, the geographical location of the users is an observable state that is unaffected by the actions of the agent and is only essential for channel realization. To provide more context, the RL agent can take an action to increase or decrease the transmit power for each user individually, and similarly for the codebook index. The result of these actions jointly decides the transition to the next state. **Our goal.** The goal of this problem is to maximize the discounted sum of the UEs' SINR over the maximum number of slots \(T\) that determines one radio frame, _i.e.,_ we consider an episodic setting of length \(T\). \[\underset{\{a_{t}\}_{t=0}^{T-1}}{\text{maximize}}\quad\mathbb{E}\left[\sum_{t=0}^{T-1}\gamma^{t}R_{t}\right]\] (5) As mentioned earlier, this metric is a good proxy for the overall performance of the UEs, and optimizing it results in a high data rate in a real network. In particular, we will use and develop the batch-constrained method to solve this problem using only a given dataset, without interaction with the environment. From this point of view, the expectation in problem (5) is calculated over the state dynamics that can be expressed from the given \(N_{q}\) data samples. ### _Complexity of the Parameter Optimization Problem_ Without any intelligent learning-based policy, the naive way to select the parameters would be a brute-force search over the parameter space. At each time instant \(t\), we need to search over \(\mathcal{P}\) power levels and \(\mathcal{F}\) beamforming vectors, resulting in a search space of \(\mathcal{P}\times\mathcal{F}\) for each user. The resultant search complexity for brute force would thus be \((\mathcal{P}\times\mathcal{F})^{N_{UE}}\). This scales rapidly with an increase in the number of users, antennas, and feasible power levels, making it intractable. Even for a single UE coordinate, the variations in the resultant channel experienced are huge, making a brute-force search for any optimal configuration impossible. Fig. 2 shows a scatter plot of the two latent components from T-SNE [14] of the channel realizations from the geometrical channel distribution. Hence, this supports our argument that we need a better approach to solve this huge non-convex optimization problem, and we choose to approach it as an MDP problem using reinforcement learning techniques. ## IV Our approach Since our state space is continuous due to the users' coordinates and the transmission power levels, we use an approximated version of the action-value function to reduce the algorithm complexity. Moreover, even for simple 2-user settings, we saw that the action space is huge, \(a\in\{0,1\}^{N_{\text{UE}}\times 2^{2}}\).
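To make the scaling concrete, here is a quick numerical sketch of the brute-force search-space growth \((\mathcal{P}\times\mathcal{F})^{N_{UE}}\) together with the clipped reward (the specific level counts are illustrative, not taken from the paper):

```python
import numpy as np

# Brute-force search space: (P x F) ** N_UE parameter combinations.
P_levels, F_codewords = 8, 16        # hypothetical counts
for n_ue in (2, 4, 8):
    print(n_ue, (P_levels * F_codewords) ** n_ue)
# 2 -> 16384, 4 -> ~2.7e8, 8 -> ~7.2e16: enumeration quickly becomes intractable.

SINR_MIN, SINR_MAX = -50.0, 200.0    # clipping range used in Section V

def reward(sinrs):
    """Clipped sum-SINR reward R_t, as defined above."""
    return float(np.clip(np.sum(sinrs), SINR_MIN, SINR_MAX))
```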
Hence, using function approximation is the rational choice for this setting. As mentioned earlier, we consider a discrete action space so that we can make a direct comparison with the baseline performance of DQN from [1]. The original BCQ algorithm introduced in [12] is for a continuous action space, which contributed most of the complexity of the algorithm. For our setting here, a simpler variant of BCQ, called discrete batch-constrained deep Q-learning, introduced in [15], is sufficient. As the name suggests, this is a discrete action space variant of the original BCQ algorithm. Now that we have decided on the algorithm of choice, we explain our approach in more detail below. Fig. 2: A scatter plot of 10,000 channel state realizations \(\mathbf{h}_{t,u,b}\) on the two latent dimensions from the T-SNE result. The geometrical channel model follows a distribution in which the power and phase of radio waves are determined based on the location of the user and the base station. However, it is difficult to obtain a rule for selecting the optimal codeword under this distribution in closed form. ### _Batch Constrained Reinforcement Learning_ In the original BCQ, a generator \(G_{w}\) is trained that enables us to sample an action, which can then be selected after perturbing it appropriately. In the discrete case, instead of this complicated procedure, we can simply compute the probabilities of every action conditioned on a state and consider the top contenders, based on a threshold, as \[\pi(s)=\operatorname*{argmax}_{a\,:\,G_{w}(a|s)/\max_{\hat{a}}G_{w}(\hat{a}|s)>\tau}Q_{\theta}(s,a) \tag{6}\] It is important to adaptively adjust the threshold to achieve the best performance. Thus, we scale the probabilities by the maximum over all actions, and allow only those actions whose scaled probability is above a certain threshold. Finally, the training of the Q-function approximator network is done by formulating the policy selection as \[\mathcal{L}(\theta)=l_{k}\Big(r+\gamma\max_{a^{\prime}\,:\,G_{w}(a^{\prime}|s^{\prime})/\max_{\hat{a}}G_{w}(\hat{a}|s^{\prime})>\tau}Q_{\theta^{\prime}}(s^{\prime},a^{\prime}),\ Q_{\theta}(s,a)\Big), \tag{7}\] where \(l_{k}\) is the loss function for action-value training and \(Q_{\theta^{\prime}}\) is the target network. We use the mean squared error loss for our training. Since we have \(G_{w}(s,a)\approx\pi_{\text{batch}}(s,a)\), where \(\pi_{\text{batch}}\) is the behavior policy that generated the batch, the trainable parameters \(\Theta_{G_{w}}\) of \(G_{w}(s,a)\) can be updated in the direction of minimizing the negative log-likelihood loss for the input \(s,a\), and the trainable parameters \(\Theta_{Q}\) of the action-value function can be updated in the direction of minimizing the mean squared error. The Adam optimizer [16] is adopted for both updates. Fig. 3 shows the overall structure of the neural networks. Our state can be interpreted as a latent vector whose entries are already meaningful features. Hence, we do not use information-aggregating layers such as convolutions, but construct the function approximators from dense layers with activations. ### _One-step Tree Search for Action Selection_ We started with our baseline DQN approach and replaced the algorithm with a BCQ-based learning model. Now, to further explore better algorithms, we make a simple addition to our BCQ algorithm to make it even better. We use a one-step rollout, which is known to make a good policy even better.
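The sketch below combines the batch-constrained filtering of (6) with the one-step rollout described next. The callables `q_net`, `g_net`, and `predict_next` are hypothetical stand-ins, and ranking the top-\(k\) candidates by \(Q\) is one plausible reading of Algorithm 1; this is not the repository code:

```python
import numpy as np

def select_action(s, q_net, g_net, predict_next, tau=0.3, k=2):
    """Discrete BCQ action filtering (6) plus a one-step model-based rollout.

    q_net(s)           -> array of Q(s, a) for all actions
    g_net(s)           -> array of batch-policy probabilities G_w(a | s)
    predict_next(s, a) -> approximate next state from the known dynamics
    """
    g = g_net(s)
    allowed = np.flatnonzero(g / g.max() > tau)     # batch-constrained set
    q = q_net(s)
    top_k = allowed[np.argsort(q[allowed])[-k:]]    # k most preferred actions
    # Re-score each candidate by the best value reachable one step ahead.
    scores = [np.max(q_net(predict_next(s, a))) for a in top_k]
    return int(top_k[int(np.argmax(scores))])
```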
Since only a tiny amount of time is allowed for action decisions in the scheduling of communication systems, we use the learned \(G_{w}\) and \(Q\) to implement a one-step approximate rollout. This methodology does not deviate from the behavior pattern learned by the off-policy agent, and at the same time the transition model can be actively used. More specifically, action candidates that satisfy the condition \(\frac{G_{w}(s,a)}{\max_{a^{\prime}}G_{w}(s,a^{\prime})}>\tau\) are found. These are actions that exactly follow the learned agent's policy. Because our environment is partially predictable, the next states derived from these actions are approximately predictable. We use \(k=2\), that is, the agent chooses the two most preferred actions, and then re-estimates the maximum action-value in the states derived from those actions. Through this logic, the action that can achieve the highest reward one step into the future is finally selected. While we keep our setup simple because of the limited time frame of the project, it is easy to see the benefit of this addition. We call the new algorithm Batch-Constrained Q-Learning with One-step Rollout. We provide the algorithm in detail in Algorithm 1. ``` 1:Generate limited data (\(N_{q}\) samples) 2:Start Training Phase 3:Initialize the parameters \(\Theta_{G_{w}}\), \(\Theta_{Q}\) and \(\Theta_{Q^{\prime}}\) 4:while\(i<\) max learning iteration do 5: Get mini-batch of \((s_{t},a_{t},r_{t},s_{t+1})\) pairs 6: Adam update of \(\Theta_{Q}\) with MSE 7: Adam update of \(\Theta_{G_{w}}\) with negative log-likelihood 8: Soft target update \(\Theta_{Q^{\prime}}=\Theta_{Q^{\prime}}\tau_{s}+\Theta_{Q}(1-\tau_{s})\) 9:\(i=i+1\) 10:endwhile 11:Training Phase Ends. 12:Deployment (Start Test) 13:while\(t<T\)do 14: Observe state \(s_{t}\) 15: Get the action set \(\mathcal{A}_{\text{BCQ}}=\{a\,|\,\frac{G_{w}(s,a)}{\max_{a^{\prime}}G_{w}(s,a^{\prime})}>\tau\}\) 16: Top-\(k\) actions \(\mathcal{A}_{\text{Top-}k}=\text{Top-}k_{a\in\mathcal{A}_{\text{BCQ}}}\,Q(s,a)\) 17: Predict the next state \(s^{\prime}\) for each action 18: Action \(a=\arg\max_{a\in\mathcal{A}_{\text{Top-}k}}\max_{a^{\prime}}Q(s^{\prime},a^{\prime})\) 19: Apply action \(a\) and get \(s_{t+1}\) 20:\(t=t+1\) 21:endwhile 22:End Test ``` **Algorithm 1** Batch-Constrained Q-Learning with One-step Rollout ## V Numerical Results ### _Environment Setting_ For performance comparison with the existing approach, we used the same network parameters to reproduce the results of [1]; these network settings follow the radio propagation path loss models for millimeter-wave wireless networks in the channel model implementation of [17]. The area around each BS where a UE can communicate with the BS successfully is referred to as a cell. We consider an environment with 2 cells, and the two cells can cause interference to each other's users. The \(X\)-\(Y\) coordinates of the BS corresponding to the first cell are (0,0), and the BS corresponding to the second cell is located at (0,255). Throughout the work, the unit of measurement is meters. The users are randomly located around the BSs within a radius of 150, with the center assumed at the location of the BS, at the beginning of each episode. Fig. 3: Approximated action-value function \(Q\) and the batch-policy estimator \(G_{w}\). \(ls\) indicates the log-softmax of the \(|\mathcal{A}|\)-sized output of the dense neural network.
That is, our model includes both the behavior and the channel distribution of UEs communicating in an area covered by a set of specific BSs. For the radio propagation, \(T_{K}=290\) Kelvin, \(B=15000\) Hz, and the Boltzmann constant is \(1.38\times 10^{-23}\). SINR\({}_{min}\) and SINR\({}_{max}\) are set to \(-50\) and \(200\), respectively, for the clipping. ### _Hyperparameter Setting_ The hyperparameter selection was done mostly via heuristics and tuning. We began with the same hyperparameter settings as the baseline algorithms and varied them slowly to arrive at the best set of parameters for our setting. For example, one can see from Fig. 6 that the choice of learning rate affects the learning and the reward significantly. Similarly, we decided to use a batch size of 32. The actual hyperparameters used for generating each plot are included in the label and description of the figures. ### _Performance Evaluation_ We repeated the same experiment 20 times with the same parameter set to measure the performance of our proposed approach clearly and accurately. This experimental protocol, as recommended in [18], allows us to report the final performance of the reinforcement learning algorithms as well as their uncertainty, for a fair comparison among the competitors. * **Batch Constrained Q-Learning with Model-based Rollout (BCMQ, proposed)** BCMQ is the proposed approach described in the previous section. Note that this algorithm learns only from the given data, and no interaction with the environment is required. * **Soft Actor-Critic (SAC)** For comparison with a state-of-the-art algorithm, we employ the SAC of [19, 20]. SAC updates the policy in the direction of maximizing the value function while inducing the policy to take actions as diverse as possible in an entropy-regularized MDP. Both the value function and the policy are computed via function approximation using deep neural networks. The training uses standard mini-batch stochastic gradient descent (SGD) updates toward the targets estimated from the transition dynamics and the estimated value function. To reproduce the performance, we start with the implementation of [21] and add the modules required for our setting. This allows a fair comparison, minimizing any differences in performance caused by differences in implementation. * **Deep Q-Learning (DQN)** Deep Q-Learning [9] uses a deep neural network for the action-value function approximation. For the sake of fair performance comparison, the performance of this algorithm is reproduced with the implementation in [21]. **Absolute performance.** In Fig. 4, we plot the Complementary Cumulative Distribution Function (CCDF) of the effective SINR \(\gamma_{eff}^{l}\) experienced by the UEs. For any random variable, the CCDF is defined as \[\bar{F}_{X}(x)=P(X>x).\] Thus, the CCDF of the SINR can be used as a proxy for the quality of communication experienced by users. We limit the SINR to a reasonable range of \([-5\,\text{dB},60\,\text{dB}]\). The reason for limiting the range of SINRs considered is the transmit power limitations and the Quality of Service (QoS) guarantees that the service provider needs to satisfy, which vary significantly because of the high variation in the channel and interference experienced by the users.
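The empirical CCDF used in these plots can be computed as in the following sketch (synthetic SINR samples, for illustration only):

```python
import numpy as np

def empirical_ccdf(samples, thresholds):
    """Fraction of samples strictly above each threshold: P(X > x)."""
    samples = np.asarray(samples)
    return np.array([(samples > x).mean() for x in thresholds])

sinr_db = np.random.default_rng(0).normal(20.0, 10.0, size=10_000)  # synthetic
xs = np.linspace(-5.0, 60.0, 66)     # the [-5 dB, 60 dB] range from the text
ccdf = empirical_ccdf(sinr_db, xs)   # ready to plot against xs
```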
Accordingly, we only consider the simulations where the connection to a UE is valid, _i.e.,_ the SINR experienced satisfies these conditions. We discard the remaining simulations at the end of the episodes. The plot labeled _Optimal_ in Fig. 4 represents the theoretical maximum achievable, and the goal of any policy is to get as close to it as possible. Note that this is very hard to achieve in a practical wireless network, where the channel is unknown and the complexity of brute-force search is very high. We can see that, in comparison with the existing baseline DQN [1], the algorithm proposed here, BCMQ, is considerably better. But the most important result is that we achieve this performance without any exploration, using only a fraction of the data of DQN. **Convergence.** Fig. 5 shows the convergence of the algorithms BCMQ, SAC, and DQN. We chose SAC as a target for comparison with state-of-the-art reinforcement learning algorithms. Additionally, to compare against the existing baseline, we chose the DQN approach proposed in [1]. It is important to note that both SAC and DQN require exploration. Fig. 4: CCDF of the effective SINR experienced by the UEs under each of the compared algorithms. For our experiments, we construct a batch by choosing actions uniformly at random over all states and generate a data set of 20,000 samples. Even with this constrained batch data set, the proposed BCMQ algorithm eventually achieves performance on par with SAC. We note that, despite using the same learning rate, the convergence differs across the algorithms considered, which is expected because of the differences in architectures. As seen from the figure, while the convergence of SAC is faster, BCMQ is also comparable in this aspect. In our opinion, for practical network settings, the benefit of BCMQ not requiring exploration far outweighs its slower convergence compared to SAC. **Learning rate.** In Fig. 6, we plot the performance of our algorithm for some of the learning rates we tried. By varying the learning rate gradually over a range of values, we pick the best-suited rate for our model. For the purpose of presentation, we plot the trends for learning rates \(\{10^{-3},10^{-4},10^{-5}\}\). We can see from the plot that with a learning rate of \(10^{-5}\), the convergence is significantly slower. Moreover, there is no significant difference in performance between learning rates of \(10^{-3}\) and \(10^{-4}\), though the former is slightly more unstable. Hence, we fix our learning rate to \(10^{-4}\) for all of our experiments. **Size of data.** In Fig. 7, we plot the performance of our algorithm for different _batch sizes_2. We define the batch size as the size of the given dataset, from which we sample mini-batches to train the RL agent. For our experiment, we consider the batch sizes \(\{20{,}000,\,10{,}000,\,1000,\,100,\,50\}\). We can see from the plot that the larger the batch size, the more representative and rich the dataset is, and the better the performance.
But we also see that the performance improvement beyond 1000 samples is negligible, and the final convergence is very close to that of a batch size of \(10{,}000\) or \(20{,}000\). This supports our argument that, while it might take a large number of iterations for the RL agent to converge, by taking advantage of the nature of the problem we can achieve this with only a limited amount of data. This is especially crucial for settings like wireless networks, where gaining access to an accurate simulator, and hence a large amount of data, is very difficult and expensive. We also note that a very small batch size like \(50\) or \(100\) is often not sufficient to learn a meaningful policy, and this threshold for the batch size should be decided intelligently based on the complexity of the network under consideration. Thus, these experimental results show that batch-constrained learning approaches can be deployed for the parameter optimization of network control problems with a suitable batch size that can be adapted to the network. Footnote 2: This is not to be confused with the mini-batch size of the SGD algorithm used for training. **Quality of data.** In Fig. 8, we study the effect of the quality of the data on the performance of the BCMQ algorithm. In a wireless communication system, if multiple BSs do not cooperate with each other, each BS naturally allocates a higher amount of power to a UE with low communication quality. This cooperation is one of the key metrics for deciding the usefulness of the data. To study this further, we consider two different scenarios. The first is the case where the BSs do not cooperate with each other at all, and the second is the case where the BSs cooperate with each other, which is what we have considered in the previous simulations. We call the data collected from the former scenario the **biased batch** and the data from the latter scenario the **uniform batch**. Statistically, the uniform batch is more representative of the practical scenario and also covers more of the state-action space, making it of _higher quality_. It is noteworthy that the agent trained on the biased batch shows only a small performance improvement, even if the amount of data is sufficient. BCMQ can deliver the performance of the latest batch reinforcement learning by adding a 1-step rollout policy to the batch-constrained Q-learning technique. Nevertheless, if the quality of the underlying data is seriously degraded by biased samples, this observation is natural, because the agent has no information about the transition dynamics. Fig. 5: Learning curves of the average sum of rewards over 1,000 radio frames (episodes). The band equal to the deviation from the mean is filled with the corresponding color. In order to minimize the uncertainty due to randomness and to measure the exact performance of the algorithms, the same experiment was repeated 10 times with the same set of parameters for each of the approaches. Fig. 6: Comparison of performance according to the learning rate. Fig. 7: Performance comparison according to batch size. High learning performance can be expected only when a sufficient number of samples is guaranteed. ### _Results Summary_ We have demonstrated the following remarkable results through the experiments conducted in this work. * We were able to train the reinforcement learning agent with only limited data, without interaction with the environment, and surpassed the performance of the existing DQN-based method [1].
More notably, even though the given data was generated randomly, a significant performance improvement was achieved. * Given a large enough dataset, we have shown that the batch-constrained offline approach is often sufficient to achieve the same performance as exploration-based methods. However, we have also seen that if the size of the provided dataset is below a certain critical threshold, no amount of training can improve the performance of the RL agent. * Performance is also significantly affected by the quality of the dataset as well as by the number of training data samples for reinforcement learning. Based on these observations, we conclude our project and discuss its future directions in the next section. ## VI Conclusion and Future Directions Through this project, we proposed an algorithm that maximizes the SINR over the radio frames of a 5G network by learning an agent without interacting with the environment. Commercial network providers cannot accept the high cost of failure associated with the deployment of a sub-optimal policy, which makes _interaction-less_ learning very attractive. Similar to the previous RL-based approaches, we formulated the large and computationally expensive non-convex optimization problem as a learning-based sequential problem. But our approach differs from, and improves upon, the existing approaches in two key respects. The first is the data sampling efficiency, and the second is the lack of any need for exploration. Moreover, even with these two constraints, we have shown that our proposed approach outperforms the existing DQN baseline. In order to understand and interpret the results further, we conducted experiments to study the importance of different key factors of the problem. We demonstrate the importance of the diversity of the data and show that, even with the help of recent sophisticated state-of-the-art algorithms such as SAC and BCQ with rollout, we cannot achieve the best performance if the data is not good to begin with. However, we show that even if only a small number of samples covering the region of the possible state domain are available, they can be used to their full potential to achieve high performance without exploration. We have experimentally shown that even a relatively small dataset (1000 samples) can lead to significant performance improvements. It is expected that our method can be applied for optimization in various network environments. Future research directions include applying our methodology in more diverse network environments. This methodology can be used not only for 5G networks but also for the optimization of existing 4G networks that require interference control. Furthermore, if we assume an environment in which more detailed information can be obtained from the UEs, our approach will show higher performance. For example, if channel information can be transmitted accurately, we can choose an action that is closer to the optimum. In more advanced communication systems, network parameter optimization problems are expected to be effectively solved by the proposed approach.
2303.17678
Equivariant birational geometry of cubic fourfolds and derived categories
We study equivariant birationality from the perspective of derived categories. We produce examples of nonlinearizable but stably linearizable actions of finite groups on smooth cubic fourfolds.
Christian Böhning, Hans-Christian Graf von Bothmer, Yuri Tschinkel
2023-03-30T19:41:46Z
http://arxiv.org/abs/2303.17678v2
# Equivariant birational geometry of cubic fourfolds and derived categories ###### Abstract. We study equivariant birationality from the perspective of derived categories. We produce examples of nonlinearizable but stably linearizable actions of finite groups on smooth cubic fourfolds. ## 1. Introduction Let \(X\) be a smooth projective algebraic variety over a field \(k\) and \(\mathrm{D}^{b}(X)\) its bounded derived category of coherent sheaves. It is a rich algebraic object: a celebrated theorem of Bondal and Orlov [1] states that \(\mathrm{D}^{b}(X)\) determines \(X\) uniquely, if its canonical or anticanonical class is ample. This uniqueness can fail in general: there exist nonisomorphic but _derived equivalent_ varieties, e.g., K3 surfaces or abelian varieties. These results and constructions inspired active investigations of derived categories and derived equivalences in various contexts. In some sense, \(\mathrm{D}^{b}(X)\) contains _too much_ information, or rather, the data that are relevant in concrete geometric applications are hard to visualize. The overarching goal is to extract computable, more compact invariants of derived categories that would allow one to answer basic questions about geometry, such as * existence of \(k\)-rational points, or * \(k\)-rationality. This has been pursued in, e.g., [1], [2], [1], [1]. One natural candidate for an invariant of \(\mathrm{D}^{b}(X)\) is the _Kuznetsov component_ \(\mathcal{A}_{X}\), an admissible subcategory of \(\mathrm{D}^{b}(X)\), which however depends on the choice of a maximal semiorthogonal decomposition (see Section 2 for definitions). The expectation is that this component captures, in particular, rationality properties of \(X\); this idea has been tremendously influential. It has been tested in many situations, e.g., Fano threefolds, or special cubic fourfolds. In these cases, the Kuznetsov component is identified as the orthogonal to a naturally defined _exceptional sequence_ of objects in \(\mathrm{D}^{b}(X)\). Due to its universality, one might expect that this approach is valid over nonclosed fields, as well as in the presence of group actions. Here, we explore this in detail in the equivariant context, for smooth cubic fourfolds equipped with a regular, generically free action of a finite group \(G\). Our main result is: **Theorem 1**.: _There exist smooth Pfaffian cubic fourfolds \(X\) with a regular generically free action of a finite group \(G\) such that_ * _the_ \(G\)_-action is not linearizable, i.e., not equivariantly birational to a (projective) linear_ \(G\)_-action on_ \(\mathbb{P}^{4}\)_,_ * _the_ \(G\)_-action on_ \(X\times\mathbb{P}^{1}\)_, with trivial action on the second factor, is linearizable,_ * _the standard Kuznetsov component_ \(\mathcal{A}_{X}\) _is_ \(G\)_-equivalent to_ \(\mathrm{D}^{b}(S)\)_, with the_ \(G\)_-action induced by an embedding of_ \(G\) _into the automorphisms of a K3 surface_ \(S\)_,_ * _the variety of lines_ \(F_{1}(X)\) _is_ \(G\)_-birational to_ \(S^{[2]}\)_, the Hilbert scheme of two points on_ \(S\)_._ A more precise version is given in Theorem 16. Our main theorem contradicts natural equivariant analogs of existing rationality conjectures, as explained in Section 3. The stable linearizability proof is based on an adaptation of the classical Pfaffian construction to the equivariant context. This allows us to establish new stable linearizability results for, e.g., quadric surfaces, see Section 7.
**Acknowledgments:** The first author was supported by the EPSRC New Horizons Grant EP/V047299/1. The third author was partially supported by NSF grant 2000099. We are grateful to Chunyi Li for helpful discussions, to Brendan Hassett for proving a result on \(G\)-Hodge structures, in the Appendix, and to Alexander Kuznetsov for several comments. ## 2. Derived categories We recall basic notions concerning derived categories that are used in applications to birational geometry, see, e.g., [11]. ### Notation Let \(G\) be a finite group. A \(G\)-variety over \(k\) is an algebraic variety with a regular action of \(G\). From now on, by a _category_ we mean a \(k\)-linear triangulated category. A \(G\)-category is a category \(\mathcal{A}\) together with a homomorphism \[G\to\operatorname{Auteq}(\mathcal{A}),\] the group of autoequivalences of \(\mathcal{A}\). A strictly full \(k\)-linear triangulated subcategory \(\mathcal{B}\subset\mathcal{A}\) of a \(G\)-category \(\mathcal{A}\) is \(G\)-stable if for every object \(E\in\mathcal{B}\) and every \(g\in G\) we have \(g_{*}E\in\mathcal{B}\). ### Semiorthogonal decompositions Let \(X\) be a smooth projective variety over a field \(k\) and \(\operatorname{D}^{b}(X)\) its derived category of coherent sheaves. An object \(E\in\operatorname{D}^{b}(X)\) is called _exceptional_ if \[\operatorname{Hom}(E,E)\simeq k,\quad\operatorname{Ext}^{r}(E,E)=0,\;r\neq 0.\] An exceptional sequence is an ordered tuple of exceptional objects \[(E_{1},\ldots,E_{n})\] such that \[\operatorname{Ext}^{l}(E_{r},E_{s})=0,\quad\forall\,r>s,\ \forall\,l.\] An exceptional sequence is called _full_ if the smallest full triangulated subcategory containing the \(E_{r}\) is equivalent to \(\operatorname{D}^{b}(X)\), i.e., if the sequence generates \(\operatorname{D}^{b}(X)\). The notion of _semiorthogonal decomposition_ generalizes the preceding concepts. A full subcategory \(\mathcal{A}\) of \(\operatorname{D}^{b}(X)\) is called _admissible_ if the inclusion functor has a left and a right adjoint. A sequence \[(\mathcal{A}_{1},\ldots,\mathcal{A}_{n})\] of admissible subcategories of \(\operatorname{D}^{b}(X)\) is called a _semiorthogonal decomposition_ of \(\operatorname{D}^{b}(X)\) if * the \(\mathcal{A}_{1},\ldots,\mathcal{A}_{n}\) generate \(\operatorname{D}^{b}(X)\), and * there are no derived Hom's from any object in \(\mathcal{A}_{r}\) to an object in \(\mathcal{A}_{s}\), for \(r>s\). A semiorthogonal decomposition is called _maximal_ if the \(\mathcal{A}_{s}\) do not admit a further nontrivial semiorthogonal decomposition. Explicit semiorthogonal decompositions have been computed in many examples, not only over fields but also over more general bases, see, e.g., [13]. **Example 2**.: Let \(X\subset\mathbb{P}^{n}\) be a smooth Fano variety with Picard group of rank one, generated by the hyperplane class, and of index \(r\). Then there is a semiorthogonal decomposition \[\mathrm{D}^{b}(X)=\langle\mathcal{A}_{X},\mathcal{O}_{X},\mathcal{O}_{X}(1),\ldots,\mathcal{O}_{X}(r-1)\rangle;\] here \(\mathcal{A}_{X}\) is called the _Kuznetsov component_ of \(\mathrm{D}^{b}(X)\). For smooth cubic fourfolds \(X\), one has \(r=3\), and the subcategory \(\mathcal{A}_{X}\) has _some_ formal properties of the derived category of a K3 surface.
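To see the definitions at work, here is the standard example of a full exceptional sequence on projective space (a classical fact of Beilinson, recalled for orientation; it is not part of this paper's argument):

```latex
% Beilinson: D^b(P^n) = < O, O(1), ..., O(n) >.
% Exceptionality and semiorthogonality reduce to the computation
%   Ext^l(O(r), O(s)) = H^l(P^n, O(s - r)),
% which vanishes for all l whenever r > s, since then 0 < r - s <= n:
% H^0 of a negative line bundle is zero, H^l vanishes for 0 < l < n,
% and H^n(P^n, O(d)) is nonzero only for d <= -n - 1, while s - r >= -n.
\[
  \mathrm{D}^{b}(\mathbb{P}^{n})
    = \langle \mathcal{O}, \mathcal{O}(1), \ldots, \mathcal{O}(n) \rangle .
\]
```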
### Essential dimension and blowups A \(k\)-linear triangulated category \(\mathcal{T}\) is said to be of _essential dimension_ at most \(m\) if \(\mathcal{T}\) embeds as a full admissible subcategory into the derived category \(\mathrm{D}^{b}(Z)\) of a smooth projective variety \(Z\) of dimension at most \(m\). Usually, this definition is applied to a piece in a semiorthogonal decomposition of \(\mathrm{D}^{b}(X)\). We now recall Orlov's blowup formula [10, Theorem 4.3]: Let \[q:\tilde{X}=\mathrm{Bl}_{Z}(X)\to X\] be the blowup of \(X\) in a smooth subvariety \(Z\). Then, writing \(j\colon\mathbb{P}(\mathcal{N}_{Z})\hookrightarrow\tilde{X}\) for the inclusion of the exceptional divisor and \(p\colon\mathbb{P}(\mathcal{N}_{Z})\to Z\) for the projection, we have a collection of subcategories \[\mathcal{A}_{s}:=j_{*}\left(p^{*}(\mathrm{D}^{b}(Z))\otimes\mathcal{O}_{\mathbb{P}(\mathcal{N}_{Z})}(s)\right)\] (where all the functors are in the derived sense). Note that the \(\mathcal{A}_{s}\) are all equivalent to \(\mathrm{D}^{b}(Z)\). If \(r:=\mathrm{codim}(Z)\), then \[\langle\mathcal{A}_{-r+1},\ldots,\mathcal{A}_{-1},q^{*}(\mathrm{D}^{b}(X))\rangle\] is a semiorthogonal decomposition of \(\mathrm{D}^{b}(\tilde{X})\). Since \(\mathbb{P}^{n}\) has a full exceptional collection, and every smooth projective rational variety \(X\) can be linked to \(\mathbb{P}^{n}\) by a sequence of blowups and blowdowns along smooth centers, it has become a guiding principle that such \(X\) _should_ have semiorthogonal decompositions with pieces of essential dimension at most \(n-2\). There are various issues that arise, e.g., maximal decompositions are by no means unique, see [1], [11], [12, Remark 5.6]. Still, this point of view has been the basis of conjectures concerning rationality of higher-dimensional varieties over closed and nonclosed fields, e.g., cubic fourfolds [13, Conj 4.2], Gushel-Mukai varieties [14], and Brauer-Severi varieties, del Pezzo surfaces, Fano threefolds [1], [1], [1], [15]. ### \(G\)-categories We turn to \(G\)-varieties \(X\), where \(G\subset\operatorname{Aut}(X)\) is a finite group. There is an induced embedding \[G\hookrightarrow\operatorname{Auteq}(\operatorname{D}^{b}(X)),\] so that \(\operatorname{D}^{b}(X)\) is a \(G\)-category. The reconstruction theorem of [1] admits a natural generalization to the equivariant context: **Proposition 3**.: _Suppose \(X\) and \(Y\) are smooth projective \(G\)-varieties, \(X\) is Fano, and_ \[\Phi\colon\operatorname{D}^{b}(X)\simeq\operatorname{D}^{b}(Y)\] _is an equivalence of \(G\)-categories. Then there exists a \(G\)-equivariant isomorphism_ \[\varphi\colon X\to Y\] _inducing \(\Phi\)._ Proof.: The fact that \(Y\) is also Fano and that \(X\) and \(Y\) are isomorphic as varieties is just the reconstruction theorem of Bondal and Orlov [1]. More precisely, one can define a _point object_ in \(\operatorname{D}^{b}(X)\) (and similarly in \(\operatorname{D}^{b}(Y)\)) as an object \(P\) such that 1. \(S_{X}(P)\simeq P[n]\), where \(S_{X}\) is the Serre functor and \(n\in\mathbb{Z}\); 2. for \(r<0\) one has \(\operatorname{Ext}^{r}(P,P)=0\); 3. \(\operatorname{Hom}(P,P)\simeq k\). These conditions imply, if \(X\) is Fano, that \(n=\dim X\) and \(P\) is, up to shift, the skyscraper sheaf \(k_{\mathfrak{p}}\) of a closed point \(\mathfrak{p}\in X\). Furthermore, from the given equivalence \(\Phi\), the reconstruction procedure of Bondal and Orlov outputs an isomorphism \(\varphi\) with the property that if \(\Phi\) maps a point object \(P\) with support \(\mathfrak{p}\) in \(X\) to a point object \(Q\) with support \(\mathfrak{q}\) in \(Y\), then \(\varphi(\mathfrak{p})=\mathfrak{q}\).
Since \(\Phi\) is assumed to be an equivalence of \(G\)-categories, it follows that \(\varphi\) is a \(G\)-morphism as well. **Proposition 4**.: _Let \(X\) be a smooth projective \(G\)-variety. Assume that there is a semiorthogonal decomposition_ \[\operatorname{D}^{b}(X)=\langle\mathcal{A},\mathcal{B}\rangle,\] _where \(\mathcal{B}\) is a \(G\)-stable subcategory. Then \(\mathcal{A}\) is \(G\)-stable._ Proof.: Indeed, \(\mathcal{A}\) is, by definition, the full subcategory consisting of all objects \(E\) that satisfy \[\operatorname{Hom}(F,E[i])=0,\quad\forall\,i\in\mathbb{Z},\] for all \(F\) in \(\mathcal{B}\). Denoting the action of an element \(g\in G\) on an object of \(\mathrm{D}^{b}(X)\) by \(g_{*}\), we obtain \[\mathrm{Hom}(g_{*}F,g_{*}E[i])=0,\quad\forall\,i\in\mathbb{Z},\] because \(g\) acts by an autoequivalence on \(\mathrm{D}^{b}(X)\); in particular, \(\mathrm{Hom}(g_{*}F,g_{*}E[i])\) is isomorphic to \(\mathrm{Hom}(F,E[i])\) as a \(k\)-vector space. Since \(g_{*}F\) is another object of \(\mathcal{B}\), and all objects of \(\mathcal{B}\) are of this form (because \(\mathcal{B}\) is \(G\)-stable), we get that \(g_{*}E\) is an object of \(\mathcal{A}\). **Corollary 5**.: _In the notation of Example 2, the Kuznetsov component \(\mathcal{A}_{X}\) of a \(G\)-Fano variety is naturally a \(G\)-category._ We recall from [10] the notion of a \(G\)-linearized object of \(\mathrm{D}^{b}(X)\): **Definition 6**.: A complex \(E^{\bullet}\) in \(\mathrm{D}^{b}(X)\) is \(G\)-linearized if it is equipped with a \(G\)-linearization, i.e., a system of isomorphisms \[\lambda_{g}\colon E^{\bullet}\to g^{*}E^{\bullet}\] for each \(g\in G\), satisfying the compatibility condition \[\lambda_{1}=\mathrm{id}_{E^{\bullet}},\quad\lambda_{gh}=h^{*}(\lambda_{g})\circ\lambda_{h}.\] ### Nonbirational linear actions Let \(\mathfrak{p}\in X\) be a closed point and \(T_{\mathfrak{p}}X\) the tangent space at \(\mathfrak{p}\). For the skyscraper sheaf \(k_{\mathfrak{p}}\) we have \[\mathrm{Ext}^{r}(k_{\mathfrak{p}},k_{\mathfrak{p}})\simeq\Lambda^{r}T_{\mathfrak{p}}X,\quad r\in[0,\dim X],\] and zero otherwise. In particular, \(\mathrm{Ext}^{1}(k_{\mathfrak{p}},k_{\mathfrak{p}})\) parametrizes length \(2\) zero-dimensional subschemes supported at \(\mathfrak{p}\), which are tangent vectors at \(\mathfrak{p}\) to \(X\). When \(G\) is abelian and \(P\) in \(\mathrm{D}^{b}(X)\) is a point object fixed under \(G\), we can consider the weights of the \(G\)-action on \[\mathrm{Hom}(P,P[1])=\mathrm{Ext}^{1}(P,P);\] these are the weights of the \(G\)-action on \(T_{\mathfrak{p}}X\) for the \(G\)-fixed point \(\mathfrak{p}\in X\) that is the support of \(P\). These weights play a role in the computation of the class of the \(G\)-action in the _equivariant Burnside group_, introduced in [11] and [11]. In particular, this formalism allows one to distinguish birational types of linear \(G\)-actions on a variety as simple as \(\mathbb{P}^{2}\): **Example 7**.: Let \(G=C_{m}\times\mathfrak{S}_{3}\), \(m\geq 5\), be the product of the cyclic group of order \(m\) and the symmetric group on three letters, \(V_{2}\) the standard \(2\)-dimensional representation of \(\mathfrak{S}_{3}\), and \(k_{\chi},k_{\chi^{\prime}}\) the \(1\)-dimensional representations of \(C_{m}\) with primitive characters \(\chi,\chi^{\prime}\), \(\chi\neq\pm\chi^{\prime}\). Then \[\mathbb{P}(k_{\chi}\oplus V_{2}),\quad\mathbb{P}(k_{\chi^{\prime}}\oplus V_{2})\] are not \(G\)-birational to each other.
However, both varieties admit \[\mathcal{O},\mathcal{O}(1),\mathcal{O}(2)\] as a full (strong) exceptional sequence of \(G\)-linearized line bundles (albeit with different linearizations on those bundles). The failure of \(G\)-birationality is proved in [11, Example 5.3] and [11, Section 10], using the Burnside formalism of [11]. This example indicates that essential information is contained in the \(G\)-linearizations of the objects of the collection, respectively, in the attachment functors/nonzero Hom-spaces between the pieces of the decomposition. ## 3. Cubic fourfolds: geometry Let \(X\subset\mathbb{P}^{5}\) be a smooth cubic fourfold, over \(k=\mathbb{C}\). Let \(F=F_{1}(X)\) be the variety of lines of \(X\); it is a holomorphic symplectic fourfold _deformation equivalent_ to \(S^{[2]}\), the Hilbert scheme of two points on a K3 surface \(S\). ### Rationality In this context, there are three main conjectures concerning the rationality of \(X\) (see [13] for background, latest results, and references): each of the following conditions is conjectured to be _equivalent_ to the rationality of \(X\). 1. There is a primitive isometric embedding of Hodge structures \[\operatorname{H}^{2}(S,\mathbb{Z})_{\operatorname{pr}}\hookrightarrow\operatorname{H}^{4}(X,\mathbb{Z})_{\operatorname{pr}}(1),\] for some polarized K3 surface \((S,h)\). 2. There is an equivalence of \(k\)-linear triangulated categories \[\operatorname{D}^{b}(S)\simeq\mathcal{A}_{X},\] for some K3 surface \(S\). 3. There is a _birationality_ \[F_{1}(X)\sim S^{[2]},\] for some K3 surface \(S\). Recall that (1) and (2) are equivalent, by [1]. A motivic version has been addressed in [12]: cubic fourfolds over arbitrary fields with derived equivalent Kuznetsov components have isomorphic Chow motives. While there is growing evidence for the validity of these conjectures, some of it based on extensive numerical experiments, the compatibility of these constructions with group actions remained largely unexplored. ### Pfaffians We recall some multilinear algebra occurring in the construction of Pfaffian cubic fourfolds. Let \(V\) be a \(k\)-vector space of dimension \(6\), and consider the nested strata \[\operatorname{Gr}(2,V)\subset\operatorname{Pf}(V)\subset\mathbb{P}(\Lambda^{2}V),\] where \(\operatorname{Pf}(V)\) parametrizes skew \(6\times 6\) matrices of generic rank \(4\) and \(\operatorname{Gr}(2,V)\) those of rank \(2\). Dually, we also have \[\operatorname{Gr}(2,V^{*})\subset\operatorname{Pf}(V^{*})\subset\mathbb{P}(\Lambda^{2}V^{*}).\] Given a \(5\)-dimensional projective subspace \(\mathbb{P}(L)\subset\mathbb{P}(\Lambda^{2}V)\), we have an associated \(8\)-dimensional subspace \(\mathbb{P}(L^{\perp})\) in \(\mathbb{P}(\Lambda^{2}V^{*})\). If \[X=\operatorname{Pf}(V)\cap\mathbb{P}(L)\] is smooth then it is a Pfaffian cubic fourfold with associated K3 surface \[S=\operatorname{Gr}(2,V^{*})\cap\mathbb{P}(L^{\perp}). \tag{3.1}\] In this context, Conjectures (1), (2) and (3) have been checked for all Pfaffian cubic fourfolds. ### Automorphisms Actions of a finite group \(G\) on \(S\) and \(X\) induce actions on related geometric objects: * the punctual Hilbert schemes, * the varieties of rational curves on \(X\), e.g., \(F=F_{1}(X)\), * (polarized) Hodge structures (if \(G\) preserves the polarizations), * derived categories; note that if the \(G\)-action on \(X\subset\mathbb{P}^{5}\) arises from a (projectively) linear action on \(\mathbb{P}^{5}\), then we obtain a natural \(G\)-action on the Kuznetsov component \(\mathcal{A}_{X}\). 
A useful notion is that of _symplectic_ automorphisms \(G_{s}\subseteq G\): in the case of K3 surfaces these act trivially on \(\operatorname{H}^{2,0}(S)\), and for cubic fourfolds, trivially on \(\operatorname{H}^{3,1}(X)\). In both cases, there is an exact sequence \[1\to G_{s}\to G\to C_{m}\to 1.\] All finite groups of automorphisms of K3 surfaces have been classified, see [1]. Symplectic automorphisms of cubic fourfolds have been classified in [10]. The Torelli theorem implies that we have embeddings \[\operatorname{Aut}(S)\hookrightarrow\operatorname{O}(\operatorname{L}_{S}),\] \[\operatorname{Aut}(X)\hookrightarrow\operatorname{O}(\operatorname{L}_{X}),\] into the groups of isometries of the lattices \[\operatorname{L}_{S}:=\operatorname{H}^{2}(S,\mathbb{Z})_{\operatorname{pr}},\quad\operatorname{L}_{X}:=\operatorname{H}^{4}(X,\mathbb{Z})_{\operatorname{pr}},\] the latter group coinciding with the group of Hodge isometries of \(\mathrm{H}^{4}(X,\mathbb{Z})\) fixing the polarization. In a similar vein, one has injective homomorphisms \[\mathrm{Aut}(S)\hookrightarrow\mathrm{Auteq}(\mathrm{D}^{b}(S)),\quad\mathrm{Aut}(X)\hookrightarrow\mathrm{Auteq}(\mathcal{A}_{X}),\] into the groups of autoequivalences of the corresponding categories, see, e.g., [1, Theorem 1.3]. Given the naturality of the above constructions, one would expect the following versions of the rationality conjectures: 1. There is a primitive isometric embedding of \(G\)-Hodge structures \[\mathrm{H}^{2}(S,\mathbb{Z})_{\mathrm{pr}}\hookrightarrow\mathrm{H}^{4}(X,\mathbb{Z})_{\mathrm{pr}}(1),\] for some polarized \(G\)-K3 surface \((S,h)\). 2. There is a \(G\)-equivariant equivalence of \(k\)-linear triangulated categories \[\mathrm{D}^{b}(S)\simeq\mathcal{A}_{X},\] for some \(G\)-K3 surface \(S\). 3. There exists a \(G\)-equivariant birationality \[F_{1}(X)\sim S^{[2]},\] for some \(G\)-K3 surface \(S\). In Section 7, we present counterexamples to all three statements. These are based on a \(G\)-equivariant Pfaffian construction, in which case both \(X\) and \(S\) carry _compatible_ \(G\)-actions. ## 4. Automorphisms and Hodge structures Let \(F=F_{1}(X)\) be the variety of lines of a smooth cubic fourfold \(X\subset\mathbb{P}^{5}\). Let \[P\subset F\times X\] be the universal line/incidence correspondence, with projections \[p\colon P\to F,\quad q\colon P\to X.\] By [1], we have the Abel-Jacobi map \[\alpha\colon\mathrm{H}^{4}(X,\mathbb{Z})\to\mathrm{H}^{2}(F,\mathbb{Z})(-1),\] where \(\alpha=p_{*}q^{*}\); here we use Poincaré duality twice to make sense of \(p_{*}\). This homomorphism is an isomorphism of polarized Hodge structures, with the natural polarization on \(X\) and the Beauville-Bogomolov form on \(\mathrm{H}^{2}\) of the holomorphic symplectic variety \(F\). Given a regular \(G\)-action on \(X\) we obtain a natural \(G\)-action on \(F\), and on the associated Hodge structures. As \(p,q\) are \(G\)-morphisms and Poincaré duality is compatible with the natural \(G\)-actions on homology and cohomology, \(\alpha\) is an isomorphism of \(G\)-Hodge structures in this case; passing to primitive cohomology we obtain a \(G\)-equivariant isomorphism of polarized Hodge structures \[\alpha\colon\mathrm{H}^{4}(X,\mathbb{Z})_{\mathrm{pr}}\stackrel{{\sim}}{{\longrightarrow}}\mathrm{H}^{2}(F,\mathbb{Z})_{\mathrm{pr}}(-1). 
\tag{4.1}\] If \(X\) is Pfaffian and \(S\) is the associated K3 surface then we have a _birational_ isomorphism \[\varphi:S^{[2]}\stackrel{{\sim}}{{\dashrightarrow}}F, \tag{4.2}\] constructed as follows: fixing general points \[\mathfrak{p},\mathfrak{q}\in S=\mathrm{Gr}(2,V^{*})\cap\mathbb{P}(L^{\perp}),\] we regard them as 2-planes in \(V^{*}\) and consider their span, a 4-plane in \(V^{*}\). The two-forms in \(\mathbb{P}(L)\subset\mathbb{P}(\Lambda^{2}V)\) that vanish on \(\mathfrak{p}+\mathfrak{q}\) form a line in \(X\). This extends to the birational isomorphism (4.2), which is an _isomorphism_ if \(S\) does not contain a line and \(X\) does not contain a plane, by [1]. By [10], \(\varphi\) induces a primitive isometric embedding of polarized Hodge structures \[\mathrm{H}^{2}(S,\mathbb{Z})_{\mathrm{pr}}\hookrightarrow\mathrm{H}^{4}(X,\mathbb{Z})_{\mathrm{pr}}(1) \tag{4.3}\] since \[\mathrm{H}^{2}(S^{[2]},\mathbb{Z})\simeq\mathrm{H}^{2}(S,\mathbb{Z})\oplus\mathbb{Z}\delta,\] as polarized Hodge structures. Here \(2\delta\) is the divisor corresponding to length-2 non-reduced subschemes of \(S\); concretely, one has a natural blowup morphism \[\epsilon\colon S^{[2]}\to S^{(2)}\] resolving the singularities of the second symmetric product \(S^{(2)}\); the map \(\epsilon\) sends a subscheme to its associated zero-cycle. All of the above constructions are obviously \(G\)-equivariant. The following theorem, proved by Brendan Hassett in the Appendix, ensures that (4.3) is valid in the \(G\)-equivariant context as well. **Theorem 8**.: _Let \(\phi:Y^{\prime}\dashrightarrow Y\) be a \(G\)-equivariant birational map of smooth projective holomorphic symplectic varieties over \(k=\mathbb{C}\). Then there exists an isomorphism of \(G\)-Hodge structures_ \[\psi:\mathrm{H}^{2}(Y,\mathbb{Z})\to\mathrm{H}^{2}(Y^{\prime},\mathbb{Z}).\] ## 5. Automorphisms and Kuznetsov components via stability conditions We recall results from [1], connecting actions of automorphisms on derived categories of Pfaffian cubic fourfolds with those on associated K3 surfaces. A labelled cubic fourfold of discriminant \(d\) is a pair \((X,K)\) consisting of a smooth cubic fourfold \(X\) and a rank 2 primitive sublattice \(K\subset\mathrm{H}^{2,2}(X,\mathbb{Z})\) containing \(h^{2}\), where \(h\) is the hyperplane class, and of discriminant \(d=\mathrm{disc}(K)\). The subgroup of labelled automorphisms \[\mathrm{Aut}(X,K):=\{f\in\mathrm{Aut}(X)\,\mid\,f|_{K}=1\}\subset\mathrm{Aut}(X)\] consists of automorphisms fixing every element of \(K\). Assume that \(d\) satisfies (*) \(d>6\) and \(d\equiv 0\) or \(2\pmod{6}\), (**) \(d\) is not divisible by 4, by 9, or by any odd prime \(p\equiv 2\pmod{3}\). These conditions are, assuming Conjecture (1), _equivalent_ to the rationality of \(X\), and they imply the existence of an associated K3 surface \(S\) such that \[\mathrm{H}^{2}(S,\mathbb{Z})_{\mathrm{pr}}\hookrightarrow\mathrm{H}^{4}(X,\mathbb{Z})_{\mathrm{pr}}(1).\] Given any object \[\mathcal{E}\in\mathrm{D}^{b}(S\times X)\] we obtain in the standard way the Fourier-Mukai functor \[\Phi_{\mathcal{E}}:\mathrm{D}^{b}(S)\to\mathcal{A}_{X}\] (where we tacitly compose with the projection functor \(\mathrm{D}^{b}(X)\to\mathcal{A}_{X}\) to get to \(\mathcal{A}_{X}\)). 
If \(\Phi_{\mathcal{E}}\) is an equivalence and \(f\in\mathrm{Aut}(X)\), we get the corresponding autoequivalence \[f_{\mathcal{E}}:=\Phi_{\mathcal{E}}^{-1}\circ f_{*}\circ\Phi_{\mathcal{E}}:\mathrm{D}^{b}(S)\to\mathrm{D}^{b}(S).\] We recall the main theorem from [1]: **Theorem 9**.: _For \(d\) satisfying_ (*) _and_ (**) _as above there exists an_ \[\mathcal{E}\in\mathrm{D}^{b}(S\times X)\] _such that \(\Phi_{\mathcal{E}}\) is an equivalence. Moreover, if we start with an automorphism \(f\) in the labelled automorphism group \(\operatorname{Aut}(X,K)\), then \(f_{\mathcal{E}}\) is in the image of the natural embedding_ \[\operatorname{Aut}(S,h)\hookrightarrow\operatorname{Auteq}(\operatorname{D}^{b}(S))\] _and the induced map_ \[\operatorname{Aut}(X,K)\to\operatorname{Aut}(S,h)\] _is an isomorphism._ This means that given a \(G\)-action on a smooth cubic fourfold \(X\) fixing the sublattice \(K\subset\operatorname{H}^{2,2}(X,\mathbb{Z})\) as above, there exists a polarized associated K3 surface \((S,h)\), with a \(G\)-action on \(S\) preserving the polarization \(h\), such that \(\operatorname{D}^{b}(S)\) is equivariantly equivalent to \(\mathcal{A}_{X}\). Note, however, that there may be nonisomorphic but derived equivalent K3 surfaces. Under some assumptions on \(G\), the uniqueness of \(S\) follows, e.g., if the subgroup of symplectic automorphisms \(G_{s}\subseteq G\) is not the trivial group or the cyclic group \(C_{2}\) [1, Theorem 8.4]. However, _a priori_ it is not guaranteed that different \(G\)-actions on \(S\) are related by an autoequivalence in \(\operatorname{D}^{b}(S)\). There are examples of \(G\subset\operatorname{Aut}(S)\) which are not conjugate under automorphisms of \(S\) but are conjugate under autoequivalences of \(\operatorname{D}^{b}(S)\) [14, Section 8]. ## 6. Automorphisms and Kuznetsov components via equivariant HPD We investigate the Homological Projective Duality (HPD) construction in the presence of actions of finite groups \(G\), in the special case of the Pfaffian construction, as described in [16]. This will allow us to construct a functor that identifies, \(G\)-equivariantly, the Kuznetsov component of a Pfaffian \(G\)-cubic fourfold with the derived category of the associated \(G\)-K3 surface. We work over an algebraically closed field \(k\) of characteristic zero, and adhere to the notation of [16] (which differs from the notation in [11] and our notation in other sections). We first explain the general structure of HPD, following [16]. A _Lefschetz decomposition_ of a derived category \(\operatorname{D}^{b}(X)\) is a semiorthogonal decomposition of the form \[\operatorname{D}^{b}(X)=\langle\mathcal{A}_{0},\mathcal{A}_{1}(1),\ldots,\mathcal{A}_{i-1}(i-1)\rangle,\] where \[0\subset\mathcal{A}_{i-1}\subseteq\cdots\subseteq\mathcal{A}_{1}\subseteq\mathcal{A}_{0}\subset\operatorname{D}^{b}(X)\] is a chain of _admissible_ subcategories of \(\mathrm{D}^{b}(X)\) [10, Definition 4.1]. Let \(V\) be a vector space over \(k\) and \[Q\subset\mathbb{P}(V)\times\mathbb{P}(V^{*})\] the incidence quadric. 
An algebraic variety \(Y\), with an embedding \(g\colon Y\hookrightarrow\mathbb{P}(V^{*})\), is called _projectively dual_ to \(f\colon X\hookrightarrow\mathbb{P}(V)\), with respect to a fixed Lefschetz decomposition on \(X\), if there exists an \(\mathcal{E}\in\mathrm{D}^{b}(Q(X,Y))\), where \[Y\times_{\mathbb{P}(V^{*})}\mathcal{X}_{1}=Q(X,Y):=(X\times Y)\times_{\mathbb{P}(V)\times\mathbb{P}(V^{*})}Q\] and \(\mathcal{X}_{1}\) is the universal hyperplane section of \(X\), such that the corresponding kernel functor \[\Phi=\Phi_{\mathcal{E}}:\mathrm{D}^{b}(Y)\to\mathrm{D}^{b}(\mathcal{X}_{1})\] is fully faithful and gives the following semiorthogonal decomposition \[\mathrm{D}^{b}(\mathcal{X}_{1})=\langle\Phi(\mathrm{D}^{b}(Y)),\mathcal{A}_{1}(1)\boxtimes\mathrm{D}^{b}(\mathbb{P}(V^{*})),\ldots,\mathcal{A}_{i-1}(i-1)\boxtimes\mathrm{D}^{b}(\mathbb{P}(V^{*}))\rangle.\] The main result [10, Theorem 6.3] says that * if \(X\) is smooth then \(Y\) is smooth and admits a _canonical_ dual Lefschetz decomposition, * there is a base-change functor (see [10, Section 2]) which allows one to restrict these structures to an _admissible_ linear subspace \(L\subset V^{*}\). In detail, if \[X_{L}:=X\times_{\mathbb{P}(V)}\mathbb{P}(L^{\perp}),\quad Y_{L}:=Y\times_{\mathbb{P}(V^{*})}\mathbb{P}(L)\] then there is a _canonical_ decomposition of their categories. The main point of [10] is to produce the Fourier-Mukai kernel \(\mathcal{E}\) and check the required properties. We introduce the following actors: 1. \(X:=\mathbf{G}=\mathrm{Gr}(2,W)\), where \(W\) is a \(6\)-dimensional vector space, 2. \(\mathcal{U}\subset W\otimes\mathcal{O}_{X}\) the tautological subbundle of rank \(2\) on \(X\), 3. \(\mathbb{P}:=\mathbb{P}(V^{*})\), \(\mathbb{P}^{\vee}:=\mathbb{P}(V)\), where \(V:=\wedge^{2}W\), 4. \(Q\subset\mathbb{P}^{\vee}\times\mathbb{P}\), the incidence quadric, 5. \(\mathcal{X}=\mathcal{X}_{1}\subset X\times\mathbb{P}\) is the universal hyperplane section of \(X\), 6. \(Y:=\mathrm{Pf}(W^{*})\hookrightarrow\mathbb{P}\), 7. \(\tilde{Y}\subset Y\times\mathbf{G}\) is the incidence correspondence; we have \[\tilde{Y}=\mathbb{P}_{\mathbf{G}}(\wedge^{2}\mathcal{K}^{\perp}),\] where \(\mathcal{K}\subset W\otimes\mathcal{O}_{\mathbf{G}}\) is the tautological subbundle of rank \(2\), and \(\mathcal{K}^{\perp}\subset W^{*}\otimes\mathcal{O}_{\mathbf{G}}\) is its orthogonal, 8. 
the projections \[g_{Y}:\tilde{Y}\to Y\quad\text{ and }\quad\zeta:\tilde{Y}\to\mathbf{G},\] * \(\mathcal{R}:=g_{Y*}\mathcal{E}nd(\mathcal{O}_{\tilde{Y}}\oplus\mathcal{K})\), a sheaf of Azumaya algebras on \(Y\), * the incidence quadric \[Q(X,\tilde{Y}):=(X\times\tilde{Y})\times_{(\mathbb{P}^{\vee}\times\mathbb{P})}Q\stackrel{{ j}}{{\longrightarrow}}\mathcal{X}\times\tilde{Y},\] with its embedding, the pullback of \(Q\) under the composition \[X\times\tilde{Y}\to X\times Y\hookrightarrow\mathbb{P}(\wedge^{2}W)\times\mathbb{P}(\wedge^{2}W^{*}),\] forming a divisor \[Q(X,\tilde{Y})\subset X\times\tilde{Y},\] * the locus of pairs of intersecting subspaces in \[X\times\mathbf{G}=\operatorname{Gr}(2,W)\times\operatorname{Gr}(2,W)\] and its preimage \[T\subset X\times\tilde{Y}\] under the natural morphism \[(\operatorname{id}\times\zeta):X\times\tilde{Y}\to X\times\mathbf{G},\] note that \[T\subset Q(X,\tilde{Y}),\] * the sheaf \[\mathcal{E}:=\mathcal{J}_{T,Q(X,\tilde{Y})}(H_{X}+H_{\mathbf{G}}),\] the sheaf of ideals of \(T\) in \(Q(X,\tilde{Y})\) twisted by the sum of the hyperplane classes of \(X\) and of \(\mathbf{G}\). The main difference to the original HPD is that the role of \(Y\) (and its derived category) is now played by the pair \((Y,\mathcal{R})\) and \[\operatorname{D}^{b}(Y,\mathcal{R}),\] which is the derived category of coherent sheaves of right \(\mathcal{R}\)-modules. By [10, Theorem 3.2], there is a fully faithful embedding of \[\operatorname{D}^{b}(Y,\mathcal{R})\hookrightarrow\operatorname{D}^{b}(\tilde{Y}),\] as an admissible subcategory, with image denoted by \(\tilde{D}\). Consider the coherent sheaf \[j_{*}\mathcal{E}\in\mathsf{Coh}(\mathcal{X}\times\tilde{Y}).\] By [10, Lemma 8.2], and the discussion on p. 11 of that paper, we can view \(j_{*}\mathcal{E}\) as an object in \[\operatorname{D}^{b}(X\times Y,\mathcal{O}_{X}\boxtimes\mathcal{R}^{\text{opp}}),\] the derived category of coherent sheaves of right modules over \(\mathcal{O}_{X}\boxtimes\mathcal{R}^{\mathrm{opp}}\). The associated kernel functor \[\Phi_{j_{*}\mathcal{E}}:\mathrm{D}^{b}(Y,\mathcal{R})\to\mathrm{D}^{b}(\mathcal{X})\] is fully faithful, by [10, Corollary 9.16]. First, writing \(V=\Lambda^{2}W\), we consider the Grassmannian \[\mathbf{G}_{6}:=\mathrm{Gr}(6,V^{\ast}),\] parametrizing \(6\)-dimensional subspaces of \(V^{\ast}\), with tautological subbundle \[\mathcal{L}_{6}\subset V^{\ast}\otimes\mathcal{O}_{\mathbf{G}_{6}}\] and denote by \[\mathcal{L}_{6}^{\perp}\subset V\otimes\mathcal{O}_{\mathbf{G}_{6}}\] the orthogonal subbundle. The universal families of linear sections of \(X\) of interest to us are \[\mathcal{X}_{6}=(X\times\mathbf{G}_{6})\times_{\mathbb{P}(V)\times\mathbf{G}_{6}}\mathbb{P}_{\mathbf{G}_{6}}(\mathcal{L}_{6}^{\perp})\] \[\widetilde{\mathcal{Y}}_{6}=(\tilde{Y}\times\mathbf{G}_{6})\times_{\mathbb{P}(V^{\ast})\times\mathbf{G}_{6}}\mathbb{P}_{\mathbf{G}_{6}}(\mathcal{L}_{6}),\] but actually one considers pairs \[(\mathcal{Y}_{6},\mathcal{R}_{6})\] consisting of the variety \[\mathcal{Y}_{6}:=(Y\times\mathbf{G}_{6})\times_{\mathbb{P}(V^{\ast})\times\mathbf{G}_{6}}\mathbb{P}_{\mathbf{G}_{6}}(\mathcal{L}_{6}),\] together with a sheaf of Azumaya algebras \(\mathcal{R}_{6}\), obtained by pullback. These are fibred over \(\mathbf{G}_{6}\); note that the fibres over a (sufficiently general) \(L\) are just \(X_{L}\), the K3 surface associated to \(Y_{L}\), a Pfaffian cubic fourfold. 
We consider the natural projection \[\mathcal{X}_{6}\times_{\mathbf{G}_{6}}\mathcal{Y}_{6}\to X\times Y\] and denote by \(\mathcal{E}_{6}\) the pull-back of \(j_{*}\mathcal{E}\), as a sheaf of modules over the pullback of \(\mathcal{O}_{X}\boxtimes\mathcal{R}^{\mathrm{opp}}\), to \(\mathcal{X}_{6}\times_{\mathbf{G}_{6}}\mathcal{Y}_{6}\), viewed as a sheaf on \(\mathcal{X}_{6}\times\mathcal{Y}_{6}\). Base-changing to \([L]\in\mathbf{G}_{6}\) gives similar objects, which we denote by the same symbols as above, but with an added subscript \(L\), e.g., \[\mathcal{E}_{6,L},\quad\mathcal{X}_{6,L}\times\mathcal{Y}_{6,L}=X_{L}\times Y_{L}.\] We get the functor \[\Phi_{6}\colon\mathrm{D}^{b}(\mathcal{Y}_{6},\mathcal{R}_{6})\to\mathrm{D}^{b}(\mathcal{X}_{6}),\] induced by \(\mathcal{E}_{6}\). With our notational conventions, we also get functors \[\Phi_{6,L}\colon\mathrm{D}^{b}(Y_{L})\to\mathrm{D}^{b}(X_{L}).\] Then Kuznetsov shows, in the commutative context, that both \(\Phi_{6}\) and the \(\Phi_{6,L}\) are _splitting_, in the sense of [14, Section 3]; this uses the faithful base change theorems [14, Section 2.8]. The base change theorem holds as well in the context of varieties equipped with sheaves of Azumaya algebras (called _Azumaya varieties_) [14, Section 2.6, and Proposition 2.43]. The splitting property of \(\Phi_{6}\), in the context of more general _noncommutative varieties_, which include derived categories of Azumaya varieties over fields, is established in [11, Theorem 8.4]. The argument proceeds by induction on the dimension of linear sections in the universal families, starting with hyperplanes, and is essentially the one presented for _varieties_ in [14, Section 6]. In particular, if we restrict \(\Phi_{6,L}\) to the Kuznetsov component \(\mathcal{A}_{Y_{L}}\) in \[\mathrm{D}^{b}(Y_{L},\mathcal{R})=\mathrm{D}^{b}(Y_{L})=\langle\mathcal{O}(-3),\mathcal{O}(-2),\mathcal{O}(-1),\mathcal{A}_{Y_{L}}\rangle,\] where \(Y_{L}\) is a smooth Pfaffian cubic fourfold, we obtain an equivalence \[\mathcal{A}_{Y_{L}}\simeq\mathrm{D}^{b}(X_{L}),\] which in this case is the derived category of the associated K3 surface \(X_{L}\). All of the above constructions are \(G\)-equivariant if we endow \(W\) with a linear \(G\)-action. To summarize, we have: **Proposition 10**.: _Let \(G\) be a finite group with a faithful 6-dimensional representation \(W\). Assume that \(\wedge^{2}(W)^{*}\) contains a 6-dimensional subrepresentation \(L\). Then the functor \(\Phi_{6,L}\) induces an equivalence of \(G\)-categories_ \[\mathcal{A}_{Y_{L}}\simeq\mathrm{D}^{b}(X_{L}).\] ## 7. Equivariant birational geometry In this section, we work over an algebraically closed field \(k\) of characteristic zero. We write \[X\sim_{G}Y\] when the \(G\)-varieties \(X\) and \(Y\) over \(k\) are \(G\)-birational. Standard examples include _linear_ or _projectively linear_ actions of \(G\), i.e., generically free actions of \(G\) on \(\mathbb{P}^{n}=\mathbb{P}(V)\) arising from a linear faithful representation \(V\) of \(G\), respectively, a linear representation of a central extension of \(G\) with center acting trivially on \(\mathbb{P}(V)\). A central problem in \(G\)-equivariant birational geometry is to identify: * _(projectively) linearizable_ actions, i.e., \[X\sim_{G}\mathbb{P}(V),\] * _stably (projectively) linearizable_ actions, i.e., \[X\times\mathbb{P}^{m}\sim_{G}\mathbb{P}(V),\] with trivial action on the second factor. In particular, as a variety, \(X\) is then _(stably) rational_ over \(k\). 
Note that the same variety, even \(\mathbb{P}^{n}\), can sometimes be equipped with equivariantly nonbirational actions of the same group. The classification of such actions is ultimately linked to the classification of embeddings of \(G\) into the Cremona group, up to conjugation in that group. ### Nonlinearizable actions on hypersurfaces There are many instances when \(G\)-actions on varieties of dimension \(d\) are not (projectively) linearizable for the simple reason that \(G\) does not admit (projectively) linear actions on \(\mathbb{P}^{d}\). An example of this situation is the del Pezzo surface of degree 5, viewed as the moduli space of 5 points on \(\mathbb{P}^{1}\), with the natural action of \(\mathfrak{S}_{5}\); there are no regular actions of \(\mathfrak{S}_{5}\) on \(\mathbb{P}^{2}\). Other examples are, possibly singular, hypersurfaces \[X\subset\mathbb{P}(V),\quad\dim(V)=q-1,\] for some prime power \(q>3\), admitting the action of the Frobenius group \[G=\operatorname{AGL}_{1}(\mathbb{F}_{q}),\] for the finite field \(\mathbb{F}_{q}\). Indeed, the smallest faithful representation of \(G\) is its unique irreducible representation \(V\), of dimension \(q-1\), so that the \(G\)-action is not linearizable. In many cases, \(G\) admits no nontrivial central extensions, so that \(G\) does not admit even projectively linear actions on projective spaces of smaller dimension than \(\dim(\mathbb{P}(V))\). This is the case for * \(q=5\) and \(X\) the smooth quadric surface given by \[\sum_{i=1}^{5}x_{i}^{2}=\sum_{i=1}^{5}x_{i}=0;\tag{7.1}\] * \(q=7\) and \(X\subset\mathbb{P}^{5}\): the space of invariant cubics is a pencil; these are the Pfaffian cubic fourfolds \[x_{1}^{2}x_{2}+x_{2}^{2}x_{3}+x_{3}^{2}x_{4}+x_{4}^{2}x_{5}+x_{5}^{2}x_{6}+x_{1}x_{6}^{2}+\lambda^{2}(x_{1}x_{3}x_{5}+x_{2}x_{4}x_{6})=0,\tag{7.2}\] smooth for \(\lambda\neq 0,\xi,\sqrt{3}\xi\), with \(\xi\) a 6-th root of unity; * \(q=8\) and \(X\subset\mathbb{P}^{6}\) is either the quadric \[x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}+x_{5}^{2}+x_{6}^{2}+x_{7}^{2}=0,\] or the singular cubic fivefold \[x_{1}x_{2}x_{6}+x_{1}x_{3}x_{4}+x_{1}x_{5}x_{7}+x_{2}x_{3}x_{7}+x_{2}x_{4}x_{5}+x_{3}x_{5}x_{6}+x_{4}x_{6}x_{7}=0;\] * \(q=9\) and \(X\subset\mathbb{P}^{7}\) is the (singular) quartic \[\sum_{i=1}^{9}x_{i}^{4}=\sum_{i=1}^{9}x_{i}=0.\tag{7.3}\] ### Stably linearizable actions and \(G\)-Pfaffians By [10], the \(G\)-quadric surface (7.1) is stably linearizable. The proof relied on a \(G\)-equivariant version of the universal torsor formalism. The Pfaffian construction yields stable linearizability in a fundamentally different way: **Theorem 11**.: _Let \(G\) be a finite group and \(V\) a faithful representation of \(G\) of even dimension \(n=2m\). Assume that there exists a \(G\)-subrepresentation \(L\subset\wedge^{2}V\) of dimension \(n\). Let_ \[X:=\operatorname{Pf}(V)\cap\mathbb{P}(L),\] _and assume that the \(G\)-action on \(X\) is generically free and that the generic rank of the \(G\)-vector bundle \(\mathcal{K}_{X}\to X\) defined in the proof is 2. Then \(X\times\mathbb{P}^{1}\), with trivial action on the second factor, is \(G\)-linearizable._ Proof. Viewing each point \(x\in X\) as a skew-symmetric map \(V^{*}\to V\), we let \[K_{x}\subset V^{*}\] be the kernel of \(x\). For general \(x\), we have \(\dim(K_{x})=2\); birationally, this gives a \(G\)-linearized vector bundle \(\mathcal{K}_{X}\to X\) of rank 2. The No-Name Lemma [1], [12], [13, §3.2, Cor. 
3.12] implies that \(\mathcal{K}_{X}\) is \(G\)-birational to \(X\times\mathbb{A}^{2}\) (where the \(G\)-action on \(\mathbb{A}^{2}\) is trivial); moreover, for the associated projective bundle we have \[\mathbb{P}_{X}(\mathcal{K}_{X})\sim_{G}X\times\mathbb{P}^{1},\] with trivial action on the second factor. On the other hand, we have a \(G\)-equivariant birational map \[\mathbb{P}_{X}(\mathcal{K}_{X})\dashrightarrow\mathbb{P}(V^{*}).\] Indeed, given a point \([v^{*}]\) in \(\mathbb{P}(V^{*})\), its preimage in \(\mathbb{P}_{X}(\mathcal{K}_{X})\) is the set of all skew-symmetric maps in \(\mathbb{P}(L)\subset\mathbb{P}(\Lambda^{2}V)\) containing \(v^{*}\) in their kernel, thus equal to \[\mathbb{P}(L)\cap\mathbb{P}(\Lambda^{2}(v^{*})^{\perp})\subset\mathbb{P}(\Lambda^{2}V).\] For generic \(v^{*}\), this is a point, by dimension count. This shows that \(X\times\mathbb{P}^{1}\), which is \(G\)-birational to \(\mathbb{P}_{X}(\mathcal{K}_{X})\), is linearizable. ### Pfaffian quadrics The \(G\)-Pfaffian formalism yields new results already for quadric surfaces. Let \(X=\mathbb{P}^{1}\times\mathbb{P}^{1}\) and assume that the \(G\)-action on \(X\) is generically free and minimal, i.e., \(\operatorname{Pic}(X)^{G}=\mathbb{Z}\). Then there is an extension \[1\to G_{0}\to G\to C_{2}\to 1,\] where \(G_{0}\) is the intersection of \(G\) with the identity component of \(\operatorname{Aut}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), namely \(\operatorname{PGL}_{2}^{2}\), and \(C_{2}\) switches the factors of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). The linearizability problem for such actions is settled, see, e.g., [10]. The stable linearizability problem has been settled in [11, Proposition 16]: the only relevant case is when \[G_{0}=\mathfrak{D}_{2n}\times_{D}\mathfrak{D}_{2n},\] with \(D\) the intersection of \(G_{0}\) with the diagonal subgroup, and \(n\) _odd_. Here the dihedral group \(\mathfrak{D}_{2n}\) of order \(2n\) acts generically freely on \(\mathbb{P}^{1}\). Then \(X\times\mathbb{P}^{2}\), with trivial action on the second factor, is linearizable. In this situation, we obtain the following improvement: **Proposition 12**.: _The quadric surface \(X\) is not linearizable but is stably linearizable of level \(1\), i.e., \(X\times\mathbb{P}^{1}\), with trivial action on the second factor, is linearizable._ This answers a question raised in [11, Remark 9.14], strengthening a theorem from [11, Section 9], in the case \(G=C_{2}\times\mathfrak{S}_{3}\), and [11, Proposition 16] in general. Proof. The nonlinearizability statement follows from [10], see the discussion preceding [11, Proposition 16]. Let \(W^{\prime},W^{\prime\prime}\) be irreducible faithful two-dimensional representations of \(\mathfrak{D}_{2n}\). Put \(L=W^{\prime}\otimes W^{\prime\prime}\) and \(V=W^{\prime}\oplus W^{\prime\prime}\). There is a natural \(\mathfrak{D}_{2n}\)-invariant skew-symmetric matrix \(M\) corresponding to \[W^{\prime}\otimes W^{\prime\prime}\subset\Lambda^{2}(W^{\prime}\oplus W^{\prime\prime}).\] This matrix is also anti-invariant under the \(C_{2}\) exchanging \(W^{\prime}\) and \(W^{\prime\prime}\). More precisely, if \(w^{\prime}_{1},w^{\prime}_{2}\) is a basis of \(W^{\prime}\) and \(w^{\prime\prime}_{1},w^{\prime\prime}_{2}\) is a basis of \(W^{\prime\prime}\), we can choose \(x_{ij}:=w^{\prime}_{i}\otimes w^{\prime\prime}_{j}\) as a basis of \(W^{\prime}\otimes W^{\prime\prime}\). 
In this basis, \[M=\begin{pmatrix}0&0&x_{11}&x_{12}\\ 0&0&x_{21}&x_{22}\\ -x_{11}&-x_{21}&0&0\\ -x_{12}&-x_{22}&0&0\end{pmatrix}.\] The induced Pfaffian representation of \(X\) satisfies the assumptions of Theorem 11 if and only if \(n\) is odd. ### Pfaffian cubic fourfolds We return to the setup of the equivariant Pfaffian construction in Theorem 11. Concretely, we proceed as follows: 1. Let \(V\) be a \(G\)-representation of dimension \(6\) such that \(\mathbb{P}(V)\) contains no \(G\)-invariant hyperplane. 2. Assume there is a decomposition of representations \[\Lambda^{2}V=L\oplus L^{\perp},\] with \(L\) a \(6\)-dimensional and \(L^{\perp}\) a \(9\)-dimensional \(G\)-representation. 3. Assume that the cubic fourfold \(X\subset\mathbb{P}(L)\) is smooth; then the associated K3 surface \(S\subset\mathbb{P}(L^{\perp})\) is also smooth, see, e.g., [17, Lemma 4.4]. As an immediate consequence of Theorem 11, we have: **Corollary 13**.: _Let \(G\) be a finite group admitting a 6-dimensional faithful representation \(V\) over \(k\), yielding a Pfaffian cubic fourfold \(X\subset\mathbb{P}(L)\) as described above. Assume that the \(G\)-action on \(X\) is generically free. Then \(X\times\mathbb{P}^{1}\) is \(G\)-linearizable._ In this setting, the obvious rationality construction need no longer work in the \(G\)-equivariant context, and could thus yield nonlinearizable \(G\)-actions on cubic fourfolds. With this in mind, we excluded in (1) the existence of a \(G\)-invariant hyperplane in \(\mathbb{P}(V)\). The following example yields a nonlinearizable action. **Example 14**.: Consider the _Frobenius group_ \[G:=\operatorname{AGL}_{1}(\mathbb{F}_{7})=\mathbb{F}_{7}\rtimes\mathbb{F}_{7}^{\times}=C_{7}\rtimes C_{6}.\] We describe its representations: 1. There are six pairwise nonisomorphic \(1\)-dimensional representations \[k_{\chi_{i}},\quad i=1,\dots,6,\] corresponding to the characters of the quotient \(C_{6}\). 2. There is a single faithful irreducible \(6\)-dimensional representation \(V\), induced from a nontrivial character of \(C_{7}\). In particular, \(G\) has no faithful representations of dimension \(\leq 5\). One checks that \(\wedge^{2}V\) contains \(V\) as a subrepresentation, with multiplicity \(2\). Thus we have a \(\mathbb{P}^{1}\)-worth of choices for a \(G\)-subrepresentation \(L\simeq V\) inside \(\Lambda^{2}V\), and the general one gives a smooth cubic fourfold. Concretely, the matrix \[M_{\lambda}=\begin{pmatrix}0&-\lambda x_{4}&-x_{2}&0&x_{6}&-\lambda x_{3}\\ \lambda x_{4}&0&\lambda x_{5}&x_{3}&0&-x_{1}\\ x_{2}&-\lambda x_{5}&0&-\lambda x_{6}&-x_{4}&0\\ 0&-x_{3}&\lambda x_{6}&0&\lambda x_{1}&x_{5}\\ -x_{6}&0&x_{4}&-\lambda x_{1}&0&-\lambda x_{2}\\ \lambda x_{3}&x_{1}&0&-x_{5}&\lambda x_{2}&0\end{pmatrix}\] is invariant under \[g:(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})\mapsto(\zeta_{7}x_{1},\zeta_{7}^{5}x_{2},\zeta_{7}^{4}x_{3},\zeta_{7}^{6}x_{4},\zeta_{7}^{2}x_{5},\zeta_{7}^{3}x_{6})\] (with \(\zeta_{7}\) a primitive \(7\)th root of unity) and \[h\colon x_{i}\mapsto-x_{i+1},\quad\text{indices mod }6,\] which generate \(G\). Its Pfaffian equals \[\lambda\big{(}x_{1}^{2}x_{2}+x_{2}^{2}x_{3}+x_{3}^{2}x_{4}+x_{4}^{2}x_{5}+x_{5}^{2}x_{6}+x_{1}x_{6}^{2}+\lambda^{2}(x_{1}x_{3}x_{5}+x_{2}x_{4}x_{6})\big{)},\] and the corresponding cubic fourfold is smooth for \(\lambda\neq 0,\xi,\sqrt{3}\xi\), with \(\xi\) a \(6\)-th root of unity; these fourfolds appeared in [10, Theorem 1.2, Case 7(b)]. The associated K3 surface \(S\subset\mathbb{P}(L^{\perp})\) is also smooth and carries a natural, generically free, \(G\)-action by construction. 
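The Pfaffian identity in Example 14 can be double-checked by computer algebra. The following SymPy sketch is our own illustrative script (not part of the original text); it verifies that \(M_{\lambda}\) is skew-symmetric and that its Pfaffian agrees with \(\lambda\) times the displayed cubic, up to an overall sign that depends on the expansion convention.

```python
import sympy as sp

x1, x2, x3, x4, x5, x6, lam = sp.symbols("x1 x2 x3 x4 x5 x6 lambda")

# The matrix M_lambda from Example 14.
M = sp.Matrix([
    [0,       -lam*x4, -x2,      0,       x6,     -lam*x3],
    [lam*x4,   0,       lam*x5,  x3,      0,      -x1],
    [x2,      -lam*x5,  0,      -lam*x6, -x4,      0],
    [0,       -x3,      lam*x6,  0,       lam*x1,  x5],
    [-x6,      0,       x4,     -lam*x1,  0,      -lam*x2],
    [lam*x3,   x1,      0,      -x5,      lam*x2,  0],
])
assert M.T == -M  # sanity check: M_lambda is skew-symmetric

def pfaffian(A):
    """Pfaffian by expansion along the first row:
    Pf(A) = sum_{j>=1} (-1)**(j+1) * A[0, j] * Pf(A with rows/cols 0, j removed)."""
    n = A.rows
    if n == 0:
        return sp.Integer(1)
    if n % 2 == 1:
        return sp.Integer(0)
    total = sp.Integer(0)
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A.extract(keep, keep))
    return total

cubic = (x1**2*x2 + x2**2*x3 + x3**2*x4 + x4**2*x5 + x5**2*x6 + x1*x6**2
         + lam**2*(x1*x3*x5 + x2*x4*x6))
pf = sp.expand(pfaffian(M))
# Pf(M_lambda) equals lambda * cubic up to sign.
print(sp.expand(pf - lam*cubic) == 0 or sp.expand(pf + lam*cubic) == 0)  # True
```

For a \(6\times 6\) skew matrix the expansion touches only fifteen perfect matchings, so the symbolic computation is instantaneous; the same script can be reused to test other candidate subrepresentations \(L\subset\Lambda^{2}V\).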
**Remark 15**.: One has \(\operatorname{AGL}_{1}(\mathbb{F}_{7})\subset\mathfrak{S}_{7}\), and it is possible to write the \(\mathfrak{S}_{7}\)-invariant smooth cubic fourfold \[\sum_{i=0}^{6}z_{i}^{3}=0,\quad\sum_{i=0}^{6}z_{i}=0\] in the above Pfaffian form: indeed, consider the substitution \[f\colon z_{i}\mapsto\sum_{j=0}^{6}\zeta^{ij}x_{j}\] with \(\zeta=\zeta_{7}\) a primitive \(7\)th root of unity. It satisfies \[f(z_{0}+\cdots+z_{6})=7x_{0}\] and \(f(z_{0}^{3}+\cdots+z_{6}^{3})|_{x_{0}=0}\) turns out to be equal to \[21(x_{2}^{2}x_{3}+x_{1}x_{3}^{2}+2x_{1}x_{2}x_{4}+x_{1}^{2}x_{5}+x_{4}x_{5}^{2}+x_{4}^{2}x_{6}+2x_{3}x_{5}x_{6}+x_{2}x_{6}^{2}).\] Cyclically permuting \[x_{1}\mapsto x_{4}\mapsto x_{6}\mapsto x_{1}\] gives a multiple of our equation with \(\lambda^{2}=2\). **Theorem 16**.: _Let \(G=\mathrm{AGL}_{1}(\mathbb{F}_{7})\) and \(X\subset\mathbb{P}^{5}\) be a smooth cubic fourfold constructed in Example 14. Then:_ 1. _The_ \(G\)_-action on_ \(X\) _is not (projectively) linearizable._ 2. _The_ \(G\)_-action on_ \(X\times\mathbb{P}^{1}\)_, with trivial action on the second factor, is linearizable._ 3. _There is a_ \(G\)_-equivariant primitive embedding of polarized integral Hodge structures_ \[\mathrm{H}^{2}(S,\mathbb{Z})_{\mathrm{pr}}\hookrightarrow\mathrm{H}^{4}(X,\mathbb{Z})_{\mathrm{pr}}(1).\] 4. _The Kuznetsov component_ \(\mathcal{A}_{X}\) _from the natural semiorthogonal decomposition_ \[\mathrm{D}^{b}(X)=\langle\mathcal{A}_{X},\mathcal{O}_{X},\mathcal{O}_{X}(1),\mathcal{O}_{X}(2)\rangle\] _is equivalent, as a_ \(G\)_-category, to_ \(\mathrm{D}^{b}(S)\) _for the_ \(G\)_-K3 surface_ \(S\) _obtained in the Pfaffian construction._ 5. _The Fano variety of lines_ \(F_{1}(X)\) _is_ \(G\)_-birational to_ \(S^{[2]}\)_._ Proof. Item (1) follows since \(G\) has no faithful \(5\)-dimensional linear representations, so the action is not linearizable; and since the Schur multiplier of \(G\) is trivial (all Sylow subgroups of \(G\) are cyclic), every projective representation of \(G\) lifts to a linear representation of \(G\). Item (2) is Corollary 13. Item (3) is proved in the Appendix, by Brendan Hassett. Item (4) follows from the \(G\)-equivariance of the functor \[\mathcal{A}_{X}\to\mathrm{D}^{b}(S),\] given by Kuznetsov's HPD construction; we summarized the main ingredients in Section 6 (with Kuznetsov's notation), see Proposition 10. Alternatively, Ouchi's work [10], recalled in Section 5, yields the statement in a similar, although slightly weaker form: first, the \(G\)-action in our example fixes the sublattice \(K\subset\mathrm{H}^{2,2}(X,\mathbb{Z})\) spanned by \(h^{2}\) and the _class_ of a quintic del Pezzo surface \(\Sigma\) in \(X\) (but not an actual cycle \(\Sigma\) representing that class!). Points in \(X\) can be viewed as skew-symmetric maps \(V^{*}\to V\), and it makes sense to consider the locus of points in \(X\) giving skew maps with kernel contained in some fixed five-dimensional subspace \(R_{5}\subset V^{*}\). In general, this is a smooth quintic del Pezzo surface \(\Sigma=\Sigma_{R_{5}}\). All such \(\Sigma\)'s yield the same class in cohomology; in fact, they are all algebraically equivalent (they form one connected algebraic family of cycles in \(X\) parametrized by points of the Grassmannian \(\operatorname{Gr}(5,V^{*})\)). In particular, \(g(\Sigma)\) and \(\Sigma\) give the same class. 
Ouchi's Theorem 9 (together with the subsequent discussion concerning the uniqueness of \(S\)) implies that in our example, \(\mathcal{A}_{X}\) is equivalent as a \(G\)-category to \(\operatorname{D}^{b}(S)\) for _some_ action of the group \(G\) on \(S\) (but we cannot conclude immediately that it is the one given by the Pfaffian construction). Note that in our case the subgroup of symplectic automorphisms \(G_{s}\) of \(G\) cannot be the trivial group or \(C_{2}\), because neither occurs as a normal subgroup of \(G\) with cyclic quotient. Item (5) follows as the construction in (4.2) is \(G\)-equivariant. ## References * [AAHF21] Nicolas Addington, Benjamin Antieau, Katrina Honigs, and Sarah Frei. Rational points and derived equivalence. _Compos. Math._, 157(5):1036-1050, 2021. * [AB17] Asher Auel and Marcello Bernardara. Cycles, derived categories, and rationality. In _Surveys on recent developments in algebraic geometry_, volume 95 of _Proc. Sympos. Pure Math._, pages 199-266. Amer. Math. Soc., Providence, RI, 2017. * [AB18] Asher Auel and Marcello Bernardara. Semiorthogonal decompositions and birational geometry of del Pezzo surfaces over arbitrary fields. _Proc. Lond. Math. Soc. (3)_, 117(1):1-64, 2018. * [AT14] Nicolas Addington and Richard Thomas. Hodge theory and derived categories of cubic fourfolds. _Duke Math. J._, 163(10):1885-1927, 2014. * [BD85] Arnaud Beauville and Ron Donagi. La variété des droites d'une hypersurface cubique de dimension 4. _C. R. Acad. Sc. Paris, Série I_, 301:703-706, 1985. * [Ber09] Marcello Bernardara. A semiorthogonal decomposition for Brauer-Severi schemes. _Math. Nachr._, 282(10):1406-1413, 2009. * [BGvBS14] Christian Böhning, Hans-Christian Graf von Bothmer, and Pawel Sosna. On the Jordan-Hölder property for geometric derived categories. _Adv. Math._, 256:479-492, 2014. * [BH21] Simon Brandhorst and Tommy Hofmann. Finite subgroups of automorphisms of K3 surfaces, 2021. arXiv:2112.07715. * [BK85] F. A. Bogomolov and P. I. Katsylo. Rationality of some quotient varieties. _Mat. Sb. (N.S.)_, 126(168)(4):584-589, 1985. * [BO01] Alexei Bondal and Dmitri Orlov. Reconstruction of a variety from the derived category and groups of autoequivalences. _Compositio Math._, 125(3):327-344, 2001. * [CTS05] Jean-Louis Colliot-Thélène and Jean-Jacques Sansuc. The rationality problem for fields of invariants under linear algebraic groups (with special regards to the Brauer group). _Algebraic Groups and Homogeneous Spaces_, 08 2005. * [Dol87] Igor V. Dolgachev. Rationality of fields of invariants. In _Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985)_, volume 46 of _Proc. Sympos. Pure Math._, pages 3-16. Amer. Math. Soc., Providence, RI, 1987. * [FV20] Lie Fu and Charles Vial. Cubic fourfolds, Kuznetsov components and Chow motives, 2020. arXiv:2009.13173. * [HT13] Brendan Hassett and Yuri Tschinkel. Hodge theory and Lagrangian planes on generalized Kummer fourfolds. _Mosc. Math. J._, 13(1):33-56, 189, 2013. * [HT17] Brendan Hassett and Yuri Tschinkel. Rational points on K3 surfaces and derived equivalence. In _Brauer groups and obstruction problems_, volume 320 of _Progr. Math._, pages 87-113. Birkhäuser/Springer, Cham, 2017. * [HT22] Brendan Hassett and Yuri Tschinkel. Torsors and stable equivariant birational geometry, 2022. arXiv:2204.03106. * [HT23] Brendan Hassett and Yuri Tschinkel. Involutions on K3 surfaces and derived equivalence, 2023. arXiv:2303.03294. * [Huy97] Daniel Huybrechts. Birational symplectic manifolds and their deformations. _J. 
Differential Geom._, 45(3):488-513, 1997. * [Huy99] Daniel Huybrechts. Compact hyper-Kähler manifolds: basic results. _Invent. Math._, 135(1):63-113, 1999. * [Huy23] Daniel Huybrechts. The K3 category of a cubic fourfold - an update, 2023. arXiv:2303.03820. * [KP18] Alexander Kuznetsov and Alexander Perry. Derived categories of Gushel-Mukai varieties. _Compos. Math._, 154(7):1362-1406, 2018. * [KP19] Alexander Kuznetsov and Yuri Prokhorov. Rationality of Fano threefolds over non-closed fields, 2019. arXiv:1911.08949. * [KPT23] Maxim Kontsevich, Vasily Pestun, and Yuri Tschinkel. Equivariant birational geometry and modular symbols. _J. Eur. Math. Soc. (JEMS)_, 25(1):153-202, 2023. * [KT21] Andrew Kresch and Yuri Tschinkel. Equivariant Burnside groups: structure and operations, 2021. arXiv:2105.02929. * [KT22a] Andrew Kresch and Yuri Tschinkel. Equivariant birational types and Burnside volume. _Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)_, 23(2):1013-1052, 2022. * [KT22b] Andrew Kresch and Yuri Tschinkel. Equivariant Burnside groups and representation theory. _Selecta Math. (N.S.)_, 28(4):Paper No. 81, 39, 2022. * [Kuz06a] Alexander Kuznetsov. Homological projective duality for Grassmannians of lines, 2006. arXiv:0610957. * [Kuz06b] Alexander Kuznetsov. Hyperplane sections and derived categories. _Izv. Ross. Akad. Nauk Ser. Mat._, 70(3):23-128, 2006. * [Kuz07] Alexander Kuznetsov. Homological projective duality. _Publ. Math. Inst. Hautes Études Sci._, 105:157-220, 2007. * [Kuz13] Alexander Kuznetsov. A simple counterexample to the Jordan-Hölder property for derived categories, 2013. arXiv:1304.0903. * [Kuz16] Alexander Kuznetsov. Derived categories view on rationality problems. In _Rationality problems in algebraic geometry_, volume 2172 of _Lecture Notes in Math._, pages 67-104. Springer, Cham, 2016. * [Kuz22] Alexander Kuznetsov. Derived categories of families of Fano threefolds, 2022. arXiv:2202.12345. * [LPR06] Nicole Lemire, Vladimir L. Popov, and Zinovy Reichstein. Cayley groups. _J. Amer. Math. Soc._, 19(4):921-967, 2006. * [LZ22] Radu Laza and Zhiwei Zheng. Automorphisms and periods of cubic fourfolds. _Math. Z._, 300(2):1455-1507, 2022. * [Orl92] Dmitri Orlov. Projective bundles, monoidal transformations, and derived categories of coherent sheaves. _Izv. Ross. Akad. Nauk Ser. Mat._, 56(4):852-862, 1992. * [Orl03] Dmitri Orlov. Derived categories of coherent sheaves and equivalences between them. _Uspekhi Mat. Nauk_, 58(3(351)):89-172, 2003. * [Orl16] Dmitri Orlov. Smooth and proper noncommutative schemes and gluing of DG categories. _Adv. Math._, 302:59-105, 2016. * [Ouc21] Genki Ouchi. Automorphism groups of cubic fourfolds and K3 categories. _Algebr. Geom._, 8(2):171-195, 2021. * [Per19] Alexander Perry. Noncommutative homological projective duality. _Adv. Math._, 350:877-972, 2019. * [Plo07] David Ploog. Equivariant autoequivalences for finite group actions. _Adv. Math._, 216(1):62-74, 2007. * [Sar20] Arman Sarikyan. On linearization problems in the plane Cremona group, 2020. arXiv:2009.05761. ## 8. Appendix, by Brendan Hassett Fix a finite group \(G\). Let \(Y\) and \(Y^{\prime}\) be projective hyperkähler manifolds with regular \(G\)-actions. We assume throughout the existence of a \(G\)-equivariant birational map \[\phi:Y^{\prime}\stackrel{{\sim}}{{\dashrightarrow}}Y.\] Without group actions, Huybrechts [Huy99, Cor. 
4.7] shows that \(\phi\) induces an isomorphism of Hodge structures \[\psi:\mathrm{H}^{*}(Y,\mathbb{Z})\stackrel{{\sim}}{{\to}}\mathrm{H}^{*}(Y^{\prime},\mathbb{Z}).\] Indeed, this follows from a geometric construction [Huy99, Th. 4.6]: * a connected complex pointed curve \((S,0)\); * families \[\mathcal{Y},\mathcal{Y}^{\prime}\to S\] of smooth hyperkähler manifolds with distinguished fibers \[Y\simeq\mathcal{Y}_{0},\quad Y^{\prime}\simeq\mathcal{Y}_{0}^{\prime};\] * an isomorphism \[\Phi:\mathcal{Y}^{\prime}|_{S\setminus\{0\}}\simeq\mathcal{Y}|_{S\setminus\{0\}}\] over \(S\setminus\{0\}\). The induced isomorphisms on cohomology yield the desired \(\psi\), upon specialization to \(0\). Here we explain how to carry out the argument while respecting the group action. We record some elementary facts: **Lemma 17**.: _Let \(\phi\) be a birational map of hyperkähler varieties with \(G\)-action as above. Then we have_ * _the indeterminacy of_ \(\phi\) _and_ \(\phi^{-1}\) _has codimension_ \(\geq 2\)_;_ * \(\phi\) _induces isomorphisms_ \[\phi^{*}:\mathrm{H}^{2}(Y,\mathbb{R})\stackrel{{\sim}}{{\to}}\mathrm{H}^{2}(Y^{\prime},\mathbb{R}),\] _whence_ \[\phi^{*}:\Gamma(\Omega^{2}_{Y})\stackrel{{\sim}}{{\to}}\Gamma(\Omega^{2}_{Y^{\prime}})\text{ and }\phi^{*}:\mathrm{Pic}(Y)\stackrel{{\sim}}{{\to}}\mathrm{Pic}(Y^{\prime}),\] _all compatible with the group action. In particular, the symplectic forms on_ \(Y\) _and_ \(Y^{\prime}\) _yield the same characters of_ \(G\)_._ Proof. The indeterminacy of our maps is \(G\)-invariant and has (complex) codimension \(>1\) because both \(Y\) and \(Y^{\prime}\) have trivial canonical class. This precludes any exceptional divisors. The isomorphism on cohomology follows from dimensional considerations. The compatible isomorphisms for holomorphic \(2\)-forms and the Picard group reflect Hartogs-type extension theorems. Choose \(L^{\prime}\) to be an ample line bundle on \(Y^{\prime}\) that admits a linearization of the \(G\)-action. Let \(L\) be the corresponding line bundle on \(Y\) under the pull-back homomorphism, which is necessarily \(G\)-invariant as well. Note that the Beauville-Bogomolov-Fujiki forms \(q_{Y}\) and \(q_{Y^{\prime}}\) take the same values (see [10, p. 92]): \[q_{Y}(L)=q_{Y^{\prime}}(L^{\prime}).\] The deformation spaces \(\operatorname{Def}(Y,L)\) and \(\operatorname{Def}(Y^{\prime},L^{\prime})\) (as polarized varieties) are germs of analytic spaces, with tangent spaces \[L^{\perp}\subset\operatorname{H}^{1}(Y,T_{Y})\simeq\operatorname{H}^{1}(Y,\Omega^{1}_{Y}),\quad(L^{\prime})^{\perp}\subset\operatorname{H}^{1}(Y^{\prime},T_{Y^{\prime}})\simeq\operatorname{H}^{1}(Y^{\prime},\Omega^{1}_{Y^{\prime}}).\] These come with natural \(G\)-actions and equivariant isomorphisms \[\operatorname{Def}(Y,L)\stackrel{{\sim}}{{\to}}\operatorname{Def}(Y^{\prime},L^{\prime}).\] **Remark 18**.: The group \(G\) may fail to act faithfully on \(\operatorname{Def}(Y,L)\). The kernel \(G_{\circ}\subset G\) acts on the fibers of the family [11]. Observe that \(G_{\circ}\times\operatorname{Def}(Y,L)\) has a natural \(G\)-action \[g\cdot(g_{\circ},y)=(gg_{\circ}g^{-1},gy),\] commuting with the fiberwise \(G_{\circ}\)-action. **Lemma 19**.: _Let \(0,\mathfrak{p}\in\operatorname{Def}(Y,L)\) denote the distinguished point and an arbitrary point. 
There exists a smooth pointed curve \((S,0)\) with \(G\)-action fixing \(0\), along with an equivariant morphism_ \[(S,0)\to(\operatorname{Def}(Y,L),0),\] _whose image contains \(\mathfrak{p}\)._ Proof. Consider the universal family \[\mathcal{Y}\to\operatorname{Def}(Y,L)\] and the diagram \[\begin{array}{ccccc}G\times\mathcal{Y}&\to&G\times^{G}\mathcal{Y}=\mathcal{Y}&\to&G\backslash\mathcal{Y}\\ \downarrow&&\downarrow&&\downarrow\\ G\times\operatorname{Def}(Y,L)&\to&G\times^{G}\operatorname{Def}(Y,L)=\operatorname{Def}(Y,L)&\to&G\backslash\operatorname{Def}(Y,L)\end{array}\] When \(X\) has a left \(G\)-action, \(G\times^{G}X\) is the quotient of \(G\times X\) under the relation \((hg,x)=(h,gx)\) for \(g,h\in G\) and \(x\in X\). The left horizontal arrows are quotients; the right horizontal arrows are induced by projections onto the second factors. Start with an irreducible curve \(S_{1}\) in the quotient space \(G\backslash\operatorname{Def}(Y,L)\) containing the images of \(0\) and \(\mathfrak{p}\). The diagram above and resolution of singularities give a finite morphism \(\gamma:S_{2}\to S_{1}\) from a non-singular curve and a \(G\)-equivariant morphism \[\mathcal{Y}_{2}\to S_{2}\] such that the classifying morphism \(S_{2}\to G\backslash\operatorname{Def}(Y,L)\) coincides with \(\gamma\); we may take \(S=S_{2}\). We choose \(\mathfrak{p}\in\operatorname{Def}(Y,L)\) such that \(\operatorname{Pic}(\mathcal{Y}_{\mathfrak{p}})=\mathbb{Z}L\). Using Lemma 19, choose compatible \[(S,0)\to(\operatorname{Def}(Y,L),0),\quad(S,0)\to(\operatorname{Def}(Y^{\prime},L^{\prime}),0),\] so that the corresponding families \[\mathcal{Y}\to S,\quad\mathcal{Y}^{\prime}\to S\] have generic Picard rank one. We may repeat the argument of [10]; the birationality construction appears in [10, §4]. Our families are \(G\)-equivariantly isomorphic over a \(G\)-invariant non-empty open \(U\subset S\), hence the fibers have isomorphic Hodge structures over _all_ \(s\in S\), including \(0\). Thus we obtain \(G\)-equivariant isomorphisms \[\operatorname{H}^{*}(Y^{\prime},\mathbb{Z})=\operatorname{H}^{*}(\mathcal{Y}_{0}^{\prime},\mathbb{Z})\simeq\operatorname{H}^{*}(\mathcal{Y}_{0},\mathbb{Z})=\operatorname{H}^{*}(Y,\mathbb{Z}),\] compatible with Hodge structures.
2303.00304
Renderable Neural Radiance Map for Visual Navigation
We propose a novel type of map for visual navigation, a renderable neural radiance map (RNR-Map), which is designed to contain the overall visual information of a 3D environment. The RNR-Map has a grid form and consists of latent codes at each pixel. These latent codes are embedded from image observations, and can be converted to the neural radiance field which enables image rendering given a camera pose. The recorded latent codes implicitly contain visual information about the environment, which makes the RNR-Map visually descriptive. This visual information in RNR-Map can be a useful guideline for visual localization and navigation. We develop localization and navigation frameworks that can effectively utilize the RNR-Map. We evaluate the proposed frameworks on camera tracking, visual localization, and image-goal navigation. Experimental results show that the RNR-Map-based localization framework can find the target location based on a single query image with fast speed and competitive accuracy compared to other baselines. Also, this localization framework is robust to environmental changes, and even finds the most visually similar places when a query image from a different environment is given. The proposed navigation framework outperforms the existing image-goal navigation methods in difficult scenarios, under odometry and actuation noises. The navigation framework shows 65.7% success rate in curved scenarios of the NRNS dataset, which is an improvement of 18.6% over the current state-of-the-art. Project page: https://rllab-snu.github.io/projects/RNR-Map/
Obin Kwon, Jeongho Park, Songhwai Oh
2023-03-01T08:00:46Z
http://arxiv.org/abs/2303.00304v4
# Renderable Neural Radiance Map for Visual Navigation ###### Abstract We propose a novel type of map for visual navigation, a renderable neural radiance map (RNR-Map), which is designed to contain the overall visual information of a 3D environment. The RNR-Map has a grid form and consists of latent codes at each pixel. These latent codes are embedded from image observations, and can be converted to the neural radiance field which enables image rendering given a camera pose. The recorded latent codes implicitly contain visual information about the environment, which makes the RNR-Map visually descriptive. This visual information in RNR-Map can be a useful guideline for visual localization and navigation. We develop localization and navigation frameworks that can effectively utilize the RNR-Map. We evaluate the proposed frameworks on camera tracking, visual localization, and image-goal navigation. Experimental results show that the RNR-Map-based localization framework can find the target location based on a single query image with fast speed and competitive accuracy compared to other baselines. Also, this localization framework is robust to environmental changes, and even finds the most visually similar places when a query image from a different environment is given. The proposed navigation framework outperforms the existing image-goal navigation methods in difficult scenarios, under odometry and actuation noises. The navigation framework shows 65.7% success rate in curved scenarios of the NRNS [23] dataset, which is an improvement of 18.6% over the current state-of-the-art. Project page: [https://rllab-snu.github.io/projects/RNR-Map/](https://rllab-snu.github.io/projects/RNR-Map/) ## 1 Introduction In this paper, we address how to explicitly embed the visual information from a 3D environment into a grid form and how to use it for visual navigation. We present _renderable neural radiance map (RNR-Map)_, a novel type of grid map for navigation. We point out three main properties of RNR-Map which make it navigation-friendly. First, it is _visually descriptive_. Commonly used grid-based maps such as occupancy maps [10, 11, 17] and semantic maps [21, 48, 9] record obstacle information or object information in their grids. In contrast, RNR-Map converts image observations to latent codes which are then embedded in grid cells. Each latent code in a grid cell can be converted to a neural radiance field, which can render the corresponding region. We can utilize the implicit visual information of these latent codes to understand and reason about the observed environment. For example, we can locate places based on an image or determine which region is the most related to a given image. RNR-Map enables image-based localization with only a simple forward pass through a neural network, by directly utilizing the latent codes without rendering images. We build a navigation framework with RNR-Map to navigate to the most plausible place given a query image. Through extensive experiments, we validate that the latent codes can serve as important visual clues for both image-based localization and image-goal navigation. More importantly, a user has the option to utilize the renderable property of RNR-Map for finer-grained localization such as camera tracking. Second, RNR-Map is _generalizable_. There have been a number of studies that leverage neural radiance fields (NeRF) for various applications other than novel view synthesis. 
The robotic applications of NeRF are also now beginning to emerge [1, 55, 44, 15, 31]. However, many of the approaches require neural radiance fields pretrained on a specific scene and do not generalize to other scenes. This can be a serious problem when it comes to visual navigation tasks, which typically assume that an agent performs the given task in an unseen environment [16]. In contrast, RNR-Map is applicable in arbitrary scenes without additional optimization. Even in an unseen environment, RNR-Map can still embed useful information from images into the map and render images. The neural radiance fields of RNR-Map are conditioned on the latent codes. An encoder-decoder pair is trained to produce these latent codes from images of arbitrary scenes and to reconstruct the images using neural radiance fields. This pretrained pair enables generalization to unseen environments. Third, RNR-Map is _real-time capable_. The majority of the present NeRF-based navigation methods require significant inference time because of the computation-heavy image rendering and rendering-based optimization steps they involve. The RNR-Map is designed to operate fast enough not to hinder the navigation system. By directly utilizing the latent codes, we can eliminate the rendering step in mapping and localization. The mapping and image-based localization frameworks operate at 91.9Hz and 56.8Hz, respectively. The only function which needs rendering-based optimization is camera tracking, which can localize under odometry noise; it operates at 5Hz. To the best of our knowledge, the RNR-Map is the first navigation map having all three of the aforementioned characteristics. The RNR-Map and its localization and navigation frameworks are evaluated in various visual navigation tasks, including camera tracking, image-based localization, and image-goal navigation. Experimental results show that the proposed RNR-Map serves as an informative map for visual navigation. Our localization framework exhibits competitive localization accuracy and inference speed when compared to existing approaches. On the image-goal navigation task, the navigation framework displays a 65.7% success rate in curved scenarios of the NRNS [23] dataset, where the current state-of-the-art method [49] shows a success rate of 55.4%. As RNR-Map finds a place based on the visual information of the map and the query image, we also consider a variant of image-based localization. In real-world scenarios, there can be partial changes in the target place (e.g., changes in furniture placement or lighting conditions). Also, the user might only have images from similar but different environments. We test the proposed localization framework in both cases. We find that the RNR-Map is robust to environmental changes and is able to find the most visually similar places even when a novel query image from a different environment is provided. The contributions of this paper can be summarized as follows: * We present RNR-Map, a novel type of renderable grid map for navigation, utilizing neural radiance fields for embedding the visual appearance of the environment. * We demonstrate efficient and effective methods for utilizing the visual information in RNR-Map for searching an image goal by developing RNR-Map-based localization and navigation frameworks. * Extensive experiments show that the proposed method achieves state-of-the-art performance in both localization and image-goal navigation. 
## 2 Related Work

**Embodied AI with spatial memories.** One of the central issues in recent embodied AI research is how to construct a useful memory for the embodied agent [16]. A memory that contains the navigation history, as well as information about the observed environment, is required for successful task execution. There is a large body of work using occupancy maps for visual navigation [10, 11, 17, 41]. An occupancy map expresses the environment in a grid form, and each grid cell has obstacle information about the corresponding region. An occupancy map represents the overall structure of the environment and can guide a robot to navigate the environment safely. There has been various research on building a spatial memory which contains additional information beyond obstacles. The additional information can be object classes of the observed objects [7, 9, 21, 48], or implicitly learned useful information for a specific task [22, 24, 36, 38]. MapNet [24], SMNet [7], and ISS [38] have a mapping architecture similar to ours. Like our approach, they learn how to convert RGBD observations into useful latent information and record it in spatial memories using 3D inverse camera projection. Using the recorded latent information, MapNet [24] learns to localize the agent pose, and SMNet learns to generate a semantic map. ISS [38] is more related to ours since this method addresses scene generation and novel view image synthesis from a grid map. Our research is focused on how we can utilize such latent information for visual navigation. We develop localization and navigation frameworks which actively utilize the embedded visual information in RNR-Map.

**Robotics with neural radiance fields.** The neural radiance field (NeRF) [33] has gained significant popularity in various AI tasks. Beyond computer vision and graphics tasks, NeRF has also been adopted for robotics applications in recent years. NeRF predicts the RGB color and density of a point in a scene so that an image from an arbitrary viewpoint can be rendered. This property enables pose estimation [1, 30, 31, 44] based on the photometric loss between the observed image and the rendered image, or manipulation of challenging objects [5, 12, 14, 25, 29]. A pretrained NeRF can also work as a virtual simulator, in which a robot can plan its trajectory [1], or can be used to train an action policy for the real world [6]. Among the existing approaches, NICE-SLAM [55] is relevant to our work because it performs mapping and localization in arbitrary environments. NICE-SLAM builds a 3D implicit representation of the environment from image observations. The camera pose is inferred by optimizing the photometric loss between the observation image and the rendered image. Our method, on the other hand, places less emphasis on mapping quality and is instead designed for successfully completing navigation tasks. We focus on how the RNR-Map can be efficiently used for visual navigation, in terms of both speed and performance. The mapping and target-searching functions of RNR-Map are designed to operate fast enough to not hinder the other parts of the system. Also, the proposed RNR-Map method is generalizable to various environments without additional fine-tuning.

## 3 RNR-Map

A robot agent has an RGB-D camera and also knows its odometry information. Here, odometry refers to how much the agent has moved from its previous position; we consider a 3-DoF pose in this paper.
At time \(t\), the robot observes an RGBD image \(I_{t}\) and its relative pose \(\Delta p_{t}=(\Delta x_{t},\Delta y_{t},\Delta a_{t})\) from the previous pose (xy position and heading angle). By accumulating pose differences, the agent can determine its relative pose \(p_{t}\) from the start pose \(p_{0}=(0,0,0)\). A pair of pretrained encoder and decoder networks is used when building an RNR-Map. The training process of this encoder and decoder resembles that of an autoencoder. However, unlike with 2D images, autoencoding a 3D environment is not a trivial problem. We build a reconstruction framework as shown in Figure 1(a). The encoder encodes an image and embeds the pixel features into the RNR-Map. We denote each embedded feature in a grid of RNR-Map as a latent code. A query pose is then provided, and the decoder samples latent codes along each camera ray corresponding to each pixel and renders the corresponding images. We present details of each part in the following section.

**Registration \(\mathbf{F}_{\mathrm{reg}},\mathbf{F}_{\mathrm{map}}\).** When an RGBD image \(I_{t}\in\mathbb{R}^{H\times W\times 4}\) comes in, the encoder encodes the image into a same-sized feature \(C_{t}\in\mathbb{R}^{H\times W\times D}\), where \(H\) and \(W\) refer to the height and width of the image, respectively, and \(D\) refers to the channel size. First, each pixel \(c_{h,w}\in\mathbb{R}^{D}\) in \(C_{t}\) is mapped to its corresponding 3D world position \([q_{x},q_{y},q_{z}]^{T}\) using the known camera intrinsic matrix \(\mathbf{K}\), the extrinsic matrix \([\mathbf{R}|\mathbf{t}]\) based on the agent pose, and the depth information \(d_{h,w}\) of the pixel. The world position of each pixel is calculated using inverse camera projection, as follows:

\[\begin{bmatrix}q_{x}(h,w)\\ q_{y}(h,w)\\ q_{z}(h,w)\end{bmatrix}=d_{h,w}\mathbf{R}^{-1}\mathbf{K}^{-1}\begin{bmatrix}h\\ w\\ 1\end{bmatrix}-\mathbf{t}. \tag{1}\]

Then we digitize each position by normalizing with the map resolution \(s\), and get the map position \((u,v)\) as shown in (2). We aggregate the pixel features that belong to the same 2D map position and average the aggregated features into a single feature vector. The pixel features are registered in the corresponding grid in the RNR-Map \(m\in\mathbb{R}^{U\times V\times D}\), where \(U\) and \(V\) refer to the height and width of the RNR-Map, respectively. The number of averaged features at each 2D map position is also recorded in a mask \(n\in\mathbb{R}^{U\times V}\). We denote the element of \(m\) at the map position \((u,v)\) by \(m(u,v)\) and the mask at \((u,v)\) by \(n(u,v)\). The registered feature \(m(u,v)\in\mathbb{R}^{D}\) is the latent code which contains visual information of the region corresponding to the map position \((u,v)\). This process can be written as follows:

\[X_{(u,v)}=\Big\{c_{h,w}\in C_{t}\ \Big|\ u=\Big\lfloor\frac{q_{x}(h,w)}{s}\Big\rceil,\ v=\Big\lfloor\frac{q_{y}(h,w)}{s}\Big\rceil\Big\},\qquad m(u,v)=\frac{1}{n(u,v)}\sum_{c_{i}\in X_{(u,v)}}c_{i},\qquad n(u,v)=|X_{(u,v)}|. \tag{2}\]

The registration process \(F_{\mathrm{reg}}\) includes the encoding of \(C_{t}\), the inverse projection, and the feature registration. \(F_{\mathrm{reg}}\) is represented as:

\[m_{t}^{l},n_{t}^{l}=F_{\mathrm{reg}}(I_{t},p_{t};\theta_{\mathrm{enc}}), \tag{3}\]

where \(\theta_{\mathrm{enc}}\) refers to the network parameters of the encoder. The registration process \(F_{\mathrm{reg}}\) outputs a local map \(m_{t}^{l}\) and a local mask \(n_{t}^{l}\).
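For concreteness, a minimal NumPy sketch of this registration step is given below; the helper name `register`, the map-centering convention, and the defaults are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def register(features, depth, K, R, t, map_size=128, resolution=0.1):
    """Scatter per-pixel features into a 2D latent grid by averaging (Eqs. 1-2).

    features: (H, W, D) encoder output C_t; depth: (H, W) per-pixel depth.
    """
    H, W, D = features.shape
    h, w = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pix = np.stack([h, w, np.ones_like(h)], axis=-1).reshape(-1, 3).T  # (3, HW)
    # Inverse camera projection (Eq. 1): q = d * R^-1 K^-1 [h, w, 1]^T - t.
    q = depth.reshape(-1) * (np.linalg.inv(R) @ np.linalg.inv(K) @ pix) - t[:, None]
    # Digitize x/y world coordinates into map indices (u, v), rounding to nearest.
    u = np.clip(np.rint(q[0] / resolution).astype(int) + map_size // 2, 0, map_size - 1)
    v = np.clip(np.rint(q[1] / resolution).astype(int) + map_size // 2, 0, map_size - 1)
    m = np.zeros((map_size, map_size, D))   # local latent map m_t^l
    n = np.zeros((map_size, map_size))      # observation-count mask n_t^l
    np.add.at(m, (u, v), features.reshape(-1, D))  # sum features per grid cell
    np.add.at(n, (u, v), 1.0)
    m[n > 0] /= n[n > 0, None]              # average the aggregated features
    return m, n
```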
The local map \(m_{t}^{l}\) only contains the information from the image \(I_{t}\), and it will be integrated with other local maps to form the global map \(m^{g}\).1 When multiple observations are given, we can use \(n\) to compute the average value from the original and new latent codes. We name this integration process \(F_{\mathrm{map}}\), which applies \(F_{\mathrm{reg}}\) over multiple observations. \(F_{\mathrm{map}}\) at time \(t\) with the previous \(m_{t-1}^{g}\) is formulated as follows:

Footnote 1: For simplicity, \(m\) without any superscript refers to the global map (\(m^{g}\)) in the rest of the paper.

\[(m_{t}^{g},n_{t}^{g})=F_{\mathrm{map}}(I_{t},p_{t},m_{t-1}^{g},n_{t-1}^{g};\theta_{\mathrm{enc}}),\qquad m_{t}^{g}(u,v)=\frac{m_{t}^{l}(u,v)\cdot n_{t}^{l}(u,v)+m_{t-1}^{g}(u,v)\cdot n_{t-1}^{g}(u,v)}{n_{t}^{l}(u,v)+n_{t-1}^{g}(u,v)},\qquad n_{t}^{g}(u,v)=n_{t}^{l}(u,v)+n_{t-1}^{g}(u,v). \tag{4}\]

Figure 1: (a) **Illustration of the reconstruction framework.** Two neural networks, the encoder \(\theta_{\mathrm{enc}}\) and the decoder \(\theta_{\mathrm{dec}}\), are used in this reconstruction framework. (b) **Examples of the rendered images.** Odd columns are the given observations, and even columns are the reconstructed results. The proposed method can reconstruct images from a novel view (the last row).

**Decoding \(\mathbf{F}_{\mathrm{dec}}\).** To make these latent codes contain visual information, we reconstruct the image observation from the latent codes. We use a decoder which has a structure similar to the generative scene network (GSN) [13] for rendering an RGBD image from the 2D latent map. Originally, GSN generates a random indoor floorplan from random variables; it then renders images from the generated floorplan based on locally conditioned radiance fields. Our approach involves designing the encoder \(\theta_{\mathrm{enc}}\) and the registration process \(F_{\mathrm{reg}}\) to transform image observations into latent codes, which can be converted to the locally conditioned radiance fields. We utilize the locally conditioned radiance fields from GSN to render an image from \(m\). Given the camera parameters, we can sample latent codes at points along the camera ray corresponding to the pixel location to be rendered in the camera. The sampled latent codes are converted into modulation-linear-layer-based locally conditioned radiance fields [4, 13]. The decoder is trained to render an RGBD image from the latent codes that is close to the image observation \(I_{t}\). The reader can refer to [13] for a more detailed explanation of the rendering procedure. By training the rendered RGBD images \(\tilde{I}_{t}\) to be close to the original observations \(I_{t}\), the encoder is trained to extract the visual information from the image. This mechanism is similar to training an autoencoder to extract and summarize useful information from an image into a latent vector. We sample \(N\) images in a random scene and embed them into the RNR-Map. Then, we render each image from the final RNR-Map and compare them with the original images.
The encoder and the decoder are trained using the following loss:

\[m_{i}^{g},n_{i}^{g}=F_{\mathrm{map}}(I_{i},p_{i},m_{i-1}^{g},n_{i-1}^{g};\theta_{\mathrm{enc}}),\quad i=1,\ldots,N,\qquad\mathrm{Loss}(\theta_{\mathrm{enc}},\theta_{\mathrm{dec}})=\frac{1}{N}\sum_{i=1}^{N}||I_{i}-F_{\mathrm{dec}}(m_{N}^{g},p_{i};\theta_{\mathrm{dec}})||_{1}, \tag{5}\]

where \(\theta_{\mathrm{enc}}\) and \(\theta_{\mathrm{dec}}\) are the weight parameters of the encoder and decoder, respectively. Since the rendering process is conditioned on the latent codes from the image observation, our proposed reconstruction framework is capable of embedding and rendering arbitrary images from unseen environments. This leads to the generalizable property of RNR-Map. Also, the decoder can synthesize a novel view, different from the observed direction, based on \(m\). Examples of reconstructed observations are shown in Figure 1(b). The decoder can render images from novel viewpoints in the observed region, based on the latent codes. Note that the rendering boundary is limited to the observed 3D points. We can see that the decoder generates a grey color for the unobserved regions in the last row of Figure 1(b). More examples of reconstructed images are provided in the supplementary material (Section A), along with the network architectures of the encoder and decoder (Section B.1).

**Mapping.** After training, we can use \(F_{\mathrm{map}}\) for logging visual information from the environment. Note that the rendering function \(F_{\mathrm{dec}}\) is not needed for mapping. The RNR-Map is built incrementally during navigation, based on the odometry information and the known camera intrinsics. At time \(t\), the agent obtains \(m_{t}\) and \(n_{t}\) using \(F_{\mathrm{map}}\), as formulated in (4). If the same 3D point in the environment is observed multiple times, \(F_{\mathrm{map}}\) averages the latent codes based on the number of observation instances.
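A minimal sketch of this incremental update, reusing the `register` helper sketched above, is shown below; it is an illustration of Eq. (4), not the authors' implementation.

```python
import numpy as np

def update_map(m_global, n_global, features, depth, K, R, t):
    """Fuse a new local map into the global RNR-Map by a mask-weighted average."""
    m_local, n_local = register(features, depth, K, R, t, map_size=m_global.shape[0])
    n_new = n_global + n_local
    seen = n_new > 0
    m_new = np.zeros_like(m_global)
    # Running average over all observations of each grid cell (Eq. 4).
    m_new[seen] = (m_global[seen] * n_global[seen, None]
                   + m_local[seen] * n_local[seen, None]) / n_new[seen, None]
    return m_new, n_new
```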
## 4 Localization

One of the crucial components of the navigation task is localization. In the following sections, we describe how RNR-Map is used for two localization tasks: _image-based localization_ and _camera tracking_. Here, we consider the 3-DoF pose \((x,y,a)\) for localization.

### 4.1 Image-Based Localization

The implicit visual information of the latent codes in RNR-Map can provide useful clues for finding out which grid cell is the closest to the given target image \(I_{\mathrm{trg}}\). Inspired by the fast localization method in [24], we directly compare latent codes from the target image \(I_{\mathrm{trg}}\) and the RNR-Map \(m\) for localization. We denote this image-based localization function as \(F_{\mathrm{loc}}\). Suppose that the target image is taken at the pose \(p_{\mathrm{trg}}=(x_{\mathrm{trg}},y_{\mathrm{trg}},a_{\mathrm{trg}})\), and the corresponding map position is \((u_{\mathrm{trg}},v_{\mathrm{trg}})\). \(F_{\mathrm{loc}}\) outputs a heatmap \(E\in\mathbb{R}^{U\times V}\) and the predicted orientation of the target \(a_{\mathrm{trg}}\). \(E\) highlights which grid cell corresponds to the target location \((u_{\mathrm{trg}},v_{\mathrm{trg}})\) among the observed area in \(m\). The localization process \(F_{\mathrm{loc}}\) is shown in Figure 2. The RNR-Map \(m\) is constructed from a partial observation of the environment. The query image is transformed into \(m_{\mathrm{trg}}\) using \(F_{\mathrm{reg}}(I_{\mathrm{trg}},p_{0};\theta_{\mathrm{enc}})\). Here, we use the origin \(p_{0}=(0,0,0)\) as an input to \(F_{\mathrm{reg}}\) since the agent does not know the position of the target location.

\(F_{\mathrm{loc}}\) includes three convolutional neural networks: \(F_{k}\), \(F_{q}\), and \(F_{E}\). \(F_{k}\) and \(F_{q}\) are for processing \(m\) and \(m_{\mathrm{trg}}\) into \(m_{k}^{\prime}\) and \(m_{q}^{\prime}\), respectively. We found it helpful to introduce the neural networks (\(F_{k}\), \(F_{q}\)) for better localization results. Then, we search for the query \(m_{q}^{\prime}\) by convolving (cross-correlation, denoted as \(\mathrm{Conv}\)) it with \(m_{k}^{\prime}\). The query map \(m_{q}^{\prime}\) is rotated into \(R\) different angles \(\{0^{\circ},\ldots,360^{\circ}\times\frac{R-1}{R}\}\). \((m_{q}^{\prime})_{r}\) denotes \(Rot_{r}(m_{q}^{\prime})\), where \(Rot_{r}\) represents the \(r\)-th of the \(R\) possible rotations. Each rotated query \((m_{q}^{\prime})_{r}\) works as a filter, and the output of the \(\mathrm{Conv}\) is forwarded to \(F_{E}\). \(F_{E}\) outputs \(E\in\mathbb{R}^{U\times V}\), which highlights the most query-related region in \(m\). Each pixel \(e_{u,v}\in E\) in the heatmap represents the probability of the query image being taken at \((u,v)\). Also, \(F_{E}\) has an additional head which predicts the orientation of the target observation. The overall localization process can be put together as:

\[F_{\mathrm{loc}}(m,m_{\mathrm{trg}};\theta_{\mathrm{loc}})=F_{E}(\mathrm{Conv}(F_{k}(m),\{F_{q}(m_{\mathrm{trg}})_{r}\}_{1}^{R})), \tag{6}\]

where \(\theta_{\mathrm{loc}}\) denotes the weight parameters of \(F_{k}\), \(F_{q}\), and \(F_{E}\). Detailed explanations of the network architecture are provided in the supplementary material B.2. We make a ground-truth Gaussian heatmap \(E_{\mathrm{gt}}\) which highlights the ground-truth map location \((u_{\mathrm{trg}},v_{\mathrm{trg}})\) of the query image. The representation of an orientation \(a_{\mathrm{trg}}\) is discretized into 18 bins, and \(F_{E}\) is trained to select the appropriate bin using a cross-entropy loss. The framework is trained to generate a distribution similar to the ground-truth heatmap \(E_{\mathrm{gt}}\) and to predict \(a_{\mathrm{trg}}\). The following loss term is used to train \(F_{\mathrm{loc}}\):

\[(\hat{E},\hat{a}_{\mathrm{trg}})=F_{\mathrm{loc}}(m,m_{\mathrm{trg}};\theta_{\mathrm{loc}}),\qquad\mathrm{Loss}(\theta_{\mathrm{loc}})=D_{KL}(E_{\mathrm{gt}},\hat{E})+CE(a_{\mathrm{trg}},\hat{a}_{\mathrm{trg}}), \tag{7}\]

where \(D_{KL}\) refers to the KL divergence, and \(CE\) refers to the cross-entropy loss. We provide quantitative and qualitative experimental results of \(F_{\mathrm{loc}}\) in the supplementary material C.2.
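The following PyTorch sketch outlines the rotate-and-correlate step of \(F_{\mathrm{loc}}\); the networks `F_k`, `F_q`, and `F_E` are assumed to be given, and the rotation and correlation details are illustrative rather than the authors' exact implementation.

```python
import math
import torch
import torch.nn.functional as F

def localize(m, m_trg, F_k, F_q, F_E, num_rot=18):
    """m, m_trg: (1, D, U, V) latent maps. Returns a heatmap and angle logits."""
    key = F_k(m)          # processed global map m'_k
    query = F_q(m_trg)    # processed query map m'_q
    scores = []
    for r in range(num_rot):
        a = 2 * math.pi * r / num_rot
        # Rotate the query map by angle a (Rot_r in the paper).
        theta = torch.tensor([[[math.cos(a), -math.sin(a), 0.0],
                               [math.sin(a), math.cos(a), 0.0]]])
        grid = F.affine_grid(theta, query.shape, align_corners=False)
        query_r = F.grid_sample(query, grid, align_corners=False)
        # Cross-correlate: the rotated query acts as a convolution filter.
        scores.append(F.conv2d(key, query_r, padding="same"))
    x = torch.cat(scores, dim=1)   # (1, R, U, V) correlation maps
    return F_E(x)                  # heatmap E over grid cells + orientation bins
```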
### 4.2 Camera Tracking

During navigation, the agent needs to be aware of its own pose to accurately record its visual observations. By accumulating odometry readings, the agent can calculate its pose relative to the start pose. However, in the real world, it is difficult to determine the accurate pose of a robot due to noise in the odometry readings. The differentiable rendering function \(F_{\mathrm{dec}}\) can be used for adjusting the rough estimate of the agent pose \(p_{t}\). The pose optimization is based on the photometric loss between the rendered image and the current observation. As the rendering process \(F_{\mathrm{dec}}\) is differentiable, the pose of the agent \(p_{t}\) can be optimized with the gradient descent method. We name this camera tracking function \(F_{\mathrm{track}}\). At time \(t\), the agent has the previously estimated pose \(\hat{p}_{t-1}\) and takes the odometry data \(\Delta p_{t}\). A rough estimate of the current pose \(p_{t}\) can be calculated by simply adding \(\Delta p_{t}\) to the previous pose \(\hat{p}_{t-1}\). \(\bar{p}_{t}\) denotes this roughly estimated pose: \(\bar{p}_{t}=\hat{p}_{t-1}+\Delta p_{t}\). Using \(F_{\mathrm{track}}\), \(\bar{p}_{t}\) is optimized to \(\hat{p}_{t}\). The output of the pose optimization can be derived by the following equation:

\[\hat{p}_{t}=F_{\mathrm{track}}(m_{t-1},\bar{p}_{t})=\operatorname*{arg\,min}_{\delta p_{t}}|F_{\mathrm{dec}}(m_{t-1},\bar{p}_{t}+\delta p_{t})-I_{t}|, \tag{8}\]

which minimizes the error between the current observation and the rendered image from the latent map. \(\bar{p}_{t}\) is the initial value of the pose in the optimization process. By sampling a small subset of pixels from the image, we can make this optimization process fast enough to use it in navigation. We provide the accuracy and inference speed of the proposed camera tracking method in the supplementary material C.1.
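A minimal PyTorch sketch of this pose refinement is given below; `render` stands in for the differentiable renderer \(F_{\mathrm{dec}}\), and its signature, the optimizer choice, and the defaults are assumptions for illustration.

```python
import torch

def track(m, p_init, obs, render, steps=20, lr=1e-2, num_pixels=512):
    """Refine a rough 3-DoF pose estimate (x, y, heading) as in Eq. (8)."""
    delta = torch.zeros(3, requires_grad=True)       # pose correction delta_p
    opt = torch.optim.Adam([delta], lr=lr)
    # Optimize over a small random subset of pixels to keep tracking fast.
    idx = torch.randperm(obs.shape[0])[:num_pixels]
    for _ in range(steps):
        opt.zero_grad()
        pred = render(m, p_init + delta, pixel_idx=idx)  # rendered RGBD pixels
        loss = (pred - obs[idx]).abs().mean()            # photometric error
        loss.backward()
        opt.step()
    return (p_init + delta).detach()
```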
## 5 Navigation

Now we describe how the RNR-Map and the RNR-Map-based localization functions can be utilized in a navigation system. We consider the visual navigation task, especially image-goal navigation. The objective of image-goal navigation is to find the target location given an image taken from that location. We build a visual navigation framework which includes \(F_{\mathrm{map}}\) for mapping, \(F_{\mathrm{track}}\) for localization, and \(F_{\mathrm{loc}}\) for target searching. Figure 3 shows an overview of the proposed navigation system. The system consists of three modules: _mapping_, _localization_, and _navigation_. In the following sections, we describe how each module works during navigation.

Figure 3: **Navigation System Overview.**

Figure 2: **Localization using \(\mathbf{F}_{\mathrm{loc}}\).** A target observation can be localized by cross-correlation (\(\mathrm{Conv}\)) between \(m\) and \(m_{\mathrm{trg}}\). Before the cross-correlation, each RNR-Map is forwarded to the CNNs \(F_{k}\) and \(F_{q}\). After \(\mathrm{Conv}\), \(F_{E}\) takes the output of the \(\mathrm{Conv}\) and outputs a heatmap \(E\) which highlights the most plausible target area.

### 5.1 Mapping Module

The mapping module builds the RNR-Map using the pretrained encoder and \(F_{\mathrm{map}}\). At the start of the episode (\(t=0\)), the mapping module transforms the target image \(I_{\mathrm{trg}}\) into an RNR-Map \(m_{\mathrm{trg}}\). While maintaining \(m_{\mathrm{trg}}\), the module updates the current RNR-Map \(m_{t}\) using each image observation \(I_{t}\) with \(F_{\mathrm{map}}\). Also, the mapping module builds an occupancy map using depth information. This occupancy map is used for collision avoidance in the point navigation policy. The navigation module also uses this occupancy map to add exploratory behavior to the navigation system.

### 5.2 Localization Module

The localization framework described in Section 4 works as the localization module in the proposed navigation system. This localization module has two objectives during navigation. First, the localization module finds the most probable area which is similar to the target location. Second, considering a noisy odometry sensor, the localization module is needed to figure out the precise current pose of the agent. The image-based localization function \(F_{\mathrm{loc}}\) and the camera tracking function \(F_{\mathrm{track}}\) are actively used in this module. With high probability, the target location may not be in the current RNR-Map \(m_{t}\) in the navigation scenario. Hence, \(F_{\mathrm{loc}}\) for the navigation task is trained to find the region in \(m_{t}\) which is closest to the target location.

### 5.3 Navigation Module

The navigation module consists of three submodules: exploration, point navigation, and stopper.

**Exploration.** The objective of the exploration module is to select the most plausible region to explore. The module decides where to visit in order to search for the target location, based on the probability heatmap \(E\) from \(F_{\mathrm{loc}}\) in the localization module. We have adopted the concept from robot exploration [27, 34, 53, 56] which builds a generalized Voronoi graph on the occupancy map. We draw a Voronoi graph on the occupancy map and calculate visitation priority scores for each node of the created graph. Based on the scores, the exploration module selects the nodes to explore. Two types of scores are used for selecting exploration candidates: the latent score and the exploration score. The latent score is based on the heatmap \(E\) and represents how likely the node is to be near the target location. The exploration score of each node is simply calculated based on the values in the occupancy map. The occupancy map has three types of values: occupied, free, and unseen. The exploration score of a node is proportional to the number of unseen pixels in the neighborhood of the node. The visitation priority of a node is determined by the sum of the latent score and the exploration score.

**Point navigation policy and Stopper.** The point navigation policy is a simple occupancy-map-based path-following algorithm. When the exploration target node is selected, the point navigation module draws the shortest path to the target position. Following the path, the point navigation policy heuristically avoids obstacles using the occupancy map. The stopper module determines the arrival at the target location and calculates the relative pose from the target location. We employ a neural network \(F_{\mathrm{stop}}\) which decides whether the agent is near the target location. This neural network is trained to output a binary value (1 if the target location is reached and 0 otherwise) based on \(m_{\mathrm{trg}}\) and \(m_{t}\). For efficient target reaching, we adopted the recent last-mile navigation method [49] in the stopper module. Based on keypoint matching, the relative pose between the target location and the current location is calculated using Perspective-n-Point [28] and RANSAC [19]. After \(F_{\mathrm{stop}}\) detects the target location, the point navigation policy navigates to the target using the estimated relative pose. We provide detailed explanations of the graph generation and exploration planning in the supplementary material G.

### 5.4 Implementation Details

The modules that require training are the encoder and decoder, and the neural networks used in \(F_{\mathrm{loc}}\) and \(F_{\mathrm{stop}}\). We trained them using the same dataset. We have collected 200 random navigation trajectories from each scene in 72 Gibson [51] environments with the Habitat simulator [32]. The pair of encoder and decoder is trained first; then \(F_{\mathrm{loc}}\) and \(F_{\mathrm{stop}}\) are trained based on the pretrained encoder. Further implementation details (network architectures and training details) are provided in the supplementary material B.

## 6 Experiments

We have evaluated RNR-Map in both localization and navigation tasks.
However, as the main objective of the proposed RNR-Map is visual navigation, we focus on analyzing the experimental results of the navigation task in the main manuscript. For the localization tasks, we summarize the experimental results here and provide detailed information and analyses in the supplementary material C.

### 6.1 Localization

We have tested \(F_{\mathrm{track}}\) and \(F_{\mathrm{loc}}\) against other related baselines [24, 55]. In the **camera tracking task**, the RNR-Map \(F_{\mathrm{track}}\) shows high speed (5Hz) and accuracy (0.108m error), which are adequate for a real-time navigation system. Beyond localizing the current pose of the agent, the RNR-Map \(F_{\mathrm{loc}}\) is able to locate previously seen images **(image-based localization task)** in the recorded map with a high recall rate (99% within 50cm). We can leverage this RNR-Map for finding the place most similar to the query image even if the exact place is not in the current environment. Two scenarios can be considered: (1) (Object Change) There have been large changes in the environment, so that the object configuration of the environment is different from the information in the RNR-Map. (2) (Novel Environment) The user only has a query image from a different environment but wants to find the most similar place in the current environment. We have tested the RNR-Map in both scenarios and observed that \(F_{\mathrm{loc}}\) robustly localizes query images even if some of the object configuration changes. When 33.9% of the observed images have changed, \(F_{\mathrm{loc}}\) localizes the query image with less than 50cm error in 97.4% of cases, and with less than 20\({}^{\circ}\) error in 97.5% of cases. Also, when given a query image from a different scene, the localized location achieves 94.5% of the best possible visual similarity in the given scene. Examples of the image-based localization tasks are shown in Figure 4. We can see that \(F_{\mathrm{loc}}\) finds visually similar places based on the RNR-Map, even when the environment has changed or when the query comes from a novel environment. Additionally, we found that the suggested method \(F_{\mathrm{loc}}\) mislocalizes when there are multiple similar locations in the environment or when the query has poor visual information. We provide more detailed experiments with baselines (MapNet [24], NICE-SLAM [55]) and examples in the supplementary material C.

### 6.2 Image-Goal Navigation

#### 6.2.1 Baselines

We compare RNR-Map with learning-based agents using behavior cloning (**BC+RNN**) and reinforcement learning (**DDPPO** [50]). Also, to examine the advantages of the RNR-Map over an occupancy map, we include the occupancy-map-based coverage method [10] as a baseline. The objective of this method is to visit all possible areas in the given scene. We modified this method with the target distance prediction from [23] to make the agent reach the target when it is detected while exploring the environment (**ANS [10]+Pose Pred**). We also compare our method with recent state-of-the-art image-goal navigation methods. **ZSEL** [2] and **OVRL** [52] are reinforcement learning-based methods which learn the image-goal navigation task with specially designed rewards and pretrained visual representations. **NRNS** builds a topological map and selects the nodes to explore by predicting the distances between the target image and the node images with a distance prediction network.
**SLING** is a last-mile navigation method which predicts the relative pose of the target based on keypoint matching, after the target is detected. This method needs to be integrated with an exploratory algorithm such as NRNS or OVRL. Note that our method adopts this last-mile technique for efficient target reaching. The numbers for the DDPPO, OVRL, and (NRNS, OVRL)+SLING baselines are from [49] and their open-sourced code2, and those for NRNS and ZSEL are from the original papers [23] and [2], respectively.

Footnote 2: [https://github.com/Jbwasse2/SLING](https://github.com/Jbwasse2/SLING)

#### 6.2.2 Task Setup

We have tested each method on the public image-goal navigation datasets from NRNS [23] and Gibson [51] with the Habitat simulator [32]. The Gibson dataset consists of 72 houses for the training split and 14 houses for the validation split. The NRNS dataset consists of three difficulty levels (easy, medium, hard) with two path types (straight and curved). Each difficulty level has 1000 episodes for each path type, except for the hard-straight set (806). The objective of image-goal navigation is to find the target place given an image of the target location. The agent only has access to the current RGBD observations and an odometry reading. We consider a noisy setting [10] where the odometry sensor and the robot actuation include noises as in the real world. The RGBD image observation comes from a directional camera with \(90^{\circ}\) HFOV. A discrete action space is used in this paper with four types of actions: move forward \(0.25m\), turn right \(10^{\circ}\), turn left \(10^{\circ}\), and stop. The maximum time step of each episode is set to 500. An episode is considered a success when the agent takes a stop action within \(1m\) of the target location. Two evaluation metrics are used: success rate (SR) and success weighted by (normalized inverse) path length (SPL) [3], which represents the efficiency of a navigation path.

#### 6.2.3 Image-Goal Navigation Results

**RNR-Map helps efficient navigation.** Table 1 shows the average SR and SPL of each method. We can see that the proposed navigation framework with the RNR-Map shows competitive or higher performance compared to the baselines on image-goal navigation tasks. Many of the baselines (DDPPO, ZSEL, OVRL, OVRL+SLING) include reinforcement learning, which is sample-inefficient and computationally heavy, while having a relatively simple representation of the environment. In contrast, the RNR-Map shows higher performance while only requiring an offline trajectory dataset for training neural networks. Based on this result, we argue that having a good internal representation of the environment and finding how to extract exploratory signals from such a representation are important. An agent with an informative environmental representation can navigate well without the numerous inefficient interactions often required in reinforcement learning. Compared to baselines which have their own internal representation of the environment (NRNS, NRNS+SLING, ANS), our method shows much higher performance in curved scenarios. From this result, we can infer that the RNR-Map indeed provides useful information for searching for targets, more than coverage signals, and better than the existing methods. The ablation study shown in Table 2 also displays a similar trend. We ablated the main functions of the navigation framework, \(F_{\mathrm{loc}}\) and \(F_{\mathrm{track}}\), as well as the noises.

Figure 4: **Examples of image-based localization.** \(\blacklozenge\): the location of the query image on the RNR-Map; \(\varphi\): the location found by \(F_{\mathrm{loc}}\). More examples are provided in the supplementary material C.2 and C.3.
Without the latent score from \(F_{\mathrm{loc}}\), the success rate and SPL drop dramatically. We have also tested the proposed method on the MP3D [8] dataset and observed results similar to those on the Gibson dataset. We provide the results in the supplementary material (Section E), and an additional ablation study of the submodules of the navigation module is provided in Section D.

**\(\mathbf{F}_{\mathrm{track}}\) makes RNR-Map robust to noise.** Comparing the third row and the last row of Table 2, we can infer that the pose-adjusting function \(F_{\mathrm{track}}\) helps the agent find the image goal more effectively, even with noisy odometry. The proposed method shows higher success rates in noisy settings, while its SPL values are lower. We believe this stems from the randomness of the noise, which helps the local navigation policy escape from being stuck in a single location.

**RNR-Map is real-time capable.** We analyzed the runtime of each feature of RNR-Map and report them in Table 3. The runtimes are measured on a desktop PC with an Intel i7-9700KF CPU @ 3.60GHz and an NVIDIA GeForce RTX 2080 Ti GPU. We can see that each function of RNR-Map operates fast enough for real-time navigation, even when including NeRF-based rendering.

**Navigation example.** Figure 5 provides an example of an image-goal navigation episode. \(F_{\mathrm{loc}}\) highlights two types of places: a place that looks similar to the target image, and an explorable place such as the end of an aisle or a doorway. We can see that the highlighted area at the start of the episode has an observation similar to the target image. First, the agent navigates to a similar-looking place but decides not to stop at that location. While navigating to another place that shows a high latent score, the agent finds the target location, and the latent score changes to highlight the target. Additional qualitative results of navigation episodes are provided in the supplementary material (Section F) and the video.

## 7 Conclusion

In this paper, we have proposed RNR-Map for visual navigation, which captures visual information about the environment. The RNR-Map is helpful for visual navigation in two ways: (1) the latent codes in the RNR-Map can provide rich signals to find the target location given a query image, and (2) the rendering property of the RNR-Map helps the agent accurately estimate its pose based on photometric error, even in an unseen environment. The proposed method has outperformed other methods in image-based localization. Also, we have found that the image-based localization of RNR-Map is robust to environmental changes. In image-goal navigation tasks, the proposed method outperforms the current state-of-the-art image-goal navigation methods. Furthermore, the fast inference time of the RNR-Map shows its potential for real-world applications. However, the proposed method still has limitations. Once the observation images are embedded in the grid, it is difficult for RNR-Map to correct past odometry errors. We can consider applying loop closure using the proposed localization framework.
Inspired by existing graph-based SLAM methods, a pose graph with local RNR-Maps can be leveraged to optimize poses, leading to consensus in the pixel renderings from the global RNR-Map.

Table 1: **Image-goal Navigation Result**. SR: Success Rate. SPL: Success weighted by Path Length.

**Straight**

| Method | Easy SR | Easy SPL | Medium SR | Medium SPL | Hard SR | Hard SPL | Overall SR | Overall SPL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BC + RNN | 39.4 | 27.9 | 25.7 | 15.9 | 13.3 | 9.0 | 26.1 | 17.6 |
| DDPPO [50] | 43.2 | 38.5 | 36.4 | 34.8 | 7.4 | 7.2 | 29.0 | 26.8 |
| ANS + Target Pred [10] | 68.8 | 55.1 | 54.0 | 30.3 | 24.4 | 22.9 | 55.1 | 36.1 |
| NRNS [23] | 64.1 | 55.4 | 47.9 | 39.5 | 25.2 | 18.1 | 45.7 | 37.7 |
| ZSEL [2] | - | - | - | - | - | - | - | - |
| OVRL [52] | 53.6 | 34.7 | 48.6 | 33.3 | 32.5 | 21.9 | 44.9 | 30.0 |
| NRNS + SLING [49] | **85.3** | **74.4** | 66.8 | **49.3** | 41.1 | 28.8 | 64.4 | **50.8** |
| OVRL + SLING [49] | 71.2 | 54.1 | 60.3 | 44.4 | 43.0 | 29.1 | 58.2 | 42.5 |
| RNR-Map (ours) | 76.4 | 55.3 | **73.6** | 46.1 | **54.6** | **30.2** | **68.2** | 43.9 |

**Curved**

| Method | Easy SR | Easy SPL | Medium SR | Medium SPL | Hard SR | Hard SPL | Overall SR | Overall SPL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BC + RNN | 26.4 | 12.6 | 20.3 | 10.5 | 8.4 | 4.8 | 18.4 | 9.3 |
| DDPPO [50] | 22.2 | 16.5 | 20.7 | 18.5 | 4.2 | 3.7 | 15.7 | 12.9 |
| ANS + Target Pred [10] | 48.0 | 21.0 | 46.0 | 20.5 | 31.3 | 14.6 | 41.8 | 18.7 |
| NRNS [23] | 27.3 | 10.6 | 23.1 | 10.4 | 10.5 | 5.6 | 20.3 | 8.8 |
| ZSEL [2] | 41.0 | 28.2 | 27.3 | 18.6 | 9.3 | 6.0 | 25.9 | 17.6 |
| OVRL [52] | 53.6 | 31.8 | 47.6 | 30.2 | 35.6 | 22.0 | 45.6 | 28.0 |
| NRNS + SLING [49] | 58.6 | 16.1 | 47.6 | 16.8 | 24.9 | 10.1 | 43.7 | 14.3 |
| OVRL + SLING [49] | 68.4 | 47.0 | 57.7 | 39.8 | 40.2 | 25.5 | 55.4 | 37.4 |
| RNR-Map (ours) | **75.3** | **52.5** | **70.9** | **42.3** | **51.0** | **27.4** | **65.7** | **40.8** |

Table 2: **Ablation study**. The values are the average of the straight and curved scenarios.

Table 3: **Runtime of each RNR-Map function.**

| Mapping \(F_{\mathrm{reg}}\) | Tracking \(F_{\mathrm{track}}\) | Localization \(F_{\mathrm{loc}}\) | Rendering \(F_{\mathrm{dec}}\) |
| --- | --- | --- | --- |
| 91.9 Hz (10.9 ms) | 5.0 Hz (200 ms) | 56.8 Hz (17.6 ms) | 14.7 Hz (68.0 ms) |

Figure 5: **Example of image-goal navigation episode.** The heatmap value of \(E\) from \(F_{\mathrm{loc}}\) is highlighted on the map according to the colorbar on the left.
2305.16830
Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize
Predict-then-Optimize is a framework for using machine learning to perform decision-making under uncertainty. The central research question it asks is, "How can the structure of a decision-making task be used to tailor ML models for that specific task?" To this end, recent work has proposed learning task-specific loss functions that capture this underlying structure. However, current approaches make restrictive assumptions about the form of these losses and their impact on ML model behavior. These assumptions both lead to approaches with high computational cost, and when they are violated in practice, poor performance. In this paper, we propose solutions to these issues, avoiding the aforementioned assumptions and utilizing the ML model's features to increase the sample efficiency of learning loss functions. We empirically show that our method achieves state-of-the-art results in four domains from the literature, often requiring an order of magnitude fewer samples than comparable methods from past work. Moreover, our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
Sanket Shah, Andrew Perrault, Bryan Wilder, Milind Tambe
2023-05-26T11:17:45Z
http://arxiv.org/abs/2305.16830v2
# Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize

###### Abstract

Predict-then-Optimize is a framework for using machine learning to perform decision-making under uncertainty. The central research question it asks is, "How can the structure of a decision-making task be used to tailor ML models for that specific task?" To this end, recent work has proposed learning task-specific loss functions that capture this underlying structure. However, current approaches make restrictive assumptions about the form of these losses and their impact on ML model behavior. These assumptions lead to approaches with high computational cost and, when they are violated in practice, poor performance. In this paper, we propose solutions to these issues, avoiding the aforementioned assumptions and utilizing the ML model's features to increase the sample efficiency of learning loss functions. We empirically show that our method achieves state-of-the-art results in four domains from the literature, often requiring an order of magnitude fewer samples than comparable methods from past work. Moreover, our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.

## 1 Introduction

Predict-then-Optimize (PtO) [5; 6] is a framework for using machine learning (ML) to perform decision-making under uncertainty. As the name suggests, it proceeds in two steps--first, an ML model is used to make predictions about the uncertain quantities of interest, then second, these predictions are aggregated and used to parameterize an optimization problem whose solution provides the decision to be made. Many real-world applications require both prediction and optimization, and have been cast as PtO problems. For example, recommender systems need to predict user-item affinity to determine which titles to display [21], while portfolio optimization uses stock price predictions to construct high-performing portfolios [3]. In the context of AI for Social Good, PtO has been used to plan intervention strategies by predicting how different subgroups will respond to interventions [20].

The central research question of PtO is, "How can we use the structure of an optimization problem to learn predictive models that perform better _for that specific decision-making task?_" In this paper, we refer to the broad class of methods used to achieve this goal as _Decision-Focused Learning_ (DFL). Recently, multiple papers have proposed learning task-specific loss functions for DFL [4; 9; 15]. The intuition for these methods can be summarized in terms of the Anna Karenina principle--while perfect predictions lead to perfect decisions, different kinds of imperfect predictions have different impacts on downstream decision-making. Such loss functions, then, attempt to use learnable parameters to capture how bad different kinds of prediction errors are for the decision-making task of interest. For example, a Mean Squared Error (MSE) loss may be augmented with tunable parameters to assign different weights to different true labels. Then, a model trained on such a loss is less likely to make the kinds of errors that affect the quality of downstream decisions.

Learning task-specific loss functions poses two major challenges. First, learning the relationship between predictions and decisions is difficult. To make learning this relationship more tractable, past approaches learn different loss functions for each instance of the decision-making task, each of which locally approximates the behavior of the optimization task.
However, the inability to leverage training samples across different such instances can make learning loss functions sample-inefficient, especially for approaches that require a large number of samples to learn. This is especially problematic because creating the dataset for learning these loss functions is the most expensive part of the overall approach. In this paper, rather than learning separate loss functions for each decision-making instance, we learn a mapping from the feature space of the predictive model to the parameters of different local loss functions. This 'feature-based parameterization' gives us the best of both worlds--we retain the simplicity of learning local loss functions while still being able to generalize across different decision-making instances. In addition to increasing efficiency, this reparameterization also ensures that the learned loss functions are _Fisher Consistent_--a fundamental theoretical property guaranteeing that, in the limit of infinite data and model capacity, optimizing for the loss function leads to optimal decision-making. Past methods for learning loss functions do not satisfy even this basic theoretical property!

The second challenge with learning loss functions is that it presents a chicken-and-egg problem--to obtain the distribution of predictions over which the learned loss function must accurately approximate the true decision quality, a predictive model is required, yet to obtain such a model, a loss function is needed to train it. To address this, Shah et al. [15] use a simplification we call the "localness of prediction", i.e., they assume that predictions will be 'close' to the true labels, and generate candidate predictions by adding random noise to the true labels. However, this doesn't take into account the kinds of predictions that models actually generate and, as a result, can lead to the loss functions being optimized for unrealistic predictions. In contrast, Lawless and Zhou [9] and Chung et al. [4] use a predictive model trained using MSE to produce a single representative sample that is used to construct a simple loss function. However, this single sample is not sufficient to learn complex loss functions. We explicitly formulate the goal of sampling a _distribution_ of realistic model predictions, and introduce a technique that we call _model-based sampling_ to efficiently generate such samples. Because these interventions allow us to move away from localness-based simplifications, we call our loss functions 'Efficient Global Losses' or _EGLs_.

## 2 Preliminaries

### 2.1 Predict-then-Optimize

In the Predict-then-Optimize framework, a predictive model \(M_{\mathbf{\theta}}\) maps features \(x_{n}\) to predictions \(\hat{y}_{n}=M_{\mathbf{\theta}}(x_{n})\) of the uncertain quantities of interest. The resulting set of predictions \(\mathbf{\hat{y}}=[\hat{y}_{1},\ldots,\hat{y}_{N}]\) then parameterizes an optimization problem

\[\mathbf{z}^{*}(\mathbf{\hat{y}})=\operatorname*{arg\,max}_{\mathbf{z}\in\mathbf{\Omega}}f(\mathbf{z};\mathbf{\hat{y}}),\]

where \(f\) is the objective and \(\mathbf{\Omega}\subseteq\mathbb{R}^{\dim(\mathbf{z})}\) is the feasible region. The solution of this optimization task \(\mathbf{z}=\mathbf{z}^{*}(\mathbf{\hat{y}})\) provides the optimal decision for the set of predictions \(\mathbf{\hat{y}}\). We call a full set of inputs \(\mathbf{\hat{y}}\) or \(\mathbf{y}=[y_{1},\ldots,y_{N}]\) to the optimization problem an _instance_ of the decision-making task. However, the optimal decision \(\mathbf{z}^{*}(\mathbf{\hat{y}})\) for the predictions \(\mathbf{\hat{y}}\) may not be optimal for the true labels \(\mathbf{y}\).
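The following toy snippet makes this mismatch concrete for a one-resource allocation problem where \(\mathbf{z}^{*}\) simply picks the item with the highest predicted value; the numbers are illustrative and not from this paper's experiments.

```python
import numpy as np

y_true = np.array([0.50, 0.55])      # true utilities of items A and B
y_hat_1 = np.array([0.10, 0.60])     # large MSE, but the argmax is unchanged
y_hat_2 = np.array([0.56, 0.55])     # tiny MSE, but the argmax flips

def realized_value(y_hat, y_true):
    # Utility actually obtained by acting on the predictions.
    return y_true[np.argmax(y_hat)]

print(realized_value(y_hat_1, y_true))  # 0.55: optimal despite large errors
print(realized_value(y_hat_2, y_true))  # 0.50: suboptimal despite small errors
```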
To evaluate the _Decision Quality_ (DQ) of a set of predictions \(\mathbf{\hat{y}}\), we measure how well the decisions they induce \(\mathbf{z}^{*}(\mathbf{\hat{y}})\) perform on the set of _true labels_ \(\mathbf{y}\) with respect to the objective function \(f\):

\[DQ(\mathbf{\hat{y}},\mathbf{y})=f(\mathbf{z}^{*}(\mathbf{\hat{y}});\mathbf{y}). \tag{1}\]

The central question of Predict-then-Optimize, then, is how to learn predictive models \(M_{\mathbf{\theta}}\) that have high DQ. When models are trained in a task-agnostic manner, e.g., to minimize Mean Squared Error (MSE), there can be a mismatch between predictive accuracy and \(DQ\), and past work (see Section 3) has shown that the structure of the optimization problem \(\mathbf{z}^{*}\) can be used to learn predictive models with better \(DQ\). We refer to this broad class of methods for tailoring predictive models to decision-making tasks as Decision-Focused Learning (DFL) and describe one recent approach below.

### 2.2 Task-Specific Loss Functions

Multiple authors have suggested learning task-specific loss functions for DFL [4; 9; 15]. These approaches add learnable parameters to standard loss functions (e.g., MSE) and tune them such that the resulting loss functions approximate the 'regret' in DQ for 'typical' predictions. Concretely, for the distribution over predictions \(\mathbf{\hat{y}}=[M_{\mathbf{\theta}}(x_{1}),\ldots,M_{\mathbf{\theta}}(x_{N})]\) of the model \(M_{\mathbf{\theta}}\), the goal is to choose a loss function \(L_{\mathbf{\phi}}\) with parameters \(\mathbf{\phi}\) such that:

\[\mathbf{\phi}^{*}=\operatorname*{arg\,min}_{\mathbf{\phi}}\mathbb{E}_{\mathbf{x},\mathbf{y},\mathbf{\hat{y}}}\left[\left(L_{\mathbf{\phi}}(\mathbf{\hat{y}},\mathbf{y})-DQ_{\text{regret}}(\mathbf{\hat{y}},\mathbf{y})\right)^{2}\right],\quad\text{where}\quad DQ_{\text{regret}}(\mathbf{\hat{y}},\mathbf{y})\equiv DQ(\mathbf{y},\mathbf{y})-DQ(\mathbf{\hat{y}},\mathbf{y}), \tag{2}\]

where \(DQ\) is defined in Equation (1). Note here that the first term in \(DQ_{\text{regret}}\) is a constant w.r.t. \(\hat{y}\), so minimizing \(DQ_{\text{regret}}\) is equivalent to maximizing \(DQ\). Adding the \(DQ(\mathbf{y},\mathbf{y})\) term, however, makes \(DQ_{\text{regret}}\) behave more like a loss function--a minimization objective with a minimum value of 0 at \(\mathbf{\hat{y}}=\mathbf{y}\). As a result, parameterized versions of simple loss functions can learn the structure of \(DQ_{\text{regret}}\) (and thus \(DQ\)).

A meta-algorithm for learning predictive models \(M_{\mathbf{\theta}}\) using task-specific loss functions is as follows:

1. **Sampling \(\mathbf{\tilde{y}}\):** Generate \(K\) candidate predictions \(\mathbf{\tilde{y}}^{k}=[y_{1}\pm\epsilon_{1},\ldots,y_{N}\pm\epsilon_{N}]\), e.g., by adding Gaussian noise \(\epsilon_{n}\sim\mathcal{N}(0,\sigma)\) to each true label in the instance. This strategy is motivated by the 'localness of predictions' assumption, i.e., that predictions will be close to the true labels.
2. **Generating dataset:** Run an optimization solver on the sampled predictions \(\mathbf{\tilde{y}}\) to get the corresponding decision quality values \(DQ_{\text{regret}}(\mathbf{\tilde{y}},\mathbf{y})\). This results in a dataset of the form \([(\mathbf{\tilde{y}}^{1},DQ_{\text{regret}}^{1}),\ldots,(\mathbf{\tilde{y}}^{K},DQ_{\text{regret}}^{K})]\) for each instance \(\mathbf{y}\) in the training and validation set.
3. **Learning Loss Function(s):** Learn loss function(s) that minimize the MSE on the dataset from Step 2 (a code sketch of these loss families is given at the end of this section).
   Lawless and Zhou [9] and Chung et al. [4] re-weight the MSE loss _for each instance_:
   \[\text{L\&}Z_{w}^{\mathbf{y}}(\mathbf{\hat{y}})=w^{\mathbf{y}}\cdot||\mathbf{\hat{y}}-\mathbf{y}||_{2}^{2}. \tag{3}\]
   Shah et al. [15] propose two families of loss functions, which they call 'Locally Optimized Decision Losses' (LODLs). The first adds learnable weights to MSE _for each prediction_ that comprises the instance \(\mathbf{\hat{y}}\). The second is more general--an arbitrary quadratic function of the predictions that comprise \(\mathbf{\hat{y}}\), where the learned parameters are the coefficients of each polynomial term:
   \[\text{WMSE}_{\mathbf{w}}^{\mathbf{y}}(\mathbf{\hat{y}})=\sum_{n=1}^{N}w_{n}^{\mathbf{y}}\cdot(\hat{y}_{n}-y_{n})^{2}\quad\text{and}\quad\text{Quadratic}_{H}^{\mathbf{y}}(\mathbf{\hat{y}})=(\mathbf{\hat{y}}-\mathbf{y})^{T}H^{\mathbf{y}}(\mathbf{\hat{y}}-\mathbf{y}). \tag{4}\]
   The parameters for these losses, \(w>w_{\min}>0\) and \(H=L^{T}L+w_{\min}\cdot I\), are constrained to ensure that the learned loss is convex. They also propose 'Directed' variants of each loss in which the parameters to be learned are different based on whether \((\hat{y}-y)>0\) or not. These parameters are then learned for every instance \(\mathbf{y}\), e.g., using gradient descent.
4. **Learning predictive model \(M_{\mathbf{\theta}}\):** Train the predictive model \(M_{\mathbf{\theta}}\) on the loss functions learned in the previous step, e.g., a random forest [4], a neural network [15], or a linear model [9].

In this paper, we propose two modifications to the meta-algorithm above. We modify Step 1 in Section 6 and Step 3 in Section 5. Given that these contributions help overcome the challenges associated with the losses being "local", we call our new method _EGL_ (Efficient Global Losses).
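To make Step 3 concrete, the following is a minimal PyTorch sketch of the WeightedMSE and Quadratic loss families from Equation (4); the convexity constraints are enforced here via a softplus reparameterization, which is an illustrative choice rather than the exact mechanism used in past work.

```python
import torch

class WeightedMSE(torch.nn.Module):
    """WMSE(y_hat) = sum_n w_n * (y_hat_n - y_n)^2 with w_n > w_min."""
    def __init__(self, y, w_min=0.01):
        super().__init__()
        self.y, self.w_min = y, w_min
        self.w_raw = torch.nn.Parameter(torch.zeros_like(y))
    def forward(self, y_hat):
        w = torch.nn.functional.softplus(self.w_raw) + self.w_min
        return (w * (y_hat - self.y) ** 2).sum()

class QuadraticLoss(torch.nn.Module):
    """Quadratic(y_hat) = (y_hat - y)^T H (y_hat - y), H = L^T L + w_min * I."""
    def __init__(self, y, w_min=0.01):
        super().__init__()
        self.y, self.w_min = y, w_min
        self.L = torch.nn.Parameter(torch.eye(len(y)))
    def forward(self, y_hat):
        H = self.L.T @ self.L + self.w_min * torch.eye(len(self.y))
        err = y_hat - self.y
        return err @ H @ err
```

One such loss is instantiated per instance \(\mathbf{y}\) and fit by gradient descent so that its output matches the sampled \(DQ_{\text{regret}}\) values from Step 2.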
## 3 Related Work

In Section 2.2, we contextualize why the task-specific loss function approaches of Chung et al. [4], Lawless and Zhou [9], and Shah et al. [15] make sense; in this paper, we use that intuition to scale model-based sampling and to better evaluate the utility of different aspects of learned losses on multiple domains. In addition to learning loss functions, there are alternative approaches to DFL. The most common of these involves optimizing for the decision quality directly by modeling the Predict-then-Optimize task end-to-end and differentiating through the optimization problem [1; 2; 5]. However, discrete optimization problems do not have informative gradients, and so cannot be used as-is in an end-to-end pipeline. To address this issue, most past work either constructs task-dependent surrogate problems that _do_ have informative gradients for learning predictive models [7; 11; 17; 21; 22] or proposes loss functions for specific classes of optimization problems (e.g., with linear objectives [6; 12]). However, the difficulty of finding good task-specific relaxations for arbitrary optimization problems limits the adoption of these techniques in practice. On the other hand, learning a loss function is more widely applicable, as it applies to _any optimization problem_ and only requires access to a solver for the optimization problem; as a result, we focus on improving this approach in this paper.

## 4 Motivating Example

To make the steps involved in learning loss functions more concrete and provide an illustration of where past approaches fail, we describe a simple example below. We will return to this example in Section 5.1 to address the limitations that are highlighted by it.

Consider a PtO problem in which the goal is to (a) predict the utility of a resource for two individuals (say, \(A\) and \(B\)), and then (b) give the resource to the individual with the higher utility. Consider the utilities to be drawn from:

\[\boldsymbol{y}=(y_{A},y_{B})=\begin{cases}(0,0.55),&\text{with probability }0.5\\ (1,0.55),&\text{with probability }0.5\end{cases}\]

The optimal decision for this problem, then, is to give the resource to individual \(B\) because \(\mathbb{E}[y_{B}]=0.55>\mathbb{E}[y_{A}]=0.5\). However, past approaches fail even in this case. To learn a predictive model for this example, we follow the four steps described in Section 2.2:

1. We sample \(K=25\) points in the 'neighborhood' of each of the true labels \((0,0.55)\) and \((1,0.55)\). For simplicity, we add uniform-random noise \(\epsilon^{k}\sim U[-1,1]\) only to \(y_{A}\) to get \(\boldsymbol{\hat{y}}^{k}=(y_{A}\pm\epsilon^{k},y_{B})\).
2. We calculate \(DQ_{\text{regret}}\) for each sample. We plot \(DQ_{\text{regret}}\) vs. \(\hat{y}_{A}\) in Figure 1 (given that \(\hat{y}_{B}\) is fixed).
3. Based on this dataset, we fit a loss of the form given by Equation (3). Because there is only one weight being learned here, this can be seen as either a WeightedMSE LODL from Shah et al. [15] or a reweighted task-loss from Lawless and Zhou [9]. This leads to the loss seen in Figure 1.
4. Finally, we estimate a predictive model based on this loss. The optimal prediction given this loss is \(\boldsymbol{\hat{y}}^{*}=(\hat{y}^{*}_{A},\hat{y}^{*}_{B})\approx(0.602,0.55)\) (see Appendix B for details). Importantly, under this predictive model, the optimal decision would be to give the resource to individual \(A\) because they have the higher predicted utility, i.e., \(\hat{y}^{*}_{A}>\hat{y}^{*}_{B}\). _But this is suboptimal!_

Figure 1: **A plot of \(DQ_{\text{regret}}\) vs. \(\hat{y}_{A}\) (for fixed \(\hat{y}_{B}\)).** The samples from Step 1 correspond to the blue dots for the \((0,0.55)\) instance and the orange dots for the \((1,0.55)\) instance. We also plot the learned weighted MSE loss for each instance using solid lines in their corresponding colors.

## 5 EGLs (Part One): Feature-based Parameterization

The challenge of learning the \(DQ_{\text{regret}}(\boldsymbol{\hat{y}},\boldsymbol{y})\) function is that it is as hard as learning a closed-form solution to an arbitrary optimization problem in the worst case. To get around this, past approaches typically learn multiple local approximations \(DQ^{\boldsymbol{y}}_{\text{regret}}(\boldsymbol{\hat{y}})\), e.g., one for every decision-making instance \(\boldsymbol{y}\) in the training and validation set. This simplification trades off the complexity of learning a single complex \(DQ_{\text{regret}}\) for the cost of learning many simpler \(DQ^{\boldsymbol{y}}_{\text{regret}}\) functions. Specifically, learning each local \(DQ^{\boldsymbol{y}}_{\text{regret}}\) requires its own set of samples, which can number as many as \(\Theta(\dim(\boldsymbol{y})^{2})\) (for the 'DirectedQuadratic' loss function from Shah et al. [15]). This is especially problematic because calculating \(DQ_{\text{regret}}\) for each of the sampled predictions \(\boldsymbol{\tilde{y}}\) is the most expensive step in learning task-focused loss functions. To make this approach more scalable, we propose learning a mapping from the features of a given prediction \(x\) to the corresponding loss function parameter(s), which we call _feature-based parameterization (FBP)_.
This allows us to get the best of both worlds--we retain the simplicity of learning local loss functions while still being able to generalize across different decision-making instances. In fact, our approach even allows for generalization _within_ problem instances, as we learn mappings from individual features to their corresponding parameters. Concretely, we learn the mapping \(P_{\boldsymbol{\psi}}(x)\) for the LODL loss families in the following way:

* **WeightedMSE:** We learn a mapping \(P_{\boldsymbol{\psi}}:x\to w\) from the features of a prediction to the 'weight' that prediction is associated with in the loss function.
* **Quadratic:** For every pair of predictions \(\hat{y}_{i}\) and \(\hat{y}_{j}\), we learn a mapping \(P_{\boldsymbol{\psi}}:(x_{i},x_{j})\to L_{ij}\), where \(L=[[L_{ij}]]\) is the matrix that parameterizes the loss function (see Equation (4)).
* **Directed Variants:** Instead of learning a mapping from the features \(x\) to a single parameter, we instead learn a mapping from \(x\to[w^{+},w^{-}]\) for 'Directed WeightedMSE' and \((x_{i},x_{j})\to[L^{++},L^{+-},L^{-+},L^{--}]\) for 'Directed Quadratic'.

We then optimize the parameters \(\boldsymbol{\psi}^{*}\) of the learned losses along the lines of past work:

\[\boldsymbol{\psi}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\psi}}\mathbb{E}_{\boldsymbol{x},\boldsymbol{y},\boldsymbol{\hat{y}}}\left[\left(L_{P_{\boldsymbol{\psi}}(\boldsymbol{x})}(\boldsymbol{\hat{y}},\boldsymbol{y})-DQ_{\text{regret}}(\boldsymbol{\hat{y}},\boldsymbol{y})\right)^{2}\right],\]

where \(L_{\boldsymbol{\phi}}=L_{P_{\boldsymbol{\psi}}(\boldsymbol{x})}\) is the learned loss function. For our experiments, the model \(P_{\boldsymbol{\psi}}\) is a 4-layer feedforward neural network with a hidden dimension of 500, trained using gradient descent.

We would like to acknowledge here that FBP reduces the expressivity of our learned losses--it implicitly asserts that two predictions with similar features must have a similar impact on the decision quality regret \(DQ_{\text{regret}}\). However, we find in our experiments that this trade-off is typically beneficial. In fact, we show in the section below that this reduction in expressivity is even _desirable_ in some ways.
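A minimal PyTorch sketch of FBP for the WeightedMSE family is shown below; the network sizes follow the description above, but the exact architecture and the softplus positivity trick are assumptions for illustration.

```python
import torch

class FBPWeightedMSE(torch.nn.Module):
    """Weights are produced by a shared network P_psi from features, rather
    than being learned independently per instance."""
    def __init__(self, feat_dim, hidden=500, w_min=0.01):
        super().__init__()
        self.w_min = w_min
        self.P_psi = torch.nn.Sequential(        # P_psi: x -> w
            torch.nn.Linear(feat_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1), torch.nn.Softplus(),
        )
    def forward(self, x, y_hat, y):
        """x: (N, feat_dim) features; y_hat, y: (N,) predictions and labels."""
        w = self.P_psi(x).squeeze(-1) + self.w_min   # weights tied to features
        return (w * (y_hat - y) ** 2).sum()
```

Because the same \(P_{\boldsymbol{\psi}}\) is shared across all instances, the weight attached to a feature cannot vary with the other labels in an instance--exactly the property used in the proof of Theorem 5.4 below.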
We'd like to acknowledge here that FBP reduces the expressivity of our learned losses--it implicitly asserts that two predictions with similar features must have a similar impact on the decision quality regret \(DQ_{\text{regret}}\). However, we find in our experiments that this trade-off is typically beneficial. In fact, we show in the section below that this reduction in expressivity is even _desirable_ in some ways.

### Fisher Consistency

One desirable property of a Predict-then-Optimize loss function is that, in the limit of infinite data and model capacity, the optimal prediction induced by the loss also minimizes the decision quality regret. If this is true, we say that the loss \(\ell\) is "Fisher Consistent" w.r.t. the decision quality regret \(DQ_{\text{regret}}\) [6].

**Definition 5.1** (Fisher Consistency).: A loss \(\ell(\boldsymbol{\hat{y}},\boldsymbol{y})\) is said to be Fisher Consistent with respect to the decision quality regret \(DQ_{\text{regret}}\) if the set of minimizers of the loss function \(\boldsymbol{\hat{y}}^{*}(\boldsymbol{x})=\operatorname*{arg\,min}_{\boldsymbol {\hat{y}}}\mathbb{E}_{\boldsymbol{y}|\boldsymbol{x}}[\ell(\boldsymbol{\hat{y}},\boldsymbol{y})]\) also minimizes \(DQ_{\text{regret}}\) for all possible distributions \(P(\boldsymbol{x},\boldsymbol{y})\).

However, the methods proposed in past work do not satisfy this property even for the simplest of cases--in which the objective \(f\) of the optimization problem \(\mathbf{z}^{*}\) is linear. Concretely:

**Proposition 5.2**.: _Weighting-the-MSE losses are not Fisher Consistent for Predict-then-Optimize problems in which the optimization function \(\mathbf{z}^{*}\) has a linear objective._

Proof.: See the counterexample in Section 4. More generally, because 'WeightedMSE' can be seen as a special case of the other methods in Shah et al. [15], none of the loss functions in past work are Fisher Consistent! To gain an intuition for why this happens, let us analyze the predictions that are induced by weighting-the-MSE type losses:

**Lemma 5.3**.: _The optimal prediction \(\hat{y}^{*}(x)\) for some feature \(x\), given a weighted MSE loss function with weights \(w^{\mathbf{y}}\) associated with the label \(y\in\mathbf{y}\), is \(\hat{y}^{*}(x)=\frac{\mathbb{E}_{\mathbf{y}|x}[w^{\mathbf{y}}\cdot y]}{\mathbb{E}_{ \mathbf{y}|x}[w^{\mathbf{y}}]}\), given infinite model capacity._

The proof is presented in Appendix A. In their paper, Elmachtoub and Grigas [6] show that the optimal prediction that minimizes \(DQ_{\text{regret}}\) is \(\hat{y}^{*}(x)=\mathbb{E}_{\mathbf{y}|x}[y]\), i.e., the optimal prediction should depend only on the value of the labels corresponding to that feature. However, \(\hat{y}^{*}(x)=\frac{\mathbb{E}_{\mathbf{y}|x}[w^{\mathbf{y}}\cdot y]}{\mathbb{E}_{ \mathbf{y}|x}[w^{\mathbf{y}}]}\) in weighted MSE losses, and so the optimal prediction depends not only on the labels but also on the _weight_ associated with them. While this is not a problem by itself, the weight learned for a label \(y\in\mathbf{y}\) is dependent not only on the label itself but also on the _other labels \(\mathbf{y}_{-y}\) in that instance_. In the context of the Section 4 example, the weight associated with individual \(A\) is dependent on the utility of individual \(B\) (via \(DQ_{\text{regret}}\)). As a result, it's possible to create a distribution \(P(\mathbf{x},\mathbf{y})\) for which such losses will not be Fisher Consistent. However, this is not true for WeightedMSE with FBP!

**Theorem 5.4**.: _WeightedMSE with FBP is Fisher Consistent for Predict-then-Optimize problems in which the optimization function \(\mathbf{z}^{*}\) has a linear objective._

Proof.: In WeightedMSE with FBP, the weights associated with some feature \(x\) are not independently learned for each instance \(\mathbf{y}\) but are instead a _function of the features \(x\)_. As a result, the weight \(w^{\mathbf{y}}\) associated with that feature is the same across all instances, i.e., \(w^{\mathbf{y}}=w(x)\), \(\forall\mathbf{y}\). Plugging that into the equation from Lemma 5.3: \[\hat{y}^{*}(x)=\frac{\mathbb{E}_{\mathbf{y}|x}[w^{\mathbf{y}}\cdot y]}{\mathbb{E}_{ \mathbf{y}|x}[w^{\mathbf{y}}]}=\frac{w(x)\cdot\mathbb{E}_{\mathbf{y}|x}[y]}{w(x)}=\mathbb{ E}_{\mathbf{y}|x}[y]\] which is a minimizer of \(DQ_{\text{regret}}\) [6]. Thus, WeightedMSE with FBP is always Fisher Consistent.
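The mechanics of Lemma 5.3 and Theorem 5.4 can be checked numerically. The sketch below is our own illustration (the specific weight values are hypothetical): it computes the closed-form minimizer of a weighted MSE for a single feature with two equally likely labels, and shows that instance-specific weights bias the prediction away from \(\mathbb{E}_{\mathbf{y}|x}[y]\), while a shared, feature-based weight cancels out:

```python
# Numerical check of Lemma 5.3 / Theorem 5.4 (illustrative, numpy only).
import numpy as np

y = np.array([0.0, 1.0])   # two equally likely labels for the same feature x
p = np.array([0.5, 0.5])

def optimal_constant_prediction(weights):
    """argmin_yhat sum_k p_k * w_k * (yhat - y_k)^2, in closed form:
    the weighted mean E[w * y] / E[w] from Lemma 5.3."""
    return np.sum(p * weights * y) / np.sum(p * weights)

# Instance-specific weights (as in per-instance LODLs): the minimizer is a
# *weighted* mean, which need not equal E[y|x] = 0.5.
w_per_instance = np.array([0.2, 0.8])              # hypothetical learned weights
print(optimal_constant_prediction(w_per_instance))  # 0.8, biased away from 0.5

# FBP weights: one weight per feature, shared across instances, so it cancels.
w_fbp = np.array([0.7, 0.7])                        # w(x) is the same for both
print(optimal_constant_prediction(w_fbp))           # 0.5 == E[y|x]
```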
## 6 EGLs (Part Two): Model-based Sampling

Loss functions serve to give feedback to the model. However, to make them easier to learn, their expressivity is often limited. As a result, they cannot estimate \(DQ_{\text{regret}}\) accurately for _all_ possible predictions but must instead concentrate on a subset of "realistic predictions" for which the predictive model \(M_{\boldsymbol{\theta}}\) will require feedback during training. However, there is a chicken-and-egg problem in learning loss functions on realistic predictions--a model is needed to generate such predictions, but creating such a model would in turn require its own loss function to train on.

Past work has made the assumption that the predictions \(\boldsymbol{\hat{y}}\) will be close to the actual labels \(\boldsymbol{y}\) in order to efficiently generate potential predictions \(\boldsymbol{\tilde{y}}\). However, if this assumption does not hold, Gaussian sampling may not yield good results, as seen in Section 6.1. Instead, in this paper, we propose an alternative: _model-based sampling (MBS)_. To generate a _distribution_ of potential predictions, we train a predictive model \(M_{\boldsymbol{\theta}}\) on a standard loss function (e.g., MSE). Then, at equally spaced intervals during the training process, we use the intermediate model to generate predictions for each problem instance in the dataset. These form the set of _potential predictions_ \(\boldsymbol{\tilde{y}}\) based on which we create the dataset and learn loss functions. In the context of the meta-algorithm from Section 2.2, this changes only Step 1 of the approach. The hyperparameters associated with this approach are:

* **Number of Models:** Instead of sampling predictions from just one model, we can instead train multiple models to increase the diversity of the generated predictions. In our experiments, we choose from \(\{1,10,100,500\}\) predictive models.
* **LR** and **Number of Training Steps:** The learning rates are chosen from \(\{10^{-6},10^{-5},\ldots,1\}\) with a possible cyclic schedule [16]. We use a maximum of \(50000\) updates across all the models.

We choose from among these different hyperparameter values using iterations of random search interleaved with manual extrapolations of promising combinations. We find that MBS performs best with a high learning rate and a large number of models. Both of these choices increase the diversity of the generated samples and help create a richer dataset for learning loss functions.
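As a concrete illustration, a minimal MBS loop might look as follows; the function name, snapshot schedule, and the linear model class are our own assumptions for a domain whose predictive model is linear, not the authors' code:

```python
# A minimal sketch of model-based sampling (MBS) in PyTorch.
import torch
import torch.nn as nn

def model_based_samples(x, y, num_models=10, steps=500, snapshot_every=100, lr=0.1):
    """Train `num_models` predictive models on plain MSE and record their
    intermediate predictions; the union of the snapshots forms the set of
    'potential predictions' y_tilde on which the loss function is learned."""
    samples = []
    for _ in range(num_models):
        model = nn.Linear(x.shape[-1], 1)          # the domain's model class
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for step in range(1, steps + 1):
            loss = ((model(x).squeeze(-1) - y) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
            if step % snapshot_every == 0:         # equally spaced snapshots
                with torch.no_grad():
                    samples.append(model(x).squeeze(-1).clone())
    return torch.stack(samples)   # (num_models * num_snapshots, N) predictions
```

By construction, every sample in this set is a prediction that some model of the domain's function class can actually produce, which is exactly the property the localness assumption fails to guarantee.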
### Localness of Predictions

To illustrate the utility of model-based sampling, we analyze the Cubic Top-K domain proposed by Shah et al. [15]. The goal in this domain is to fit a linear model to approximate a more complex cubic relationship between \(x\) and \(y\) (Figure 2, Appendix C). This could be motivated by explainability [14; 8], data efficiency, or simplicity. The localness assumption breaks here because it is not possible for linear models (or low-capacity models more generally) to closely approximate the true labels of a more complex data-generating process. The objective of the learned loss function, then, is to provide information about _what kind of suboptimal predictions are better than others_. Learned losses accomplish this by first generating plausible predictions and then learning how different sorts of errors change the decision quality. In this domain, the decision quality is solely determined by the point with the highest predicted utility. As can be seen in Figure 2, the highest values given \(x\sim U[-1,1]\) are either at \(x=-0.5\) or \(x=1\). In fact, because the function is flatter around \(x=-0.5\), there are more likely to be large values there. When Gaussian sampling [15] is used to generate candidate predictions, the highest sampled values are also more likely to be at \(x=-0.5\) because the added noise has a mean of zero. However, _a linear model cannot have a maximum value at \(x=-0.5\)_; it can only have maxima at the extremes, i.e., either \(x=-1\) or \(x=1\). As a result, the loss functions learned based on the samples from this method focus on the wrong subset of labels and lead to poor downstream performance.

On the other hand, when using model-based sampling, the generated candidate predictions are the outputs of a linear model. Consequently, the samples generated by this method have maxima only at \(x\in\{-1,1\}\), allowing the loss functions to take this into account. We visualize this phenomenon in Figure 3 (Appendix C). In their paper, Shah et al. [15] propose a set of 'directed models' to make LODLs perform well in this domain. However, these models only learn useful loss functions because the value of the label at \(x=1\) is _slightly higher_ than the value at \(x=-0.5\). To show this, we create a variant of this domain called "(Hard) Cubic Top-K" in which \(y_{x=-0.5}>y_{x=1}\). In Section 7.1 we show that, in this new domain, even the 'directed' LODLs fail catastrophically while _all_ the loss functions learned with model-based sampling perform well.

## 7 Experiments

In this section, we validate EGLs empirically on four domains: three from the literature and the '(Hard) Cubic Top-K' variant introduced above. We provide brief descriptions of the domains below but refer the reader to the corresponding papers for more details.

**Cubic Top-K [15]** Learn a model whose top-k predictions have high corresponding true labels.

* _Predict:_ Predict resource \(n\)'s utility \(\hat{y}_{n}\) using a linear model with feature \(x_{n}\sim U[-1,1]\). The true utility is \(y_{n}=10x_{n}^{3}-6.5x_{n}\) for the standard version of the domain and \(y_{n}=10x_{n}^{3}-\mathbf{7.5}x_{n}\) for the 'hard' version. The predictive model is linear, i.e., \(M_{\boldsymbol{\theta}}(x)=mx+c\).
* _Optimize:_ Out of \(N=50\) resources, choose the top \(K=1\) resources with the highest utility.

Figure 2: **Cubic Top-K Domain. The underlying mapping from \(x\to y\) is given by the green dashed line. The set \(\mathbf{y}\) consists of \(N=50\) points where \(x_{n}\sim U[-1,1]\), and the goal is to predict the point with the largest \(y\). The linear model that minimizes the MSE loss is given in blue.**

**Web Advertising [21]** The objective of this domain is to predict the Click-Through-Rates (CTRs) of different (user, website) pairs such that good websites to advertise on are chosen.

* _Predict:_ Predict the CTRs \(\boldsymbol{\hat{y}}_{m}\) for \(N=10\) fixed users on \(M=5\) websites using the website's features \(x_{m}\). The features for each website are obtained by multiplying the true CTRs \(\boldsymbol{y}_{m}\) from the Yahoo! Webscope Dataset [24] with a random \(N\times N\) matrix \(A\), resulting in \(\mathbf{x}_{m}=A\mathbf{y}_{m}\). The predictive model \(M_{\boldsymbol{\theta}}\) is a 2-layer feedforward network with a hidden dimension of 500.
* _Optimize:_ Choose which \(K=2\) websites to advertise on such that the expected number of users who click on an advertisement at least once (according to the CTR matrix) is maximized. The decision is \(\mathbf{z}^{*}(\boldsymbol{\hat{y}})=\operatorname*{arg\,max}_{\mathbf{z}}\sum_{j=1}^{N}(1- \prod_{i=1}^{M}(1-z_{i}\cdot\hat{y}_{ij}))\), where \(z_{i}\) can be either 0 or 1. This is a submodular maximization problem; a brute-force sketch for this small instance follows below.
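For intuition, the decision problem above can be solved exactly by enumeration when \(M\) is small. The sketch below is our own illustration (the function name and data are hypothetical), not the paper's solver:

```python
# Illustrative brute-force solver for the web advertising decision z*(y_hat)
# with K = 2 websites; feasible only because M is small.
from itertools import combinations
import numpy as np

def choose_websites(ctr, k=2):
    """ctr: (M, N) matrix of predicted CTRs (websites x users). Returns the
    k websites maximizing the expected number of users who click at least
    once: sum_j (1 - prod_{i in S} (1 - ctr[i, j]))."""
    M = ctr.shape[0]
    def objective(S):
        no_click = np.prod(1.0 - ctr[list(S)], axis=0)  # P(user j never clicks)
        return np.sum(1.0 - no_click)
    return max(combinations(range(M), k), key=objective)

# Example: 5 websites, 10 users, random CTRs.
rng = np.random.default_rng(0)
print(choose_websites(rng.uniform(0, 0.3, size=(5, 10))))
```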
**Portfolio Optimization [5, 18]** Based on the Markowitz model [10], the aim is to predict future stock prices in order to create a portfolio that has high returns but low risk.

* _Predict:_ Predict the future stock price \(y_{n}\) for each stock \(n\) using its historical data \(x_{n}\). The historical data includes information on 50 stocks obtained from the QuandlWIKI dataset [13]. The predictive model \(M_{\boldsymbol{\theta}}\) is a 2-layer feedforward network with a hidden dimension of 500.
* _Optimize:_ Choose a distribution \(\mathbf{z}\) over stocks to maximize \(\mathbf{z}^{T}\boldsymbol{\hat{y}}-\lambda\cdot\mathbf{z}^{T}Q\mathbf{z}\) based on a known correlation matrix \(Q\) of stock prices. Here, \(\lambda=0.001\) represents the constant for risk aversion.

For each set of experiments, we run 10 experiments with different train-test splits and randomized initializations of the predictive model and loss function parameters. Details of the computational resources and hyperparameter optimization used are given in Appendix E. For all of these domains, the metric of interest is the decision quality achieved by the predictive model \(M_{\boldsymbol{\theta}}\) on the hold-out test set when trained with the loss function in question. However, given that the scales of the decision quality for each domain vary widely, we linearly re-scale the value such that 0 corresponds to the DQ of making predictions uniformly at random (\(\hat{y}=\epsilon\sim U[0,1]\)) and 1 corresponds to making perfect predictions (\(\hat{y}=y\)). Concretely: \[\text{Normalized DQ}(\boldsymbol{\hat{y}},\boldsymbol{y})=\frac{DQ(\boldsymbol{\hat{y}},\boldsymbol{y})-DQ( \boldsymbol{\epsilon},\boldsymbol{y})}{DQ(\boldsymbol{y},\boldsymbol{y})-DQ(\boldsymbol{\epsilon},\boldsymbol{y})}\]
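In code, this rescaling is a one-liner (our own transcription of the formula above):

```python
# Normalized DQ: 0 -> DQ of uniformly random predictions, 1 -> DQ of perfect
# predictions. All three inputs are decision qualities on the same instances.
def normalized_dq(dq_pred, dq_random, dq_perfect):
    return (dq_pred - dq_random) / (dq_perfect - dq_random)
```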
### Overall Results

We compare our approach against the following baselines from the literature in Table 1:

* **MSE:** A standard regression loss.
* **Expert-crafted Surrogates:** The end-to-end approaches described in Section 3 that require handcrafting differentiable surrogate optimization problems for each domain separately.
* **L&Z:** Lawless and Zhou [9]'s approach for learning losses (equivalent to Chung et al. [4]).
* **LODL:** Shah et al. [15]'s approach for learning loss functions. Trained using 32 and 2048 samples.

Table 1: Comparison of methods (2-Stage MSE, expert-crafted surrogates such as SPO+ [6], Entropy-Regularized Top-K [23], Multilinear Relaxation [22], and Differentiable QP [5, 18], L&Z, LODL, and EGL) across the new (Hard) Cubic Top-K domain and the Cubic Top-K, Web Advertising, and Portfolio Optimization domains from the literature. Entries are Mean Normalized Test DQ \(\pm\) SEM.

**(Hard) Cubic Top-K** We empirically verify our analysis from Section 6.1 by testing different baselines on our proposed 'hard' top-k domain. In Table 1, we see that all our baselines perform extremely poorly in this domain. Even the expert-crafted surrogate only achieves a DQ of \(0.24\) while EGLs achieve the best possible DQ of \(0.69\); this corresponds to a gain of nearly 200% for EGLs.

**Domains from the Literature** We find that our method reaches state-of-the-art performance in all the domains from the literature. In fact, we see that EGLs achieve similar performance to LODLs with an order of magnitude fewer samples in two out of three domains. In Section 7.2 below, we see that this corresponds to _an order of magnitude speed-up over learning LODLs of similar quality!_

### Computational Complexity Experiments

We saw in Section 7.1 that EGLs perform as well as LODLs with an order of magnitude fewer samples. In Table 2, we show how this increased sample efficiency translates to differences in runtime. We see that, by far, most of the time in learning LODLs is spent in 'Step 2' of our meta-algorithm. As a result, despite the fact that EGLs take longer to perform Steps 1 and 3, this is made up for by the increase in sample efficiency, resulting in an order-of-magnitude speedup over LODLs.

### Ablation Study

In this section, we compare EGLs to their strongest competitor from the literature, i.e., LODLs [15]. Specifically, we look at the low-sample regime--when 32 samples per instance are used to train both losses--and present our results in Table 3. We see that EGLs improve the decision quality for almost every choice of loss function family and domain. We further analyze Table 3 below.

**Feature-based Parameterization (FBP):** Given that this is the low-sample regime, 'LODL + FBP' almost always does better than just LODL. These gains are especially apparent in cases where adding more samples would improve LODL performance--the 'Directed' variants in the Cubic Top-K domain, and the 'Quadratic' methods in the Web Advertising domain.

**Model-based Sampling (MBS):** This contribution is most useful in the Cubic Top-K domain, where the localness assumption is broken. Interestingly, however, MBS also improves performance in the other two domains where the localness assumption does not seem to be broken (Table 4 in Appendix E.1). We hypothesize that MBS helps here in two different ways:

1. **Increasing effective sample efficiency:** We see that, in the cases where FBP helps most, the gains from MBS stack with FBP. This suggests that MBS helps improve sample efficiency. Our hypothesis is that model-based sampling allows us to focus on predictions that would lead to a 'fork in the training trajectory', leading to improved performance with fewer samples.
2. **Helping WeightedMSE models:** MBS also helps improve the _worst-performing_ WeightedMSE models in these domains which, when combined with FBP, outperform even LODLs with 2048 samples. This suggests that MBS does more than just increase sample efficiency. We hypothesize that MBS also reduces the search space by limiting the set of samples \(\boldsymbol{\tilde{y}}\) to 'realistic predictions', allowing even WeightedMSE models that have fewer parameters to perform well in practice.

**Portfolio Optimization:** The results for this domain don't follow the trends noted above because there is distribution shift between the validation and test sets in this domain (as the train/test/validation split is temporal instead of i.i.d.). In Table 5 (Appendix E.2), we see that EGLs significantly outperform LODLs and follow the trends noted above if we measure their performance on the validation set, which is closer in time to training (and hence has less distribution shift).
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Domain & Method & Directed Quadratic & Directed WeightedMSE & Quadratic & WeightedMSE \\
\hline
\multirow{5}{*}{Cubic Top-K} & LODL & -0.38 \(\pm\) 0.29 & -0.86 \(\pm\) 0.10 & -0.76 \(\pm\) 0.19 & -0.95 \(\pm\) 0.01 \\
 & LODL (2048 samples) & -0.94 \(\pm\) 0.01 & 0.96 \(\pm\) 0.00 & -0.95 \(\pm\) 0.01 & -0.96 \(\pm\) 0.00 \\
 & EGL (MBS) & **0.96 \(\pm\) 0.00** & **0.96 \(\pm\) 0.00** & 0.77 \(\pm\) 0.13 & **0.77 \(\pm\) 0.13** \\
 & EGL (FBP) & 0.58 \(\pm\) 0.26 & **0.96 \(\pm\) 0.00** & -0.28 \(\pm\) 0.21 & -0.77 \(\pm\) 0.11 \\
 & EGL (Both) & **0.96 \(\pm\) 0.00** & 0.77 \(\pm\) 0.13 & **0.96 \(\pm\) 0.00** & **0.77 \(\pm\) 0.13** \\
\hline
\multirow{5}{*}{Web Advertising} & LODL & 0.75 \(\pm\) 0.05 & 0.72 \(\pm\) 0.03 & 0.84 \(\pm\) 0.04 & 0.71 \(\pm\) 0.03 \\
 & LODL (2048 samples) & 0.93 \(\pm\) 0.01 & 0.84 \(\pm\) 0.02 & 0.93 \(\pm\) 0.01 & 0.78 \(\pm\) 0.03 \\
 & EGL (MBS) & 0.86 \(\pm\) 0.03 & **0.83 \(\pm\) 0.03** & 0.78 \(\pm\) 0.06 & 0.77 \(\pm\) 0.04 \\
 & EGL (FBP) & 0.93 \(\pm\) 0.02 & 0.80 \(\pm\) 0.03 & **0.92 \(\pm\) 0.01** & 0.75 \(\pm\) 0.04 \\
 & EGL (Both) & **0.95 \(\pm\) 0.01** & 0.78 \(\pm\) 0.06 & **0.92 \(\pm\) 0.02** & **0.81 \(\pm\) 0.04** \\
\hline
\multirow{5}{*}{Portfolio Optimization} & LODL & **0.146 \(\pm\) 0.003** & 0.136 \(\pm\) 0.003 & 0.145 \(\pm\) 0.003 & 0.122 \(\pm\) 0.003 \\
 & LODL (2048 samples) & 0.154 \(\pm\) 0.005 & 0.141 \(\pm\) 0.004 & 0.147 \(\pm\) 0.004 & 0.113 \(\pm\) 0.014 \\
 & EGL (MBS) & 0.135 \(\pm\) 0.011 & 0.138 \(\pm\) 0.010 & 0.146 \(\pm\) 0.015 & 0.108 \(\pm\) 0.009 \\
 & EGL (FBP) & 0.139 \(\pm\) 0.005 & 0.141 \(\pm\) 0.008 & **0.147 \(\pm\) 0.008** & 0.136 \(\pm\) 0.004 \\
 & EGL (Both) & 0.134 \(\pm\) 0.013 & **0.127 \(\pm\) 0.011** & 0.145 \(\pm\) 0.011 & **0.153 \(\pm\) 0.004** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: **Comparison to LODLs. MBS \(\rightarrow\) Model-based Sampling, FBP \(\rightarrow\) Feature-based Parameterization, and EGL = LODL + MBS + FBP. The entries represent the Mean Normalized Test DQ \(\pm\) SEM. EGLs improve the DQ for almost every choice of loss function family and domain.**
2308.01368
Empirical Translation Process Research: Past and Possible Future Perspectives
Over the past four decades, efforts have been made to develop and evaluate models for Empirical Translation Process Research (TPR), yet a comprehensive framework remains elusive. This article traces the evolution of empirical TPR within the CRITT TPR-DB tradition and proposes the Free Energy Principle (FEP) and Active Inference (AIF) as a framework for modeling deeply embedded translation processes. It introduces novel approaches for quantifying fundamental concepts of Relevance Theory (relevance, s-mode, i-mode), and establishes their relation to the Monitor Model, framing relevance maximization as a special case of minimizing free energy. FEP/AIF provides a mathematically rigorous foundation that enables modeling of deep temporal architectures in which embedded translation processes unfold on different timelines. This framework opens up exciting prospects for future research in predictive TPR, likely to enrich our comprehension of human translation processes, and making valuable contributions to the wider realm of translation studies and the design of cognitive architectures.
Michael Carl
2023-08-02T18:22:49Z
http://arxiv.org/abs/2308.01368v1
Empirical Translation Process Research: Past and Possible Future Perspectives

Michael Carl, Kent State University

###### Abstract

Over the past four decades, efforts have been made to develop and evaluate models for Empirical Translation Process Research (TPR), yet a comprehensive framework remains elusive. This article traces the evolution of empirical TPR within the CRITT TPR-DB tradition and proposes the Free Energy Principle (FEP) and Active Inference (AIF) as a framework for modeling deeply embedded translation processes. It introduces novel approaches for quantifying fundamental concepts of Relevance Theory (relevance, s-mode, i-mode), and establishes their relation to the Monitor Model, framing relevance maximization as a special case of minimizing free energy. FEP/AIF provides a mathematically rigorous foundation that enables modeling of deep temporal architectures in which embedded translation processes unfold on different timelines. This framework opens up exciting prospects for future research in predictive TPR, likely to enrich our comprehension of human translation processes, and making valuable contributions to the wider realm of translation studies.

**Keywords**: translation process research; relevance theory; monitor model; free energy principle; active inference

+ Footnote †: To be published in Translation, Cognition and Behavior: "Translation and cognition in the 21st century: Goals met, goals ahead"

## 1 Introduction and historical background

In the 1970s, Translation Studies emerged as an empirical, scientific (descriptive) discipline. By the mid-1980s, a branch of this field, now often referred to as Cognitive Translation Studies (CTS, or more recently CTIS, Cognitive Translation and Interpretation Studies), started to investigate and model how translators' minds work -how translators create meaning, how they arrive at strategies and translation choices, how translation competence develops, how cultural and linguistic factors impact translated text, etc. (see, e.g., Risku 2012). Studies in this line of research "refer to and expand" (Risku 2012, 675) models of the mind as developed in Cognitive Science, to explain translators' behavior and translation processes. While the first attempts to study translation as a cognitive activity date back to the 1960s and 1970s (e.g., Albir 2015, Munoz 2017), Translation Process Research (TPR) is often said to begin in the 1980s with the analysis of thinking aloud protocols (TAP), investigating "What happens in the minds of translators" (Krings 1986; 2001; see also Konigs 1987) and assessing "by what observable and presumed mental processes do translators arrive at their translations?" (Jakobsen 2017, 21). Empirical TPR thereby uses a range of technologies to record and analyze behavioral data. Since the 1980s and early 1990s, TPR has evolved in several phases with the increasing availability and usage of new sensor and tracking technologies suitable for recording and analyzing the translation process. A new era in TPR was introduced in 1996 with the development of the special-purpose software Translog (Jakobsen and Schou 1999) and the TRAP (Translation Process) project, in which researchers from different language departments at the Copenhagen Business School (CBS)

launched a translation project with the aim of promoting research into the translation process... [as] it was felt that our understanding of the mental processes could be improved if the traditional qualitative approaches could be supplemented by quantitative data.
Hansen 1999, 7

Subsequently, Munoz (2010) suggested ten principles by which he defined a new framework for _cognitive translatology_, which would ground translation research, among other things, in empirical data. Munoz identified an "urgent need to establish experimental paradigms" (169) in which "research must be firmly grounded in observable translation reality" (174). Coincidentally, these needs were addressed around the same time at the Center for Research and Innovation in Translation and Translation Technology (CRITT), which started gathering behavioral data and making it available to the public as a database, the CRITT TPR-DB. The CRITT TPR-DB (Carl et al. 2016) has been a major endeavor to gather and compile Translation Process Data (TPD) into a consistent and coherent format that can be used for advanced analysis of the process. The TPR-DB started within the EU Eye-to-IT project (2005-2009) and subsequently CASMACAT (2011-2014), in which several Ph.D. projects collected numerous datasets that were then integrated into an experimental database. The CRITT TPR-DB has thus evolved into an open-access framework with several possibilities for further extension in various directions: to accommodate diverse data-acquisition tools (e.g., Translog-II, Carl 2012, and CASMACAT; Alabau et al. 2014, and more recently Trados; Zou et al. 2022, Yamada et al. 2022), to prototype new features, and to explore different explanatory models of the translation process. Initially, a query language had been conceived to provide several basic operators that could be composed into more complex functions to generate and extract features from the raw TPR-DB data (Carl and Jakobsen 2009). However, this quickly turned out to be impractical, due to the size of the data, the processing time, and the complexity of the operators. Rather than implementing a (meta) language to generate ad-hoc features on the fly, the operations were aggregated in a separate processing step to generate a set of summary tables. These summary tables are now an integral component of the TPR-DB; they list a large number of product and process features that are used for data analysis1.

Footnote 1: The features are described on the CRITT website: [https://sites.google.com/site/centretranslationinnovation/tpr-db/features](https://sites.google.com/site/centretranslationinnovation/tpr-db/features).

The technological repertoire of data-collection methods has dramatically increased in the last decade and includes, besides keylogging and eye-tracking, also EEG, fMRI, and fNIRS technologies. Such data is, however, not part of the TPR-DB. Jakobsen (2017) distinguishes three phases in the development of TPR: the TAP phase, a keylogging and eye-tracking phase, and more recently the integration and deployment of methods originating in data analytics and data science. Jakobsen says:

TPR has been dominated methodologically, first by the use of introspective methods, primarily TAPs and retrospection, then by (micro-) behavioral methods, keylogging and eye-tracking, sometimes in combination with cued retrospection, and more recently by the application of computational methods in the analysis of very large amounts of process data.

Jakobsen 2017, 39

A large body of data and research findings has been produced in the past two decades. More than 5000 translation sessions and hundreds of hours of TPD have been recorded so far. A part of the data is publicly accessible to everyone, free of charge; the raw logging data is permanently stored in a public repository2.
Users of the TPR-DB can obtain a personal account to organize their data in studies. Studies consist of one or more, sometimes hundreds of, (translation) sessions that contain the logged data. Each session comprises 11 summary tables with a total of more than 300 features describing properties of the translation process and the translation product (see Carl et al. 2016). The summary tables are under constant revision and extension, as new features are added and summary tables re-generated. A browser interface has been put in place with direct access to the TPD, and a Jupyter/Python toolkit has been set up that allows for advanced data-analytics methods in empirical TPR.

Footnote 2: Source forge SVN repository. Instructions are available on [https://sites.google.com/site/centretranslationinnovation/tpr-db/public-studies](https://sites.google.com/site/centretranslationinnovation/tpr-db/public-studies)

Based on the TPR-DB framework, numerous studies3 have investigated the relation between the translation process and the product (i.e., behavioral and linguistic data), personal and demographic properties of translators (expertise, experience, education, etc.), as well as translation goals (e.g., translation guidelines) and methods of the translation tasks (e.g., from-scratch translation, post-editing, etc.). These studies investigate, among other things, the role of expertise and translation directionality (L1/L2 translation); ergonomic, linguistic, and emotional factors; as well as the usage of (external) resources, such as computer-assisted translation and machine translation (MT). TPR thereby focuses primarily on process-related issues, such as temporal aspects (e.g., translation duration), translation effort, and the distribution of (visual) attention as, e.g., gathered by eye-tracking devices.

Footnote 3: For an overview see publications on [https://sites.google.com/site/centretranslationinnovation/tpr-db-publications](https://sites.google.com/site/centretranslationinnovation/tpr-db-publications)

Several competing theoretical approaches have been deployed to explain the reported observations. Hvelplund (2011) draws on theories of working memory (Baddeley 1974; 2000) and of a central executive system (Baddeley 2007) to explain translation processes, while Sjorup (2013) and Schmaltz (2015) refer to Lakoff and Johnson (1980) to assess the cognitive effort in metaphor translation. Serbina (2015) and Heilmann (2020) deploy Cognitive and Systemic Functional Grammar for understanding and explaining translation processes, while Alves and Vale (2009) develop an empirical interpretation of translation units (TUs) to ground Relevance Theory (RT, Gutt 1991; 2000) in empirical behavioral data. Schaeffer and Carl (2013/2015) suggest a _Monitor Model_ that builds on findings of Tirkkonen-Condit (2005) and aspects of bilingualism (Halverson 2003; 2010). A further development of the Monitor Model (Carl and Schaeffer 2019) joins it with insights from RT. In the meantime, post-cognitivist approaches have emerged within the field of CTS that account for the extended and enacted nature of translation (Risku 2012; Risku and Rogl 2020; Carl 2021; Munoz 2021), while Schaeffer et al. (2020, 3939) announce the "predictive turn in translation studies" in which machine learning approaches would be used for modelling human translation processes and for predicting "when and why a translator is having trouble carrying out the [translation] task" (3940).
In this article I argue that the Free Energy Principle (FEP, Friston 2009; 2010) and Active Inference (AIF, Parr et al. 2022) constitute a suitable framework to account for these new demands: FEP and AIF are formulated within a mathematically rigorous framework (i.e., Bayesian reasoning) in which (artificial) agents are modelled as Partially Observable Markov Decision Processes (POMDPs) that can learn from experience, and they have been analyzed within an enactivist framework (Bruineberg et al. 2018; Kirchhoff and Kiverstein 2021; Carl 2023). FEP and AIF constitute a general unifying perspective on how biological organisms maintain a balance between their internal states and the external environment, necessary to ensure their survival in an ever-changing environment. I argue that AIF may also be a promising pathway to advance (predictive) TPR. Several TPR models imply a "deep temporal architecture" (Parr et al. 2023), in which concurrent processes in distinct temporal strata complement and interact with each other. The Monitor Model (Schaeffer and Carl 2013/2015), for instance, assumes two processes: automatized translation routines form the basis of human translation production until they are "interrupted by a monitor that alerts about a problem in the outcome. The monitor's function is to trigger off conscious decision-making to solve the problem"4 (Tirkkonen-Condit 2005, 11). RT defines translation as interpretive language use and suggests two distinct translation modes, a "stimulus mode" and an "interpretive mode" that, I argue -similar to the automatized and monitoring processes of the Monitor Model- unfold in different timelines, while Munoz and Apfelthaler (2022) suggest a "task segment framework" in which assumed intentional and unintentional processes generate pauses of different lengths. In section 2, I extend the Monitor Model with elements of RT. I argue that Gutt's (2004, 2005) distinction between _stimulus mode_ and _interpretive mode_ roughly fits the dichotomy between automatized translation and monitoring processes as suggested in the Monitor Model, in which monitoring processes control the interlingual resemblance of the source and the target. Section 3 elaborates a novel operationalization of relevance. Relevance is a central concept in RT, defined as a function of effort and effect (Sperber and Wilson 1995, 132). Communication, in their view, is geared towards the maximization of relevance, but, as I will show, the concept has been deployed in various ways in translation, where s-mode and i-mode translations imply different presumptions of relevance. While RT stipulates that relevance cannot be quantified, I introduce a new notion, the _field of relevance_, opening the possibility for quantifying and determining optimal relevance. In section 4 I argue that relevance is inversely proportional to _free energy_, which is, according to the FEP, a quantity that living agents (such as translators) need to minimize. In conclusion, I argue that the FEP and AIF are suited to accommodate those notions (and many more) in a rigorous mathematical framework, which may serve to advance predictive TPR in the next decade or so.

## 2 The Monitor Model

The Monitor Model, as proposed by Schaeffer and Carl (2013/2015), stipulates that automatized priming routines are the basis of human translation processes. Priming processes activate and integrate entrenched translation patterns which map ST expressions into (default) TL equivalents.
In addition, on another timeline, the Monitor Model assumes that higher-order monitoring strategies provide translators with criteria to decide whether the produced translations actually correspond to the translation aims, guidelines, or goals. While priming is quick and associated with low levels of effort, monitoring processes may take up a substantial amount of time and higher levels of translation effort. For instance, monitoring processes seem to disintegrate loops of concurrent perception-action into successive reading and typing activities (cf. Carl and Dragsted 2012). Loops of (ST) reading and (TT) typing in translation production have been referred to as TUs and constitute the empirical basis for some of the modelling in TPR (e.g., Alves and Vale 2009; Carl and Kay 2011). TUs are thought to be basic units by which translations are produced. Malmkjaer (1998), for instance, defines process-oriented TUs as a "stretch of the source text that the translator keeps in mind at any one time, in order to produce translation equivalents in the text he or she is creating" (286). While Malmkjaer thinks of TUs as basically mental constructs, TUs have also been conceptualized as a combination of both mental events and physical (observable) behavioral acts that exhibit properties of automatized and/or monitoring processes. An adaptation of the Monitor Model is plotted in Figure 1 (adapted from Carl and Schaeffer 2019). The model stipulates that the translation process unfolds in terms of TUs, while the similarity of the input and output is controlled on a higher level by monitoring processes which compare the source and the target, for instance, as to whether translation guidelines are met.

Figure 1: The Monitor Model, adapted from Carl and Schaeffer (2019)

The model draws on Relevance Theory (RT) to address the importance of monitoring functions that ensure translation goals (e.g., translation quality, style, etc., as specified in translation guidelines) are achieved despite possibly diverging source and target contexts. It depicts the interaction of automatized horizontal and vertical monitoring processes. According to RT, communication -as well as translation as a special form of interlingual communication- unfolds within the cognitive environment of an SL speaker and a TL hearer, mediated by the translator, where "the relevance-theoretic account brings out the **crucial role which context plays in translation**" (Gutt 2004, 3, emph. original). In order to make full use of the principle of relevance (see section 3), translators need to "meta-represent" the cognitive environment of the SL speaker, the TL audience, and the context of the translation. Different translation strategies can be used depending on how much the cognitive environments in the SL and the TL overlap, where "a clear understanding of the influence of context can equip translators... [with] the possibilities and limitations of translation as a mode of interlingual communication" (Gutt 2004, 3). Gutt (2005) describes several scenarios in which the SL author, the translator, and the TL receptor share the same or a different cognitive environment to different degrees. Gutt (2004; 2005) introduces a distinction between a _stimulus_ mode (s-mode) and an _interpretive_ mode (i-mode) of translation production that result in different kinds of interlingual resemblance, where "the resemblance does not have to be between the intended interpretations [i-mode] but can also lie in the sharing of linguistic properties [s-mode]" (Gutt 2004, 4).
Provided the cognitive environments of the SL and the TL audience overlap to a large extent, replication of stimulus characteristics into the TL (s-mode translations) may be sufficient for the informed target audience to re-construct the interpretive resemblance with the source: "it seems not unreasonable to consider an expression of language B a token of an expression of language A to the extent that they have properties in common" (Gutt 2005, 40). An informed receptor can recover the intended meaning from the traces of the stimulus when the translator informs the audience merely of the evidence, rather than the meaning. The i-mode, in contrast, may help translators bridge communication barriers if the cognitive environment and/or the context between the SL and the TL audience is vastly different. Gutt is not explicit about the interaction between the s-mode and the i-mode. The Monitor Model (Figure 1) models two processes as horizontal (priming) and vertical (monitoring) processes, respectively, which, I will argue, have properties of the s-mode and i-mode. In this view, s-mode translation is the basis of translational activity, on top of which i-mode translation may (or may not) adjust any potential communication gaps that may be left unaddressed by the s-mode.

## 3 Measuring Relevance

RT posits that a message is relevant if it connects with the background of the cognitive environment and the available contextual information to answer a question, resolve a doubt, confirm or reject a suspicion, correct an impression, etc., and thus to yield conclusions that matter. The relevance of an input can be assessed as a function of effort and effect; an input is relevant (Gutt 1989, 50):

1. if the contextual effects in the given context are large, and
2. if the effort required to process the input in this context is small.

RT postulates a cognitive principle of relevance, according to which human cognition tends to maximize relevance, and a communicative principle, according to which an act of communication presumes optimum relevance (Sperber and Wilson 1995, 260). From these principles it follows that expectations of relevance raised by an utterance are precise and predictable so as to guide the hearer toward the speaker's meaning. A speaker (or writer) will seek to formulate their utterances such that the first interpretation that a hearer generates conveys the intended meaning. A speaker will stop whenever s/he thinks the desired effect may be retrieved by the hearer, thereby minimizing her effort. A hearer (or reader), on the other hand, follows a path of least effort and stops whenever a worthwhile effect has been generated (Sperber and Wilson 1995; Gutt 1989; 1991; 2000). Thus, RT defines relevance as a function of effort and effect but has not developed measures for quantification. In the following section, I suggest conceiving relevance within a two-dimensional field of relevance, as shown in Figure 2. I will point out a number of factors and contexts that may play a role in the quantification of relevance. In section 4, I present a formal framework that, I believe, is suited to capture essential components of RT and the Monitor Model.

### _The field of Relevance_

Figure 2 depicts several hypothetical paths of relevance, as a trade-off between effort and effect. The upper left rectangle indicates areas of high relevance that can be achieved by expending an acceptable amount of effort while creating worthwhile effects.
The lower right rectangle amounts to irrelevant cases that require too much effort and provide unworthy effects.5 The red line marks a (hypothetical) boundary between levels of relevance and amounts of effort and effect. The curvature of the green lines suggests that the relation between effort and effect might be non-linear; dotted lines indicate unsuccessful paths through the field of relevance. It suggests that effort increases in time, while effects relate to changes in system configurations (see Table 1 and Table 2).

Footnote 5: Note that the terminology, "worthwhile", "unworthy", "acceptable", "excessive" is taken from Gutt (and RT), while the formulation of the field of relevance is my own.

Figure 2: The field of Relevance as trade-off between effort and effect

While RT was originally conceived as a theory of communication in a monolingual setting, the relevance criteria have also been applied to explain translation. In the translation context, it may be easy for a translator to generate the worthwhile intended effects of a text in another target language if the source speech/text is simple and/or both the source and the target audience share the same cognitive environment. In this case, a translator can make use of the s-mode if the target audience can recover the intended meaning from the transposition of the stimulus. Such instances may have a shape similar to relevance path **1** in Figure 2. However, a translation is likely to become increasingly difficult and time-consuming to generate if, for instance, the utterance is ambiguous or unclear, when the context is unavailable, implicatures cannot be retrieved, and/or the cognitive environments in the source and target are very different. Such challenging translations may exhibit a relevance path similar to path **4** in Figure 2. Memory constraints may play a role if utterances are too long or too complex, and/or the required inferences are not obvious or unachievable in the current situation. However, irrespective of the context, a hearer, reader, or translator will stop processing as soon as a satisfying interpretation is generated (or believed to be retrievable for the TL audience). Otherwise, according to RT, processing efforts may be given up altogether when a worthwhile effect cannot be reached within an acceptable amount of effort, as exemplified in relevance paths **5** and **6**. Gile and Lei (2020) point out that a balance between effort and effect is central in translational action. They assume a strong correlation between effort and effect for low effort, which then flattens out as higher amounts of effort are invested: "Beyond a certain point, further effort may not contribute much or may even become counterproductive, though where this point is can vary greatly." (265). Gile and Lei make a distinction between low-intensity and high-intensity effort. Low-intensity effort occurs in activities such as terminological search, prolonged preparation of glossaries, and repeated revision. High-intensity effort has often been attributed to the exhaustion of non-automatic processes that draw on limited available resources during the translation process. High-intensity effort, they say, occurs basically only during interpretation.

### _S-mode, I-mode and Relevance_

Gutt (2005) posits that the distributions of effort and effect are different for translators and audience under the s-mode and i-mode. Table 1 is adapted from Gutt (2005); it summarizes parameters for these differences.
S-mode translations are fast and easy _for the translator_, as, I suppose, they are largely based on priming processes (Carl 2023). I-mode translations, in contrast, can be expected to be more effortful for the translator, but are easier to comprehend for the audience. According to Gutt, the main advantage of i-mode translations consists in the ease of comprehension for the TL audience, but it also bears risks, as the translated message heavily depends on the interpretation, understanding, relevance judgements, and preferences of the translator, which may not always correspond to those of the SL speaker. The s-mode, in contrast, preserves a maximum amount of resemblance between the source and the translation; it implies less effort on the side of the translator but requires higher levels of awareness from the TL audience. In this view, provided the expected effects are sufficient, RT thus predicts that a translator will choose the s-mode whenever possible, as the expended translation effort is lower and the overall outcome is, thus, more relevant. It turns out that relevance from a translator's point of view is different as compared to the audience; the two may not be symmetrical.

\begin{table}
\begin{tabular}{l|p{113.8pt}|p{113.8pt}}
 & **s-mode - telling 'what was said'** & **i-mode - telling 'what was meant'** \\ \hline
Effort for the translator & Low - based on quick priming mechanisms & High - implies evaluation of context, reflective thought, search etc. \\ \hline
Effort for audience & Potentially high (acquisition of context knowledge required; 'cautious optimism')* & Comparatively low (can simply use own context; 'naive optimism')* \\ \hline
Effects on audience & Contents not changed by translator to fit receptor context & Contents changed by translator to fit receptor context \\ \hline
Meaning resemblance & High - independent of current cognitive environment of receptors & Variable - dependent on current cognitive environment of receptors \\ \hline
\end{tabular}
\end{table}
Table 1: Differences between s-mode and i-mode, adapted from Gutt (2005).

* Sperber and Wilson (2002) define three degrees of metarepresentational ability, of which Gutt (2005) assumes two (1 and 2) to play a role for the audience of translations: 1. A Naive Optimist accepts the first relevant interpretation regardless of whether it could plausibly have been intended. 2. A Cautious Optimist is capable of dealing with mismatches of first-order false belief tasks, but unable to deal with deliberate deception. 3. A Sophisticated Understander has the capacity to deal simultaneously with mismatches and deception. Option 3) can be excluded in a translation context, as it can be assumed that a translator will not attempt to deceive the target audience.

### _Shape of Relevance_

Effort, effect, and thus relevance may also be conceived of as multivariate continuous variables. For instance, an utterance may have an _effect_ on one level, conveying important aspects of a message, while lacking a worthwhile effect on another level. An utterance may convey appropriate content (e.g., in terms of lexical items) but be syntactically or stylistically erroneous. It may express simple facts in a complicated manner or lack the required contextualization. Similarly, expenditure of effort may relate to retrieving low-frequency words, ad-hoc metaphors, or unusual collocations, or it may be spent on analyzing long sentences or the disambiguation/contextualization of complex relations. All those parameters may be modelled as continuous variables, and they may be integrated in different ways. Expenditure of effort and/or the generation of worthwhile effects may be the joint result of several of those parameters, and they may heavily depend on the context.

### _Location of Relevance_

In addition, notions of effort and effect have been used for different phases and actors in the translation process. Table 2 indicates where translation effort and translation effect -and thus phenomena of relevance- have been suggested to occur: within the audience and/or within the translator. Gutt (1989; 2000), for instance, analyses textual features of _the translation product_ as a proxy for assumed processing effort and cognitive effects of a hypothetical _translation receiver_. In his framework, effort, effects, and thus relevance are approximated qualitatively and mainly subjectively (through linguistic analysis), which makes this approach difficult to quantify or falsify.

\begin{table}
\begin{tabular}{|l|l|}
\hline
**Effort (measures)** & **Effect (measures)** \\ \hline
Receiver's mind (properties of TT) & Receiver's mind (properties of TT) \\ \hline
Translator's behavior (keystrokes / gaze) & Receiver's mind (properties of TT) \\ \hline
Translator's behavior (keystrokes / gaze) & Target text (translation quality) \\ \hline
Translator's perception (properties of ST) & Translator's brain activity (brain imaging) \\ \hline
\end{tabular}
\end{table}
Table 2: Location of effort and effects / relevance on the side of the receiver audience or translator.

Indicators of effort have also been measured in the _translators' behavior_ as captured by gaze movements (Alves and Vale 2009; Carl 2016) and/or keystroke pauses (e.g., Lacruz and Shreve 2014). This line of research opens the possibility to quantify effort and has given rise to empirical TPR and the TPR-DB, as outlined above. There are also various suggestions as to how translational _effects_ should be measured.
As Goncalves (2020) points out, a troublesome assumption is the temporal and spatial dissociation of effort and effects, which locates the effort in translators' behavior and the effects in the receivers' minds. This makes an assessment of relevance and the determination of optimum relevance practically impossible. One approach to overcome this dissociation has been to measure translational effects as properties of the produced translation: either in the final translation product or in the typing itself. This allows for seamless (co)relation, as effort (in terms of behavior) and effect parameters can be immediately available to the researcher. This has been a productive path of enquiry which has produced many studies and insights. A variation of this approach includes (retrospective) interviews to gauge effort and effect with translators' satisfaction self-assessments. However, measuring translation effects by means of characteristics of the translation product presumes that a translation brief (or guidelines) is available (either implicitly or explicitly) by which translators are able to produce the desired translation quality that is believed to trigger the expected effects in an (assumed) receiver's mind. The assumption of a translation brief thus allows researchers to investigate aspects of effort and effects, observed during a translation session, independently from effort and effects for an audience, which would then fall under reception studies (see, e.g., Whyatt, in print). Some recent studies conceptualize translation effects in the _translator's brain_ (mental actions), which can be measured using brain imaging technologies (e.g., fMRI, fNIRS, EEG). This approach assumes that activations of different brain areas are effects of different processing mechanisms, triggered by targeted and reproducible stimuli. By gathering evidence from other brain studies, conclusions can be drawn about the properties and relations between these processes, and their relevance in terms of effort/effects. In some sense, this approach parallels the effort/effect assumptions in the receiver's mind. However, it is unclear how all these different notions of relevance correlate.

## 4 Free Energy Principle and Active Inference

RT assumes that, due to the selection pressure towards increasing efficiency, the human cognitive system has developed in such a way that

our perceptual mechanisms tend to automatically pick out relevant stimuli, our memory retrieval mechanisms tend automatically to activate potentially relevant assumptions, and our inferential mechanisms tend spontaneously to process them in the most productive way (Hedberg 2004)

Thus, RT is a "normative approach" to communication and translation, which appeals to an optimality criterion (Parr et al. 2023). However, RT rejects the possibility of measuring relevance and thus of quantifying optimality. The FEP (Friston 2009; 2010) and AIF (Parr et al. 2022), in contrast, explain similar normative assumptions in a rigorous mathematical framework, providing a general theory as to why our cognitive systems must have evolved so as to automatically maximize relevance. The Free Energy Principle (FEP) and Active Inference (AIF) are two closely related theoretical frameworks that have recently gained attention in cognitive science and neuroscience. These frameworks provide a unifying perspective on how biological organisms maintain their viability in the face of an uncertain and ever-changing environment.
The FEP (Friston 2009; 2010) is a theory of how living systems maintain a balance between their internal states and the external environment. It posits that biological systems are constantly seeking to minimize the discrepancy between their expected and actual sensory input, which is captured by a quantity called _free energy_. This term is related to the amount of energy required to move the system from a baseline configuration into a (more) desirable configuration, which allows for smooth interaction with its environment. The reduction of free energy is said to be crucial for survival and for an agent to stay in its "phenotypical niche". According to the FEP (Parr et al. 2022, 24-26), free energy can be reduced in two different ways: either by updating our beliefs about the external world or by acting on the world to make it more similar to our preferences. Updating beliefs comes as a consequence of observations (i.e., through perception), as we reduce the difference between the prior beliefs (before receiving new input) and the posterior beliefs (after receiving new input). In the second case, we do not change our beliefs, but modify the world so as to generate successive observations that fit our expectations (or preferences). In this view, _cognitive effort_ can be defined in terms of updating beliefs and the reduction in the gap between prior and posterior beliefs (Parr et al. 2023), while _effects_ are the results of modifications in the world. Optimality (e.g., optimal relevance) can then be defined as a trade-off, i.e., a cost function, to optimize (or minimize) the relation between these two quantities, which is known as variational free energy. The FEP has been applied to many areas of research, including perception, action, learning, and decision-making. AIF (Parr et al. 2022), on the other hand, is a computational framework that models how organisms can achieve optimum relevance by making predictions about the world and taking actions that minimize free energy. This framework has also been applied to a wide range of cognitive processes, including perception, attention, decision-making, and learning (Friston et al. 2017; Parr et al. 2022, chapter 10). Figure 3 illustrates the FEP in a translation context. It shows the two factors that impact the discrepancy between the translator's expectations \(Q\) of a (translation) task and the distribution \(P\) of task-related observations in an 'outside world'. \(Q\) is a probability distribution over beliefs6 (\(x\)) that takes into account the translator's habits (e.g., related to expertise), preferences (as communicated, e.g., via a translation brief), skills (e.g., knowledge of translation tools, etc.), preferred strategies, etc.; it quantifies a _distribution_ of the translator's beliefs concerning the task.

Figure 3: Free energy \(F[Q,y]\) in different factorizations, adapted from (Parr et al. 2022)

The term \(P(x\mid y)\) quantifies the probability of a belief \(x\) when observing \(y\). The term \(P(y\mid x)\) quantifies the probability of an observation \(y\) under the belief \(x\). The Kullback-Leibler divergence (\(D_{KL}\)) quantifies the difference between the distributions \(Q\) and \(P\). If the translator has no information about the translation task whatsoever, \(Q\) should have maximum entropy (i.e., all 'beliefs' are equally probable), which provides maximal compatibility with any and all possible observations and produces the smallest \(D_{KL}\) value.
The Kullback-Leibler divergence is minimal (zero) if the two distributions are identical, that is, if the observations fully coincide with the expectations. The _Divergence_ term in the first line of the equation in Figure 3 scores the amount of belief updating (also called Bayesian surprise) when confronted with a new observation. In our definition above, this amounts to (translation) effort. This divergence is zero if the predictions fully coincide with the observations. In this case, free energy \(F[Q,y]\) amounts to the _Evidence_, that is, the negative log probability - or Shannon surprise \(-\ln P(y)\) - of the observation. The formulation of the free energy in the first line of the equation in Figure 3 thus suggests reducing Bayesian surprise by bringing \(Q\) as close as possible to the posterior \(P(x\mid y)\). Another, equivalent, formulation of free energy is shown in the second line in that figure. The _Complexity_ term, just like the _Divergence_, quantifies Bayesian surprise, indicating how much a translator has to update their beliefs following an observation. The _Accuracy_ term quantifies the conditional (Shannon) surprise of an observation weighted by the predictions \(Q(x)\). Crucially, the maximization of _Accuracy_ depends on how much the outcomes of previous (translation) actions (that is, the effects) are in tune with forthcoming beliefs so as not to increase surprise (i.e., effort). That is, an action reduces entropy if it anticipates/coincides with an upcoming belief. Thus, the effect (of an action) can be assessed by the effort it generates during successive observation. Parr et al. (2023, 2) specify that

"the complexity quantifies the degree to which we must update our prior beliefs to explain the data at hand. This must be offset against the accuracy with which we can predict those data... [actions] allow us to modify the data we will receive in the future, and so decisions about which action to take must be based upon anticipated data."

In this sense, an agent will selectively sample the sensory inputs that it expects (Friston 2010), and it will aim at producing changes in the environment that are in tune with its predictions. In other terms, an individual self-evidences herself by acting on the world to implement her own preferences (Kirchhoff and Kiverstein 2021). AIF stipulates that agents follow _action policies_ (\(\pi\)) that specify (probably hierarchically organized) sequences of action. From a set of possible action policies, an agent will select the one that maximally reduces the _expected free energy_ (Friston et al. 2017; Pezzulo et al. 2018). In this sense, AIF "assumes that both _perception and action cooperate to realize a single objective_--or optimize just one function--rather than having two distinct objectives" (Parr et al. 2022, 24, emph. in original). The hierarchical organization of the action policies allows us to conceive of "deep temporal" cognitive architectures in which different processes run in different timelines. Thus, in the translation context, the AIF framework makes it possible to formalize fast processes that realize horizontal/s-mode translations following sets of preferences (e.g., priming mechanisms) and assumptions, which may be interrupted by slower i-mode processes that are driven by different preferences or habits. FEP and AIF thus provide a formal framework for the Monitor Model.
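To make the two factorizations concrete, the following minimal numerical sketch (all distributions are made-up toy values, not data from any study) verifies that both lines of the equation in Figure 3 compute the same free energy for a discrete belief distribution:

```python
import numpy as np

# Toy setup: three latent belief states x and one fixed observation y.
Q = np.array([0.7, 0.2, 0.1])            # translator's beliefs Q(x)
P_x = np.array([0.5, 0.3, 0.2])          # prior P(x)
P_y_given_x = np.array([0.6, 0.2, 0.1])  # likelihood P(y | x) of the observation

P_y = np.sum(P_x * P_y_given_x)          # evidence P(y)
P_x_given_y = P_x * P_y_given_x / P_y    # posterior P(x | y)

def kl(a, b):
    return np.sum(a * np.log(a / b))

F_line1 = kl(Q, P_x_given_y) - np.log(P_y)              # Divergence + Evidence
F_line2 = kl(Q, P_x) - np.sum(Q * np.log(P_y_given_x))  # Complexity - Accuracy

assert np.isclose(F_line1, F_line2)  # both factorizations agree
```

Making \(Q\) more similar to the posterior shrinks the Divergence term (less belief updating, i.e., less effort), while actions that raise the likelihood of anticipated observations improve the Accuracy term.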
## 5 Discussion

The paper reviews the development of research and modelling approaches in empirical TPR and elaborates a novel view on the translation process as a possible framework for future research. The paper points out compatibilities between the Monitor Model (Tirkkonen-Condit 2005; Schaeffer and Carl 2013/2015), Relevance Theory (Gutt 2000; 2004; 2005) and the FEP/AIF (Friston 2010; Parr et al. 2022), and their complementarity with respect to (1) a temporal stratification of different processing layers that presumes different timelines for distinct processing routes, and (2) a normative approach that assumes an optimization of (translation) behavior which is grounded in fundamental principles of cognition (or even of life in general).

The Monitor Model draws on a body of research in psycholinguistic studies suggesting that automated translation routines are associated with comparatively low levels of effort, as they involve minimal cognitive resources. The Monitor Model suggests that much of basic translation production emerges out of these _horizontal_, automatized processes, but these horizontal translation routines may be interrupted and complemented/controlled by higher-level (monitoring) or _vertical_ processes (Tirkkonen-Condit 2005). These distinct processes involve different cognitive resources and evolve on different timelines.

Drawing on Relevance Theory (RT), Gutt (2004; 2005) describes translation as a form of (cross-linguistic) _interpretive language use_. Translation is, according to him, an act of (re)producing cross-linguistic similarity, rather than describing or evaluating the truth of source language statements. Gutt suggests two modes of interpretive language use: the _stimulus mode_ (s-mode, "what was said"), which informs the target audience about the linguistic properties of the source, vs. the _interpretive mode_ (i-mode, "what was meant"), which addresses the intended interpretation. S-mode translations rely on the similarity of the linguistic properties of the input and output, while i-mode translations rely on the similarity of their interpretations. Gutt does not specify how these two modes interact during the translation process. In this paper, I suggest that the s-mode and i-mode roughly correspond to horizontal and vertical processes, respectively, as proposed in the Monitor Model. In this view, the s-mode serves as the basis of translational production, which is interrupted by the i-mode to account for (better) interpretive resemblance. These concepts are related to _default_ translations (Carl and Dragsted 2012; Halverson 2019) or the _cruise mode_ (Pym 2017), a processing mode which Pym characterizes as "all goes well until there is a 'bump', attention is required, and something needs to be done" (cf. Tirkkonen-Condit 2005).

A central concept in RT is the notion of relevance. RT defines relevance as the trade-off between cognitive effort and effects. Lower cognitive effort combined with higher effects leads to increased relevance, while higher cognitive effort and/or lower effects decrease the relevance of a communicative act. RT stipulates that relevance is a principle, rather than a maxim, that is sought to be followed, but it does not provide clear measures for quantifying relevance. The FEP, in contrast, provides a framework that is suited to quantifying relevance in terms of free energy. The FEP, and its corollary AIF (Parr et al. 2022), constitute a framework to quantify the discrepancy between action and perception as surprise (or its upper bound, free energy).
Following Parr et al. (2022, 39), "surprise minimization can be construed as the reduction of the discrepancy between the model and the world." Parr and colleagues (2023) show how this can be achieved in 'deep temporal' systems that assume concurrent processes in multiple timelines. In future work, we will discuss in detail how deep temporal translation processes can be modelled within FEP/AIF.

According to the FEP, an agent aims at reducing free energy by adjusting its internal model in line with the perceived observations, and/or by changing the environment so that it becomes more similar to her preferences. Observations that are in tune with expectations and actions that are in tune with preferences (or habits) imply the least cognitive effort. Cognitive effort can then be conceptualized as "the qualitative experience of committing to a behavior that diverges from prior habits" (Parr et al. 2023). Parr et al. model cognitive effort as the divergence between context-sensitive beliefs about how to act and a context-insensitive prior belief, such as habits or preferences. Behavioral patterns that correspond to habits (or preferences) can thus be assumed to require the least effort, while behavioral patterns that involve - or result from - changes in the agent's belief system are expected to be more effortful. In this view, the trade-off between cognitive effort and effects (and thus relevance) can be measured as the best fit between the predicted and observed sensory input, and is thus a special case of free energy. Provided the s-mode and the i-mode result in comparable translational effects for a hypothetical audience, the s-mode turns out to be the more relevant one. In other words, assuming similar effects, s-mode translations can be expected to be less effortful, as they do not involve an evaluation or change of the translator's belief system, as i-mode translations do.

There are limits and interesting predictions to this approach that may be worth addressing in future TPR. Discussing difficulties of Bible translation for speakers of an Ethiopian language, Gutt (2005, 47) maintains that "either the audience's cognitive environment [i.e., their prior beliefs] needs to be adjusted so that it can process this information, or this information needs to be left aside in the higher-order communication act." That is, either the audience's model (i.e., their cognitive environment) has to be adjusted or the translation for those speakers has to change (e.g., be left aside), since "it is entirely unreasonable to expect that information for which their cognitive environment is not prepared can be communicated" (Gutt 2005). In the context of AIF, Parr et al. (2023, 3) come to very similar conclusions when they find that "cognitive effort must be deployed to overcome a habit that is incongruent with our goals, but a sufficiently strong habit prevents deployment of effort to overcome that habit." Future predictive TPR should be able to assess such situations.
2306.09298
Lakat: An open and permissionless architecture for continuous integration academic publishing
In this paper, we present three contributions to the field of academic publishing. Firstly, we introduce Lakat, a novel base layer for a publishing system that fosters collaboration, pluralism and permissionless participation. Drawing inspiration from the philosophy of Imre Lakatos, Lakat is designed as a peer-to-peer process- and conflict-oriented system that supports continuous integration across multiple branches. This architecture provides a robust foundation for the integration of existing reputation systems and incentive structures or the development of new ones. Secondly, we propose a new consensus mechanism, called Proof of Review, which ensures the integrity and quality of the content while promoting active participation from the community. Lastly, we present Lignification, a new finality gadget specifically designed for branched, permissionless systems. Lignification provides a deterministic way to find the consensual state in these systems, ensuring the system's robustness and reliability in handling complex scenarios where multiple contributors may be proposing changes simultaneously. Together, these contributions aim to provide a convenient starting point to tackle some of the issues in traditional paper-formatted publishing of research output. By prioritizing collaboration, process-orientation, and pluralism, Lakat aims to improve the way research is conducted and disseminated and ultimately hopes to contribute to a healthier and more productive academic culture.
Leonhard Horstmeyer
2023-06-15T17:27:16Z
http://arxiv.org/abs/2306.09298v1
# Lakat: An open and permissionless architecture for continuous integration academic publishing

###### Abstract

In this paper, we present three contributions to the field of academic publishing. Firstly, we introduce Lakat, a novel base layer for a publishing system that fosters collaboration, pluralism and permissionless participation. Drawing inspiration from the philosophy of Imre Lakatos, Lakat is designed as a peer-to-peer process- and conflict-oriented system that supports continuous integration across multiple branches. This architecture provides a robust foundation for the integration of existing reputation systems and incentive structures or the development of new ones. Secondly, we propose a new consensus mechanism, called Proof of Review, which ensures the integrity and quality of the content while promoting active participation from the community. Lastly, we present Lignification, a new finality gadget specifically designed for branched, permissionless systems. Lignification provides a deterministic way to find the consensual state in these systems, ensuring the system's robustness and reliability in handling complex scenarios where multiple contributors may be proposing changes simultaneously. Together, these contributions aim to provide a convenient starting point to tackle some of the issues in traditional paper-formatted publishing of research output. By prioritizing collaboration, process-orientation, and pluralism, Lakat aims to improve the way research is conducted and disseminated and ultimately hopes to contribute to a healthier and more productive academic culture.

###### Contents

* 1 Introduction
  * 1.1 Related Work
  * 1.2 Imre Lakatos
  * 1.3 Overview
* 2 Data Structure
  * 2.1 Bucket
  * 2.2 Branch
  * 2.3 Submit
  * 2.4 Data-Trie
  * 2.5 Storage
  * 2.6 Branch-Requests
* 3 Participants
  * 3.1 User Identity
  * 3.2 Contributor
  * 3.3 Contribution
* 4 Protocol
  * 4.1 Networking
  * 4.2 Local Consensus
  * 4.3 Feature Branches
  * 4.4 Proof of Review (PoR)
  * 4.5 Broadcasting and Lignification
  * 4.6 Branch Config Changes
  * 4.7 Branch Operations
* 5 Integration and Adaptability
  * 5.1 Onramping
  * 5.2 Interfaces
* 6 Conclusion

## 1 Introduction

With the vast amount of data structures, of query and storage systems, of versioning and networking tools and of large language models, one may engineer publishing systems by posing certain requirements that give rise to a different and arguably more collaborative, efficient and healthy academic culture. This approach can be contrasted with an incremental adjustment of the existing system, which in many quantitative sciences is called the greedy approach. We propose an architecture leveraging the available technology that we call _Lakat_. Lakat is a distributed database with a local peer-review consensus layer. The system serves as a permissionless continuous integration solution for collaborative research. One may conceptually think of Lakat as a peer-to-peer version of Wikipedia with a branch structure similar to git and a peer review system. Our starting point is a set of eight core requirements that we posit for a publishing system:

* **1. Open** - Content and code base1 should be accessible freely2.
* **2. Permissionless** - No one should be barred from contributing.
* **3. Pluralistic** - No monopoly on research opinion.3
* **4. Process-oriented** - Emphasizing the process rather than an outcome.
* **5. Conflict-oriented** - Making conflicts a feature rather than a bug.
* **6. Curatable** - Making the presentation and organization of the content part of the output process.
* **7. Sustainable** - Data and compute resources should be low and reuse of fragments encouraged.
* **8. AI friendly** - Allowing all kinds of entities to contribute: individuals, groups or AI agents.

Footnote 1: Here we refer to the code base of any client implementation.

Footnote 2: Internet service providers are not free. So we refer here to additional charges.

Footnote 3: This is not necessarily the same as "No single source of truth".

The research paper, as the gold standard of publicizing research output, poses several threats to the overall scientific endeavor. It is a relic from the times when the printing press was the latest innovation and when the channels for communication had a large latency. We mention six issues associated with paper-formatted research output that are addressed by Lakat:

* It incentivizes the creators of scientific output to withhold preliminary results or results that are either not significant or at odds with a hypothesis. Even if there are significant results4, they may not meet the eye or mind of other creators or consumers of scientific output until the entire paper has been published. It may then even take on the order of tens of months for the paper-formatted research output to be accessible, which is particularly problematic for impactful research. Thus the process of building on top of previous work and of critical engagement is hindered and in the best case deferred.
* It incentivizes creators to wrap minor changes into the guise of an entire research paper, reusing a possibly templated introduction over and over again.
* The output is but a polished snapshot of a process, an inorganic blob "data structure". The process of reaching a result or of not reaching it, as well as the review process, are generally not part of the output and not naturally representable in the rigid paper format. The process often doesn't stop with the paper publication but continues thereafter, and it requires awkward hacks in the form of addenda, corrections or new paper-formatted versions to account for changes.
* It creates rigid and isolated islands of content, disregarding potentially conflicting or agreeing intersections. Papers address these intersections with citations that are often placed in an unspecific context, and tend to reference an entire paper or body of work rather than a particular part. These intersections between different scientific outputs are not only constrained to citations: entire paragraphs such as introduction or method sections are often simply replicated from previous papers. Thus, making conflicting or agreeing intersections a manifest part of the data structure can overcome the hacky fixes and shortcomings of the paper format.
* The question of who contributed how much to a research output often causes conflict among researchers. A process-oriented publication system facilitates the tracking of contributions and may reduce the cases of unjust allocation of contributorship. In paper-formatted publications contributorship is proxied by a negotiated ordered list of co-authors, which cannot capture contributions and inevitably leads to unjust allocations.
* The effective barring of potential contributors in paper-formatted research does not increase the level of scrutiny, creativity, or quality of the output. On the contrary, maybe another set of eyes can add insights or expand on the results.

Footnote 4: These may be perceived as significant or later recognized as significant by the community.
Why should the self-declared co-authors be in the best position to conduct the research? The fear of the theft of ideas is mostly inherent to bulk publications and less to process-based research output.

Apart from the abovementioned problems with paper-formatted research, Lakat may also be instrumental in solving other problems with scientific publishing, such as the exploitation of scientists regarding their review services and production of output. Even though Lakat does not directly address this, it does provide a base layer upon which a system of incentives can be built.

### Related Work

Various solutions have been proposed to improve the process of science publishing with respect to transparency, review, ownership, decentralization, collaboration, openness, and fairness. We exhibit proposed solutions and their benefits or shortcomings. Since Lakat sits at the intersection of branchable version control (c.f. Git [1, 2]), large collaborative encyclopedias (c.f. Wikipedia [3]) and peer-to-peer (c.f. human society [4]) protocols (c.f. Urbit [5] or file sharing protocols [6, 7]), we will focus on solutions in that general triangle.

The platform Scholarpedia was launched in 2006 [8]. It is a wiki-based format with a peer review layer, where institutional affiliation is required for contribution. It thus integrates a scholarly component into Wikipedia. The requirement of affiliation is also one of the drawbacks of this solution, as it bars some potential contributors. Furthermore, the authors of an article are either chosen or elected. This, to our mind, has two further problems. First, it raises the question of who elects those that elect. Second, the collaborative dimension of Wikipedia is lost. In contrast, Lakat - like Wikipedia - retains the permissionless character, so that no one is barred from editing or from proposing pull requests to change content (see Section 4 for details).

In 2007 the Citizendium fork of the English Wikipedia launched [9] with the objective to add a quality assurance layer on top of Wikipedia. The concept of approved articles played an important role. However, who approves the articles? What happens to subsequent changes? Would they have to be approved again, or does the approval yield a sort of finality for the manuscript?

Another wiki-formatted solution is the Manubot platform [10], which allows for the collaborative preparation of research articles that can then be sent to peer-reviewed journals. However, Manubot is not a publication platform itself but aids the collaborative process of reaching a traditional publication.

There are also many attempts to put part of the existing publishing logic onto a cryptographically secure distributed ledger. Everipedia [11] was a fork of Wikipedia. They also tried to build a quality assurance system on top of it using reputation tokens that can be staked and potentially lost in the process of edits, thus leveraging distributed ledger technology. So instead of tokenizing ownership of edits, they tokenized reputation. Those tokens were deployed on a blockchain (EOS and later Polygon). The project has been archived. Orvium [12], on the other hand, aims to put submission of manuscripts, revisions and publications onto a blockchain, or at least have them stored using some decentralized storage provider. Unfortunately it is not evident who stores what, how and where. There is, for instance, not much information about whether they are creating a dedicated blockchain or use an existing one.
The Scienceroot project [13] was launched in 2018 with the intention to create an on-chain economy around the publishing system using a reward token called Science Token (ST), which is deployed on the Waves blockchain. They also created or attempted to create an academic journal that ties into their economy.

Pluto [14] is a blockchain-based platform for academic publishing that supports peer review, open access and micropayments. ARTiFACTS [15] is a project that aims to create a blockchain-based platform for scholarly research that enables researchers to create a permanent, time-stamped record of the various items that support their research, such as data sets, images, figures, etc. PubChain [16] is a project that aims to create a decentralized open-access publication platform that combines a funding platform with decentralized publishing. Like Scienceroot, it has its own token, coincidentally also called Science Token (ST), which is used to exchange funds, store articles on IPFS and store their content identifiers on the blockchain. They also plan to integrate crowdfunding through their marketplace. TimedChain [17] is a project that aims to create a blockchain-based editorial management system that organizes manuscripts by publishers, authors, readers and other third parties. EUREKA [18] is a project that aimed to create a blockchain-based peer-to-peer scientific data publishing platform with peer review, open access and micropayments. It was developed by the team behind ScienceMatters, an existing open access publisher that conducts triple-blind peer review. EUREKA also aimed to provide a blockchain-based rating and review system that allows readers to evaluate the impact of published articles. It is, however, no longer maintained. The Open Science company DeSci Labs is developing a project called DeSci Nodes [19]. Similar to Scienceroot, DeSci Nodes is a tool for creating research objects, which are a type of verifiable scientific publication that combines manuscripts, code, data, and more into a coherent unit of knowledge.

The 2018 "nature index" article [20] entitled "Could Blockchain Unblock Science?" focusses on the question of how blockchain could be used to improve the process of current science publishing. Brock also mentions that data edits could be made permanently visible, which alludes to the idea of securing continuous editing in an immutable and consensual manner. He also developed and deployed Frankl, an open source blockchain-based publishing platform [21]. Further insights into the landscape of blockchain-based solutions for scientific publishing are provided in [22]. Apart from providing an overview of the landscape until 2019, they propose a governance framework for scientific publishing based on a consortium blockchain model. Some of those solutions aim to make the reward structure more open and introduce on-chain reward systems [23].

When developing solutions for academic publishing, blockchain technology seems appealing because it yields effectively immutable, globally agreed data in an open and transparent way without the need for a single source of trust. However, one must not fall into the fallacy of searching for nails for a hammer. At the heart of the blockchain paradigm lies the idea of a consensus about a global unique truth. This is a very useful technology for fiat (e.g. printed money or cryptocurrency), which exists through a global consensus. However, research output is not a fiat currency.
It is subject to conflicting theories, opposing views and possibly irreconcilable results. All of those drive the continuous process that is science. One may build solutions on top of a blockchain to allow for potentially conflictual editing, but this is not what it was designed for. Instead we suggest making Lakat a base layer that satisfies the requirements for a publication system by design. A comprehensive review of decentralized consensus mechanisms in blockchain networks is provided by Wang et al. [24], covering state-of-the-art consensus protocols from the perspectives of both distributed consensus system design and incentive mechanism design.

There are also some solutions that attempt to decentralize version control systems or anchor them in a blockchain. One of the most prominent examples of a decentralized, branch-capable version control system is git-ssb, which is based on the Secure Scuttlebutt protocol and allows for distributed version control without a central authority [25]. The Radicle protocol is another example: a peer-to-peer network for code collaboration that extends git with a networking protocol called Radicle Link [26]. The project is governed through the RAD token, which is deployed on the Ethereum blockchain [27]. Another project that explores ways to decentralize the storage of versioned data is Ceramic, a decentralized network for managing mutable information based on the idea of streams, which are append-only logs of JSON objects. The streams are anchored in a blockchain, which is used as a global ordering mechanism, and stored in a decentralized storage network [28].

With the onset of large language models (LLMs) and AI agents that are capable of statistically extrapolating from a vast set of existing resources, we are entering an era where some portions of the scientific research process can and should be outsourced to those models. AI agents should be able to take part in the process of scientific discovery. The impact and power of AI-aided or AI-generated research can be seen in multiple ways. For instance, in the field of health and drug research, AI has helped improve the accuracy and efficiency of imaging [29], the interpretation of large datasets [30, 31, 32] or the discovery of drugs [33]. Moreover, the emergence of large language models has led to the development of autonomous scientific research capabilities, where these models can generate new hypotheses, design experiments to test these hypotheses, and interpret the results to draw conclusions, thereby playing a significant role in the scientific discovery process [34].

### Imre Lakatos

The entire architecture of Lakat is heavily inspired by concepts developed by the Hungarian philosopher Imre Lakatos. In an attempt to contribute to the demarcation problem [35, 36, 37, 38] that was prominent in the field of philosophy of science during Lakatos' times, he developed the concept of a _research programme_ [39, 40, 41], also called a _Lakatosian research programme_, to avoid confusion with the colloquial use of the former term. The demarcation problem asks about the criteria that distinguish science from 'pseudo-science'. Lakatos develops his theory on the grounds of a process-oriented account of science.
So rather than saying that this or that monolithic bulk of work or set of statements is or is not scientific, he posits that this distinction can only be made on the grounds of processes of theoretical amendments to an existing corpus of statements. He distinguishes between progressive and degenerative amendments depending on whether they strengthen the programme's predictive power. For Lakatos a research programme consists of a _hard core_, which is a set of constituting assumptions, axioms as it were, that capture the essence of a research endeavor, and a _protective belt_ of auxiliary hypotheses. The key ideas that the Lakat architecture takes from the concept of Lakatosian research programmes are threefold: 1) the pluralism of various research undertakings, 2) the process-orientation, and 3) the distinction between a core and a protective belt. At the heart of these foundational concepts lies the idea that science lives through arguments, differences and discourse. The input of Lakatosian concepts into Lakat can then be described as follows: A research programme corresponds to a branch or a set of branches to which researchers contribute changes or amendments. There is no single master branch; rather, every research programme has its own branch or set of branches. Conflicts with other branches or even within the same branch are an important aspect of Lakat and can be the source of progress (c.f. progressive amendments in Lakatosian research programmes). A programme can maintain a set of feature branches that support the core branch. These side branches behave like a protective belt.

### Overview

With Lakat, we propose a manifestly pluralistic, process-oriented, and conflict-oriented architecture for the continuous integration of publications, with a primary use case of research publications. In this way Lakat becomes a living document. At its core, the architecture consists of a linked data structure that resembles a DAG, where the main objects are branches. This data structure facilitates collaborative work in much the same way as git does. Branches may be thought of as the analogue of a journal in traditional publishing. The role of journal editors is covered largely by branch contributors. Branches are chains of blocks that contain submissions. The addition of another block happens via a proof of peer review, where the peers are the contributors to that branch. In that sense branches resemble blockchains with blocks consisting of submitted changes instead of transactions. As a consensus mechanism we discuss a solution that combines a proof-of-review at branch level, a local (i.e. involving just branch contributors) consensus rather than a global one, with a new finality gadget called Lignification. The review process is open. In a first version of Lakat the identities of the reviewers and the creators of the reviewed content are disclosed; however, we wish to migrate to a weak5 form of a double-blind protocol leveraging zero-knowledge proofs, where each party may reveal their identity. Data is content-addressable and conforms to the IPLD CID format [42]. Storage is handled by a networking component in Lakat, which delegates the bulk of data storage to a selection of other storage providers, including decentralized storage networks such as IPFS [43, 7, 44], storj, and others. This improves resilience and longevity.

Footnote 5: Weak is to be understood in the sense that both parties may choose to reveal their identity.
We emphasize that the main contributions of this paper are the high-level ideas for an architecture of a pluralistic, process- and conflict-oriented peer-to-peer publishing system. We put forth one possible data structure and protocol. However, we also want to make it clear that these specifications are work in progress. In the following, we discuss the individual elements of the proposed system and highlight their interaction. We start with the data structure in Section 2, which is the core of the system. There we introduce the objects of a data bucket, a branch, a 'submit' as well as storage-related aspects such as the data-trie and the database. We also discuss non-persisted parts of the data structure, namely the branch requests. In Section 3 we discuss the participants of the system and in particular the concept of a branch contributor. Then in Section 4 we discuss the protocol that handles the broadcasting, the consensus mechanism via a proof of review, a new finality gadget called Lignification and also how branches can be created, modified or operated on in this protocol.

## 2 Data Structure

### Bucket

The most elementary data object is the _bucket_. Each and every submitted item is submitted in a bucket: datasets, paragraphs, images and formulae are contained in buckets. These are examples of _atomic buckets_, expressing the fact that they are the building blocks of the system. Instead of a folder structure, we solve the containment relation through designated buckets that we call _molecular buckets_ (like _tree_ nodes in git). The data part of those buckets contains merely an arrangement of atomic buckets. One may think of them as the analogue of an article, a book or some other curated content.

Every bucket contains six entries: a _schema_, a _creatorRoot_, a _parent_, a _dataRoot_, a _refsRoot_ and a _timestamp_. See Figure 1 for an illustration. Here and henceforth the word root refers to the root of a Merkle tree. We go through the entries in turn. The _schema_ contains details about the format of the data. For instance, we have already mentioned that the data in the molecular buckets is formatted as an arrangement6. The _creatorRoot_ points to information about the creator of this bucket. In _Lakat_ contributors have one or many public-private keys and contributions are signed off with them (see Subsection 3.1). We wish to transition to a system where contributors only submit proofs of their contribution without revealing their identity (public keys etc.). The _parent_ is the _content identifier_ of the parent bucket. For genesis buckets that would be 0. The _dataRoot_ is a content identifier of the data contained in the bucket. In future versions the schema could be absorbed into the dataRoot using the IPLD CID format. This would require a Lakat-specific codec. The _refsRoot_ points to all references made to other buckets within the data. This is necessary, since references to other buckets might be obscured inside the data encoding. This is an analogue of a list of citations. The _timestamp_ records the time of inclusion of the bucket into the branch. It is important to note that we use Ethereum [27] and some Layer 2 block hashes as time stamps in our first version, since the local consensus is too weak to ensure that all participants are truthful about the time otherwise. Anticipating block hashes is close to impossible. One cannot change the data inside the bucket. One would have to create a new bucket that points to the original bucket via its parent entry.

Footnote 6: The purposefully vague formulation of an 'arrangement' is due to the intention to keep that format flexible. One may think of this as an ordered list, but one might also consider further directives or clustering of content in a directed hypergraph.

Figure 1: The most elementary type of data container is the bucket. It contains only immutable entries (orange), such as the _schema_, the _creatorRoot_, the _parent_, the _dataRoot_, the _refsRoot_ and the _timestamp_.
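As an illustrative sketch only (the field names follow the description above, while the JSON encoding and sha256 hash stand in for the actual, yet unspecified, IPLD-based serialization), a bucket can be modelled as an immutable, content-addressed record:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)  # buckets are immutable once created
class Bucket:
    schema: str        # format of the contained data
    creator_root: str  # Merkle root of creator information
    parent: str        # CID of the parent bucket ("0" for genesis buckets)
    data_root: str     # content identifier of the bucket data
    refs_root: str     # Merkle root of references to other buckets
    timestamp: str     # e.g. an Ethereum block hash used as a proof of time

    def cid(self) -> str:
        # Content identifier of the bucket. A real implementation would
        # emit an IPLD CID; sha256 over a JSON encoding stands in here.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = Bucket("atomic/markdown", "cr...", "0", "dr...", "rr...", "0xabc...")
print(genesis.cid())
```

Note how "editing" is impossible in this model: any change produces a new record with a new CID, whose `parent` field points back to the original bucket.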
### Branch

The central object type of Lakat is the _branch_. See Figure 2 for an illustration. Branches represent journals or research communities. They share some properties with _git_ branches and some with blockchains. Every branch contains an id, called _branchID_, that uniquely identifies it. The immutable entries of a branch and the initial head are hashed to produce the branch identifier. The branch also points to a parent branch from which it originated. This entry may however be empty for a certain type of branch, namely the sprout (see below). The corresponding entry is called _parentBranch_. This construction turns the set of branches into a linked data structure. In _git_ a branch is simply a pointer to the head commit. In blockchains one often encounters ids attached to the chain (a so-called _chainid_) to avoid issues when the consensus mechanism yields two different chains. At creation time the branch receives a _timestamp_. The previous entries are all immutable. There are then four mutable entries, namely _stableHead_, the two consensus entries _sprouts_ and _sproutSelection_, and finally _branchToken_.

The stable head is a pointer to the latest stable submit. A _submit_ is a set of changes (see Subsection 2.3 on submits). One may think of it as the Lakat version of an article submission. It has similarities to a commit in git - not only phonetically - but also to a block in _ethereum_. The addition of new submits works through a consensus mechanism called _proof of review (PoR)_ and _lignification_ (see Subsection 4.5). Also in this respect the branch behaves a lot like a blockchain. Every branch has its own token, the _branchToken_. It allows funding bodies to fund a particular branch. Token logic is not handled by _Lakat_. Instead this entry essentially points to proofs of transactions on a blockchain where the respective token lives. The purpose of the integration of tokens is to create an incentive layer on top of Lakat, because (unfortunately) _humans_ as well as _AI_ do not work without incentives.

The branch also carries configuration metadata, stored in _branchConfig_. It points to information about the branch type, whether merge conflicts are accepted (see Subsection 2.3), the consensus rules and the proofs that are accepted, such as proofs of token transfer or proofs of time. We use timestamps from the latest blocks on various blockchains as proofs of time (see [45] and also opentimes.org). The branchConfig's mutability is more constrained than that of the stable head (see Subsection 4.6). Finally, we envision a way to extend the config schema. This would be done by an additional entry that points to a _schema bucket_, where the schema for the config is defined. An empty entry would signify the use of the default schema.
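A minimal sketch of the branch object under the same stand-in hashing as above (the exact set of hashed fields and the encoding are simplified assumptions based on the description, not the normative specification):

```python
import hashlib
import json
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Branch:
    # immutable entries, hashed together with the initial head into the branchID
    parent_branch: Optional[str]        # empty/None for seedlings and sprouts
    timestamp: str                      # proof of time at branch creation
    # mutable entries
    stable_head: str = "0"              # CID of the latest stable submit
    sprouts: List[str] = field(default_factory=list)
    sprout_selection: List[str] = field(default_factory=list)
    branch_token: Optional[str] = None  # pointer to on-chain token proofs
    branch_config: str = "default"      # CID of type, consensus rules, accepted proofs

    def branch_id(self, initial_head: str) -> str:
        # The branchID binds the immutable entries and the initial head.
        payload = json.dumps(
            [self.parent_branch, self.timestamp, initial_head]).encode()
        return hashlib.sha256(payload).hexdigest()
```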
There are three types of branches: _proper branches_, _sprouts_ and _twigs_. The branch type is stored in the branch config and can be changed under certain conditions. _Proper branches_ can only be modified through the local consensus mechanism (see Subsection 4.2). They point to a set of sprouts, which helps with the process of producing stable heads in the proper branch. Proper branches cannot be changed to any other branch type.

A _sprout_ is a short-lived branch that is solely used to grow proper branches. Sprouts behave a bit like ommers in the ethereum protocol in the sense that they are contestants to produce the next stable head. They have an empty parent branch entry. Sprout branches point to an empty set of sprouts themselves. The sproutSelection contains all the sprouts that are rooted in it. The branchToken entry is empty. The stableHead is immutable. There is only one way to modify the sprout, namely indirectly when it turns into a proper branch during the lignification process (see Subsection 4.5). Once a sprout turns into a proper branch the parent branch entry is filled with the id of the branch that it is rooted in.

Finally, a _twig_ can be thought of as a little feature branch. Twigs can be modified through submits by _contributors_ of the twig (see Subsection 3.2 for more information on contributors) or through merges. However, the process of merging into a twig does not need to go through the consensus mechanism of proper branches (see Subsection 4.2).

In this paragraph we merely introduce some nomenclature. We distinguish between _core_ and _belt_ branches, which correspond to _this_ and _other_ in git. These are not intrinsic properties of branches, but denote the role they play during a merge. Lakat only has one type of merge. The core branch will be updated and the belt branch will not (see Subsection 4.7 for information on merges). A branch may be a core with respect to one merge and a belt with respect to another merge. This terminology originates in the core-belt dichotomy of Lakatosian research programmes.

There is a further distinction that is purely conceptual and is not manifested in the technical specification, but in the nomenclature. We distinguish a _derived branch_ from a _seedling branch_ in that the seedling branch has a _singularity submit_ without a parent (see Subsection 2.3 for information on submits). A singularity submit corresponds to the genesis block in a blockchain. We invoke here a cosmological metaphor rather than a biblical one. The seedling branch has no parent branch and the corresponding entry points to zero. A derived branch on the other hand has a parent branch that it points to. We say that the derived branch is _rooted_ in the parent branch. The _root_ of a derived branch is the last submit in the submit history that is also in the history of the parent branch.

Figure 2: A schematic illustration of the branch object and its entries.

We also note that there are various levels at which Lakat can be viewed as a graph, going from high level to low level. At the level of the branches one can form a graph \(\mathcal{B}\), where a branch is a node and a directed link from one branch \(A\) to another branch \(B\) means that \(B\) is the parent of \(A\) or that \(B\) is merged into \(A\) (see Subsection 4.7 regarding merging). This directed graph is not necessarily acyclic, because \(A\) can be rooted in \(B\) and merge back into \(B\); however, if one excludes merges, it is. At the level of the submits, a graph \(\mathcal{S}\) can be created with the submits being nodes, and a link can be drawn from a submit \(q\) to \(p\) when \(p\) is the parent of \(q\). This yields a directed acyclic graph (DAG).
Finally, at the level of the data buckets there exists a graph structure \(\mathcal{D}\) induced by the parent reference inside the bucket. There is a graph homomorphism from \(\mathcal{S}\) to \(\mathcal{B}\), but not vice versa, and there are no homomorphisms between \(\mathcal{S}\) and \(\mathcal{D}\) or \(\mathcal{B}\) and \(\mathcal{D}\). The lack of a homomorphism between the submit structure and the data structure indicates that these are two separate layers.

The relation between the elementary bucket object and the higher-level branch object is not simply a many-to-one relation. Different branches may share some data buckets. In practice one would expect that most of the data inside a branch is shared with at least one other branch. See Figure 3 for an illustration of this relation.

### Submit

Submits bundle up changes to the data with some additional metadata. Every submit points to a previous submit, the _parent_ submit. There exist _singularity_ submits that have no parent. The parent entry of those submits is zero. Like in Git [1] or Ethereum [27], there is a field reserved for submit-specific data that we call _submitMessage_. The change of the data within the submit is subsumed in _trieRoot_, which is the root of the _DataMPTrie_, a Merkle-Patricia trie that references the data state of Lakat (see Section 2.4). The leaves of the trie are the data buckets. They have some resemblance to accounts in the ethereum state trie. Usually only a small part of the entire trie gets updated in a submit. Imagine the trie being all of wikipedia and a submit being just the creation of a new page or even just editing a page.

Even though the bucket identifier is immutable, it points to mutable entries. This is similar to ethereum, where the leaf nodes are immutable account addresses that point to mutable entries like the amount of ETH, the contract storage data or the account nonce. The mutable entries in the case of Lakat are made up of information that is attached by other users to the bucket. It is information that is not intrinsic to the bucket. This includes _socialRefs_, _reviews_, _tokens_, _bucketRefs_ and _storageProofs_. The socialRefs entry resolves to tokens of appreciation, such as thumbs up or down - the gold standard of social media user interaction. The reviews point to data buckets that contain a review or comments on the bucket in question. The tokens entry allows for the integration of tokens to data buckets. The bucketRefs are two collections of references to other buckets. The first collection is immutable and contains all those other buckets that are referenced inside the bucket data. The second collection is mutable and consists of all those molecular buckets that the atomic bucket is part of. This is a reverse registry that can be understood as a measure of how much a content has been reused. There is no analogue in classical publishing. StorageProofs are a ledger of timestamped proofs of storage for the bucket.

There are some submits with a specific structure. These are the _pull requests_ (see Subsection 4.4) and the merge submits (see Subsection 4.7). The pull request contains at least one context bucket, called the _review container_, that references all the subsequent reviews. It also leaves a trace of the pull request in the submitTrace. The merge submit contains all the data buckets of the belt branch and it points to the merged branch id in the submitTrace.
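Again as a rough sketch under the same assumptions as the earlier snippets, a submit carries little more than a pointer to its parent, a message, the new trie root and a trace field:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Submit:
    parent: str          # CID of the parent submit; "0" for singularity submits
    submit_message: str  # free-form metadata, like a commit message
    trie_root: str       # root of the DataMPTrie after applying the changes
    submit_trace: str    # e.g. the pull-request trace or the merged branch id

def is_singularity(submit: Submit) -> bool:
    # Singularity submits play the role of a genesis block.
    return submit.parent == "0"
```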
In Lakat conflicts are at the heart of the protocol. They are cherished as a source of progress, and this sets Lakat apart from conventional publishing systems. We provide a clear definition of a conflict. A _submit conflict_ with respect to a branch \(\mathfrak{B}\) is a set of three submits \(\pi\), \(s_{1}\) and \(s_{2}\), where \(\pi\) is the parent of both \(s_{1}\) and \(s_{2}\) and all three are included in \(\mathfrak{B}\). We denote this 4-tuple by \((\mathfrak{B},\pi,s_{1},s_{2})\). A submit that creates a submit conflict is called a _conflicted submit_ and a submit that does not create a submit conflict is called a _conflictless submit_. A _merge conflict_ is a submit conflict that arises from a merge submit. Depending on the branch configuration (see Subsection 2.2) merge submits may or may not bring about merge conflicts.

### Data-Trie

The data buckets as well as the mutable information attached to them can be looked up with the help of a Merkle-Patricia trie, called the _DataMPTrie_. This is cryptographically secure and very useful when resolving the information attached to buckets inside of an article. The keys that are stored in the trie are truncated versions of the content identifiers of the data buckets, and the values are the mutable entries attached to the buckets. To look up the bucket data itself one simply uses the content identifier of the bucket. Storage is handled separately (see Storage in Section 2.5).

We propose to use a modified Merkle-Patricia trie - very similar to the one used in Ethereum - with four types of nodes: null nodes, leaf nodes, extension nodes and branch nodes. The data at each node is serialized and hashed. The specifics of this encoding are yet to be specified. One may use any of the existing IPLD formats. The encoding should have the property that data lists with a lot of empty entries are serialized in a very compact way to save space. Many data items in Lakat have a lot of empty fields. A bucket without any interaction information is mostly empty fields. Twigs and sprouts have many empty fields as well. The leaf nodes (in the trie) are special in this respect, because the hashing uses a salt that equals the content identifier of the bucket. Why do we need a salt at all? When a data bucket is published it doesn't have any information attached to it, so without the salt all new data buckets would have the same hash, which is not desirable.
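The role of the salt can be shown in a few lines (sha256 again stands in for the unspecified node encoding; the function name is ours):

```python
import hashlib

def leaf_hash(bucket_cid: str, mutable_info: bytes) -> str:
    # Leaf nodes of the DataMPTrie are hashed with a salt equal to the
    # bucket's CID, so that freshly published buckets (with empty mutable
    # information) do not all collapse to the same hash.
    return hashlib.sha256(bucket_cid.encode() + mutable_info).hexdigest()

# Two new buckets with no attached interaction information still get
# distinct leaf hashes thanks to the CID salt:
assert leaf_hash("cid-a", b"") != leaf_hash("cid-b", b"")
```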
### Storage

The data is stored in key-value databases and is content-addressed in the sense that the key equals the CID of the data. We would like to use the IPLD standard for linked data [42]. A piece of data can be stored with multiple protocols. A contributor may also choose to store the information on their own machine, of course. The more branches point to a piece of data and the more subsequent submits rely on it, the more important the persistence of that data becomes. The idea is that the availability, the longevity and the redundancy of data will scale with its importance in a self-organized fashion. A branch with many contributors will make sure to have the storage well secured and also well distributed. A newly created branch on the other hand needs to broadcast its creation (see Subsection 2.6 for branch creation broadcasting) to allow for distributed storage and attract contributors to ensure decentralized persistence of its data. This has two advantages: 1) data that is pointed at by many branches is highly available and more redundant; 2) one cannot attack the system by creating lots and lots of branches.

Figure 3: A schematic illustration of the two main objects: the branch and the data buckets. A branch typically references multiple buckets and any bucket may be referenced by many branches.

To prove that a certain data bucket has been stored, i.e. pinned, that proof is attached to the mutable information of the bucket in the _storageProofs_ entry. There are a few more constraints regarding storage and pinning. It should be encouraged that every data bucket belongs to at least one molecular bucket, so that there are no buckets without a context. Thus, when a new data bucket is submitted, the submission won't be accepted unless it is present in at least one context bucket. When a new branch is created the data is initially just stored by the branch creator, but broadcasted through the network. Some nodes may pick up the data and store it as well. The branch creator may also choose to pin the data bucket in a certain storage system. Data that is close in the branch data structure is also close in the storage system. This is a very important feature of Lakat. It allows for a very efficient retrieval of data. The storage of data pertaining to a branch can be rewarded in branch tokens. This is not a feature of Lakat, but may be added on top of it to incentivize storage. There may also be a market for storage, where branch creators can buy storage space for their branch data. This is also not a feature of Lakat, but may be added on top of it.

### Branch-Requests

Every branch has its own staging area, where any type of branch interaction is waiting to be included. This is called the _Branch-Requests_. It is similar to the mempool in ethereum. Everyone participating in the branch (see Subsection 3.2) may receive branch interactions from users and broadcast them to the network. Here we refer to a _client_ as a piece of software that is yet to be written, which interacts with the network. A _light client_ is a client that is not capable of doing branch operations, but is capable of receiving and broadcasting branch requests. Inclusion of requests into the branch, however, requires more (see Subsection 4.4). There are eight channels in the branch requests (see Figure 4): _submit requests_, _pull requests_, _review commits_, _review submit requests_, _social transactions_, _token transactions_, _storage updates_ and _branch creation broadcasts_. The requests inside the Branch-Requests are not permanently stored as part of the protocol. Requests are kept for as long as any of the branch contributors keeps track of them. That is where the similarity to the mempool stems from. Every channel in the Branch-Requests has a certain capacity. In particular this aims to prevent one channel from cluttering the entire pool of requests, which might happen if the capacity were channel-independent. All requests or broadcasts are serialized. Submit requests contain serialized versions of the data buckets that are requested to be added to the branch. Pull requests are simply notifications from other branches that seek reviewers. Only by means of a pull request can contributors from the target branch be allowed to make modifications to the requesting branch (see Proof of Review 4.4).

Figure 4: The eight channels of the branch requests. Branch requests are the staging area of branches. Each channel has a limited capacity.
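A sketch of the staging area with per-channel capacities (the concrete capacity value and the eviction policy are illustrative assumptions; the text only requires that each channel is bounded):

```python
from collections import deque

CHANNELS = [
    "submit_requests", "pull_requests", "review_commits",
    "review_submit_requests", "social_transactions",
    "token_transactions", "storage_updates", "branch_creation_broadcasts",
]

class BranchRequests:
    """Staging area of a branch: one bounded queue per channel, so that no
    single request type can clutter the entire pool of requests."""

    def __init__(self, capacity: int = 256):
        self.channels = {name: deque(maxlen=capacity) for name in CHANNELS}

    def push(self, channel: str, serialized_request: bytes) -> None:
        # When a channel is full, the oldest request is dropped; requests
        # are not persisted by the protocol anyway.
        self.channels[channel].append(serialized_request)
```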
## 3 Participants

### User Identity

We propose to use an identity management system that ties in with some of the existing decentralized identity schemes and allows for the integration of multiple signing methods. For the first version of Lakat we propose to use 3id as the identity management system. 3id is a W3C-compliant decentralized identity management system that allows for the integration of multiple signing keys. The identifier for 3id within the did system is did:3. It relies on a mutable document type in the ceramic network, called a stream. In the future we would also like to reduce identity to the ability to prove the submission of content without exposing further information about the identity, using zero-knowledge technology without a trusted setup. This would allow for a double-blind review system. In order to publish content or send messages into the network a user needs to have an identity, which at this point is a did:3 identity registered in the ceramic network. The private keys in the did are used to sign messages and to prove authorship of content. The keys are also used to sign messages that are used to propose new states of a branch.

### Contributor

Every branch has _contributors_, or rather contributors have branches. A contributor is an account that can prove to have contributed to a given branch. There are four types of contributors for any given branch: _content contributors_, _review contributors_, _token contributors_ and _storage contributors_. A content contributor can prove to have submitted to the branch. A review contributor is someone who can prove to have pushed reviews to the branch (see Subsection 4.4 for information on proof of review). A token contributor is someone who can prove to have deposited funds into the branch. A storage contributor is someone who can prove to store data of that branch. Being a contributor means that one has to prove one's contribution for the submits from the root submit of the branch up to the current stable head.

How does the set of contributors change during a merge? What is the relation between the contributors of two branches before the merge and after? When the belt branch is merged into the core following a pull request, the new set of contributors is simply the union of the contributor sets of the two branches (see Subsection 2.2 for the terms core and belt). That holds for all contributor types. When there is no pull request preceding the merge, the contributors of this branch are unaltered. The main idea behind the concept of contributors is derived from the mutability, governance and autonomy of branches. Branches can only be modified by their contributors. This attempts to preempt attacks on branches.

### Contribution

A contribution is any type of interaction with the branch. There are four types of contributions: _submit_, _review_, _token_ and _storage_. A submit is a contribution that adds data to the branch. A review submit is a contribution that adds a review to a branch. A token submit is a contribution that adds a proof of a token creation, token unit or token transfer to the branch. A storage contribution is a proof about the storage of data contained in the branch, i.e. pointed at by submits of the branch. A contribution is always associated with a branch and a contributor. In the first minimal viable version of Lakat, we are planning to use zk-STARKs as a proof system. This is due to the fact that zk-STARKs do not require a trusted setup. We use Cairo programmes to generate proofs and point to their verification. The proof of contribution is a hash of the zero-knowledge proof of the contribution and its verification.

When a branch request is sent into the network it is routed, using the Kademlia protocol [46], to the contributors of the corresponding branch. The backlog of requests is shared and continuously broadcast and updated by storage and content contributors using the libp2p library [47]. If the request has a payload that ought to modify the branch state, the receiving node checks the proof of contribution. If the proof is valid the node adds the contribution to the branch. If the proof is invalid the branch rejects the contribution. If the contribution is a submit and the submit is not valid it cannot be included in the branch.
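The request-handling flow just described can be summarized in a short Python sketch (`verify_proof`, `validates` and `include` are hypothetical stand-ins for the zk-STARK verification and the branch logic, not actual API names):

```python
def handle_branch_request(branch, request, verify_proof):
    # 1) The request has already been routed to a contributor of the
    #    branch via Kademlia; first check the proof of contribution.
    if not verify_proof(request.proof, request.contributor):
        return "rejected: invalid proof of contribution"
    # 2) Validate the payload itself, e.g. a submit must be well-formed.
    if request.kind == "submit" and not branch.validates(request.payload):
        return "rejected: invalid submit"
    # 3) Only then is the contribution added to the branch state.
    branch.include(request)
    return "included"
```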
### Contribution

A contribution is any type of interaction with the branch. There are four types of contributions: _submit_, _review_, _token_ and _storage_. A submit is a contribution that adds data to the branch. A review submit is a contribution that adds a review to a branch. A token submit is a contribution that adds a proof of a token creation, token unit or token transfer to the branch. A storage contribution is a proof about the storage of data contained in the branch, i.e. data pointed at by submits of the branch. A contribution is always associated with a branch and a contributor. In the first minimal viable version of Lakat, we are planning to use zk-STARKs as a proof system, because zk-STARKs do not require a trusted setup. We use Cairo programs to generate proofs and point to their verification. The proof of contribution is a hash of the zero-knowledge proof of the contribution and its verification.

When a branch request is sent into the network, it is routed to the contributors of the corresponding branch using the Kademlia protocol [46]. The backlog of requests is shared and continuously broadcasted and updated by storage and content contributors using the libp2p library [47]. If the request has a payload that ought to modify the branch state, the receiving node checks the proof of contribution. If the proof is valid, the node adds the contribution to the branch. If the proof is invalid, the branch rejects the contribution. If the contribution is a submit and the submit is not valid, it cannot be included in the branch.

## 4 Protocol

In this section we describe the Lakat protocol. We start with a high-level overview of the protocol and then go into the details of the individual components. Lakat is a shared key-value database of branches and data buckets, together with a peer-to-peer protocol that governs the modification of this database. The modifications happen through submits to branches. The protocol needs to cover five functions:

1) Define a mechanism to construct new contributions (see Footnote 7).
2) Broadcast information about requests and new branch modifications through the network.
3) Check the validity of branch modifications.
4) Define a strategy to finalize the state of a branch.
5) Incentivize contributors to propose modifications.

Footnote 7: This is called block proposal for blockchains. Examples include proof-of-work or proof-of-stake.

Lakat proposes a local consensus mechanism that relies on the notion of branch contributors. In principle Lakat could be used with various consensus mechanisms at the branch level, such as proof-of-work or proof-of-stake. However, we propose a new one that we consider more suitable for academic publishing. This mechanism combines three concepts: 1) the distinction between _feature and production_ branches, 2) a _proof-of-review_ mechanism that is used to propose new states of the production branch, and 3) a finality mechanism that is used to finalize the head of a branch, which we call _lignification_. The incentive mechanism is not built into Lakat, but may be added on top of it through the token handling at the level of the branch. Even in the current publishing business the incentives are outsourced to reputation, job promises and in some cases mere scientific curiosity. If anything, there is an anti-incentive to publish. In the following we describe the individual components of the protocol in detail.

### Networking

One of Lakat's components is an asynchronous networking protocol, where peers can enter and leave at any time. The state of the individual processes of each peer is communicated and updated through a gossip protocol. The gossip protocol is used to broadcast requests and branch modifications to the network. We use the Kademlia DHT for this purpose. In Lakat the gossiping network is used to store the information state of the individual peers. This includes the branch requests (see Subsection 2.6) and the high-level information about the states of the branches that the peer keeps track of. This high-level information consists of the branchId, the parentBranch, the branchConfig (branchType, acceptConflicts, acceptedProofs, consensusRoot), the stableHead (parent submit, submitMessage, trieRoot, submitTrace), the sprouts, the sproutSelection, the branchToken and a timestamp (see Subsection 2.2).
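For illustration, this gossiped high-level branch information could be held in a plain record like the following sketch. Field names follow the enumeration above; the concrete types are assumptions of ours, not part of the specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BranchConfig:
    branch_type: str         # e.g. "proper", "twig" or "sprout"
    accept_conflicts: bool
    accepted_proofs: tuple   # e.g. ("storage", "contributorship", "token")
    consensus_root: bytes

@dataclass(frozen=True)
class StableHead:
    parent_submit: bytes     # hash of the parent submit
    submit_message: str
    trie_root: bytes         # Merkle root of the data trie
    submit_trace: bytes      # pointer to the (possibly outsourced) trace

@dataclass(frozen=True)
class BranchHeadState:
    branch_id: bytes
    parent_branch: bytes
    config: BranchConfig
    stable_head: StableHead
    sprouts: tuple           # branchIds of upstream sprouts
    sprout_selection: tuple  # immediate offspring sprouts
    branch_token: bytes
    timestamp: int
```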
What about the bulk of the data, namely the data trie with all the data buckets and their respective interaction information, and the trace of the stableHead? That is optionally outsourced to other protocols. The protocols are then part of the multihash. These could include, e.g., IPFS [43, 7], Filecoin or Urbit (see Footnote 8); alternatively, a peer may also choose to store the data locally in the hash table. In Kademlia, proximity of data is measured by the proximity of its content identifier. In future releases of Lakat we propose to tweak the proximity measure such that data stored on the same branch is also close in routing distance. We are planning to use the libp2p library as the basis of the networking protocol [47]. It is a modular networking stack that uses Kademlia.

Footnote 8: We also consider building on top of the Urbit OS using lineDB [48] as a key-value storage and networking solution.

### Local Consensus

Who decides which content will be added to a branch? In Lakat there is no global notion of what counts as science and what does not. There is only a local notion, the details of which are the subject of this section. A global consensus mechanism seems to be a good fit for a ledger that keeps track of values that are or ought to be globally agreed upon, i.e. for values that exist qua their global agreement. In contrast to money transactions, the global scope seems ill-fitted for the publication of research content. In our view this requires a local form of consensus. In the context of Lakat, the scope of the locality is at the branch level.

What does a branch-level scope mean? It means that the scope is constrained to the _contributors_ of a branch. Every branch has a history of submits and is either _rooted_ in some parent branch or is itself a _seedling_ (see Subsection 2.2 regarding roots and seedlings). In either case there is a set of contributors to every branch between its root and the current head. Any actor, human or AI, who can prove to have contributed content in any of the branch's submits counts as a contributor (see Subsection 3.2). Branch contributors form the basis of the consensus mechanism. We entrench this deep into the protocol by allowing only branch contributors to make changes to the branch that they are contributing to. This design choice also keeps potential attackers from pushing unwanted content to a branch.

Lakat does not make statements about what counts as science and what does not. What counts as a legitimate scientific contribution emerges purely through the local consensus. A contribution that is viewed as unfounded or unscientific on one branch might be viewed in the opposite way on another branch. In some sense this reflects a Feyerabendian approach [37]. It gives space for pluralism and allows for the organic selection of branches with possibly differing criteria on what counts as valid output. There is, however, a convergence in accepted method and output expected to emerge within a branch and also across branches that are close to each other. Branches that are close have branched off recently and possibly disagree more on technical grounds than on methodological grounds, or they are simply feature branches that are to be merged back into the main branch soon. There is an overall incentive to merge branches, derived from the persistence of the data and the value of the token. The local consensus paradigm governs amendments to branches, both to twigs and to proper branches.
Sprouts, on the other hand, are just auxiliary objects that cannot be modified directly and are thus not amenable to a consensus mechanism. The consensus paradigm for twigs simply states that any branch contributor can push submits to the twig, whereas merging into a twig requires a certain fraction of contributors to agree (see Subsection 2.2 for twigs and Subsection 4.3 for consensus on twigs). For proper branches the local consensus takes on a different form. It is divided into proof of review (see Subsection 4.4), and broadcasting and lignification (see Subsection 4.5).

### Feature Branches

Twigs are meant to be used for rather quick iteration. They behave like feature branches. Here is an example where twigs are expected to be used: if a contributor, human or AI, would like to add content to a target branch, say an article or some modifications or both, it creates a feature branch rooted in the target branch, which subsequently goes through the proof-of-review consensus mechanism (see Subsection 4.4). Typically the number of content contributors on a twig will be low; maybe a single author or a small group of authors, as is the case for article publications. In order not to compromise the momentum and the quick iteration, both content contributors and review contributors (if there are any) can push to the branch directly. Merges can also be pushed, but require a fraction of approvals from the content contributors. The fraction is determined in the config of the twig.

### Proof of Review (PoR)

Before a branch can be merged into a proper branch it needs to undergo a review. Table 1 summarizes the steps. To start the review process, an _issuing branch_ creates a pull request from a _requesting branch_ to a _target branch_. The pull request is a submit with two properties: First, it contains a newly created context data bucket, called the _review container_, that will hold all the forthcoming information of the review. The submit may of course contain other buckets besides that. Second, it leaves a trace of the information about the pull request in the pullRequests entry of the _submitTrace_, namely pointers to the review container, to the target branch and to the requesting branch.

In most cases the review happens on a twig, which acts as a feature branch. There the issuing branch and the requesting branch are identical, because the twig requests for itself to be pulled into the target branch. However, the issuing branch may also act as a proxy for the requesting branch. This is the case when a proper branch rather than a twig seeks to be merged into a target branch. Since this intention itself must pass through the consensus rule of that proper branch, one would have to create a twig and include therein the proxy pull request. Once that twig is successfully merged into the actual requesting branch by passing the consensus, the review process can begin on that proper branch for it to be mergeable into the target branch. We call a pull request _mature_ once it is included in the requesting branch. In the most common scenario, where the issuing and requesting branch coincide, maturity is immediate. Once a pull request becomes mature, a message is sent to the target branch, where its contributors are invited to review the requesting branch. The message is simply a reference to the pull request sent to the pullRequests channel of the target's _branchRequests_ (see Subsection 2.6 for branch requests).
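A hypothetical sketch of how a client might assemble a pull request with these two properties; the bucket and submit shapes, and the `append_submit` call, are placeholder assumptions rather than specified interfaces.

```python
import hashlib
import time

def _bucket_id(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def create_pull_request(issuing_branch, requesting_branch_id,
                        target_branch_id, extra_buckets=()):
    # Property 1: a freshly created context bucket, the review container,
    # that will hold all forthcoming information of the review.
    review_container = {
        "id": _bucket_id(b"review-container" + str(time.time()).encode()),
        "type": "context",
        "refs": [],  # review items will be referenced here later
    }
    # Property 2: a trace of the pull request in the pullRequests entry
    # of the submitTrace, pointing at the container and both branches.
    submit = {
        "buckets": [review_container, *extra_buckets],
        "submit_trace": {
            "pullRequests": [{
                "review_container": review_container["id"],
                "target_branch": target_branch_id,
                "requesting_branch": requesting_branch_id,
            }],
        },
    }
    issuing_branch.append_submit(submit)  # hypothetical client call
    return submit
```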
Any content contributor of the target branch who is not also contributing to the requesting branch can then become a _review contributor_ of the requesting branch. They must first publish a review commitment on the requesting branch. This makes them official contributors to that branch. It also helps to gauge general reviewer engagement prior to the actual review, which is helpful both for those who seek to merge and for those who seek to review. It also increases the accountability of the committing reviewer: failing to supply a review after a commitment could be penalized through social engagement. Committers publish their commitment in the reviewsTrace of the submitTrace. They cannot submit reviews without a prior commitment. Also, the identity of the reviewer is not public, in the sense that the commitment solely contains a zero-knowledge proof that the reviewer is a contributor to the target branch (see Subsection 3.2). Of course the reviewer may decide to reveal their identity, and this may or may not be in line with the configuration of the target branch.

Reviewers then push review submits to the requesting branch. These submits just contain a proof of contributorship in the target branch. A review submit consists of the following: a bucket with a review, called a _review item_. This bucket should reference all the data buckets that it has reviewed. In the respective interaction data (see Subsection 2.2) of all those reviewed buckets, a reference to the review item is stored within the reviews entry. Finally, the review item gets referenced in the review container of the pull request. Updating the review bucket, as with any bucket update, consists of creating a new review bucket that points to the old one through the parent entry.

The branchConfig of the target branch specifies the prerequisites for a merge. These consist of a minimum number of reviewers, a rule for acceptance and a minimum number of review rounds, which could be one by default. The rule of acceptance could be preset as well. For instance, one could reject requests when a certain fraction of reviewers reject, accept when there are no rejections, and specify some rule for the middle ground. Once all the requirements of the target branch are satisfied, the branch is ready to be merged (a mechanical check of these prerequisites is sketched below).

How about merging branches that do not seek to be merged? This can be the case when trying to merge the newest developments from a remote branch. This case is in fact already covered by the respective consensus mechanisms of twigs and proper branches. Merging into twigs requires a fraction of content contributors to agree (see Subsection 4.3). Merging into proper branches requires a pull request and subsequent reviews, so it is not possible to just merge other un-reviewed branches in the same way that one merges reviewed twigs or reviewed proper branches. Therefore, one would have to create a twig that merges the remote branch as a feature. It then requests to be merged and the merge undergoes a review.
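The promised sketch of the prerequisite check against the target's branchConfig follows; the review-container shape and the middle-ground acceptance rule are our own assumptions.

```python
def merge_ready(review_container: dict, branch_config: dict) -> bool:
    """Check the PoR prerequisites of the target branch.

    Each review item is assumed to carry a zero-knowledge reviewer
    proof, a round number and a verdict in {"accept", "reject"}."""
    reviews = review_container["reviews"]
    reviewers = {r["reviewer_proof"] for r in reviews}
    rounds = max((r["round"] for r in reviews), default=0)

    if len(reviewers) < branch_config["min_reviewers"]:
        return False
    if rounds < branch_config.get("min_review_rounds", 1):
        return False

    rejections = sum(r["verdict"] == "reject" for r in reviews)
    if rejections == 0:
        return True  # no rejections: accept
    # Example middle-ground rule: reject once a configured
    # fraction of the reviews are rejections.
    return rejections / len(reviews) < branch_config["max_reject_fraction"]
```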
### Broadcasting and Lignification

How are the reviewed pull requests bundled up and sequenced into a single proper branch? Why is the process important? How is the required attention bandwidth for this process kept to a minimum? In order to explain the Lakat answer to these questions we first contrast it with the case of blockchains: their transactions are bundled into blocks, which are then broadcasted across the network of nodes. When different blocks with the same parent are broadcasted, there will be conflicting versions of the blockchain state, which is undesirable for a single source of truth. In Ethereum, prior to the transition from proof-of-work to proof-of-stake, these alternative versions were called ommers and were mostly the result of latency in the broadcasting, but of course also of attacks or client-software issues. To make sure that a transaction has irrevocably been added to the blockchain, one has to wait for a few block confirmations.

In Lakat we solve this issue through a process that we call _lignification_. The idea is that amongst the potentially plentiful and conflicting versions of the new branch state, eventually a new head will be chosen. This head is then called the stable head. The versions are stored as short-lived branches, called sprouts. The _sprouts_ entry of the branch points to them.

Why is the process of choosing a successor to the stable head important? Here is an explanation: the branch is an object that is kept alive by an ecosystem of contributors. It could get hijacked by a group of bad actors who became branch contributors through a mal-reviewed pull request. In principle, if this happened, the contributors that disagreed with this malicious onboarding could bail out by creating a new branch. However, this new branch would have to grow the reputation of its contributors anew, seek new storage providers, have a new branch token and would generally have to start from scratch. It might not even be an attack, but a disagreement in the community that leads to a branching. Even though the process of finding a new stable head constitutes an important security measure for the branch, it should not create an overload of attention demanded from the target branch contributors. In most cases there will be no action required. But it is precisely in those rare cases that such a security measure becomes valuable. So one of the requirements for this process is that branch production continues unambiguously when there is no interference from the community of contributors. In the following we introduce the process of broadcasting and lignification in more detail.

| # | Step | Description |
|---|------|-------------|
| 1 | Create pull request | The issuing branch creates a request for the requesting branch (in most cases identical to the issuing branch) to be merged into the target branch. A review container is created. |
| 2 | Maturity of the pull request | The pull request is included in the requesting branch (in most cases immediate). |
| 3 | Commitment | A content contributor of the target branch publishes a review commitment to the requesting branch. That makes them a review contributor of the requesting branch. |
| 4 | Review | The review contributors create review submits that are referenced in the review bucket. |
| 5 | Completion | The number of review cycles and the coverage of the review meet the criteria of the target's branchConfig. The branch may be merged into the target. |

Table 1: Overview of the Proof-of-Review (PoR) process

#### Broadcasting

Henceforth we refer to our proper branch as _core_. It functions as a production branch. Every proposed new merge submit could either become the stable head of core or become the first submit of a new (disagreeing) branch that is rooted in core. We refer to any of those new branches collectively as the _belt_ branch.
Note that the _core_ branch may also become _belt_ for another branch. Merge submits carry in themselves the possibility of becoming the head of a new branch. Therefore we decided to "wrap" them into short-lived proto-branches, namely sprouts, whose respective heads are the merge submits.

The process of broadcasting is as follows. A content contributor of core, let's call her Alice, creates a merge submit, which is a special kind of submit (see Subsection 2.3). This submit is then wrapped into a sprout, which means that the head of the sprout is set to be the merge submit and the content contributors are set to be the union of Alice and all the contributors of the pull-requesting branch. Let's call this sprout \(S\). The branch information of the sprout becomes relevant if it eventually turns into a proper branch, a process which is discussed in the lignification part of this subsection. The parent of the merge submit is the head of a branch \(B\), which is either the core or any of the sprouts upstream of the core (see Footnote 10). Alice chooses \(B\), so she decides where to root the new sprout. If she decides to point to a branch that is already pointed at, there will be a conflict. The new sprout \(S\) - or rather its branchId - is then added to the sproutSelection entry of \(B\) and the sprouts entry of the core (which might coincide with \(B\)). The new state of core is then broadcasted to all contributors of core. Note that the new state of core might have received more updates than just the modification of the sprouts or sproutSelection entries. There can also be further modifications resulting from the lignification process (see the next part). The changes, i.e. the creation, of the sprout branch \(S\) are also broadcasted to its contributors.

Footnote 10: The sprouts entry of a proper branch keeps track of all the upstream sprouts, but depending on the last branch update may also contain outdated sprouts. In order to retrieve all upstream sprouts one may "walk" upstream using the sproutSelection entry, which only contains the immediate offspring sprouts of a given branch.
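The wrapping step can be sketched in a few lines. The dictionary shapes are the hypothetical ones used in the earlier sketches, and deriving the sprout's branchId by hashing the merge submit is our simplification.

```python
import hashlib
import json

def wrap_into_sprout(merge_submit: dict, creator: str,
                     requesting_branch: dict, core: dict, parent: dict) -> dict:
    """Wrap a merge submit into a short-lived sprout whose head is the
    merge submit. Its content contributors are the union of the creator
    and the contributors of the pull-requesting branch."""
    sprout_id = hashlib.sha256(
        json.dumps(merge_submit, sort_keys=True).encode()).hexdigest()
    sprout = {
        "branch_id": sprout_id,
        "head": merge_submit,
        "content_contributors":
            {creator} | set(requesting_branch["content_contributors"]),
        # parent is core itself or any sprout upstream of core
        "parent": parent["branch_id"],
    }
    # Register the sprout in the sproutSelection of its parent and in
    # the sprouts entry of core (the two branches may coincide).
    parent["sprout_selection"].append(sprout_id)
    core["sprouts"].append(sprout_id)
    return sprout
```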
#### Lignification

Once a given submit is the new stable head of core or of belt, it cannot be revoked. We say it is _lignified_. The conversion of a previously flexible object into a rigid amendment of a branch has similarities to the process of lignification in botany. The decision about the stable head is not made immediately; there is a period of time during which it can still be revoked and deferred. This time is called the _lignification time_. As mentioned above, the objects that we make decisions about are not the merge submits themselves, but the sprouts that contain them. If there is only one sprout available after the lignification time, then the decision is clear: the submit contained in that sprout becomes the new stable head of core and no action is required. However, there may be multiple sprout options. In this case, we propose to have a deterministic rule that singles out one sprout, and we suggest the possibility of vetoing the default deterministic choice. This minimizes the need to vote each time multiple options arise, but more importantly it reduces the attack vector for people to bring branch growth to a halt by proposing alternate - yet still reviewed - merge submits. Vetoing is possible throughout the lignification time. Any branch contributor may register a veto against any of the vying sprouts and therefore against the default sprout.

If a veto is registered, the sprout in question has a chance to provide the next stable head. Once a veto is registered, the content contributors can bring in their votes on the rivaling sprouts. After a period of time, called the _engagement time_, the winning sprout provides the new stable head and the other sprouts can turn into peripheral proper branches rooted in core.

As with blockchains, the state of Lakat does not change by itself, but only through transactions (see Branch Requests 2.6 and Submits 2.3). This means that only when a new submit is broadcasted can the state of a branch be updated. Furthermore, a branch may only be updated if it is the target of a transaction. If the transaction is targeting core, then peripheral branches cannot be updated, and vice versa. As a consequence, those ousted rival sprouts do not turn into branches of their own immediately, but only once a transaction targets them. Some of them may never turn into proper branches at all. Apart from the lignification time and the optional engagement time there is a time allowance for latency issues in broadcasting, called the _broadcastingBuffer_. This ensures that the timestamped vetoes or votes are broadcasted, and thus recorded, before the stable head is irrevocably fixed.

Due to the time between successive transactions it is quite possible that the state of the core, in particular its stable head, needs to be updated. Maybe the veto time or the voting time between vying sprouts has passed, or maybe there are no rivaling sprouts and the stable head simply needs to be advanced. The pseudo-code in the lignification Algorithm 1 outlines the iterative procedure that advances the stable head on each new transaction. It is worth noting that the sprouts entry and the sproutSelection entry of core also get updated, by pruning and replacement respectively. An illustration of the lignification process is shown in Figure 5. In practice, broadcasting and lignification can be automated by a script so that they require less cognitive bandwidth. The script would choose a content contributor of core at random, collect all the pull requests that meet the merge requirements of core, create one or more merge submits from them, go through the lignification process and broadcast the result. Only in the case of disagreements would manual interference be required.

```
Input: coreBranch, mergeSubmit, broadcastingBuffer, lignificationTime, engagementTime

downstreamBranches ← branches downstream of coreBranch: [coreBranch, ..., sproutOf(mergeSubmit)]
referenceBranch ← coreBranch                         /* may later be core or belt branch */
for i in 1 ... (downstreamBranches.length - 1) do    /* indexing starts at 1 */
    currentBranch ← downstreamBranches[i]
    childSprout ← downstreamBranches[i + 1]          /* always exists */
    if all currentBranch.sprouts are within the lignificationTime (plus broadcastingBuffer) then
        return
    else if there is a veto against defaultSuccessor(currentBranch) and voting has finished then
        /* engagementTime is over (plus lignificationTime plus broadcastingBuffer) */
        if childSprout does not win the vote then
            /* doesn't participate (not defaultSuccessor or not part of a veto)
               or participates and doesn't win */
            childSprout becomes a peripheral branch rooted in referenceBranch
            referenceBranch ← childSprout
        else   /* childSprout wins the vote */
            set the stableHead, sproutSelection and sprouts of referenceBranch to those of childSprout
        end if
        /* childSprout may or may not be defaultSuccessor. Both cases are covered. */
    else if there is a veto against defaultSuccessor(currentBranch), but voting has not finished then
        return
    else   /* there is no veto against defaultSuccessor(currentBranch) */
        if childSprout is defaultSuccessor(currentBranch) then
            set the stableHead, sproutSelection and sprouts of referenceBranch to those of childSprout
        else
            childSprout becomes a side branch rooted in referenceBranch
            referenceBranch ← childSprout
        end if
    end if
    /* Note that the referenceBranch may have changed. */
end for
return
```

**Algorithm 1** Lignification - Advancing the stable head of the branch
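For readers who prefer executable code over pseudo-code, here is a compressed Python rendering of Algorithm 1. It is a sketch only: branches are plain records, and the timestamp, veto and vote bookkeeping is abstracted into oracle callables supplied by the caller.

```python
def lignify(core: dict, downstream: list, *, still_open, veto_registered,
            voting_finished, vote_winner, default_successor) -> None:
    """Advance the stable head along `downstream`, the branch list
    [core, ..., sprout_of(merge_submit)]. The keyword arguments are
    callables standing in for the timing and voting bookkeeping."""
    reference = core  # may come to point at a belt branch
    for current, child in zip(downstream, downstream[1:]):
        if still_open(current):  # lignificationTime (+ buffer) not over yet
            return
        if veto_registered(current):
            if not voting_finished(current):  # engagementTime still running
                return
            if vote_winner(current) is child:
                _adopt(reference, child)
            else:
                _root_in(child, reference)
                reference = child
        elif default_successor(current) is child:
            _adopt(reference, child)
        else:
            _root_in(child, reference)
            reference = child

def _adopt(reference: dict, child: dict) -> None:
    # The child's head becomes the new stable head of the reference branch.
    for key in ("stable_head", "sprout_selection", "sprouts"):
        reference[key] = child[key]

def _root_in(child: dict, reference: dict) -> None:
    # The ousted sprout becomes a peripheral branch rooted in reference.
    child["parent"] = reference["branch_id"]
```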
### Branch Config Changes

The branch config contains configurable metadata such as the branch type, a flag that can be set to allow only conflictless submits (see Subsection 2.3), the accepted proofs (e.g. proofs of storage, proofs of contributorship, proofs of token transfer) and the parameters that determine the consensus (e.g. the number of reviewers needed in the proof-of-review algorithm). These entries have constrained mutability. They require a merge rather than a plain commit to take effect. For a twig this means that the consensus mechanism for a twig needs to be met, i.e. a config-specific fraction of contributors need to approve the merge. For a proper branch this means that the config change needs to go through the proof-of-review (PoR) consensus mechanism (see Subsection 4.4). We envision that in some future release there will be a default schema for the config, but that this schema may be altered through schema buckets that the config points to.

Figure 5: Example of a branch lignification with _broadcastingBuffer_ = 1, _lignificationTime_ = 50 and _engagementTime_ = 60. Whenever a new mergeSubmit is added, the lignification Algorithm 1 runs and updates the stable head. The updates 1 to 4 are unambiguous. But then the target branch has two competing sprouts. The default sprout is _#107_ and the other one is _#558_. Without any veto, _#107_ will deliver the next stable head of branch _#3c4_. This is scenario 5a. The veto time plus _broadcastingBuffer_ have passed, and a new mergeSubmit _#c37_ inside sprout _#444_ triggers the lignification algorithm, so that the losing branch _#558_ becomes lignified and rooted in _#3c4_. In 6a the mergeSubmit lignifies the target branch _#3c4_. Its head has advanced to _#b3b_. In scenario 5b a veto had been registered for _#558_ in the sproutSelection entry of the target. If the voting turns out to be in favor of _#558_, the lignification process will grow the target branch in that direction (cf. Step 6b).

### Branch Operations

#### Creation

The first branch operation is the creation. There are _genesis creations_ and _rooted creations_. As the name suggests, a genesis creation is a branch that does not have ancestral submits. This is similar to blockchains or git, which have a block or commit without a parent. However, unlike those, Lakat allows for multiple genesis creations. Anyone can at any time create a new genesis branch, which is either a twig or a proper branch. A genesis creation requires the creator to set the branch config. Optionally the creator may also specify a branch token. A rooted creation, on the other hand, is a branch creation in which the initial submit has a parent submit.
There are two ways that rooted creations come about. One possibility is that a creator starts a new branch and chooses a parent submit as a root. Anyone may do that for any root branch at any time. The config can then either be chosen anew or inherited. Another possibility involves an ousted sprout, namely one that has been attempting to provide the next stable head in a lignification process. If that ousted sprout receives another submit, it turns into a proper branch. This branch is rooted in the branch for which it was a sprout. It inherits the branch config, but not the token, and the branch contributors are the creator of the sprout and the content contributors of the branch whose merge had been attempted. Any of those contributors can create submits to that ousted sprout, and with that submit it turns into a proper branch whose parent entry is set during the conversion. Thus this mechanism for rooted branch creation is indirect and can only be executed by the respective content contributors.

The creation of branches is permissionless. It is therefore a potential vector for a denial-of-service attack: an attacker could create a lot of branches and bombard other nodes with branch creation requests. One may leverage the token entry in a newly created branch to mitigate this risk. The attachment of a proof of token transfer in the token entry of the branch can function as a filter for sincere branch creation broadcasts.

#### Merge

In Lakat a merge is the inclusion of changes from one branch into another. There is a strict directionality in a Lakat merge. Git distinguishes between this and other; in Lakat this corresponds to core and belt, where core is the pulling branch. Merging into a proper branch can only occur after a pull request (see Proof-of-Review in Subsection 4.4). Twigs, on the other hand, can pull other branches using an approval of a fraction of their content contributors. The fraction is specified in the branch config. After a merge the belt branch may become stale. A stale branch cannot receive submits. Whether a branch becomes stale after a merge depends on the branch config (see Subsection 4.6). A merge requires a merge submit (see Subsection 2.3) and a cryptographic validation of the branch that is merged. When the conditions for a merge are not met, the merge submit cannot pass the cryptographic validation. For instance, if the config of the pulling branch only allows conflictless submits and the belt branch has conflicted submits, then the merge is invalid.

In order to discuss which data buckets are included in the merge submit, we briefly introduce the set-theoretic slang of _A minus B_ for the set of elements in \(A\) that are not in \(B\). The set of elements that are in \(A\) and \(B\) is called the _intersection_ of \(A\) and \(B\), and the set of elements that are in \(A\) or \(B\) is called the _union_ of \(A\) and \(B\). The respective notations are \(A-B\), \(A\cap B\) and \(A\cup B\). We denote the set of submits of a branch \(\mathfrak{B}\) by \(\mathcal{S}_{\mathfrak{B}}\). We denote the set of data buckets in the data root of a submit \(s\) by \(\mathcal{B}_{s}\). Thus the set of data buckets in a branch \(\mathfrak{B}\) with stable head \(head(\mathfrak{B})\) is \(\mathcal{B}_{head(\mathfrak{B})}\). We have already discussed that the relation between buckets and branches is not many-to-one (cf. Figure 3).
There may be data buckets in core \(\mathfrak{C}\) that are not in belt \(\mathfrak{P}\), and there certainly are data buckets in belt that are not in core, i.e. \(\mathcal{B}_{\mathfrak{P}}-\mathcal{B}_{\mathfrak{C}}\neq\emptyset\). One question that arises in the context of merges is how to combine disparate bucket sets and how to handle that on the level of submits. There are two possible design choices. Either all the submits of belt become submits of core and, consequently, so do the buckets in \(\mathcal{B}_{\mathfrak{P}}-\mathcal{B}_{\mathfrak{C}}\). Alternatively they stay submits of belt and the aforementioned buckets are included in the merge submit's Merkle hash of the data trie. In the first scenario one is faced with the problem that the submits of belt all have immutable timestamps and parents. Rebasing those would require loosening those immutability conditions. In the latter scenario one needs to point to those submits from which data was included. It suffices to point to the belt's last submit before the merge. We opt for the second scenario. Unlike the first scenario, the second scenario allows the peculiar situation that belt may have data buckets in common with core even though they do not share any submits, i.e. \(\mathcal{S}_{\mathfrak{P}}\cap\mathcal{S}_{\mathfrak{C}}=\emptyset\) yet \(\mathcal{B}_{\mathfrak{P}}\cap\mathcal{B}_{\mathfrak{C}}\neq\emptyset\). The only way this can happen in Lakat is if core and belt have pulled from the same branch or from branches that have a common submit in their histories (see Footnote 11).

Footnote 11: Here we make the distinction between the history of a branch and the set of submits of a branch. A branch may be rooted in the branch \(\mathfrak{P}\).
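In code, the chosen design reduces to ordinary set operations on bucket identifiers. A small sketch of the second scenario:

```python
def merge_bucket_delta(core_buckets: set, belt_buckets: set) -> dict:
    """Bucket sets relevant to a merge submit. Belt's submits stay belt's;
    only the difference enters the merge submit's data-trie Merkle hash."""
    return {
        "new_in_merge": belt_buckets - core_buckets,       # B_belt - B_core
        "shared": belt_buckets & core_buckets,             # can be non-empty
        "union_after_merge": belt_buckets | core_buckets,  # B_belt union B_core
    }

# Disjoint submit histories can still share buckets if both branches
# pulled from a common ancestor branch at some point.
core = {"b1", "b2", "b3"}
belt = {"b3", "b4"}
print(merge_bucket_delta(core, belt)["new_in_merge"])  # {'b4'}
```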
## 5 Integration and Adaptability

### Onramping

One of the objectives of Lakat is to transition academic publishing from a paper-formatted system to a cryptographically secure, collaborative and pluralistic system that allows for the continuous integration of research output. In order to achieve this objective, we believe that the transition should be as seamless as possible. The publishing system with isolated paper-formatted publications and opaque review processes is an edge case of Lakat; an unsustainable and hacky one, yet sufficient for onramping. We describe in which sense this is the case and how a transition could be achieved.

We can imagine a scenario for Lakat with a set of isolated branches. Each branch is controlled by a single legal entity, namely an academic journal. The academic journal is the content contributor, the storage contributor and the token contributor all in one. When a hypothetical researcher, say Alice (AI or human), wants to publish a paper, she has to send it to the journal. The branch that the journal controls is simply the indexed collection of articles that have been submitted, chained together cryptographically. For a journal, transitioning its content to such a branch state is anywhere between immediate (by pointing the head of the branch to the storage locations of all content) and a matter of running a script that creates a submit for each accepted article retrospectively. Each paper is stored on a journal-controlled server, thus making the journal the sole storage provider. By adding the submission to the journal branch, the journal becomes the sole content contributor and retains all the rights of the contribution. The contribution is no longer owned by Alice at all. In this hypothetical oligarchic aberration of Lakat, a contribution is a submit with a single data bucket containing the paper. In summary, there is a way to map the classical publishing system into Lakat. Depending on the openness and licensing it might be difficult to either access or modify the content, but at least there is an entry point for the conversion.

Why is this unsustainable? Given the design of Lakat, this branch would quite naturally undergo diversification through forking. At some point a researcher may create a branch rooted in that journal branch, which is but a click. Maybe the incentive structure provided by the journal is so strong that authors are willing to transfer all the rights to the journal voluntarily, but given Lakat's inherent ease of branching it will only be a matter of time until a diversification is to be expected.

### Interfaces

We envision Lakat as a base layer for an open, pluralistic and collaborative publication system that progresses through continuous integration. As a base layer we strive to rely only on a bare minimum of other software and aim to have an interface for existing software or protocols. Here we provide an overview of the protocols and software that we plan to build upon or interface with.

We would like to use the libp2p library to implement Lakat's demands for networking. Libp2p is a modular peer-to-peer networking stack which, among other things, contains Kademlia as a distributed hash table protocol. In particular, we would like to build a first client using the Rust implementation of libp2p.

We would like to interface with MediaWiki. MediaWiki is an evolving database schema with a PHP frontend that allows for the creation of knowledge databases such as Wikipedia. There are various ways in which Lakat could interface with MediaWiki. The weakest form requires the conversion of the data contributions in Lakat to database entries in MediaWiki. A stronger form also converts contributions to MediaWiki into Lakat contributions.

Regarding storage we aim to stay agnostic and leave the storage protocol as a configurable option. As options we consider IPFS, Ceramic (which is built on top of IPFS and anchored in Ethereum) or Urbit (lineDB). Regarding the token layer we aim to be agnostic as well. Since this is an optional feature, it is left to the branch contributors to decide on and merge updates of their token transactions into their branch. We do, however, recommend deploying tokens on the Polygon Layer 2 network and plan to integrate this into the pipeline.

Regarding version control we would like to reduce the complexity of branch operations to a bare minimum in order to avoid security threats and reliance on other protocols. For the local consensus mechanism we believe that a heavily reduced set of operations is favorable. Nevertheless we would like to interface with existing version control systems such as git or radicle, in order to allow for the conversion of git or radicle branches into Lakat branches.

We would also like to allow for new branches to be turned into parathreads in Polkadot [49]. This would allow for the integration of Lakat into the Polkadot ecosystem. To this end one would need to create a pipeline to spin up a new parathread using Substrate together with a custom consensus protocol, namely the Lakat protocol. One would have to set the _BlockImport_ to a custom way of importing new submits into the key-value database.
_SelectChain_ handles the finalization mechanism and would need to be set to the lignification mechanism (see Subsection 4.5). One would also need to set the proof-of-review mechanism (see Subsection 4.4) in the _Environment_ option of the Substrate runtime.

## 6 Conclusion

The contributions presented in this paper are threefold. First, we propose a process- and conflict-oriented academic publishing system that is based on a peer-to-peer network architecture and supports continuous integration across multiple branches. Second, we propose a new consensus mechanism for branched, permissionless systems, called Proof of Review. Third, we propose a new finality gadget, called Lignification, that is specifically designed for branched, permissionless systems.

Regarding the first contribution, an architecture for an academic publishing system, we provided a list of high-level requirements for such a system. We then argued that paper-based publishing has major shortcomings, including the incentivization to withhold preliminary results, the tendency to wrap minor changes into an entire research paper, the lack of representation of the research process in the output, the creation of isolated content islands, the difficulty of tracking contributions and the barring of potential contributors. Based on that we proposed an architecture that meets the requirements and addresses these shortcomings. It is composed of a cryptographically linked data structure that is kept in a distributed key-value database, the entries of which are stored, updated and communicated using a peer-to-peer network protocol. Storage is partly outsourced to other protocols. The central data objects are branches and data buckets. All research content as well as updates and context information is stored in buckets. Branches on the other hand are cryptographically linked chains of changes to this underlying content. These changes are bundled up into submits, which are analogous to Git commits. Every branch has its own staging area, where any type of branch interaction waits to be included. We distinguished three types of branches: proper branches, which behave like production branches; twigs, which are used like feature branches; and sprouts, which are temporary auxiliary branches. All branches are controlled by branch contributors. We distinguished four types of branch contributors: content contributors, who can prove to have submitted content to a given branch; review contributors, who can prove to have pushed reviews to the branch; token contributors, who can prove to have deposited funds into the branch; and storage contributors, who can prove to store data of that branch. Branch modifications happen via submits through a local consensus and finality mechanism amongst the branch contributors. In principle the Lakat specification does not require a particular choice of these mechanisms.

We put forth a particular local consensus mechanism for proposing changes in branched, permissionless systems. This constitutes the second contribution. The mechanism distinguishes between feature and production branches. In Lakat terminology these are twigs and proper branches. Amendments to both types of branches are made through submits. Twigs ought to be used for small changes and quick iterations. Adding a submit requires a majority vote amongst the branch contributors, which in the case of a single contributor amounts to no consensus rule at all. Proper branches on the other hand are used for larger changes.
Amendments to proper branches involve merge submits, but the process is more akin to adding a block to a blockchain. First a pull request is sent by a requesting branch to the staging area of the target branch. Any content contributor from the target branch may send a review commitment to the requesting branch and can participate in the review process in the role of a review contributor. Once a request has been reviewed and approved by a set number of reviewers, a formally valid merge submit can be formed. Any contributor of the target branch can propose the amendment of the merge submit to the proper branch. We called this process the Proof-of-Review (PoR).

Finally we proposed a mechanism to deterministically decide which proposed merge submits are eventually amended to the target branch. We put forth a finality gadget called "Lignification", which we presented as our third contribution. In order to decide which of the proposed merge submits becomes the head of the proper branch, we proposed a process using temporary proto-branches called sprouts. Every merge submit bears in it the possibility of becoming either the head of the proper branch or the head of a forked branch. That is why we proposed to wrap the proposed merge submits into sprouts. There can be multiple proposed merge submits. In the lignification method there is a possibility for the branch contributors to issue vetoes and subsequently votes in the case of multiple contesting merge submits. The lignification time and the engagement time are respectively reserved for vetoing and voting. A broadcasting buffer is used to allow for network latency.

We also mentioned Lakat's potential for interfacing with existing software and protocols, such as MediaWiki, IPFS, Ceramic, Urbit, git, radicle, and Polkadot. Moreover, we drew a possible path to onramp users such as journals and scientists to Lakat. This further enhances its adaptability and integration capabilities, which is crucial for the system's adoption and growth, ensuring that it can evolve alongside technological advancements and changing academic needs.

Having presented the high-level concepts of a new publishing architecture, we have yet to fix the concrete specifications and to test and study the elements in interaction. As a next step we invite anyone to collaboratively build client software for Lakat. We are currently in the process of building a Rust client around the rust-libp2p library, called Lakat-OS. Please visit our Lakat Github Repository.

### Acknowledgements

I would like to thank Mahdi Kourehpaz for discussions and for being a great critical mind, and I would like to thank Andrei Taranu for discussions, support and for poking the idea here and there. Furthermore, I appreciated the opportunity to present this idea at a workshop organized by the Basic Research Community in Physics in September 2022, which provided a stimulating environment for discussion and feedback. In particular I am grateful for discussions with Benjamin Bose, Louis Garbe, Bernadette Lessel and Markus Penz. Finally I would also like to thank the Urbiter ~dachus-tiprel for very insightful discussions in San Salvador and the Urbit community for providing a great environment for discussions.
2308.09734
A Robust Policy Bootstrapping Algorithm for Multi-objective Reinforcement Learning in Non-stationary Environments
Multi-objective Markov decision processes are a special kind of multi-objective optimization problem that involves sequential decision making while satisfying the Markov property of stochastic processes. Multi-objective reinforcement learning methods address this problem by fusing the reinforcement learning paradigm with multi-objective optimization techniques. One major drawback of these methods is the lack of adaptability to non-stationary dynamics in the environment. This is because they adopt optimization procedures that assume stationarity to evolve a coverage set of policies that can solve the problem. This paper introduces a developmental optimization approach that can evolve the policy coverage set while exploring the preference space over the defined objectives in an online manner. We propose a novel multi-objective reinforcement learning algorithm that can robustly evolve a convex coverage set of policies in an online manner in non-stationary environments. We compare the proposed algorithm with two state-of-the-art multi-objective reinforcement learning algorithms in stationary and non-stationary environments. Results showed that the proposed algorithm significantly outperforms the existing algorithms in non-stationary environments while achieving comparable results in stationary environments.
Sherif Abdelfattah, Kathryn Kasmarik, Jiankun Hu
2023-08-18T02:15:12Z
http://arxiv.org/abs/2308.09734v1
A Robust Policy Bootstrapping Algorithm for Multi-objective Reinforcement Learning in Non-stationary Environments

###### Abstract

Multi-objective Markov decision processes are a special kind of multi-objective optimization problem that involves sequential decision making while satisfying the Markov property of stochastic processes. Multi-objective reinforcement learning methods address this kind of problem by fusing the reinforcement learning paradigm with multi-objective optimization techniques. One major drawback of these methods is the lack of adaptability to non-stationary dynamics in the environment. This is because they adopt optimization procedures that assume stationarity in order to evolve a coverage set of policies that can solve the problem. This paper introduces a developmental optimization approach that can evolve the policy coverage set while exploring the preference space over the defined objectives in an online manner. We propose a novel multi-objective reinforcement learning algorithm that can robustly evolve a convex coverage set of policies in an online manner in non-stationary environments. We compare the proposed algorithm with two state-of-the-art multi-objective reinforcement learning algorithms in stationary and non-stationary environments. Results showed that the proposed algorithm significantly outperforms the existing algorithms in non-stationary environments while achieving comparable results in stationary environments.

Multiobjective Optimization, Reinforcement Learning, Non-stationary, Environment, Dynamics, Policy Bootstrapping, Markov Decision Processes

## 1 Introduction

In reinforcement learning (RL), an agent learns from interactions with the environment guided by a reward signal. The objective of the learning process is to find a mapping from the environment's state space to the action space that maximizes the expected reward return [12]. Over the last decade, RL research has made many breakthroughs, such as playing computer games with human-level performance [14] or beating human experts in complicated games such as Go [15]. This has been achieved by maximizing a single reward function (e.g., the score in games). However, many real-world sequential decision making problems involve multiple objectives that may be in conflict with each other. Consider a search and rescue scenario in which an unmanned ground vehicle (UGV) aims at optimizing multiple objectives, including minimizing the exposure to risks in the environment (i.e., fire or flood), maximizing the rescue ratio of trapped victims, and minimizing the overall search and rescue time. Similarly, consider a drone in a patrolling scenario trying to maximize battery usability, maximize the patrolled area, and maximize the detection rate of danger/enemy objects. These problems exhibit conflicting objectives that cannot be simultaneously optimized without a compromise. In the reinforcement learning and control literature, such problems are known as multi-objective Markov decision processes (MOMDPs). Conventional reinforcement learning methods cannot directly tackle the MOMDP problem, as the defined objectives are assumed to be in conflict and cannot be simultaneously optimized by a single policy. Rather, multi-objective reinforcement learning (MORL) extends the conventional RL methodology by maximizing a vector of rewards instead of a single reward.
Primarily, this is achieved by one of two MORL approaches: the single policy approach and the multiple policy approach [11]. In the single policy approach, it is assumed that an optimal user's preference can be identified before solving the problem. Consequently, the multiple objectives can be transformed into a single objective using the supplied user's preference. The main limitations of this approach lie in the difficulty of satisfying its main assumption in many practical multi-objective problems. First, it may be impossible to find an optimal user's preference beforehand. Secondly, it is difficult to deal with changes in the user's preference in real-time, as it is necessary to optimize a different policy after each preference change. Changes in the user's preference can arise if the learning agent needs to deal with different users (such as in computer games or a personal assistant), or due to changes in the environment's setup (e.g., opponent actions) or in the objective space (e.g., new priorities or new objectives).

Alternatively, the multiple policy approach addresses the MOMDP problem by finding a coverage set of optimal policies that can satisfy any user's preference in solving the problem. This is achieved by performing an intensive policy search procedure that evolves and orders policies based on their performance across the defined objectives. While overcoming the limitations of the single policy approach, this comes with additional costs. First, this approach has higher computational complexity, as it requires intensive interaction with the environment to evolve a set of policies instead of a single one. Secondly, it assumes stationary environmental dynamics. This makes it inflexible in non-stationary environments, as changes in the environment will demand re-optimization of the evolved policy coverage set.

In order to overcome these limitations of current MORL methods in dealing with either changes in the user's preference or non-stationary environments, we propose a developmental multi-objective optimization method. The main hypothesis behind this method is that, despite the existence of a large set of specialized policies for every possible user's preference, there is possibly a bounded set of steppingstone policies that can bootstrap any specialized policy. In contrast to specialized policies greedily optimized for a specific user's preference and environment, steppingstone policies are dedicated to an interval of user preferences. Targeting a coverage set of steppingstone policies that covers the whole user preference space and can be used to bootstrap specialized policies provides efficient optimization that can work in an online manner and is robust to non-stationary environments, as we show experimentally in this paper.

The contribution of this paper is threefold. First, we propose a robust policy bootstrapping (RPB) algorithm that evolves a convex coverage set (CCS) of steppingstone policies in an online manner and utilizes the CCS to bootstrap specialized policies in response to new user preferences or changes in the environmental dynamics. Second, we experimentally evaluate each design decision for the proposed algorithm in order to shed light on the configurable parts that can be changed for different scenarios. Finally, we compare our proposed algorithm with state-of-the-art multi-objective reinforcement learning algorithms in stationary and non-stationary multi-objective environments.
The remainder of this paper is organized as follows. The background section introduces the fundamental concepts and the problem definition. The related work section reviews relevant literature. The methodology section illustrates the proposed algorithm and its workflow. The experimental design section describes the aim, assumptions, and methodology of the experiments. The results section presents the experimental results and discussion. Finally, the conclusion section concludes the work and identifies potential future work.

## 2 Background

This section introduces the fundamental concepts necessary for the work and formulates the research problem.

### Multi-objective Optimization

The problem of multi-objective optimization includes multiple conflicting objectives that cannot be simultaneously optimized without a compromise (Deb and Deb, 2014). The mathematical formulation of such a problem is as follows:

\[\max\left(R^{1}(\pi),R^{2}(\pi),\ldots,R^{M}(\pi)\right)\tag{1}\]
\[\text{s.t.}\ g^{j}(\pi)\leq 0,\ j=1,2,\ldots,J\]

The aim is to optimize the performance of the resultant solutions over a set of reward functions \(\left\{R^{1}(\pi),R^{2}(\pi),\ldots,R^{M}(\pi)\right\}\), where each function represents an objective \(o^{m}\ (m=1,2,\ldots,M)\), the parameter \(\pi\in\Pi\) refers to the parameter configuration of the policy (the decision variables) to be optimized over the search space \(\Pi\), and \(\left\{g^{1}(\pi),g^{2}(\pi),\ldots,g^{J}(\pi)\right\}\) constitutes the set of constraint functions defined in the problem. Policies have to be ranked and evaluated based on performance dominance in order to reach the optimal coverage set that can solve the problem while satisfying all possible user's preferences.

**Definition 0.1**.: Dominance: _"A solution (A) dominates solution (B) if (A) is better than (B) for at least one objective and is equal to (B) for all other objectives."_ (Abdelfattah et al., 2018)

Additional illustration of Definition 0.1 is presented in Figure 1, which shows a two-objective policy search space. It can be noticed that policy (B) is dominated by policy (A). The set of red circles represents the set of non-dominated policies, known as the Pareto front. The user selects a policy from the set of non-dominated policies based on his/her preference over the defined objectives.

**Definition 0.2**.: Preference: _"A preference is defined as a weight vector with each weight element dedicated to a specific objective, \(\vec{w}=[w^{1},w^{2},\ldots,w^{M}]^{T}\), such that the sum of all the elements equals one, \(\sum_{m=1}^{M}w^{m}=1\)."_ (Abdelfattah et al., 2018)

Figure 1: The policy search space of a bi-objective problem. The set of non-dominated policies is represented by the red circles, which is known as the Pareto front.

Given a user's preference, which constitutes a tradeoff among the defined objectives, a scalarization function can be used to formulate a combined objective to be optimized.

**Definition 0.3**.: Scalarization Function: _"A scalarization function \(h\) transforms a vector of multiple objectives' values into a single objective scalar value, given a preference as parameter: \(o_{\vec{w}}=h(\vec{o},\vec{w})\)."_ (Abdelfattah et al., 2018)
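Definitions 0.1-0.3 translate directly into code. A minimal NumPy sketch; we use the common weak form of the dominance test (at least as good on every objective, strictly better on at least one), and weight non-negativity is the usual convention rather than part of Definition 0.2.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """Definition 0.1: `a` dominates `b` if it is at least as good on
    every objective and strictly better on at least one."""
    return bool(np.all(a >= b) and np.any(a > b))

def is_valid_preference(w: np.ndarray) -> bool:
    """Definition 0.2: one weight per objective, summing to one."""
    return bool(np.all(w >= 0) and np.isclose(w.sum(), 1.0))

def linear_scalarize(rewards: np.ndarray, w: np.ndarray) -> float:
    """Definition 0.3 with a linear scalarization function h."""
    return float(rewards @ w)

a, b, w = np.array([0.9, 0.5]), np.array([0.7, 0.5]), np.array([0.3, 0.7])
print(dominates(a, b), is_valid_preference(w), linear_scalarize(a, w))
# True True 0.62
```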
When linear or piecewise linear scalarization functions are used (Eichfelder, 2008), the front of non-dominated policies is referred to as the convex hull (CH).

**Definition 0.4**.: Convex Hull: _"A convex hull is a subset of the policy space (\(\Pi\)) that contains optimal policies that can match any user's preference."_ (Abdelfattah et al., 2018)

\[CH(\Pi)=\left\{\pi\in\Pi:\exists\vec{w}\ \forall\pi^{\prime}\in\Pi:\ \vec{w}\cdot\vec{r}^{\pi}\geq\vec{w}\cdot\vec{r}^{\pi^{\prime}}\right\}\]

For further illustration of the CH concept, Figure 2 visualizes the CH surface using linear scalarization functions over a two-objective problem. The axes represent the normalized reward functions for the two objectives. The surface shaped by solid lines, which includes the set of non-dominated policies (i.e., red circles), represents the CH. The non-convex surface shaped by solid and dashed line segments represents the Pareto front, which contains all the red and blue points. The non-dominated policy set falling within the CH is depicted as red circles. The blue circles refer to the set of non-dominated policies that fall within the Pareto front but outside the CH. Dominated policies are represented by black circles. Figure 2(b) depicts the scalarized reward front produced using a linear scalarization function, where each line represents a unique preference. The first weight component (\(w^{1}\)) is shown on the x-axis, and the second weight component can be computed from the first component value given the summation-to-one constraint (\(w^{2}=1-w^{1}\)). The y-axis shows the scalarized reward value. In Figure 2(b), the surface of the bold black line segments, which forms a piecewise linear function, represents the CH in this scenario. As the CH surface can contain redundant policies (Roijers et al., 2013), we can define a subset of it that contains the minimal number of unique policies that solve the problem; this is the convex coverage set (CCS).

**Definition 0.5**.: Convex Coverage Set: _"A convex coverage set (CCS) is a subset of the CH that provides, for each preference (\(\vec{w}\)), a policy whose scalarized reward value is maximal."_ (Abdelfattah et al., 2018)

\[CCS(\Pi)\subseteq CH(\Pi)\ \wedge\ (\forall\vec{w})(\exists\pi)\left(\pi\in CCS(\Pi)\wedge\forall\pi^{\prime}\in\Pi:\ \vec{w}\cdot\vec{r}^{\pi}\geq\vec{w}\cdot\vec{r}^{\pi^{\prime}}\right)\]
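Given a finite set of candidate policies with known expected reward vectors, a CCS under linear scalarization can be approximated by sweeping the preference simplex and keeping, for each sampled preference, a maximizing policy. A hypothetical two-objective sketch:

```python
import numpy as np

def approximate_ccs(reward_vectors: np.ndarray, n_weights: int = 101) -> set:
    """Indices of policies that are optimal for at least one sampled
    preference under linear scalarization (two objectives)."""
    ccs = set()
    for w1 in np.linspace(0.0, 1.0, n_weights):
        w = np.array([w1, 1.0 - w1])  # preference weights sum to one
        ccs.add(int(np.argmax(reward_vectors @ w)))
    return ccs

# Three candidate policies; the middle one lies inside the convex hull.
R = np.array([[1.0, 0.0],
              [0.4, 0.4],
              [0.0, 1.0]])
print(approximate_ccs(R))  # {0, 2}: policy 1 never maximizes any preference
```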
### Multi-objective Markov Decision Processes

A Markov decision process (MDP) is a sequential planning problem in which the learning agent senses its current state in the environment (\(s_{t}\)) at time \(t\), performs an action (\(a_{t}\)) which results in a transition to a new state (\(s_{t+1}\)), and receives a reward/penalty (\(r_{t+1}\)) for reaching this new state (Papadimitriou and Tsitsiklis, 1987). This paradigm is extended in multi-objective Markov decision processes by having multiple reward signals instead of a single one after performing an action (Roijers et al., 2013). Figure 3 shows a comparison between these two problems.

The tuple \(\left\langle S,A,\mathbb{P}_{ss^{\prime}},\vec{R},\mu,\gamma\right\rangle\) represents a MOMDP problem, where \(S\) refers to the state space, \(A\) is the action space, \(\mathbb{P}_{ss^{\prime}}=\Pr(s_{t+1}=s^{\prime}\,|\,s_{t}=s,a_{t}=a)\) represents the state transition distribution, which may have time-varying parameters (dynamics) in non-stationary environments, \(\vec{R}\in\mathbb{R}^{M}\,\forall R\,:\,S\times A\times S^{\prime}\to r\in\mathbb{R}\) represents the reward vector corresponding to the \(M\) objectives, \(\mu=\Pr(s_{0})\) is the initial state distribution, and, finally, \(\gamma\in[0,1)\) is the discounting factor that balances the priority of immediate versus future rewards during the learning process. Accordingly, the learning agent aims at maximizing the expected return of the scalarized reward signal. Starting at time \(t\) and provided a user's preference \(\vec{w}\), this scalarized return can be calculated as:

\[R_{t}^{\vec{w}}=\sum_{l=0}^{T}\gamma^{l}h\left(\vec{r}_{t+l+1},\vec{w}\right) \tag{2}\]

where \(T\) refers to the _time horizon_, which approaches \(\infty\) in the _infinite time horizon_ situation.

### Problem Definition

The research problem in this paper can be formulated as follows: provided a MOMDP problem setup \(\left\langle S,A,\mathbb{P}_{ss^{\prime}},\vec{R},\mu,\gamma\right\rangle\), we aim at finding the CCS under non-stationary dynamics of the environment (i.e., time-varying parameters of the state transition distribution) that has minimal cardinality while maximizing the scalarized reward return for any given preference over a time horizon \(T\):

\[\max\ R_{t}^{\vec{w}^{i}}=\sum_{l=0}^{T}\gamma^{l}h\left(\vec{r}_{t+l+1},\vec{w}^{i}\right) \tag{3}\]
\[\min\ |CCS|\]
\[s.t.\,\vec{w}^{i}\in W\,\forall\vec{w}^{i}\in\mathbb{R}^{M},\ \sum_{m=1}^{M}w^{m}=1\]

where \(W\) is the set of all legitimate user preferences over the defined objectives.

## Related Work

In this section, we review related work in the MORL literature in order to highlight the contribution of this paper. There are two main families of methods in the MORL literature: single policy methods and multiple policy methods (Roijers et al., 2013; Liu et al., 2015). Given a user's preference defined prior to solving the problem, a single policy can be learned using a scalarized reward function with any single-objective reinforcement learning algorithm. In contrast, multiple policy methods search and prioritize the policy search space in order to reach the set of non-dominated policies that can solve the problem given any possible user's preference. We review each of these families below.

#### Single Policy Methods

Many single policy techniques utilize scalarization functions in order to combine the multiple defined objectives into a single objective. Kasimbeyli et al. (Kasimbeyli et al., 2015) discussed the properties of different scalarization functions in solving multi-objective optimization problems. Moffaert et al. (Moffaert et al., 2013) introduced a variant of the Q-learning algorithm (Watkins and Dayan, 1992) that utilizes the Chebyshev function for reward scalarization, solving an MOMDP grid-world scenario. Castelletti et al. (Castelletti et al., 2013) used non-linear scalarization functions with a random exploration of the weight space for optimizing the workflow of irrigation resource management systems.
Lizotte et al. (Lizotte et al., 2010) introduced linear scalarization with an algorithm that adapts the classical value iteration algorithm in order to rank actions for finite state space problems. Perny and Weng (Perny and Weng, 2010) utilized a linear programming method that incorporates Chebyshev scalarization to address a MOMDP problem. Alternatively, other methods utilize linear programming techniques that follow a lexicographic ordering of objectives (Marchi and Oviedo, 1992). Ogryczak, Perny, and Weng (Ogryczak et al., 2011) further developed the aforementioned method by adopting a regret technique instead of reward scalarization for action ranking. The regret value was derived for each objective given an anchor point, and the ranking was done based on the summed regret over all the defined objectives. Feinberg and Shwartz (Feinberg and Shwartz, 1995) and Altman (Altman, 1999) proposed an alternative approach based on constrained multi-objective optimization, which transforms the MOMDP problem into an MDP problem by focusing the optimization process on one objective while treating the remaining objectives as constraints.

#### Multiple Policy Methods

Techniques of user preference elicitation learn the user's preference gradually by sampling from different policy trajectories and observing the user's feedback in order to adjust future action selection (Mousseau and Pirlot, 2015). One method of this group was proposed by Akrour et al. (Akrour et al., 2011), where the preference of domain experts was acquired within the policy optimization using an algorithm named preference-based policy learning (PPL). The algorithm assumed a defined parameterization of the policy that can be sampled to draw different trajectories. The preference of the domain expert is inferred implicitly by asking them to rank the performed trajectories, which is utilized to guide the next sampling iteration of trajectories. A similar approach was introduced by Furnkranz et al. (Furnkranz et al., 2012), which assumes that the Pareto front of optimal policies is found before questioning the expert's preference.

Figure 2: Comparing the Pareto front surface with the convex hull surface in a bi-objective scenario (Abdelfattah et al., 2018).

Figure 3: Comparing single objective Markov decision process (MDP) with the multi-objective variant known as multi-objective Markov decision process (MOMDP). (a) Markov decision process (MDP). (b) Multi-objective Markov decision process (MOMDP) (Abdelfattah et al., 2018).

Another group of methods adopts evolutionary optimization techniques (Abraham et al., 2005) for searching the policy space to tackle the MOMDP problem. An evolutionary algorithm was proposed by Busa-Fekete et al. (Busa-Fekete et al., 2014) for finding the Pareto front of non-dominated policies. Afterwards, the learning agent performs the action recommended by one of the policies in the Pareto front, based on the user's feedback in ranking different sampled trajectories. Other methods utilize systematic preference exploration techniques to evolve a coverage set of policies that can solve the MOMDP problem. These methods achieved comparable results to the evolutionary optimization alternatives with more efficient computational complexity (Roijers et al., 2013). Roijers, Whiteson, and Oliehoek (Roijers et al., 2014) introduced an algorithm named Optimistic Linear Support (OLS), which evolves an approximate coverage set by systematically investigating various user preferences over the defined objectives.
As an example, given a two-objective scenario, the algorithm explores the two edge preferences (i.e., \([0.1,0.9]\) and \([0.9,0.1]\)) and generates two corresponding policies using a reinforcement learning algorithm (e.g., Q-learning). The performance of the evolved policies is contrasted using a predefined threshold parameter, and a policy that exceeds it is added to the final solution set. The algorithm continues this procedure by further dividing the two edge preferences using a median point and repeating the same selection technique until no more policies exceeding the threshold parameter can be found. Policy lexicographic ordering has been explored by Gabor, Kalmar, and Szepesvari (Gabor et al., 1998) in their Threshold Lexicographic Ordering (TLO) algorithm, which ranks policies based on a predefined threshold value. For each sampled policy, the TLO algorithm performs the action that achieves the maximum reward value exceeding another predefined threshold in any of the reward functions, or the maximum reward value if all actions are below the threshold. The OLS and TLO algorithms have been widely adopted in the relevant literature (Roijers et al., 2015; Geibel, 2006; Mossalam et al., 2016) to evolve convex coverage sets of non-dominated policies. Therefore, they form a good representation of the current state-of-the-art methods and can clearly contrast the performance of our proposed algorithm. In a previous work (Abdelfattah et al., 2018), we embarked on investigating non-stationary MOMDP environments by proposing a new benchmark environment that poses non-stationary state transition dynamics, together with a multi-objective reinforcement learning algorithm based on fuzzy segmentation of the preference space. In this work, we propose a generic and simpler algorithm and investigate the different options and design decisions not explored in the previous work. We utilize our previously introduced non-stationary benchmark in the evaluation of the proposed algorithm.

## Robust Policy Bootstrapping for Solving MOMDPs

The proposed robust policy bootstrapping (RPB) algorithm aims at maximizing the learning agent's robustness to perturbations in the user's preferences by evolving a coverage set of robust policies that can cope with these perturbations. As mentioned previously, our main assumption is that while there may be a large number of policies, each corresponding to a specific user's preference, a smaller, finite number of robust policies can offer the best steppingstone for any change in the preference. To find this coverage set of robust policies, we assume a preference threshold value (\(\varphi\)) that divides the linear scalarization of the defined reward functions into a number \(G\) of regions (see Figure 4). Then, for each region, a single steppingstone policy is evolved that maximizes a robustness metric, which measures the stability of its behaviour. This paper first describes the generic design of our methodology. Then, we experimentally evaluate a number of design options to shed light on configurable parts that can be changed in different applications.

Figure 5 shows the generic flowchart of our proposed algorithm. The flow starts after a change occurs in the user's preference, represented by a weight vector over the defined reward functions. Then, the distance between the previous working preference (\(\overrightarrow{w}_{t-1}\)) and the new one (\(\overrightarrow{w}_{t}\)) is measured using a vector distance function.
If the distance is less than the significance threshold (\(\varphi\)), which means that we are still in the same preference region, then the current policy is used to respond to the new preference and policy optimization with the RL algorithm continues. Otherwise, the policy bootstrapping mechanism is activated. The first step in this mechanism is to store the steppingstone policy for the previous preference region in the \(CCS\). To achieve this, we search the existing \(CCS\) for a steppingstone policy dedicated to the previous preference region. If a steppingstone policy is found, it is compared with the current policy on the basis of its robustness value, and the better one is stored in the \(CCS\). Alternatively, if no steppingstone policy is found for the previous preference region, the current policy is directly assigned to it and saved in the \(CCS\). For each steppingstone policy \(p^{k}\), we store three parameters \(\left<\pi^{k},\vec{w}^{k},\beta^{k}\right>\), where \(\pi^{k}\) denotes the adopted reinforcement learning algorithm's parameters (e.g., Q-values in the Q-learning algorithm), \(\vec{w}^{k}\) is the preference corresponding to this policy, and \(\beta^{k}\) is the robustness metric value. Finally, we search the \(CCS\) for the steppingstone policy with the minimum distance to the new preference. This policy is used to bootstrap the ensuing policy optimization procedure using the reinforcement learning algorithm.

Figure 4: Dividing the linear scalarization of two objectives into regions using the threshold value (\(\varphi\)).

As we are dealing with grid-world benchmark environments, we adopt a scalarized version of Q-learning [1, 2] for solving the MOMDP task using scalarization given the weight vector, as depicted in Algorithm 1. Q-learning has been successfully deployed to optimize policies for such environments effectively [1, 2]. Accordingly, the policy parameters \(\pi^{k}\) stored in the \(CCS\) are the \(Q(s,a)\) values for each state-action pair. We refer to this algorithm as SQ-L. It has to be noted that the proposed RPB algorithm is not limited to the use of Q-learning, as any reinforcement learning algorithm can be used instead without affecting the workflow. In other words, changing the reinforcement learning algorithm will only change the policy parameterization \(\pi\). For example, replacing Q-learning with a policy gradient reinforcement learning algorithm will change \(\pi\) from \(Q(s,a)\) values to the weight matrices of a neural network.

```
Require: A preference \(\vec{w}\), \(\pi^{init}\)
1:  if \(\pi^{init}=\emptyset\) then
2:    Initialize \(Q(s,a)\,\forall\,s\in S,\,a\in A(s)\) arbitrarily
3:  else
4:    Initialize \(Q(s,a)\,\forall\,s\in S,\,a\in A(s)\) from \(\pi^{init}\)
5:  for each episode do
6:    Initialize \(s\)
7:    repeat (for each time step \(t\))
8:      Take \(a_{t}\) from \(s_{t}\) using a policy derived from \(Q\) (e.g., \(\epsilon\)-greedy); observe \(\vec{r}_{t+1}\), \(s_{t+1}\)
9:      Calculate the scalarized reward \(\rho=\vec{w}\cdot\vec{r}_{t+1}\)
10:     \(Q(s_{t},a_{t})\gets Q(s_{t},a_{t})+\alpha\left[\rho+\gamma\max_{a^{\prime}}Q(s_{t+1},a^{\prime})-Q(s_{t},a_{t})\right]\)
11:     \(s\gets s_{t+1}\)
12:   until \(s\) is terminal
```
**Algorithm 1** Scalarized Q-Learning (SQ-L) [1]
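For concreteness, the following Python sketch (our illustration; the environment interface, data structures, and parameter values are assumptions, not the paper's implementation) combines the scalarized Q-learning update of Algorithm 1 with the preference-change handling at the core of the generic RPB workflow in Figure 5:

```python
import numpy as np

def sq_l_update(Q, s, a, r_vec, s_next, w, alpha=0.1, gamma=0.95):
    """One SQ-L step (Algorithm 1, lines 9-10): scalarize the reward vector
    with the preference w, then apply the usual temporal-difference update.
    Q maps each state to an array of action values."""
    rho = float(np.dot(w, r_vec))                  # rho = w . r_{t+1}
    Q[s][a] += alpha * (rho + gamma * np.max(Q[s_next]) - Q[s][a])

def on_preference_change(ccs, current, w_old, w_new, phi):
    """Generic RPB handling of a preference change (Figure 5).
    ccs: list of steppingstones (pi, w, beta); current: the working policy."""
    dist = lambda u, v: float(np.linalg.norm(np.subtract(u, v)))
    if dist(w_new, w_old) < phi:                   # same preference region
        return current, ccs
    # Store the more robust steppingstone (larger beta) for the old region.
    for k, p in enumerate(ccs):
        if dist(p[1], w_old) < phi:
            if current[2] > p[2]:
                ccs[k] = current
            break
    else:
        ccs.append(current)
    # Bootstrap from the steppingstone closest to the new preference.
    nearest = min(ccs, key=lambda p: dist(p[1], w_new))
    pi_init = {s: q.copy() for s, q in nearest[0].items()}   # copy Q-values
    return (pi_init, tuple(w_new), 0.0), ccs       # beta re-estimated online
```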
The pseudocode of the proposed robust policy bootstrapping algorithm is presented in Algorithm 2. There are three main design decisions that need to be configured in the previously described generic procedure:

1. The distance function \(d\left(\overrightarrow{w}_{t},\overrightarrow{w}_{t-1}\right)\) that measures the difference between user preferences.
2. The metric \(\beta(p)\) for measuring the robustness of generated policies.
3. The value of the preference significance threshold parameter \(\varphi\).

We conduct a separate empirical analysis for each of these design decisions in order to highlight the impact of different configurations.

## Experimental Design

In this section, we present our experimental design for assessing the performance of the proposed algorithm.

### Environments

In this work, a benchmark of three MOMDP environments is utilized: a resource gathering environment, a deep-sea treasure environment, and a search and rescue environment. The former two environments are well-established in the relevant literature [2], while the latter was proposed in our previous work [1]. The latter environment has the stochastic state transition dynamics necessary for examining performance in non-stationary environments. A graphical representation of these environments is shown in Figure 6.

#### Search and Rescue (SAR) Environment

**State Space:** The SAR environment is a grid-world with dimensions \(9\times 9\) that simulates a task in which the learning agent has to search for trapped victims while avoiding exposure to fire. Accordingly, the state representation is defined as \(\langle X,Y,F,O,H\rangle\), where \(X\), \(Y\) indicate the current position, and \(F,\,O,\,H\) are boolean flags for the existence of fire, an obstacle, or a victim at the current position. As a stochastic feature of the state transition in this environment, each victim has a random death time \(\xi_{i},\,i\in\{1,2,3,\dots,N\}\) for \(N\) victims. The episode ends when all victims are dead or rescued.

**Action Space:** The action space is \(A=\{MoveEast,\,MoveWest,\,MoveNorth,\,MoveSouth\}\) with one square per movement. Attempting to move into a location occupied by an obstacle fails (i.e., the agent remains at its current location and incurs a penalty).

**Reward Functions:** This environment includes two main objectives: minimizing the time to finish the task and minimizing destruction by fire. Accordingly, there are two corresponding reward functions \(\vec{r}=\left[r^{fire},\,r^{time}\right],\,\vec{r}\in\mathbb{R}^{2}\). Every time the agent collides with a fire it gets a penalty of \(-5\) (and \(0\) otherwise), while there is a constant time penalty of \(-1\) for each step.

Figure 5: A flowchart diagram describing the generic RPB algorithm workflow (Algorithm 2).

#### Deep Sea Treasure (DST) Environment

**State Space:** This grid-world environment has dimensions of \(10\times 11\). It simulates an undersea treasure search scenario in which a submarine agent tries to maximize the found treasure value. The state representation is defined by the features of the current location \(\langle X,Y\rangle\). The episode ends when a treasure is found.

**Action Space:** The learning agent can move in one of the four directions at each step, \(A=\left\{Left,\ Right,\ Up,\ Down\right\}\), where moving in a direction that would take the agent outside the grid leaves the state unchanged.

**Reward Functions:** The submarine has two conflicting objectives: maximizing the treasure reward while minimizing the task's time. The reward vector for these objectives is \(\vec{r}=\left[\,r^{time},\,r^{treasure}\right],\,\vec{r}\in\mathbb{R}^{2}\).
The value of \(r^{time}\) is a constant equal to \(-1\) on every action, while the treasure reward \(r^{treasure}\) is unique for each treasure.

#### Resources Gathering (RG) Environment

**State Space:** The dimensions of this grid-world environment are \(5\times 5\). The learning agent aims at acquiring resources and returning back home without being attacked by an enemy. The representation of the state space is \(\langle X,Y,G,M,E\rangle\), where \(X\), \(Y\) are the features of the current position, and \(G,M,E\) are boolean flags referring to the existence of gold, a gem, or an enemy at the current position. This environment has stochastic state transitions with an enemy attack probability of \(10\%\). After an enemy attack, the agent loses any gathered resources and is returned back to its home. The episode ends when the agent safely returns to the home location or when an attack occurs.

**Action Space:** The learning agent can move in one of the four directions, \(A=\left\{MoveEast,\,MoveWest,\,MoveNorth,\,MoveSouth\right\}\), by one step at a time.

**Reward Functions:** The learning agent has two objectives in this environment: to maximize the reward value of collected resources and to minimize exposure to attacks from the enemy. These are represented as \(\vec{r}=\left[\,r^{resources},\,r^{enemy}\right],\,\vec{r}\in\mathbb{R}^{2}\), where \(r^{resources}\) equals \(1\) for each gathered resource, and every enemy attack results in a penalty \(r^{enemy}\) of \(-1\) (a minimal step-interface sketch for this environment is given at the end of this section).

### Comparison Algorithms

We compare our proposed algorithm with three MORL algorithms: 1) Optimistic Linear Support (OLS) (Roijers et al., 2014), 2) Threshold Lexicographic Ordering (TLO) (Gabor et al., 1998), and 3) Robust Fuzzy Policy Bootstrapping (RFPB) (Abdelfattah et al., 2018). In addition, we add the SQ-L algorithm (see Algorithm 1) to the comparison as a baseline, using a random policy initialization after each preference change. We explored an alternative policy initialization approach for the SQ-L algorithm that adopts the parameters of the policy optimized for the last preference, and found that, given uniform preference sampling, there was no significant difference between these two initialization approaches. As both the OLS and TLO algorithms require an offline simulation phase to evolve their coverage sets, we run them until convergence before comparing with our algorithm in the execution phase. For the parameter configurations of the OLS, TLO, and RFPB algorithms we use the same configurations as in (Roijers et al., 2014), (Geibel, 2006), and (Abdelfattah et al., 2018), respectively. For the proposed RPB algorithm, we conduct an empirical analysis in order to compare the effect of different configurations.

### Experiments

To shed light on the design decisions and to assess the performance of our RPB algorithm, we conduct five experiments. These include an empirical analysis for each of the three main design decisions: 1) the preference significance threshold, 2) the robustness metric, and 3) the distance function; and two additional experiments contrasting the performance of the proposed algorithm with state-of-the-art methods from the MORL literature in stationary and non-stationary environments.
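As an illustration of the environment interface assumed throughout these experiments (a minimal sketch; the grid layout, object coordinates, and class name are invented for the example), a step in the resource gathering environment returns the two-element reward vector \(\vec{r}=[r^{resources},\,r^{enemy}]\):

```python
import numpy as np

class ResourceGatheringSketch:
    """Minimal 5x5 RG grid-world sketch with vector rewards."""
    MOVES = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}  # E, W, N, S

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.home, self.gold, self.gem, self.enemy = (0, 0), (4, 2), (2, 4), (3, 3)
        self.reset()

    def reset(self):
        self.pos, self.collected = self.home, set()
        return self.pos

    def step(self, action):
        dx, dy = self.MOVES[action]
        self.pos = (min(max(self.pos[0] + dx, 0), 4),
                    min(max(self.pos[1] + dy, 0), 4))
        r = np.zeros(2)                      # [r_resources, r_enemy]
        if self.pos == self.enemy and self.rng.random() < 0.10:
            r[1] = -1.0                      # attack: lose resources, return home
            self.pos, self.collected = self.home, set()
            return self.pos, r, True         # episode ends on an attack
        if self.pos in (self.gold, self.gem) and self.pos not in self.collected:
            r[0] = 1.0                       # +1 per gathered resource
            self.collected.add(self.pos)
        done = self.pos == self.home and len(self.collected) > 0
        return self.pos, r, done
```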
### Empirical Analysis for the Preference Significance Threshold (\(\varphi\))

**Aim:** This experiment aims at assessing the optimal value of the preference significance threshold (\(\varphi\)) in each experimental environment.

**Method:** We evaluate 10 different threshold values that gradually increase from \(0.05\) to \(0.5\). For each run, we execute the nine uniformly sampled preferences in Table 1, giving each preference 800 episodes. We perform 15 independent runs for each environment.

**Evaluation Criteria:** We compare the reward loss values after switching the preference for each threshold value over the 15 independent runs. The threshold that achieves the lowest value on this metric is the best fit for the environment.

Figure 6: Graphical representation of the benchmark environments. (a) The search and rescue (SAR) environment. (b) The deep sea treasure (DST) environment. (c) The resource gathering (RG) environment (Abdelfattah et al., 2018).

### Empirical Analysis for the Robustness Metric (\(\beta\))

**Aim:** The aim of this experiment is to assess which robustness metric to use in the design of the RPB algorithm.

**Method:** To investigate the different variations of policy robustness metrics, we evaluated five candidate metrics for each environment, selected based on the relevant literature on policy robustness evaluation [1, 2, 3]. The metrics are presented in Table 2. The first metric measures the stability of the policy's behaviour in terms of the mean of the rewards divided by their standard deviation. The second metric measures the dispersion of a probability distribution (i.e., the observed reward values in our case); it is the ratio of variance to mean and is called the index of dispersion (IoD). The third metric measures the degree of variability with respect to the mean of the population; it is calculated as the ratio between the standard deviation and the mean and is referred to as the coefficient of variation (CV). The fourth metric is the entropy of the reward distribution, derived from concepts of information theory. Finally, the fifth metric is the regret, defined as the difference between the reward mean of the current policy and the reward mean of the optimal policy. While this metric has the potential to guide the policy search given a ground truth (an optimal policy), it is not applicable in many scenarios in which an optimal policy cannot be known prior to solving the problem. For the evaluation of this last metric, we utilize the policies generated by the OLS algorithm as the optimal policies to compare with. We separate the regret metric with a dotted line in Table 2 in order to indicate the difference in its assumptions compared to the rest of the metrics.

**Evaluation Criteria:** We compare the five metrics by running 15 independent runs for each metric. Each run includes 800 episodes for each of the nine sampled preferences in Table 1. Then, we calculate the median reward value for each preference and sum the medians to get one representative value for each run. Finally, we average the 15 independent sums and visualize the average with standard deviation bars.

### Empirical Analysis for the Distance Function (\(d(\overrightarrow{w}_{t-1},\overrightarrow{w}_{t})\))

**Aim:** In this experiment, we evaluate the effectiveness of various distance functions over all experimental environments.

**Method:** We evaluate four well-known distance functions: the Euclidean, Hamming, Cosine, and Manhattan distance functions. The equations for these functions are listed in Table 3.

**Evaluation Criteria:** We follow similar evaluation criteria as in the robustness metric empirical analysis. We run 15 independent runs for each function. Each run includes 800 episodes for each of the nine sampled preferences in Table 1. Then, we calculate the median reward value for each preference and sum the medians to get one representative value for each run. Finally, we average the 15 independent sums and visualize the average with standard deviation bars.
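The candidate robustness metrics of Table 2 and the distance functions of Table 3 reduce to a few lines of NumPy. This is our own sketch; the histogram binning for the entropy is our choice, and we implement the cosine entry as one minus the similarity listed in Table 3:

```python
import numpy as np
from scipy.stats import entropy as shannon_entropy

def robustness_metrics(rewards, optimal_mean=None):
    """Candidate robustness metrics (Table 2) for a policy's reward history."""
    r = np.asarray(rewards, dtype=float)
    mu, sd = r.mean(), r.std()
    metrics = {
        "stability": mu / sd,        # mean / standard deviation
        "IoD": r.var() / mu,         # index of dispersion: variance / mean
        "CV": sd / mu,               # coefficient of variation: std / mean
        "entropy": shannon_entropy(np.histogram(r, bins=10)[0] + 1e-12),
    }
    if optimal_mean is not None:     # regret requires a known optimal policy
        metrics["regret"] = optimal_mean - mu
    return metrics

def preference_distance(w_prev, w_new, kind="euclidean"):
    """Distance functions of Table 3 between two preference vectors."""
    a, b = np.asarray(w_prev, float), np.asarray(w_new, float)
    if kind == "euclidean":
        return float(np.sqrt(np.sum((a - b) ** 2)))
    if kind == "hamming":
        return float(np.sum(a != b))
    if kind == "cosine":             # 1 - similarity (Table 3 lists the similarity)
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    if kind == "manhattan":
        return float(np.sum(np.abs(a - b)))
    raise ValueError(f"unknown distance: {kind}")
```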
### Performance Evaluation in Stationary Environments

**Aim:** The aim of this experiment is to assess the impact of user preference changes on each of the comparison methods while neutralizing the impact of changes in the environment's dynamics by operating in a stationary environment. In this experiment, the environment's dynamics (i.e., the distribution of objects) are fixed during each run; only the start location of the agent is randomized at the beginning of each run. In addition, the two comparative MORL algorithms (OLS and TLO) assume a stationary environment setup; therefore, this experiment guarantees a fair comparison by including the best-case scenario for them.

**Method:** We compare the four algorithms based on a set of nine random preferences sampled uniformly from the two-dimensional weight space for the three experimental environments. Table 1 presents the set of randomly sampled preferences. We execute \(30\) runs, each with a different initial distribution of objects. Each preference is given a total of \(800\) episodes, and the average reward value over the last \(50\) episodes is used to compare the four algorithms. As the OLS and TLO algorithms work in an offline mode, we first run their training procedures until convergence (evolving the CCS) and then compare them with the other algorithms. The RPB algorithm is configured based on the outcomes of the empirical analysis of each design decision.

**Evaluation Criteria:** We evaluate the performance of each algorithm using two metrics: the average reward value over the last 50 episodes for each preference, and the average reward loss after a preference change. The former metric reflects the performance level of the generated policy for each algorithm in response to the user's preference in terms of reward value after convergence, while the latter metric shows the robustness of each algorithm to changes in the user's preference in terms of reward loss. For further illustration of these two metrics, Figure 7 shows a diagram of the average reward signal over a single preference change from preference (A) to preference (B). The value \(\Gamma_{c}\) is the average reward value over the last \(50\) episodes of preference (A), while \(\Gamma_{I}\) is the average reward value over the first 50 episodes of preference (B). We utilize \(\Gamma_{c}\) to show the average reward value after convergence and \((\Gamma_{c}-\Gamma_{I})\) to show the average loss value after the preference change from (A) to (B).

### Performance Evaluation in Non-stationary Environments

**Aim:** This experiment aims at evaluating the impact of changes in the environment dynamics while using the same preference change pattern, so that the performance level of each comparison algorithm in non-stationary environments can be well contrasted. To achieve this, we allow the environment's setup to change within each run by letting \(25\%\) of the objects (e.g., fire, victim, obstacle, treasure) change their locations randomly every \(100\) episodes.

**Method:** We utilize the same method as in the stationary environment experiment.

**Evaluation Criteria:** We use the same evaluation metrics as in the stationary environment experiment.
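The two evaluation metrics can be computed directly from the per-episode reward traces; the sketch below follows the windowing described in the text (last/first 50 episodes), with function names of our own choosing:

```python
import numpy as np

def evaluation_metrics(rewards_pref_a, rewards_pref_b, window=50):
    """Gamma_c: average reward over the last `window` episodes of preference (A).
    Loss: Gamma_c - Gamma_I, where Gamma_I is the average reward over the
    first `window` episodes after the change to preference (B) (Figure 7)."""
    gamma_c = float(np.mean(rewards_pref_a[-window:]))
    gamma_i = float(np.mean(rewards_pref_b[:window]))
    return gamma_c, gamma_c - gamma_i
```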
## Results and Discussion

In this section, we present and discuss the results of each of the five experiments included in the experimental design.

### Empirical Analysis for the Preference Significance Threshold (\(\varphi\))

Figure 8 shows the average reward loss after a preference change for the evaluated values of the preference distance threshold parameter (\(\varphi\)) in each experimental environment. For the SAR environment, the value of \(0.25\) was found to be optimal, as significant reward loss was observed for higher values, while for the DST and RG environments the optimal value for \(\varphi\) was found to be \(0.15\). These results indicate that there is no single optimal threshold value that fits all scenarios. In other words, the preference threshold parameter needs to be tuned for each environment based on its dynamics and the defined reward functions. This finding opens the way for future enhancements to the proposed algorithm that automatically tune this threshold parameter (our previously proposed algorithm (Abdelfattah et al., 2018), while more complex, is able to do this). The RPB algorithm will utilize the identified \(\varphi\) values for each environment during the comparison with other MORL algorithms.

### Empirical Analysis for the Robustness Metric (\(\beta\))

Figure 9 shows the comparison results for the robustness metrics. While the regret metric achieved the best results in terms of average reward value over the three experimental environments, it is not applicable in many scenarios in which the CCS of policies is not defined beforehand. The stability metric achieved the best overall results in comparison to the three remaining metrics. Thus, the RPB algorithm will use the stability metric during the comparison with other algorithms.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Preference & \(P_{1}\) & \(P_{2}\) & \(P_{3}\) & \(P_{4}\) & \(P_{5}\) & \(P_{6}\) & \(P_{7}\) & \(P_{8}\) & \(P_{9}\) \\ \hline \(w_{1}\) & 0.66 & 0.33 & 0.28 & 0.54 & 0.68 & 0.44 & 0.88 & 0.65 & 0.48 \\ \(w_{2}\) & 0.34 & 0.67 & 0.72 & 0.46 & 0.32 & 0.56 & 0.12 & 0.35 & 0.52 \\ \hline \hline \end{tabular} \end{table} Table 1: The set of uniformly sampled user preferences utilized in the experimental design.

\begin{table} \begin{tabular}{c c} \hline \hline Distance Function & Equation \\ \hline Euclidean & \(\sqrt{\sum_{m=1}^{M}\left(w_{t-1}^{m}-w_{t}^{m}\right)^{2}}\) \\ Hamming & \(\sum_{m=1}^{M}\left[w_{t-1}^{m}\neq w_{t}^{m}\right]\) \\ Cosine & \(\frac{\mathbf{w_{t-1}}\cdot\mathbf{w_{t}}}{\left\|\mathbf{w_{t-1}}\right\|\left\|\mathbf{w_{t}}\right\|}\) \\ Manhattan & \(\sum_{m=1}^{M}\left|w_{t-1}^{m}-w_{t}^{m}\right|\) \\ \hline \hline \end{tabular} \end{table} Table 3: A list of the distance functions evaluated in the empirical analysis.

Figure 7: A diagram that illustrates the two evaluation metrics utilized in our experimental design. The average reward value over the last 50 episodes is represented by \(\Gamma_{c}\), while the average reward loss after a change from preference (A) to preference (B) is represented by \((\Gamma_{c}-\Gamma_{I})\).

### Empirical Analysis for the Distance Function (\(d\left(\overrightarrow{w}_{t-1},\overrightarrow{w}_{t}\right)\))

Figure 10 depicts the comparison results for the different distance functions in each experimental environment.
Based on the results, it can be observed that the Euclidean distance function achieved the best overall results among the evaluated functions. A possible justification for this finding is the representation of the user's preferences in our methodology as a two-dimensional Euclidean space (\(w^{1},w^{2}\)) in which preferences are linearly separable. The RPB algorithm will utilize the Euclidean distance function during the comparison with other algorithms.

### Performance Evaluation in Stationary Environments

We present and discuss the results for each experimental environment and for each of the two evaluation metrics utilized. For the average reward value over the last \(50\) episodes metric, Figure 11 shows a line plot that visualizes the average reward value and its standard deviation over \(30\) runs for each experimental environment and each comparison algorithm. While the results of the three environments differ in the magnitude of the average reward values due to their different reward functions, they carry common implications. It can be noticed that OLS and TLO achieved the highest average reward values across the three experimental environments: as the two algorithms evolved their CCS during their offline training phases, they were able to respond with an optimal policy after each preference change. While the RPB and RFPB algorithms started with an empty CCS, as they work in an online manner, they achieved results comparable to the OLS and TLO algorithms and significantly (t-test: p-value \(<0.05\)) outperformed the SQ-L algorithm configured to respond to each preference change with a randomly initialized policy. The RPB algorithm benefits from the policy bootstrapping mechanism, which enables it to build on previous experiences for each preference region given the threshold value (\(\varphi\)). The advantage of the RPB algorithm is its simplicity, while the RFPB algorithm removes the requirement to tune the preference significance threshold parameter.

Figure 8: Empirical analysis results for determining the optimal (\(\varphi\)) threshold value of the RPB algorithm for each experimental environment. The boxplots represent the distribution of reward loss values after switching preferences for each threshold value. (a) Results for the SAR environment. (b) Results for the DST environment. (c) Results for the RG environment.

For the average reward loss metric, Figure 12 depicts the results of \(30\) runs for each experimental environment and each algorithm. As the RFPB achieved results similar to the RPB on the average reward metric, we only report the RPB results on the loss metric due to space constraints. The average reward loss after each preference change corroborates the findings from the average reward value metric: both the OLS and TLO algorithms benefited from the CCS evolved during their offline training phases to minimize the reward loss after preference changes, while our RPB algorithm achieved comparable results and significantly outperformed the SQ-L algorithm with its random initialization strategy after each preference change. The results of the two metrics indicate that our RPB algorithm, although it works in an online manner, can achieve results comparable to the OLS and TLO algorithms, which require an offline training phase to evolve their CCS.
Also, the RPB achieved a performance level similar to that of the RFPB algorithm, while adopting a simpler preference decomposition technique. Moreover, the RPB algorithm outperforms the SQ-L baseline representing the single policy MORL approach, which responds to each new preference with a randomly initialized policy. While the results show that the current MORL multiple policy algorithms can deal with stationary environments through offline exhaustive search techniques, such environments are quite rare in practical problems. In the next experiment, we assess the performance of the comparison algorithms in non-stationary environments to contrast the advantage of the proposed RPB algorithm.

### Performance Evaluation in Non-stationary Environments

For the average reward value over the last \(50\) episodes metric, Figure 13 presents the comparison results for each experimental environment. Despite the variation in reward magnitude across environments, resulting from their different reward functions, the results show that the RPB algorithm performed comparably to the RFPB algorithm, while it significantly outperformed (t-test: p-value \(<0.05\)) the other comparative algorithms across all experimental environments. This is mainly due to its ability to evolve the CCS in an online manner, which enabled it to better cope with changes in the environment's dynamics than the OLS and TLO algorithms, which failed to adapt because of their outdated CCS evolved in an offline manner. It can also be noticed that the SQ-L baseline, representing the single policy MORL approach, failed to adapt adequately, as it cannot accumulate knowledge from past experiences. Another remark on these results is that in some situations the OLS and TLO algorithms performed worse than the SQ-L baseline, as their CCS were greedily optimized for an outdated environment setup; consequently, this experience was misleading in the new setup after changes in the environment's dynamics.

Figure 9: Empirical analysis results for determining the robustness metric to be used in the RPB algorithm for each experimental environment. (a) Results for the SAR environment. (b) Results for the DST environment. (c) Results for the RG environment. The error bars represent the standard deviation.

The results for the average reward loss after preference change are visualized in Figure 14. These results convey a message similar to that of the average reward value results: our RPB algorithm significantly outperformed (t-test: p-value \(<0.05\)) the other comparative algorithms in minimizing the reward loss after preference changes, while the two state-of-the-art multiple policy MORL algorithms (OLS and TLO) performed no better than the SQ-L baseline due to their limited ability to adapt to non-stationary dynamics in an online manner. Based on these results, our RPB algorithm behaved in a more robust and adaptive manner in the non-stationary environments compared to the other algorithms. The reasons behind this finding can be summarized as follows. First, the RPB algorithm targets generic steppingstone policies instead of the policies specialized to specific environment setups used by the other algorithms, which makes its evolved coverage set more robust to changes in the environment dynamics.
Second, the RPB algorithm continuously evaluates and enhances its policy set, which cannot be achieved by the comparison algorithms that depend on offline, exhaustive policy search techniques.

## Conclusion and Future Work

This paper proposed a novel algorithm that can robustly address multi-objective sequential decision-making problems in non-stationary environments. This is achieved by evolving a robust steppingstone policy coverage set in an online manner, then utilizing this coverage set to bootstrap specialized policies in response to changes in the user's preference or in the environment dynamics. We compared our proposed algorithm with state-of-the-art multi-objective reinforcement learning algorithms over three different multi-objective environments using both stationary and non-stationary dynamics.

Figure 10: Empirical analysis results for determining the distance function to be used in the RPB algorithm for each experimental environment. (a) Results for the SAR environment. (b) Results for the DST environment. (c) Results for the RG environment. The error bars represent the standard deviation.

Figure 11: Average reward value and standard deviation over \(30\) runs achieved by the four algorithms for each preference in the stationary environments. (a) the SAR environment. (b) the DST environment. (c) the RG environment.

Figure 12: Average reward loss after preference change with standard deviation over \(30\) runs achieved by the four algorithms in the stationary environments. (a) the SAR environment. (b) the DST environment. (c) the RG environment.

Figure 13: Average reward value and standard deviation over \(30\) runs achieved by the four algorithms for each preference in the non-stationary environments. (a) the SAR environment. (b) the DST environment. (c) the RG environment.

Figure 14: Average reward loss after preference change with standard deviation over \(30\) runs achieved by the four algorithms in the non-stationary environments. (a) the SAR environment. (b) the DST environment. (c) the RG environment.

We experimentally analyzed the different design decisions of the proposed algorithm in order to transparently describe the configuration setup and to indicate the configurable parts that can be tailored to different application scenarios. To contrast the performance of our proposed algorithm, we compared it with two state-of-the-art MORL algorithms and one baseline algorithm based on the well-known Q-learning algorithm, under both stationary and non-stationary environmental dynamics. In the stationary environments, the performance of the proposed algorithm was comparable to that of the other algorithms, while in the non-stationary environments the proposed algorithm significantly outperformed the other algorithms in terms of the average reward value achieved, as it adapted better to changes in the environment dynamics. Future extensions of this work can be summarized in three main points. First, we are going to explore adaptive strategies for automatic preference exploration instead of the random strategy currently adopted. Unsupervised learning-based generative models can be used for this purpose; candidate strategies include generative adversarial networks (Goodfellow et al., 2014) and intrinsically motivated exploration (Merrick and Maher, 2009).
Second, we will investigate the impact of utilizing non-linear scalarization functions, such as the Chebyshev function, on the performance of the RPB algorithm. Finally, we are going to enhance the generalization ability of the RPB algorithm by adding a generic skill learning module, so that the generated skill set can be used to speed up the learning of specialized policies in different environments. We believe that equipping our proposed algorithm with the ability to learn generic skills will facilitate the evolution of better policies and the transfer of learned knowledge across different environments.
2306.10911
Dense Molecular Environments of B[e] Supergiants and Yellow Hypergiants
Massive stars expel large amounts of mass during their late evolutionary phases. We aim to unveil the physical conditions within the warm molecular environments of B[e] supergiants (B[e]SGs) and yellow hypergiants (YHGs), which are known to be embedded in circumstellar shells and disks. We present K-band spectra of two B[e]SGs from the Large Magellanic Cloud and four Galactic YHGs. The CO band emission detected from the B[e]SGs LHA 120-S 12 and LHA 120-S 134 suggests that these stars are surrounded by stable rotating molecular rings. The spectra of the YHGs display a rather diverse appearance. The objects 6 Cas and V509 Cas lack any molecular features. The star [FMR2006] 15 displays blue-shifted CO bands in emission, which might be explained by a possible close to pole-on oriented bipolar outflow. In contrast, HD 179821 shows blue-shifted CO bands in absorption. While the star itself is too hot to form molecules in its outer atmosphere, we propose that it might have experienced a recent outburst. We speculate that we currently can only see the approaching part of the expelled matter because the star itself might still block the receding parts of a (possibly) expanding gas shell.
Michaela Kraus, Michalis Kourniotis, Maria Laura Arias, Andrea F. Torres, Dieter H. Nickeler
2023-06-19T13:16:32Z
http://arxiv.org/abs/2306.10911v1
# Dense Molecular Environments of B[e] Supergiants and Yellow Hypergiants

###### Abstract

Massive stars expel large amounts of mass during their late evolutionary phases. We aim to unveil the physical conditions within the warm molecular environments of B[e] supergiants (B[e]SGs) and yellow hypergiants (YHGs), which are known to be embedded in circumstellar shells and disks. We present \(K\)-band spectra of two B[e]SGs from the Large Magellanic Cloud and four Galactic YHGs. The CO band emission detected from the B[e]SGs LHA 120-S 12 and LHA 120-S 134 suggests that these stars are surrounded by stable rotating molecular rings. The spectra of the YHGs display a rather diverse appearance. The objects 6 Cas and V509 Cas lack any molecular features. The star [FMR2006] 15 displays blue-shifted CO bands in emission, which might be explained by a possible close to pole-on oriented bipolar outflow. In contrast, HD 179821 shows blue-shifted CO bands in absorption. While the star itself is too hot to form molecules in its outer atmosphere, we propose that it might have experienced a recent outburst. We speculate that we currently can only see the approaching part of the expelled matter because the star itself might still block the receding parts of a (possibly) expanding gas shell.

Keywords: stars: massive; stars: supergiants; stars: winds; outflows; circumstellar matter

## 1 Introduction

The evolution of massive stars (\(M_{\rm{ini}}\gtrsim 8\) M\({}_{\odot}\)) bears many uncertainties, which render it difficult to trace such objects from the cradle up to their spectacular explosion as a supernova. One major hindrance is the poorly constrained mass loss due to stellar winds that the stars experience along the course of their evolution. Furthermore, the post-main sequence evolution of massive stars encounters phases in which the stars lose a significant amount of mass due to episodically enhanced mass loss or occasional mass eruptions, both of poorly understood origin. The ejected mass can accumulate around the star in rings, shells, or bipolar lobes, as seen in some B- or B[e]-type supergiants (e.g., Sher 25 [1, 2], MWC 137 [3, 4], SBW1 [5]), yellow hypergiants (IRC +10420 [6] and Hen 3-1379 [7]), many luminous blue variables [8, 9], and Wolf-Rayet stars [10, 11, 12, 13, 14]. Two groups of evolved massive stars are particularly interesting. These are the B[e] supergiants (B[e]SGs) and the yellow hypergiants (YHGs). Both types of objects have dense and warm circumstellar environments, and representatives of both classes show (at least occasionally) emission from hot molecular gas.

### B[e] Supergiants

The group of B[e]SGs consists of luminous (\(\log(L_{*}/L_{\odot})\geq 4.0\)) post-main sequence B-type emission line stars. Besides large-scale ejecta (with sizes of several pc) detected in some B[e]SGs [3, 9], all objects have intense winds and are surrounded by massive disks on small scales (up to \(\sim\)100 AU) [15; 16; 17], giving rise to the specific emission features characterizing stars with the B[e] phenomenon [18].
These disks give shelter to a diversity of molecular and dust species, and the near-infrared (NIR) is an ideal wavelength regime to detect molecular emission features that, when resolved with high spectral resolution, provide insight into the physical properties of the disks and reveal the disk dynamics. The most commonly observed molecule is CO. The emission from its first-overtone bands arises in the \(K\)-band around 2.3 \(\upmu\)m and has been detected in about 50% of the B[e]SGs [19]. Besides the main isotope \({}^{12}\)CO, emission from \({}^{13}\)CO is seen in considerable amounts [20; 21], confirming that the matter from which the disks have formed contains processed material that must have been released from the stellar surface [22]. Emission from the first-overtone bands of SiO, arising in the \(L\)-band around 4 \(\upmu\)m, has been reported for some Galactic B[e]SGs [23]. SiO has a lower binding energy than CO. It thus forms at distances farther away from the star than CO. The individual ro-vibrational lines in both CO and SiO are kinematically broadened with a double-peaked profile, and (quasi-)Keplerian rotation of the molecular gas around the central object has been suggested as the most likely explanation to interpret the spectral appearance [17; 23; 24; 25; 26; 27; 28]. Observations of B[e]SGs in the NIR are sparse. But persistent CO band emission over years and decades has been detected in numerous objects [21; 29; 30] and has been used as one of the criteria to identify and classify stars as B[e]SGs in Local Group galaxies [31; 32]. However, in a few cases, considerable variability in these emission features has been reported as well. The most striking object is certainly the B[e]SG star LHA 115-S 65 in the Small Magellanic Cloud (SMC), for which a sudden appearance of CO band emission has been recorded [33]. The disk around this object is seen edge-on, and in addition to its rotation around the central object, it also drifts outwards, very slowly and with a velocity decreasing with distance from the star and reaching about zero [34]. This slowdown might have resulted in a build-up of density in regions favorable for molecule condensation and for excitation of the CO bands. Furthermore, LHA 115-S 18 in the SMC showed no CO band emission back in 1987/1989 [35], whereas follow-up observations in November 1995 taken with a more than three times higher resolution displayed intense CO bands [36], which were also seen in the observations acquired in October 2009 [21]. In the spectrum of the Galactic B[e]SG MWC 349, intense CO band emission appeared in the early 1980s [37]. It was still observable in 2013, but by then the CO gas had clearly cooled and the emission intensity had significantly decreased, which has been interpreted as due to expansion and dilution of the circumstellar disk [38]. Two more objects in the Large Magellanic Cloud (LMC) displayed indications of CO band variability, most likely related to inhomogeneities within the distribution of the molecular gas around the central star. These are LHA 120-S 73 [24] and LHA 120-S 35 [27]. In the optical range, indications for emission from TiO molecular bands have been found in six B[e]SGs [24; 27; 39; 40]. All six objects reside in the Magellanic Clouds, and five of them also have CO band emission1. No Galactic B[e]SG has been reported to date to display TiO band emission [19]. Footnote 1: The \(K\)-band is a function of the spectral type of the B[e]SG. 
### Yellow Hypergiants

With temperatures in the range \(T_{\rm eff}\simeq 4000-8000\) K and luminosities \(\log(L/L_{\odot})\) spreading from 5.2 to 5.8, the YHGs populate a rather narrow domain in the Hertzsprung-Russell (HR) diagram. The stars are in their post-red supergiant (post-RSG) evolutionary phase [41], and their luminosities place them on evolutionary tracks of stars with initial masses in the range \(M_{\rm ini}\simeq 20-40\) M\({}_{\odot}\). Evolutionary calculations of (rotating) stars in this mass range have shown that these objects can indeed evolve back to the blue, hot side of the HR diagram [42], whereas stars with lower initial mass just reach the RSG stage before they explode as SNe of type II-P. Support for this theoretical scenario is provided by the lack of high-mass (\(M_{\rm ini}\geq 18\) M\({}_{\odot}\), i.e., with luminosities \(\log L/L_{\odot}>5.1\)) RSG progenitors for this type of supernovae [43; 44]. As post-RSGs, the YHGs might be expected to be embedded in envelopes, remnants of the previous mass-losing activities during the RSG stage. However, surprisingly, so far only about half of the YHGs have been reported to have a dusty and/or cold molecular envelope. These are the Galactic objects IRC +10420 [45; 46; 47], HR 5171A [48], HD 179821 [49] and Hen 3-1379 [7; 50], three YHGs in the LMC (HD 269953, HD 269723, HD 268757, [51]), as well as Var A [52] and three more YHG candidates in the galaxy M33 [53].

A typical classification characteristic of YHGs is the occurrence of outbursts that can be clearly discriminated from the more regular (cyclic) brightness variability due to stellar pulsations. During such an outburst event, the star inflates, its brightness drastically decreases, and the object seems to undergo a red loop evolution in the HR diagram. Molecules such as TiO and CO can form in the cool, outer atmospheric layers, leading to intense absorption structures in the optical and NIR, respectively, and the object's entire spectral appearance resembles that of a much later spectral type. The outbursts are most likely connected with enhanced mass loss or mass eruptions from the star, which might be connected to non-linear instabilities such as finite-time singularities or blow-ups typically occurring in fluid dynamics [54], or to strange-mode instabilities [55; 56], as recent computations propose [57]. The duration of the outbursts can range from months to decades before the star appears back at its real location in the HR diagram. The bona-fide YHG \(\rho\) Cas experienced four documented outbursts during the past 80 years with variable duration (from weeks up to three years) and amplitude (0.29 to 1.69 mag), connected with significant changes in its spectral appearance [58; 59; 60; 61], whereas Var A in M33 presumably underwent an eruption around 1950 that lasted \(\sim\)45 years [52]. The object V509 Cas experienced mass-loss events in the seventies, during which the star's apparent temperature decreased significantly. Afterwards, the star displayed a steady increase in its effective temperature from \(\sim\)5000 K in 1973 to \(\sim\)8000 K in 2001 [62], at which it has since stabilized [63]. Furthermore, IRC +10420 changed its spectral type from F8 to mid- to early A, connected with an increase in temperature over a period of about 30 years at an average rate of \(\sim\)120 K per year [64; 65]. The light curves of other YHG candidates also display outburst activity in connection with variable mass loss [51; 53].
However, in many cases, the mass-loss episodes appear to be short, so that the released material expands and dilutes without creating detectable large-scale circumstellar envelopes [66]. Nevertheless, many YHGs are surrounded (or were for some period in the past) by hot molecular gas traced by first-overtone CO band emission, suggesting that the objects are embedded in a dense and hot environment. Whether the molecular gas is arranged in a ring revolving around the object, as in the case of the B[e]SGs, is currently not known. However, the CO band spectral features in YHGs seem to be much more variable than in their hotter B[e]SG counterparts, especially because they often appear superimposed on photospheric CO band absorption that forms during the expansion and cooling periods of the long-term pulsation cycles, particularly in the cooler YHGs, or during outburst events. One such candidate with cyclic CO band variability is \(\rho\) Cas. During its pulsation cycles, CO band emission appears when the star is hottest (maximum brightness) and most compact, whereas CO bands are seen in absorption when the star is coolest (minimum brightness) and most inflated [67]. It has been speculated that the appearance of CO bands in emission might be related to propagating pulsation-driven shock waves in the outer atmosphere of the star [67]. An alternative scenario would also be conceivable, in which the CO emission could be permanent, arising from a circumstellar ring or shell and being detectable only during phases in which no photospheric absorption compensates the emission [60]. Support for the latter scenario is provided by the fact that the CO emission features remain at constant radial velocities (along with other emission lines formed in the circumstellar environment, such as [Ca ii] and Fe i [60]), whereas the absorption components change from red- to blue-shifted, in phase with the pulsation cycle. The object \(\rho\) Cas is, to date, the best monitored YHG in the NIR. For many other YHGs, observations in the \(K\)-band have been taken only sporadically. Hence, not much can be said or concluded about the variability rhythm of their CO band features. Occasional CO band emission has been reported for the Galactic objects HD 179821 [68; 69], V509 Cas [70], [FMR2006] 15 [71], and the two LMC objects HD 269723 and HD 269953 [21; 29]. The latter even displays emission from hot water vapor and is, so far, the only evolved massive star with such emission features from its environment [72].

### Motivation and Aims

The appearance of CO band emission in the NIR spectra of B[e]SGs and YHGs suggests that similar physical conditions may prevail in the circumstellar environments of both groups of objects. In-depth studies of these conditions are rare, though. For the B[e]SGs in the Magellanic Clouds, CO column densities, temperatures, and \({}^{12}\)CO/\({}^{13}\)CO isotopic ratios were determined based on medium-resolution \(K\)-band spectra [21]. The resolution of these spectra was, however, too low to derive the gas kinematics with high confidence. On the other hand, high-resolution \(K\)-band spectra of the Galactic B[e]SG sample allowed the CO dynamics to be derived [17; 73], whereas in most cases the spectral coverage was too short to infer the density and temperature of their hot molecular environment. The situation for the YHGs is even worse.
Apart from \(\rho\) Cas [67] and the LMC object HD 269953 [21; 72], no attempts have so far been undertaken to study their warm molecular environments in more detail and to derive the parameters of the CO band emitting regions. We have therefore started to systematically observe both the B[e]SGs and the YHGs in the Milky Way and the Magellanic Clouds to fill this knowledge gap.

A further motivation for our research is provided by the location of the B[e]SGs with CO band emission in the HR diagram. As has been mentioned previously for the Magellanic Clouds sample [19], these objects cluster around luminosity values \(\log L/L_{\odot}\simeq 5.0-5.9\), whereas B[e]SGs that are more luminous than \(\sim\)5.9 or less luminous than \(\sim\)5.0 do not show CO band emission. The same holds when inspecting the Galactic B[e]SGs. In the left panel of Figure 1 we depict the LMC objects, whereas in the right panel we display the Galactic sample\({}^{2}\) from [17; 38]. The YHGs in the LMC [51] and the Milky Way [75] have been added to the plots\({}^{3}\). Their positions at the maximum and minimum effective temperature values reported in the literature are connected with dashed lines. Errors in temperature and luminosity are indicated where provided by the corresponding studies.

Figure 1: HR diagram with evolutionary tracks for LMC (\(Z=0.006\), [76], left panel) and solar (\(Z=0.014\), [42], right panel) metallicity for models of rotating stars (\(v/v_{\rm crit}=0.4\)) with initial masses from \(20-40\,{\rm M}_{\odot}\). Shown are the positions of LMC (blue symbols) and Galactic (red symbols) objects. Only B[e]SGs with hot circumstellar molecular CO gas are shown. These populate similar evolutionary tracks as the YHGs. The minimum and maximum temperature values (where known) of the YHGs are connected by dashed lines. YHGs with reported (at least once) CO band emission are shown with triangles. The stars of the current study are labeled.

Interestingly, the YHGs and the B[e]SGs with CO band emission share similar evolutionary tracks, which is particularly evident for the Galactic objects. This raises the question of whether evolved stars in this particular mass range suffer from specific instabilities in the blue and yellow temperature regimes, independent of their evolutionary state. Such instabilities would need to have the potential to drive mass ejections or eruptions, and the released mass would have to be dense and cool enough to create the conditions required for the formation of significant amounts of molecules generating intense band emission.

### Selection of Targets

For our current study, we have selected two B[e]SGs from the LMC: LHA 120-S 12 and LHA 120-S 134. Both are known to display CO band emission (see, e.g., [21]), but both lack high-resolution \(K\)-band spectra from which to derive the CO kinematics. Furthermore, we have selected four Galactic YHGs: [FMR2006] 15, HD 179821, V509 Cas, and 6 Cas. Of these, only the first three have been reported in the literature to display (at some epochs) CO band emission. The basic stellar parameters of all objects, as obtained from the literature, are listed in Table 1.

## 2 Observations and Data Reduction

High-resolution spectra (\(R\sim\) 50,000) of the two B[e]SGs were acquired with the visitor spectrograph Phoenix [84] mounted at GEMINI-South. The spectra were taken on 20 December 2004 and 30 November 2017 under program IDs GS-2004B-Q-54 and GS-2017B-Q-32.
The observations were carried out in the \(K\)-band with two different filters, K4396 and K4308. The central wavelengths were chosen such that the wavelength ranges cover the first and second band heads of the first-overtone CO band emission.

Medium-resolution \(K\)-band spectra of the YHGs were acquired with the Gemini Near-InfraRed Spectrograph (GNIRS, [85; 86]) at GEMINI-North under program IDs GN-2019A-Q-204, GN-2019B-Q-418, and GN-2021A-Q-315. The spectrum of [FMR2006] 15 was observed on 12 May 2019, centered on \(\lambda=2.35\,\mu\)m. The instrument configuration was the short camera (0.15" per pixel) with the 0.3" slit and the 111 l mm\({}^{-1}\) grating, resulting in a resolving power of \(R\sim 5900\). V509 Cas and 6 Cas were observed on 21 December 2019 with the following instrumental configuration: the long camera (0.05" per pixel) with the 0.10" slit and the 32 l mm\({}^{-1}\) grating, which provides a resolving power of \(R\sim 5100\). These observations were also centered on \(\lambda=2.35\,\mu\)m. HD 179821 was observed on 7 April 2021 with two different central wavelengths, \(\lambda=2.14\,\mu\)m and \(2.33\,\mu\)m, and with the following instrument configuration: the short camera (0.15" per pixel), the 0.3" slit, and the 111 l mm\({}^{-1}\) grating, resulting in a resolving power of \(R\sim 5900\).

For all objects, a telluric standard star (usually a late B-type main-sequence star) was observed close in time and airmass. For optimal sky subtraction, the star was positioned at two different locations along the slit (A and B), and the observations were carried out in ABBA cycles. Data reduction and telluric correction were performed using standard IRAF\({}^{4}\) tasks. The reduction steps consist of subtraction of AB pairs, flat-fielding, wavelength calibration (using the telluric lines), and telluric correction. The observing log is given in Table 2, where we list the star name, object class, observing date (UT), instrument used, covered wavelength range, spectral resolution \(R\), and resulting signal-to-noise ratio (SNR).

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
**Object** & \(\mathbf{G}\) & \(\mathbf{K_{s}}\) & \(\mathbf{\log T_{eff}}\) & \(\mathbf{\log L/L_{\odot}}\) & **Ref.** & \(\mathbf{d}\) & **Ref.** \\
 & **[mag]** & **[mag]** & **[K]** & & & **[kpc]** & \\
\hline
LHA 120-S 12 & 12.4 & 10.2 & 4.36 & \(5.34\pm 0.04\) & [15] & \(49.6\pm 0.5\) & [77] \\
LHA 120-S 134 & 11.4 & 8.6 & 4.41 & \(5.90\pm 0.06\) & [15] & \(49.6\pm 0.5\) & [77] \\
\hline
[FMR2006] 15 & 19.5 & 6.7 & 3.84 & \(5.36\pm 0.15\) & [71] & \(6.6\pm 0.9\) & [71] \\
6 Cas & — & 3.4 & 3.93 & 5.13 & [78] & \(2.8\pm 0.3\) & [79] \\
V509 Cas & 5.0 & 1.7 & 3.90 & 5.60 & [63; 80] & \(1.4\pm 0.5\) & [62]\({}^{a}\) \\
HD 179821 & 7.5 & 4.7 & 3.83 & 5.30 & [75; 81] & \(5.3\pm 0.3\) & [81; 82] \\
\hline
\end{tabular}
Note: The \(G\) and \(K_{s}\)-band magnitudes are from the GAIA Early Data Release 3 [82] and the 2MASS point source catalog [83], respectively. The YHG effective temperatures refer to the hot state. No GAIA \(G\)-band measurement is available for 6 Cas. \({}^{a}\) The listed distance is based on the HIPPARCOS parallax of \(0.73\pm 0.25\) mas. The new GAIA Early Data Release 3 parallax of \(0.2507\pm 0.0633\) mas [82] places the object at a distance of \(\sim\)4 kpc.
\end{table}
Table 1: Stellar parameters. Errors are given where available.
## 3 Results

### Description of the Spectra

CO band head emission is detected in both B[e]SGs (black lines in Figure 2), despite the low quality of some of the spectral pieces and some telluric remnants in the red parts of the short-wavelength portions (lower left panels). The spectrum of LHA 120-S 134 contains additional emission lines from the hydrogen Pfund series. Intense emission in these recombination lines has already been reported for this star based on medium-resolution spectra [21]. No contribution from the Pfund lines is seen in the spectra of LHA 120-S 12.

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
**Object** & **Class** & **Obs Date** & **Instrument** & \(\lambda_{\text{min}}-\lambda_{\text{max}}\) & **R** & **SNR** \\
 & & **[yyyy-mm-dd]** & & **[\(\mu\)m]** & & \\
\hline
LHA 120-S 12 & B[e]SG & 2004-12-20 & Phoenix & 2.291–2.300 & 50,000 & 35 \\
 & & 2017-11-30 & Phoenix & 2.319–2.330 & 50,000 & 20 \\
LHA 120-S 134 & B[e]SG & 2017-11-30 & Phoenix & 2.290–2.299 & 50,000 & 40 \\
 & & 2017-11-30 & Phoenix & 2.319–2.330 & 50,000 & 100 \\
\hline
[FMR2006] 15 & YHG & 2019-05-12 & GNIRS & 2.257–2.440 & 5900 & 300 \\
6 Cas & YHG & 2019-12-21 & GNIRS & 2.232–2.453 & 5100 & 100 \\
V509 Cas & YHG & 2019-12-21 & GNIRS & 2.233–2.453 & 5100 & 140 \\
HD 179821 & YHG & 2021-04-07 & GNIRS & 2.046–2.424 & 5900 & 250 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Observation log.

The \(K\)-band spectra of the YHGs display a diverse appearance, as depicted in Figure 3. Two stars, V509 Cas and 6 Cas, possess just Pfund lines in absorption and an otherwise featureless spectrum, in agreement with their high effective temperatures (\(T_{\rm eff}\geq 8000\) K, see Table 1). The star [FMR2006] 15 shows CO band emission superimposed on the atmospheric spectrum of a presumably late-type star. In HD 179821, the CO bands and the Br\(\gamma\) line are in absorption, along with numerous other photospheric lines, whereas the Na i \(\lambda\lambda\)2.206,2.209 doublet shows prominent emission.

Figure 2: Best fitting model (red) to the normalized Phoenix spectra (black) of LHA 120-S 12 (**top**) and LHA 120-S 134 (**bottom**). For each star we display the fit to the total spectrum (top panels) and a zoom onto the band heads (bottom panels). Emission from the Pfund series (blue dashed line), detected in the spectrum of LHA 120-S 134, is included in the total fit.

### Modeling of the CO Band and Pfund Line Emission

We model the CO emission using our molecular disk code [87], which has been developed to compute the ro-vibrational bands from a rotating ring (or disk) of circumstellar gas under local thermodynamic equilibrium conditions. The calculations are carried out for the two main isotopes, \({}^{12}\)CO and \({}^{13}\)CO [21; 22]. The high-resolution spectra of the two B[e]SGs display kinematic broadening of the individual ro-vibrational lines in the form of a double-peaked profile (Figure 2). Such a profile can be interpreted either as rotation around the central object or as an equatorial outflow. Since B[e]SGs are known to be surrounded by (quasi-)Keplerian rotating disks, the assumption of rotation as the most likely broadening mechanism seems justified. The situation for the YHG star [FMR2006] 15 is less clear. Its medium-resolution spectrum provides no clear hint of a possible double-peaked shape of the individual lines. Therefore, we can only derive an upper limit for a possible rotational (or outflow) contribution to the total dynamics.
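To make the kinematic ingredients of these fits concrete, the following minimal Python sketch (illustrative only; it is not the code of [87], and all names are ours) constructs the line-of-sight velocity profile of a thin rotating ring, \(p(v)=1/(\pi\sqrt{v_{\rm rot,los}^{2}-v^{2}})\) for \(|v|<v_{\rm rot,los}\), and convolves it with a Gaussian component representing the combined thermal and turbulent motions:

```python
import numpy as np

def ring_profile(v, v_rot):
    """Double-horned line-of-sight velocity distribution of a thin
    rotating ring: v = v_rot * cos(phi) with phi uniform."""
    p = np.zeros_like(v)
    inside = np.abs(v) < v_rot
    p[inside] = 1.0 / (np.pi * np.sqrt(v_rot**2 - v[inside]**2))
    return p

def broadened_profile(v, v_rot, v_gauss):
    """Ring profile convolved with a Gaussian of width v_gauss."""
    dv = v[1] - v[0]
    gauss = np.exp(-0.5 * (v / v_gauss)**2)
    gauss /= gauss.sum() * dv                  # unit area
    prof = np.convolve(ring_profile(v, v_rot), gauss, mode="same") * dv
    return prof / np.trapz(prof, v)            # renormalize

v = np.linspace(-60.0, 60.0, 2001)             # km/s grid
# Illustrative values of the order derived below for LHA 120-S 12:
double_peaked = broadened_profile(v, v_rot=27.0, v_gauss=3.0)
# A single-Gaussian alternative, as tested for [FMR2006] 15:
pure_gaussian = np.exp(-0.5 * (v / 15.0)**2) / (15.0 * np.sqrt(2 * np.pi))
```

Increasing \(v_{\rm Gauss}\) relative to \(v_{\rm rot,los}\) smooths the double-horned shape into a single-peaked profile, which is precisely the distinction probed for [FMR2006] 15.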
Due to the high sensitivity of the CO band intensity to the gas temperature and column density, the observed emission traces the hottest and densest molecular regions. Therefore, it is usually sufficient to consider a single ring of gas with constant density and temperature, reducing the number of free parameters to the column density \(N_{\rm CO}\), the temperature \(T_{\rm CO}\), the isotope ratio \({}^{12}\)CO/\({}^{13}\)CO, and the gas kinematics, split into contributions from the rotation velocity projected onto the line of sight, \(v_{\rm rot,los}\), and a combined thermal and turbulent velocity in the form of a Gaussian component, \(v_{\rm Gauss}\). The best-fitting CO parameters obtained for the three objects are listed in Table 3, and the total emission spectra are included (in red) in Figure 2 for the two B[e]SGs. It is noteworthy that the two CO band heads of LHA 120-S 12 can be reproduced fairly well with the same model parameters, despite the fact that the spectral pieces were observed 13 years apart. This implies that the ring of CO gas around LHA 120-S 12 is stable on longer timescales.

Figure 3: Normalized medium-resolution \(K\)-band spectra of the YHGs taken with GNIRS. For better visualization, the spectra have been offset along the flux axis. The positions of the CO band heads and of the lines from the Pfund series are marked by ticks. The lines of Br\(\gamma\) and of the Na i doublet are labeled as well.

The spectrum of LHA 120-S 134 displays emission from the hydrogen Pfund line series superimposed on the CO band spectrum\({}^{5}\). These Pfund lines appear to be broad, with no indication of a double-peaked profile shape. To include the contribution of these lines to the total emission spectrum, we apply our code developed for the computation of the hydrogen series according to Menzel case B recombination, assuming that the lines are optically thin [87]. We fix the electron temperature at 10,000 K, which is a reasonable value for ionized gas around an OB supergiant star, and, using a Gaussian profile, we obtain a velocity of \(53\pm 3\) km s\({}^{-1}\). Similar velocity values for the lines from the Pfund series have been found for various B[e]SGs (see [20; 21; 26]). These rather low values, compared with the wind velocities of classical B supergiants, might suggest that the Pfund lines form in a wind emanating from the surface of the ionized part of the circumstellar disk. The electron density can be derived from the highest member of the Pfund series visible in the spectrum. For LHA 120-S 134, this is the line Pf(57), resulting in an electron density of \((5.8\pm 0.5)\times 10^{12}\) cm\({}^{-3}\) within the Pfund line forming region. With the parameters for the Pfund emission fixed, we compute the contribution of the Pfund series to the total emission spectrum of LHA 120-S 134. This contribution is shown in blue in Figure 2.

The best-fitting CO model for the YHG [FMR2006] 15 is depicted in Figure 4 (top). It should be noted that the contribution of the rotation (or outflow) component should be considered an upper limit. A double-peaked profile corresponding to such a velocity might be hidden within the CO band structure. Only high-resolution observations will tell whether such a profile component is really present.
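The Pfund line wavelengths themselves follow from the Rydberg formula for hydrogen (transitions \(n\to 5\)). The short sketch below (our own, using only the standard hydrogen Rydberg constant) confirms that Pf(57) lies at \(\sim\)2.297 \(\mu\)m, inside the observed window and close to the series limit at \(\sim\)2.279 \(\mu\)m:

```python
R_H = 1.0967758e7          # Rydberg constant for hydrogen [m^-1]

def pfund_wavelength_um(n):
    """Vacuum wavelength of the hydrogen Pfund line n -> 5, in microns."""
    return 1e6 / (R_H * (1.0 / 5**2 - 1.0 / n**2))

print(pfund_wavelength_um(57))   # ~2.297 um: highest member seen in S 134
print(pfund_wavelength_um(31))   # ~2.340 um: highest member reported for S 12 (see Notes)
print(25e6 / R_H)                # series limit: ~2.279 um
```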
We found that a model omitting this velocity component and using instead just a single Gaussian profile (added to Table 3) results in a similar but slightly less satisfactory fit, because the intensity of many individual ro-vibrational lines is overestimated in the short-wavelength domain (Figure 4, bottom).

Figure 4: Best fitting model (red) to the observed (black) \(K\)-band spectrum of [FMR2006] 15 for the model including a rotational component (**top**) and for a pure Gaussian broadening (**bottom**).

\begin{table}
\begin{tabular}{c c c c c c}
\hline
**Object** & \(\mathbf{T_{\mathrm{CO}}}\) & \(\mathbf{N_{\mathrm{CO}}}\) & \(\mathbf{v_{\mathrm{rot,los}}}\) & \(\mathbf{v_{\mathrm{Gauss}}}\) & \({}^{12}\)CO/\({}^{13}\)CO \\
 & [**K**] & [\(\times 10^{21}\) cm\({}^{-2}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & \\
\hline
LHA 120-S 12 & \(2800\pm 200\) & \(1.5\pm 0.2\) & \(27\pm 2\) & \(3\pm 1\) & \(20\pm 2\) \\
LHA 120-S 134 & \(2300\pm 100\) & \(2.0\pm 0.1\) & \(30\pm 1\) & \(1.5\pm 0.5\) & \(15\pm 2\) \\
\hline
[FMR2006] 15 & \(3000\pm 200\) & \(2.0\pm 0.2\) & \(20\pm 5\) & \(15\pm 5\) & \(4\pm 2\) \\
\hline
\end{tabular}
Note: The \({}^{12}\)CO/\({}^{13}\)CO values for LHA 120-S 12 and LHA 120-S 134 were derived in [20; 21], respectively. Our Phoenix spectra do not reach the wavelength region of the \({}^{13}\)CO bands.
\end{table}
Table 3: Best fitting CO parameters.

The rotation velocity of the CO gas allows us to estimate the distances of the CO-emitting rings from the central object. For this, we need to know the current stellar masses. Considering the stellar effective temperatures and luminosities (from Table 1) and the carbon isotope ratios (from Table 3), we search for the best-matching evolutionary models for each of our targets utilizing the stellar evolution track interpolation tool SYCLIST\({}^{6}\). To reproduce the observed carbon isotope ratios, we vary the initial stellar rotation rate between zero and the maximum offered value of 0.4 and select those evolutionary tracks that fit both the stellar location in the HR diagram and the carbon isotope ratio. The metallicities used are 0.006 for the LMC objects and 0.014 for the Galactic star. Table 4 lists the stellar radii of our targets along with the most likely initial masses, initial rotation rates, and the resulting current masses. We note that, according to these evolutionary tracks, the B[e]SGs seem to have evolved just off the main sequence, whereas the low carbon isotope ratio measured in [FMR2006] 15 clearly places this object on the post-RSG path. Because no proper values for the disk inclination angles are known, we provide the distances of the CO-emitting rings as a function of the inclination angle. These are lower limits to the real distances. We refrain from giving a distance of the CO-emitting region for [FMR2006] 15 because of the speculative nature of its rotation component.

Footnote 6: The \(K\)-band data are available at http://www.astro.ucla.edu/\(\sim\)m/s/s.

## 4 Discussion

We have detected CO band emission in three objects and CO band absorption in one object of our sample. To assess possible CO variability, we summarize in Table 5 information about previous detections of CO band features in the \(K\)-band from the literature, together with our results. The number of observations in the \(K\)-band over the last 3-4 decades is sparse for all objects, but the two B[e]SGs have continuously displayed emission over a time interval of 32 years.
The diversity in spectral resolution used for the observations makes it difficult to compare the shape and intensity of the emission bands and, hence, to judge variability. Only for LHA 120-S 12 can we say that no significant variability seems to have taken place between our observations acquired in 2004 and in 2017. For the YHGs, the situation is different. Three of them clearly show variability in their CO bands, ranging from emission, over complete disappearance, to absorption (or a combination of emission and absorption). The changes occur on timescales ranging from months to years. The exception is 6 Cas, which has not previously been reported to show CO bands; our own observations, taken 25 years later, also lack any CO band features. In the following, we briefly describe the known characteristics of each of our targets with respect to stellar variability and the properties of the circumstellar environments, in order to place our new observational results in context.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
**Object** & \(R_{*}\) & \(M_{\rm ini}\) & \(v/v_{\rm crit}\) & \(M_{*}\) & \(r_{\rm CO}/(\sin i)^{2}\) & \(r_{\rm CO}/(\sin i)^{2}\) \\
 & \([R_{\odot}]\) & \([M_{\odot}]\) & & \([M_{\odot}]\) & \([\rm cm]\) & \([R_{*}]\) \\
\hline
LHA 120-S 12 & 30 & 26 & 0.25 & 25 & \(4.6\times 10^{14}\) & 218.0 \\
LHA 120-S 134 & 44 & 45 & 0.36 & 40 & \(5.9\times 10^{14}\) & 192.7 \\
\hline
[FMR2006] 15 & 333 & 25 & 0.40 & 13 & — & — \\
\hline
\end{tabular}
\end{table}
Table 4: Estimated current stellar masses and distances of the CO emitting rings from the star.

### LHA 120-S 12 (= Sk -67 23)

The object LHA 120-S 12 was first mentioned in the catalog of H\(\alpha\)-emission stars in the Magellanic Clouds [90]. Follow-up observations recorded an intense IR excess due to hot dust [91]. The two-component wind associated with the star led to its classification as a B[e]SG [15], and the high degree of intrinsic polarization suggested a high, but not fully edge-on, viewing angle towards the object and its dusty disk [92]. Shell-line profiles were seen in the NIR, consistent with the high inclination angle of the system [21]. Observations with the _Spitzer Space Telescope_ revealed silicate dust within the circumstellar disk, based on weak features in the star's mid-IR spectrum, but no IR nebulosity has been detected in association with the object [93].

The first detection of CO band emission from LHA 120-S 12 was in 1985, based on a low-resolution \(K\)-band spectrum [29]. This spectrum clearly depicted four first-overtone band heads, similar to the observations collected in 2009. Modeling of the latter [20] resulted in parameters for the CO temperature and column density similar to those we found from the high-resolution spectra taken before (2004) and afterwards (2017). Moreover, we see no changes in the rotation velocity, projected onto the line of sight, within the time span of 13 years between our two observations. Therefore, we believe that the CO-emitting ring revolving around LHA 120-S 12 is rather stable, with no detectable outflow or inflow. A (projected) rotation velocity similar to that of the CO can be inferred from the double-peaked profiles of the [Ca ii] lines resolved in high-resolution optical spectra of LHA 120-S 12 [94], although a Gaussian component with a higher value might be necessary to smooth out the sharp synthetic double-peaked rotation profile in order to reproduce the shape of the observed forbidden lines.
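As a rough consistency check on the ring distances in Table 4: assuming Keplerian rotation, the tabulated quantity follows as \(r_{\rm CO}/(\sin i)^{2}=GM_{*}/v_{\rm rot,los}^{2}\). The sketch below (our own; constants in cgs units) reproduces the values for the two B[e]SGs from the current masses in Table 4 and the projected rotation velocities in Table 3:

```python
G     = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]
R_SUN = 6.957e10    # solar radius [cm]

def ring_distance(m_star_msun, v_los_kms, r_star_rsun):
    """Keplerian r_CO/(sin i)^2 = G M_* / v_los^2, in cm and in R_*."""
    r_cm = G * m_star_msun * M_SUN / (v_los_kms * 1e5) ** 2
    return r_cm, r_cm / (r_star_rsun * R_SUN)

print(ring_distance(25, 27, 30))   # LHA 120-S 12 : ~4.6e14 cm, ~218 R_*
print(ring_distance(40, 30, 44))   # LHA 120-S 134: ~5.9e14 cm, ~193 R_*
```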
\begin{table}
\begin{tabular}{c c c c c}
\hline
**Object** & **Obs Date** & **R** & **CO Bands** & **Ref.** \\
\hline
LHA 120-S 12 & 1985-12-28 & 450 & em & [29] \\
 & 2004-12-17 & 50,000 & em & TW \\
 & 2009-10-14 & 4500 & em & [20; 21] \\
 & 2017-11-30 & 50,000 & em & TW \\
LHA 120-S 134 & 1985-12-29 & 450 & em & [29] \\
 & 2009-11-10 & 4500 & em & [21] \\
 & 2017-11-30 & 50,000 & em & TW \\
\hline
[FMR2006] 15 & 2005-09-15 & 1000 & weak abs & [88] \\
 & 2006-05-05 & 17,000 & em & [71] \\
 & 2006-08-12 & 17,000 & none & [71] \\
 & 2019-05-12 & 5900 & em & TW \\
6 Cas & 1996-08-31 & 1800 & none & [89] \\
 & 2019-12-21 & 5100 & none & TW \\
V509 Cas & 1979-1980 & 32,000 & em + abs (variable) & [70] \\
 & 1988 & N.A. & none & [67] \\
 & 2003-11-20 & 300 & none & [67] \\
 & 2004-10-30 & 300 & none & [67] \\
 & 2019-12-21 & 5100 & none & TW \\
HD 179821 & 1989-07-14 & 1600 & em & [69] \\
 & 1990-09-26 & 760 & em & [68] \\
 & 1991-11-04 & 330 & em & [68] \\
 & 1992 August & N.A. & none & [69] \\
 & 1997-04-19 & 1800 & none & [89] \\
 & 2000-10-18 & N.A. & none & [67] \\
 & 2021-04-07 & 5900 & abs & TW \\
\hline
\end{tabular}
Note: em = emission; abs = absorption; TW = this work; N.A. = no information available.
\end{table}
Table 5: CO band detection in the \(K\)-band and CO variability.

### LHA 120-S 134 (= HD 38489, Sk -69 259, MWC 126)

The object LHA 120-S 134 was listed in the catalog of B- and A-type stars with bright hydrogen lines published in 1933 [95], under the Mount Wilson Catalogue (MWC) number 126, with a classification of Beq. Its hybrid spectrum, with narrow (widths of 30-50 km s\({}^{-1}\)) emission lines of neutral and low-ionization metals in the optical spectral range [15] and very broad P Cygni profiles (implying a wind terminal velocity of \(\sim\)2300 km s\({}^{-1}\)) of high-ionization metals in the ultraviolet [96], along with the detected intense IR excess emission characteristic of hot circumstellar dust [91], resulted in the classification of the star as a B[e]SG. A small inclination angle has been proposed for LHA 120-S 134 [15], and the relatively low measured degree of polarization supports a close to pole-on orientation of the star-plus-disk system [92].

LHA 120-S 134 is one of only two B[e]SGs showing broad emission from He ii \(\lambda\)4686 [15]. The other object is LHA 115-S 18 in the SMC, which displays this (time-variable) He ii line in emission, in concert with Raman-scattered O vi emission and TiO molecular emission features [40]. While for LHA 115-S 18 it was proposed that the peculiar spectral characteristics might point towards an LBV-like status of the star [40], a possible Wolf-Rayet companion was postulated to explain the spectral features of LHA 120-S 134 [97]. Although solid proof for a Wolf-Rayet companion is still missing, LHA 120-S 134 has since appeared in the catalog of LMC Wolf-Rayet stars (as WR 146).

The mid-IR spectrum of LHA 120-S 134, obtained with the _Spitzer Space Telescope_, shows intense 10 \(\mu\)m and weak 20 \(\mu\)m emission features of amorphous silicate dust, and a faint and wispy nebulosity around the IR-bright star was found with the telescope's imaging facilities [93]. Follow-up optical imaging revealed that LHA 120-S 134 is located on the northeast rim of the superbubble DEM L269 and on the western rim of the H ii region SGS LMC-2 [98]. Therefore, it is unclear whether, and how much of, the optical and IR nebulosity is related to LHA 120-S 134 itself.
In the NIR regime, the first mention of CO first-overtone emission dates back to 1985, when the star was observed at low resolution [29]. The next \(K\)-band spectrum was taken only 24 years later (see Table 5). That spectrum had higher spectral resolution, but the CO bands appeared similar to the previous detection. Our new, high-resolution spectrum was acquired after another gap of 8 years. Our modeling of the band spectrum revealed essentially the same CO parameters (temperature and column density) as in 2009, with one addition: the projected rotational velocity. As for LHA 120-S 12, this velocity is comparable to the one that might be inferred from the [Ca ii] lines [94], and we may conclude that LHA 120-S 134 is likewise surrounded by a stable, rotating ring of atomic and molecular gas.

### [FMR2006] 15

The star [FMR2006] 15 was recorded as object number 15 in a survey of the cool supergiant population of the massive young star cluster RSGC1 [88]. Based on the weak CO absorption detected in its \(K\)-band spectrum, a spectral type of G6 I was assigned. In a follow-up investigation, it was proposed that [FMR2006] 15 is most likely a YHG, based on the star's luminosity and spectral similarity to \(\rho\) Cas [71]. The star was assigned an effective temperature of \(6850\pm 350\) K and a luminosity of \(\log L/L_{\odot}=5.36^{+0.14}_{-0.16}\). With this temperature, the spectral type of [FMR2006] 15 is more likely G0 (\(\pm\)2 subtypes), and the luminosity implies that, with \(M_{\rm ini}\sim 25\) M\({}_{\odot}\), the star lies in the lower mass range of stars developing into YHGs (see Figure 1). This relatively low initial mass was considered to be the reason for the lack of detectable intense IR excess emission and of maser emission, in contrast to high-mass YHGs such as IRC +10420 [71].

The first low-resolution \(K\)-band spectrum of [FMR2006] 15 was taken in September 2005. At that time, very weak CO band absorption was seen [88]. Soon thereafter, the object was re-observed twice with considerably higher spectral resolution. In May 2006, the spectrum displayed CO band emission, and by August 2006 the emission had disappeared [71]. Our detection of intense CO band emission 13 years later is a clear indication that [FMR2006] 15 is embedded in circumstellar matter, even if the molecular gas, at 3000 K, is much too hot for the condensation of dust grains.

When modeling the CO bands of [FMR2006] 15, we noticed that the CO emission displays a blue-shift of \(-77\) km s\({}^{-1}\) with respect to the atmospheric absorption-line spectrum. Similar behavior has been seen in IRC +10420. This star displays blue-shifted emission in its IR hydrogen recombination lines [99] and in its CO rotational transitions [45], which has been interpreted as emission formed in a bipolar outflow seen close to pole-on, with the central star eclipsing the receding part of the emission. The outflow velocity of IRC +10420 has been measured to be on the order of \(\sim\)40 km s\({}^{-1}\), about half the value we measure for [FMR2006] 15. To date, nothing is known about a possible orientation of [FMR2006] 15, but we might speculate that a similar scenario holds for this object as well. In such a case, we would expect a Gaussian-like distribution of the velocity (see the bottom model in Figure 4).
On the other hand, if we consider the contribution from rotation as real, the blue-shifted CO gas might revolve around a hidden companion, as in the case of the YHG HD 269953 in the LMC [51]. In that object, the companion was proposed to be surrounded by a gaseous disk traced by numerous emission lines that display a time-variable radial velocity offset with respect to the photospheric lines of the YHG star. If [FMR2006] 15 is indeed a binary system, time-resolved \(K\)-band observations would be essential to derive the orbital parameters and to characterize the hidden companion.

The best-fitting CO model has been subtracted from the \(K\)-band spectrum of [FMR2006] 15, and the residual is shown in Figure 5. Also included in this plot is a synthetic spectrum of a cool supergiant star with \(T_{\rm eff}=6800\) K and \(\log g=1.5\), similar to the values derived for [FMR2006] 15 [71]. This spectrum has been computed with the spectrum synthesis code Turbospectrum using MARCS atmospheric models [100]. We note that the main photospheric features are reasonably well represented by this model. The short wavelength coverage of our spectrum, and a possible alteration of the intrinsic stellar spectrum by the absorbing circumstellar gas, impede a more precise classification of the object.

Figure 5: Residual spectrum of [FMR2006] 15 after subtraction of the blue-shifted CO band emission. For illustration purposes, a synthetic model spectrum of a cool supergiant, with parameters similar to those determined for [FMR2006] 15, is shown as well (shifted down along the flux axis for better visualization).

### 6 Cas (= V566 Cas, HR 9018, HD 223385)

The object 6 Cas was reported to be a spectroscopic binary based on its composite spectrum in the ultraviolet [101]. While the main component is traced by the resonance lines of low-ionization metals with terminal wind velocities of about 330 km s\({}^{-1}\), typical for A-type supergiants, the high-ionization resonance lines with P Cygni profiles and terminal wind velocities of about 2400 km s\({}^{-1}\) can be assigned to a significantly hotter O-type companion. Disentangling the spectra in the optical, to which the O star contributes only very weakly, resulted in the classification of the system into an A3 Ia (A) and an O9.5 II (B) component [79]. The positions of the two stars are separated by about 1.5 arcsec at a position angle of \(\sim\)195°. Whether these two stars indeed form a bound binary system or are merely close in projection has not yet been settled. The small radial velocity variations detected for the A supergiant seem to be due to pulsations rather than orbital motion in a binary system [79].

The H\(\alpha\) profile of 6 Cas resembles those seen in other hypergiants, such as HD 33579 and Schulte 12, and is superimposed on broad electron-scattering wings [102]. Discrete absorption components (DACs) traveling through the broad absorption components of the wind profiles of the H and Fe ii lines were also detected in 6 Cas [103]. Based on these characteristics, atypical for a regular A-type supergiant, the star was assigned the status of a hypergiant.

We are aware of only one previous \(K\)-band spectrum of 6 Cas, which was taken with the ISO-SWS instrument [89]. That spectrum shows no indication of CO bands, just the lines of the Pfund series in absorption, similar to our spectrum (Figure 3).
Such an otherwise featureless spectrum is consistent with a hot (\(T_{\rm eff}>8000\) K) star, in agreement with its previous classification (see Table 1).

### V509 Cas (= HR 8752, HD 217476)

The star V509 Cas is one of the YHGs for which an outburst was recorded, based on combined photometric and spectroscopic monitoring of the star [62]. This outburst must have taken place around 1973, after a preceding 16-year period of reddening and cooling of the object from about 5000 K down to about 4000 K [104]. Thereafter, the effective temperature of the star gradually increased until it reached a value of about 8000 K around 2001 [62], where it has since stabilized [63]. During the "heating" period, several short-term drops in temperature were recorded, which were associated with phases of enhanced mass loss [62]. Despite the outburst and the multiple mass-loss events, no extended nebulosity has been detected at optical wavelengths so far [66]. Nevertheless, the star must be embedded in an ionized circumstellar envelope traced by thermal emission at radio wavelengths [105; 106]. The ionization of this envelope is most likely maintained by the radiation field of a distant, hot main-sequence B1-type companion [107].

The envelope is also the place where other emission lines are formed. Most prominent are the nebular lines of [N ii] \(\lambda\lambda\)6548,6583 [108], but those of [O i] \(\lambda\lambda\)6300,6364 and [Ca ii] \(\lambda\lambda\)7291,7324 have also been identified [63]. Based on their double-peaked profiles, particularly of the [Ca ii] lines, these were proposed to trace a possible Keplerian disk or ring, for which a rotation velocity, projected onto the line of sight, of about 40 km s\({}^{-1}\) was derived [109]. Support for such an interpretation comes from optical spectroscopic monitoring of V509 Cas between 2015 and 2022. The observations revealed that the [Ca ii] lines are stable in position and shape over the observing period of \(\sim\)7 years, in contrast to the photospheric absorption lines, whose shape and radial velocity are strongly influenced by the pulsation activity of the star [110].

In the NIR, observations dating back to 1979-1980 detected CO features displaying both emission and absorption components with variable strength (see Table 5, [70]). Comparison with the brightness curve revealed that the CO emission was strongest when the star was around maximum brightness [67]. Furthermore, the CO emission appeared at the stellar systemic velocity, whereas the absorption was either blue- or red-shifted. This behavior is very similar to what has been observed for the YHG star \(\rho\) Cas [67]. Since about 1988, the CO features have disappeared from the NIR spectra of V509 Cas, and our spectrum likewise shows no trace of molecular emission. The absorption lines of the hydrogen Pfund series seen in our otherwise featureless \(K\)-band spectrum are in agreement with a stellar temperature of about 8000 K.

### HD 179821 (= RAFGL 2343, IRAS 19114+0002, V1427 Aql)

The star is embedded within a detached, almost spherical shell of cold (120-140 K) dust [49], which is responsible for an intense far-IR excess [111]. In the radio regime, CO (\(J=1\to 0\)) emission has been detected, tracing a cold molecular outflow with a velocity of \(\sim\)33-35 km s\({}^{-1}\) [112]. The object is oxygen-rich, as inferred from its OH maser emission [113]. The evolutionary state of the star is highly debated in the literature due to its uncertain distance.
Distance estimates range from about 1.5 kpc to about 6 kpc, which would classify the star as either a post-asymptotic giant branch star or a YHG. Considering the newest parallax measurement of \(0.1893\pm 0.0206\) mas provided by the _GAIA_ Early Data Release 3 [82], a distance closer to the upper value seems more likely. Such a high value is also in agreement with the star's kinematic distance derived from its large heliocentric systemic velocity of 84-88 km s\({}^{-1}\) [81; 112; 113].

The spectral classification of HD 179821 has been similarly controversial over the past decades, ranging from F3-5 [81; 114; 115], through G5 [116], to K4 [117]. The value of \(\log g\), derived from high-resolution spectroscopy, is around \(0.5\pm 0.5\). Such a low \(\log g\) value assigns the star luminosity class I. Extinction values obtained for HD 179821 range from \(A_{V}=2.0\) [116], through 3.1 [118], to \(\sim\)4 [115], and it has been suggested that the total extinction might be variable due to possible changes in the circumstellar contribution from the dust shell [119].

In a recent work, data from long-term photometric and spectroscopic monitoring were presented [120]. The colors imply that the star first became bluer between 1990 and 1995 and has displayed a systematic reddening since 2002. The fastest change in color took place between 2013 and 2017, with a simultaneous brightening of the star. The spectra confirm this trend and record a more or less stable temperature of \(T_{\rm eff}=6800\pm 100\) K between 1994 and 2008, in agreement with previous temperature determinations\({}^{7}\) [81; 114; 115; 120]. Since then, the temperature has decreased, reaching a value of about 5900 K in 2017 [120]. The reddening and change in temperature indicate the onset of a possible new red loop evolution of HD 179821, i.e., an excursion to the cool edge of the HR diagram, related to a significant increase in the stellar radius and an increase in mass loss.

In the NIR, HD 179821 displayed CO band emission during observations taken between 1989 and 1991 (see Table 5), whereas no indication of CO band features (neither emission nor absorption) was seen between 1992 and 2000. The appearance and subsequent disappearance of CO band emission might indicate a prior phase of higher mass loss, or some mass-ejection episode, followed by the expansion and dilution of the released circumstellar material. Such a scenario might be supported by the redder color of the star around 1990 [120]. The absence of CO bands in the spectra between 1992 and 2000 supports the classification of the star as early to mid F-type, because stars in this temperature range are too hot for the formation of molecules in their atmospheres.

In contrast to all previous NIR observations, our data from 2021 clearly display CO band absorption. One possible explanation is that the cooling trend has continued since 2017. When comparing our observed spectra with synthetic spectra, we found that the intensity of the first CO band head can be reproduced with a stellar effective temperature of \(\sim\)5400 K (see Figure 6), although the entire CO band structure signals a considerably cooler temperature for the molecular gas, given the only weakly pronounced higher band heads. Hotter stars display less intense, and cooler stars more intense, CO bands. However, a star with a temperature of 5400 K should show significantly stronger absorption in all other atomic photospheric lines, which is not the case.
Instead, the intensity and specific line ratios in the \(K\)-band spectrum are more in line with an effective temperature of about 6600 K. But such a hot stellar photosphere contains no CO molecular absorption features (Figure 6). Based on this discrepancy, we believe that our \(K\)-band spectrum is composite: it shows a hotter stellar photosphere along with CO absorption formed in a presumably cooler gas shell or outflow. Support for such a scenario is provided by the fact that the CO absorption bands display a blue-shift of about \(-43\pm 1\) km s\({}^{-1}\) with respect to all other photospheric atomic lines and to the circumstellar emission lines, such as the Na i doublet. If interpreted as an outflow velocity, this value is comparable to the outflow seen in IRC +10420 [45; 99], but slightly higher than that of the isotropically expanding cold molecular gas and dust shell around HD 179821 (\(\sim\)35 km s\({}^{-1}\), [112; 122]). We exclude a cool binary component as an explanation for the velocity-shifted CO absorption bands because, in that case, the cool companion would imprint (besides CO) significantly stronger blue-shifted absorption lines onto the \(K\)-band spectrum, which are not observed. If the proposed scenario of a new outflowing shell or gas layer is correct (possibly initiated during the reddening of the star and the increase in stellar radius recorded in the years 2014-2017 [120], or during a possible outburst event that might have followed this reddening, as in the case of V509 Cas; see [62; 104]), then we may speculate that, with further expansion of this matter, future \(K\)-band observations will display CO bands in emission, before the material dilutes and the CO features disappear again.

Besides CO, our spectrum of HD 179821 also shows Br\(\gamma\) in absorption and intense emission in the Na i doublet. The Br\(\gamma\) line was also in absorption in previous observations taken in 1989 [69] and 1990 [68]. The latter spectrum also displays intense emission of the (blended) Na i doublet, as does the spectrum taken in 2000 [67]. The remaining previous spectra either do not cover the spectral region of Br\(\gamma\) and/or the Na i doublet, or these lines were not mentioned by the corresponding authors. Na i emission is a clear indicator of circumstellar material. It has been reported in the NIR spectra of numerous evolved massive stars: (i) the YHGs \(\rho\) Cas, V509 Cas, Hen 3-1379 (the Fried Egg Nebula), and IRC +10420 [70; 123], as well as the IRC +10420 analog IRAS 18357-0604 [124]; (ii) most of the B[e]SGs [21; 29; 125]; and (iii) many luminous blue variables [21; 53]. The equivalent widths of the intense Na i lines of the YHG IRC +10420 [6] are about three times higher than those of HD 179821, for which we measured values of \(-1.0670\pm 0.015\) Å and \(-0.9231\pm 0.018\) Å. Recent spatially resolved observations revealed that the Na i emission in IRC +10420 and Hen 3-1379 is confined within a compact spherical envelope around the star [50; 123]. The emission lines in our spectrum are symmetric, and their wavelengths coincide with the systemic velocity of the star. While their formation region may likewise be a compact spherical shell, they cannot be related to the possible new blue-shifted outflow traced by the CO band absorption.

## 5 Conclusions

We present new medium- and high-resolution \(K\)-band spectra of two B[e]SGs and four YHGs.
The spectra of both B[e]SGs show rotationally broadened CO band emission, from which we could derive, for the first time, the projected rotation velocity of the CO gas for both stars. On the other hand, our model parameters for the CO temperature and column density are very similar to those reported in previous studies based on spectra with significantly lower resolution [20; 21]. The similarity of the detected CO band features over more than 30 years suggests that the CO-emitting gas rings around these two B[e]SGs are stable structures, fitting neatly with the findings for most B[e]SGs.

Figure 6: Comparison of the \(K\)-band spectrum of HD 179821 (**top**) with synthetic spectra for effective temperatures of 5400 K (**middle**) and 6600 K (**bottom**). For illustration purposes, the synthetic model spectra are shifted down along the flux axis for better visualization.

With respect to the YHGs, we detect CO band emission from only one star, the highly reddened cluster member [FMR2006] 15, which previously showed time-variable CO features (see Table 5), and whose effective temperature is clearly too high to form molecules within its atmosphere. Consequently, the CO emission must be of circumstellar origin. A second object, HD 179821, shows CO bands in absorption, whereas it had CO emission during 1989-1991 and lacked any CO features in its spectra after 1992 (Table 5). The latter is consistent with the star's high effective temperature, which prevents the formation of molecules. For both YHGs, the detected CO features are clearly blue-shifted with respect to the photospheric absorption lines, suggesting that both stars most likely had recent mass-ejection events, with the CO emission/absorption forming within the expelled matter. For [FMR2006] 15, we propose that the blue-shifted emission arises in a bipolar outflow seen close to pole-on, as in the case of the YHG star IRC +10420 [12], but nothing can be said yet about the geometry of the outflow from HD 179821, because the (highly inflated) star itself might still block large portions of the possibly receding parts of the ejecta.

**Author Contributions:** Conceptualization, M.K. (Michaela Kraus), M.K. (Michalis Kourniotis) and D.H.N.; methodology, M.K. (Michaela Kraus), M.K. (Michalis Kourniotis), M.L.A. and A.F.T.; formal analysis, M.K. (Michaela Kraus) and M.K. (Michalis Kourniotis); investigation, M.K. (Michaela Kraus), M.K. (Michalis Kourniotis) and D.H.N.; resources, M.K. (Michaela Kraus), M.K. (Michalis Kourniotis), M.L.A. and A.F.T.; writing--original draft preparation, review and editing, M.K. (Michaela Kraus), M.K. (Michalis Kourniotis), M.L.A., A.F.T. and D.H.N.; visualization, M.K. (Michaela Kraus) and M.K. (Michalis Kourniotis); funding acquisition, M.K. (Michaela Kraus), M.L.A. and A.F.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Czech Science Foundation (GA ČR, grant number 20-00150S), by CONICET (PIP 1337), by the Universidad Nacional de La Plata (Programa de Incentivos 11/G160), Argentina, and by the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 823734. The Astronomical Institute of the Czech Academy of Sciences is supported by the project RVO:67985815.

**Data Availability Statement:** The data underlying this article will be shared on reasonable request to the corresponding author.

**Acknowledgments:** We thank the anonymous referees for their valuable comments and suggestions.
This research made use of the NASA Astrophysics Data System (ADS) and of the SIMBAD database, operated at CDS, Strasbourg, France. This paper is based on observations obtained with the Phoenix infrared spectrograph, developed and operated by the National Optical Astronomy Observatory, and on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea), under program IDs GS-2004B-Q-54, GS-2017B-Q-32, GN-2019A-Q-204, GN-2019B-Q-418, and GN-2021A-Q-315.

**Conflicts of Interest:** The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:

\begin{tabular}{l l}
B[e]SG & B[e] supergiant \\
NIR & Near infrared \\
LMC & Large Magellanic Cloud \\
SMC & Small Magellanic Cloud \\
YHG & Yellow hypergiant \\
\end{tabular}

## Notes

* The sixth object is LHA 120-S 111. To our knowledge, it has been observed in the \(K\)-band only once, in January 1987 [29]. At that time, no CO band emission was detected.
* We excluded two Galactic B[e]SGs from this plot. With a literature luminosity value of \(\log L/L_{\odot}=4.33\pm 0.09\) [74], the luminosity of HD 62623 is considerably lower than that of the other B[e]SGs with CO bands. However, its distance, with a parallax value of \(0.59\pm 0.17\) mas, is not well constrained. The object HD 327083 turned out to be misclassified and has been removed from the list of B[e]SGs (Cidale et al., in preparation).
* We omit the SMC objects because we do not have new data for any of the SMC B[e]SGs, and there are currently no confirmed YHGs in the SMC.
* IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.
* Pfund line emission is also reported from LHA 120-S 12 [20], but in that star the highest detected Pfund transition is Pf(31), arising at 2.34 \(\mu\)m, clearly outside our spectral coverage.
* https://www.unige.ch/sciences/astro/evolution/en/database/syclist/
* We note that a higher effective temperature of \(\sim\)7350 K was proposed in the same period [121].
2301.12179
Discontinuous rigidity transition associated with shear jamming in granular simulations
We investigate the rigidity transition associated with shear jamming in frictionless, as well as frictional, disk packings in the quasi-static regime and at low shear rates. For frictionless disks, the transition under quasistatic shear is discontinuous, with an instantaneous emergence of a system-spanning rigid cluster at the jamming transition. For frictional systems, the transition appears continuous for finite shear rates, but becomes sharper for lower shear rates. In the quasi-static limit, it is discontinuous as in the frictionless case. Thus, our results show that the rigidity transition associated with shear jamming is discontinuous, as demonstrated in the past for isotropic jamming of frictionless particles, and is therefore a unifying feature of the jamming transition in general.
Varghese Babu, H. A Vinutha, Dapeng Bi, Srikanth Sastry
2023-01-28T12:32:59Z
http://arxiv.org/abs/2301.12179v1
# Discontinuous rigidity transition associated with shear jamming in granular simulations

###### Abstract

We investigate the rigidity transition associated with shear jamming in frictionless, as well as frictional, disk packings in the quasi-static regime and at low shear rates. For frictionless disks, the transition under quasistatic shear is discontinuous, with an instantaneous emergence of a system-spanning rigid cluster at the jamming transition. For frictional systems, the transition appears continuous for finite shear rates, but becomes sharper for lower shear rates. In the quasi-static limit, it is discontinuous as in the frictionless case. Thus, our results show that the rigidity transition associated with shear jamming is discontinuous, as demonstrated in the past for isotropic jamming of frictionless particles, and is therefore a unifying feature of the jamming transition in general.

Granular materials can exist in a flowing or a solid state. The transition between these states, called the jamming transition, has been the subject of intense research [1; 2; 3], particularly under isotropic compression of frictionless sphere packings. The jamming point \(\phi_{J}\) for packings of soft particles exhibits many characteristics of a second-order phase transition, at which various quantities show power-law scaling - with respect to the distance from the jamming point - as one compresses beyond the jamming point [4; 5]. Further, the distribution of small forces between particles just in contact, as well as of the gaps between particles nearly in contact, also exhibits power-law behavior. The exponents characterizing these are constrained by an inequality that is saturated for configurations at the jamming point, which are therefore "marginally stable" [6; 7]. The mean-field theory of glasses and jamming makes predictions for these exponents that match numerical values for dimensions \(D=2\) and above [5]. Extensions of this theory predict these exponents to be the same for shear jamming, as recent numerical results indeed confirm, along with the aforementioned aspects of criticality [8]. These and related results [8; 9] strongly support a unified description of both isotropic and shear jamming.

In contrast, the manner in which the contact network acquires rigidity is strongly discontinuous [10; 11] for frictionless isotropic jamming. At the jamming point, the entire system (barring a small percentage of _rattlers_, described later) acquires rigidity discontinuously. By the Maxwell criterion for the rigidity of networks of nodes connected by edges representing distance constraints, the contact network of a configuration with \(N\) particles in \(D\) dimensions can be rigid when the contacts provide at least \(N_{c}=D(N-1)\) constraints on the non-global degrees of freedom. In general, this is a necessary but not sufficient condition. Therefore, isotropic jamming occurs at the _isostatic point_, where the system has just the minimum required number of contacts per particle, \(Z_{iso}=2D\) (from \(NZ_{iso}/2=ND\)). This discontinuous rigidity transition is different from the continuous transition observed, _e.g._, for _sticky_ packings [12; 13] and in random spring networks [14; 15], for which the rigid component of the system grows continuously beyond rigidity percolation; in these systems, rigidity percolation does not occur at the isostatic point and is preceded by the presence of both rigid and over-constrained regions.
Results available for shear jamming appear to suggest that the rigidity transition is continuous, in contrast to isotropic jamming [16; 17; 18; 19]. Computational investigation of the rigidity transition for frictional two-dimensional (2D) systems sheared at finite rates [16] revealed a broad distribution of rigid cluster sizes, with a mean size that increases as the jamming transition is approached, supporting a continuous rigidity transition, although the transition becomes "sharper" as the shear rate is lowered. Similar results have recently been reported from an analysis of sheared granular packings in experiments [18]. Following the observation that sheared _frictionless packings_ acquire geometric characteristics associated with jamming [20], the rigidity transition in such packings in 2D was analysed by including constraints associated with friction [19]. The size distribution of over-constrained clusters, similar to [16], exhibits a broad distribution, supporting a continuous rigidity transition. In addition, the rigidity transition associated with jamming in frictional systems was studied in lattice models of jamming, where a continuous transition was observed except in a limiting case corresponding to infinite friction [17]. These observations suggest that the nature of the rigidity transition could be an exception to the commonality of isotropic and shear jamming phenomenology outlined earlier.

In this Letter, we therefore carefully investigate the nature of the rigidity transition for both sheared frictional and frictionless packings, under both quasistatic shear and shear at finite rates. We find that the rigidity transition is unambiguously discontinuous under quasistatic shear. The transition appears rounded at finite shear rates, but the dependence on shear rate clearly supports an approach to a discontinuous transition in the limit of vanishing shear rate.

Shear-jammed frictionless packings are obtained by shearing unjammed bidisperse soft-disk mixtures of size ratio \(1:1.4\) at densities above the minimum jamming density \(\phi_{J}\). As described in [8; 21; 22; 23], well-annealed disk packings jam at a packing fraction higher than \(\phi_{J}\). In this study, we equilibrate hard-disk configurations at high density, which jam at a density \(\phi_{j}>\phi_{J}\) following the protocol described in [4]. Unjammed configurations decompressed to a density \(\phi\), with \(\phi_{J}<\phi<\phi_{j}\), undergo shear jamming when subjected to athermal quasistatic shear (AQS), wherein strain increments of \(\Delta\gamma=5\times 10^{-4}\) are applied, each step followed by energy minimization. We study 3 independent samples of \(N=16384\) particles at a density of \(\phi=0.8485\), with \(\phi_{j}\approx 0.85\) and \(\phi_{J}\approx 0.84\).

We use the Discrete Element Method (DEM) [24] to simulate frictional disks, using LAMMPS [25], with linear normal and tangential spring-dashpot forces. The model includes damping in both the normal and tangential directions, in addition to global viscous damping. The normal and tangential spring constants \(k_{n}\) and \(k_{t}\) are set to 2.0. The normal velocity damping \(\eta_{n}\) is set to 3.0, and the tangential damping \(\eta_{t}\) to \(\frac{1}{2}\eta_{n}\). The global damping coefficient \(\eta\) is also set to \(\approx\)3. Shear is applied by performing an affine transformation of the particle positions, with strain increments \(\Delta\gamma\), followed by relaxation using DEM.
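Schematically, one step of either protocol can be summarized as follows (a minimal Python sketch; `relax` stands in for the energy minimization of AQS or for DEM dynamics of duration \(\Delta\gamma/\dot{\gamma}\), and the full Lees-Edwards image bookkeeping is omitted):

```python
import numpy as np

def apply_affine_shear(pos, d_gamma, box):
    """Affine simple-shear increment x -> x + d_gamma * y (2D); the x
    coordinate is remapped into the periodic box of size `box`."""
    pos = pos.copy()
    pos[:, 0] = (pos[:, 0] + d_gamma * pos[:, 1]) % box[0]
    return pos

def shear_run(pos, box, d_gamma, n_steps, relax):
    """Accumulate strain in increments of d_gamma, relaxing the
    configuration after every increment."""
    gamma = 0.0
    for _ in range(n_steps):
        pos = apply_affine_shear(pos, d_gamma, box)
        pos = relax(pos, box)
        gamma += d_gamma
    return pos, gamma
```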
Because of the damping terms, the system will eventually reach a force- and torque-balanced configuration if one waits long enough. Quasistatic shear requires reaching force/torque balance at each strain step. In practice, we consider the system to have reached force/torque balance when the total force (the sum of the total forces acting on the disks) is less than \(10^{-11}\), or when the total kinetic energy of the system is less than \(10^{-19}\). The simulation is stopped when the number of timesteps reaches \(2\times 10^{9}\) regardless. The timescale required to relax the system diverges at the shear jamming transition, as pointed out in [26], and thus it is difficult to achieve force balance close to the transition. For a finite shear rate \(\dot{\gamma}\), each strain step is followed by DEM dynamics of duration \(\Delta\gamma/\dot{\gamma}\). We set \(\Delta\gamma=10^{-4}\) for finite-rate shear and \(10^{-3}\) for quasistatic shear. We perform finite-rate shear on a system of \(N=16384\) particles for 10 independent samples (20 samples for the highest and lowest shear rates), and quasistatic shear with \(N=2000\) for 16 samples. The packing fraction \(\phi\) of the system is 0.81. Further details of the simulations are described in the Supplemental Material (SM), section I [27]. We describe the results for friction coefficient \(\mu=1\) in the main text. Results for \(\mu=0.1\) can be found in SM section V.

A major distinction between frictionless and frictional jamming is the isostatic contact number \(Z\) at which jamming can occur in the absence of redundant constraints, which has been shown to range from \(D+1\) to \(2D\) depending on the friction coefficient \(\mu\) [28; 29; 19; 20], with \(Z_{iso}=D+1\) for \(\mu=\infty\). This can be understood using the generalized isostaticity condition, obtained by considering the additional conditions due to the "mobilized contacts" [28]. The tangential frictional force between two particles has an upper bound due to the Coulomb threshold, \(f_{t}\leq\mu f_{n}\), and the mobilized contacts are those for which \(\frac{f_{t}}{f_{n}}\approx\mu\). Considering a configuration with \(N\) particles and \(n_{m}N\) mobilized contacts, the conditions that the contact network at jamming has to satisfy are \(DN\) force-balance conditions, \(\frac{D(D-1)}{2}N\) torque-balance conditions, and \(n_{m}N\) Coulomb conditions. The number of constraints imposed by the contacts is \(\frac{NDZ}{2}\) (since each contact constrains one translational and \(D-1\) rotational degrees of freedom). \(Z\) is by default computed excluding _rattlers_ (particles with fewer than the minimum number of contacts necessary for local rigidity: 3 for frictionless and 2 for frictional particles in 2D) and is denoted \(Z_{NR}\) for clarity. Defining \(Z_{\mu}=Z_{NR}-\frac{2n_{m}}{D}\), the generalized isostaticity condition is

\[Z_{\mu}^{iso}=Z_{NR}-\frac{2n_{m}}{D}=D+1. \tag{1}\]

For 2D networks arising in several contexts, including jamming, the onset of rigidity has been analysed by employing the pebble game algorithm [14]. Each node of the network represents a disk in the present context and is assigned \(k\) pebbles (\(k=2\) for frictionless disks and \(k=3\) for frictional disks), representing its degrees of freedom. The constraints imposed by each contact are represented by 1 or 2 edges (2 for a frictional contact, 1 for the frictionless case as well as for a mobilized contact). A \((k,l)\) pebble game (where \(l=2\) accounts for the global degrees of freedom) assigns pebbles recursively to edges and, based on such an assignment, decomposes the network into rigid clusters that are mutually floppy. Rigid clusters with redundant bonds (bonds with no assigned pebbles) are termed over-constrained. A more detailed description of the algorithm is provided in SM section II.
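As an illustration of the core of the algorithm, the following minimal Python sketch (our own, not the implementation used for the results below) shows the edge-independence test of a generic \((k,l)\) pebble game; the full rigid-cluster decomposition requires additional bookkeeping on top of this test:

```python
from collections import defaultdict

class PebbleGame:
    """(k, l) pebble game: accepts an edge if l + 1 pebbles can be
    gathered on its endpoints, i.e., if the constraint is independent."""

    def __init__(self, k, l):
        self.k, self.l = k, l
        self.free = defaultdict(lambda: k)   # free pebbles per node
        self.out = defaultdict(list)         # directed (covered) edges

    def _collect(self, root, protected):
        """DFS along directed edges for a free pebble; if one is found,
        reverse the path so the pebble becomes free at `root`."""
        seen, parent = {root}, {}
        stack = [(root, iter(self.out[root]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                stack.pop()
                continue
            if w in seen:
                continue
            seen.add(w)
            parent[w] = v
            if self.free[w] > 0 and w not in protected:
                self.free[w] -= 1
                while w != root:             # reverse edges on the path
                    p = parent[w]
                    self.out[p].remove(w)
                    self.out[w].append(p)
                    w = p
                self.free[root] += 1
                return True
            stack.append((w, iter(self.out[w])))
        return False

    def add_edge(self, u, v):
        """True if (u, v) is an independent constraint, else redundant."""
        while self.free[u] + self.free[v] < self.l + 1:
            if not (self._collect(u, {u, v}) or self._collect(v, {u, v})):
                return False
        if self.free[u] == 0:                # spend the pebble from an
            u, v = v, u                      # endpoint that still has one
        self.free[u] -= 1
        self.out[u].append(v)
        return True
```

For frictional disks one would instantiate `PebbleGame(3, 2)` and submit each contact as two edges (one edge for a mobilized contact); for frictionless disks, `PebbleGame(2, 2)` with one edge per contact.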
For \(2D\) networks arising in several contexts including jamming, the onset of rigidity has been analysed by employing the pebble game algorithm[14]. Each node of the network represents a disk in the present context and is assigned \(k\) pebbles (\(k=2\) for frictionless disks and \(k=3\) for frictional disks) representing the degrees of freedom. The constraints imposed by each contact are represented by 1 or 2 edges (2 for the frictional case, 1 for the frictionless case, as well as for a mobilized contact). A \((k,l)\) pebble game (\(l=2\) indicates the global degrees of freedom) assigns pebbles recursively to edges and, based on such an assignment, decomposes the network into rigid clusters that are mutually floppy. Rigid clusters with redundant bonds (with no assigned pebbles) are termed over-constrained. A more detailed description of the algorithm is provided in SM section II. We employ the pebble game primarily to monitor the size of the largest rigid cluster in the system, as well as the distribution of the sizes of rigid clusters.

First, we discuss the results of the frictionless system, for which, above jamming, energy minimization cannot remove all the overlaps in the system, resulting in finite forces. As discussed in [8; 21], configurations are isostatic (\(N_{c}=(N-1)\times 2\) after removing rattlers) at the jamming point. A (\(k=2,l=2\)) pebble game analysis of isostatic configurations shows that the whole system is made up of a single rigid cluster, as shown in Fig. 1 (a). Removing a single bond from this system leads to loss of rigidity, as shown in Fig. 1 (b). The results of this analysis are summarized in Fig. 2. The shear jamming transition can be identified by the presence of finite contact forces as well as by \(Z_{NR}\). The rigidity transition occurs at the jamming transition point and is characterized by a discontinuous jump in the size of the largest cluster. This strongly discontinuous rigidity transition is therefore common to frictionless isotropic and shear jamming.

Figure 1: **Rigidity transition in sheared frictionless disk packings.** Pebble game analysis on the isostatic networks yields a single rigid cluster consisting of the whole system (**a**). Removal of one bond from that network results in a complete loss of rigidity, with the pebble game decomposing the system into multiple small rigid clusters indicated by the different colors (**b**).

Figure 2: **Rigidity transition associated with shear jamming in frictionless systems.** The rigidity transition can be seen as a discontinuous jump in the size of the largest cluster. The inset shows pressure \(P\) vs strain \(\gamma\) and the rigidity transition. The transition occurs at the isostatic value of the non-rattler contact number, \(Z_{NR}=4\).

Next, we discuss the results from finite rate shear of frictional systems for shear rates \(\dot{\gamma}=10^{-6},10^{-7},10^{-8},10^{-9},10^{-10}\). The main observation from this set of simulations is that the rigidity transition associated with shear jamming becomes "sharper" as one reduces the shear rate, an observation also made in [16]. As shown in Fig. 3 (a), the increase in pressure \(P\) with strain is noticeably sharper for smaller shear rates. To characterize the rigidity of these configurations we follow [16; 18; 19] and use the \((k=3,l=2)\) pebble game on the contact network. Note that in the finite rate simulations, we do not simulate the system till it achieves force balance, and therefore for jammed as well as unjammed configurations, the net forces on the disks are finite. We use a threshold \(\delta\) to identify mobilized contacts: if \(\frac{|\vec{f}_{t}|}{|\vec{f}_{n}|}>\mu-\delta\) then the contact is mobilized. For simulations with \(\mu=1\), very few of our contacts are sliding and the choice of \(\delta\) does not significantly affect the results presented. We choose \(\delta=10^{-12}\) for the results in the main text. A discussion on the choice of \(\delta\) is included in SM section VI. Even though the system is not in force balance when sheared at a finite rate, we identify rattlers as particles with just one contact and remove them recursively.
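The independence test at the heart of the \((k,l)\) pebble game can be written compactly. A sketch of the Jacobs-Hendrickson/Lee-Streinu pebble-collection step follows; rigid-cluster decomposition, which the analysis above also requires, needs an additional labeling pass not shown here, and all names are ours:

```python
from collections import defaultdict

def pebble_game(n, edges, k=3, l=2):
    """(k, l) pebble game sketch: classify edges as independent or
    redundant. Each vertex starts with k pebbles; an edge is accepted if
    l+1 pebbles can be gathered on its endpoints, and is then covered by
    a pebble from one endpoint (recorded as a directed edge)."""
    pebbles = [k] * n
    out = defaultdict(list)                 # directed edges covering constraints

    def collect(u, avoid):
        """Bring one free pebble to u by DFS, reversing the path found."""
        seen, stack, parent = {u, avoid}, [u], {}
        while stack:
            w = stack.pop()
            for x in list(out[w]):
                if x in seen:
                    continue
                parent[x] = w
                if pebbles[x] > 0:          # free pebble found: reverse path x..u
                    pebbles[x] -= 1
                    node = x
                    while node != u:
                        p = parent[node]
                        out[p].remove(node)
                        out[node].append(p)
                        node = p
                    pebbles[u] += 1
                    return True
                seen.add(x)
                stack.append(x)
        return False

    independent, redundant = [], []
    for u, v in edges:
        while pebbles[u] + pebbles[v] < l + 1:
            if not (collect(u, v) or collect(v, u)):
                break                        # no more pebbles can be freed
        if pebbles[u] + pebbles[v] >= l + 1:
            if pebbles[u] == 0:              # orient from an endpoint with a pebble
                u, v = v, u
            pebbles[u] -= 1
            out[u].append(v)
            independent.append((u, v))
        else:
            redundant.append((u, v))         # redundant bonds mark over-constraint
    return independent, redundant
```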
For the remaining contact network, we perform the pebble game analysis and show in Fig. 3 (b) the size of the largest rigid cluster as a function of the average contact number \(Z_{\mu}=Z_{NR}-n_{m}\). The transition becomes sharper as one reduces \(\dot{\gamma}\), and interestingly, the transition occurs at \(Z_{\mu}\approx 3\), the isostatic value, for all shear rates. We fit the data using the logistic function \(f(x)=\left[1+e^{-\frac{x-Z_{c}}{W}}\right]^{-1}\) (as a reasonable but arbitrary choice) and use \(W\) as a measure of the width of the transition region. As the top left inset in Fig. 3 (b) shows, the data can be collapsed using the fit values, with \(Z_{c}\approx 2.99\). In the lower right inset, we show the behavior of \(W\), whose dependence on \(\dot{\gamma}\) can be described by a power law, which implies that the transition becomes discontinuous as \(\dot{\gamma}\to 0\). To our knowledge, this has not been reported for the shear jamming transition.

Figure 3: **Finite rate shear for \(N=16384\) with \(\mu=1\).** **a)** Pressure \(P\) vs \(\gamma\). **b)** Fraction of particles belonging to the largest rigid cluster as a function of \(Z_{\mu}\). As \(\dot{\gamma}\) is reduced the transition becomes "sharper". **Inset top left:** Data from different shear rates collapse onto each other when scaled by the "width" \(W\) of the transition region. **Inset lower right:** The width of the transition region obtained by fitting the data. The dependence of \(W\) for the three smaller shear rates on \(\dot{\gamma}\) can be described using a power law, suggesting that the transition becomes discontinuous as \(\dot{\gamma}\to 0\).

Next, we study the cluster size distribution, as shown in Fig. 4, for the largest and the smallest \(\dot{\gamma}\) studied. For both cases, we divide the region studied (in \(Z_{\mu}\)) into three regimes - before the jamming transition, a regime covering the transition, and after the transition - and compute the distribution of the rigid cluster sizes separately for each of them. The distributions in the regime covering the transition are quantified by an exponent characterizing the power-law distribution of the rigid clusters. For \(\dot{\gamma}=10^{-6}\), the exponent is \(-1.61\) and for \(\dot{\gamma}=10^{-10}\), the exponent is \(-2.19\). While the transition in this regard appears continuous for both the shear rates studied, the distributions become progressively narrower as the shear rate decreases. The corresponding curves for the frictionless and frictional quasistatic shear show a faster than power law decay below the rigidity transition.

Figure 4: **Comparison of cluster size distribution between the highest and lowest \(\dot{\gamma}\) studied.** **a)** \(\dot{\gamma}=10^{-6}\) and **b)** \(\dot{\gamma}=10^{-10}\). Comparing the distribution of cluster sizes for the range covering the transition, we see that \(\dot{\gamma}=10^{-6}\) shows a broader distribution compared to the one at \(\dot{\gamma}=10^{-10}\), as quantified by the exponent characterizing the power law distribution, indicating that the transition becomes discontinuous as the shear rate vanishes. The distribution corresponding to a given region in \(Z_{\mu}\) is calculated by considering the sizes of all rigid clusters in a configuration with \(Z_{\mu}\) in that region.
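The width extraction described above amounts to a two-parameter fit of the logistic form; a sketch with placeholder arrays standing in for the measured \(N_{largest}/N\) vs \(Z_{\mu}\) data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(z, z_c, w):
    """f(z) = 1 / (1 + exp(-(z - z_c)/w)); z_c locates the transition,
    w measures the width of the transition region."""
    return 1.0 / (1.0 + np.exp(-(z - z_c) / w))

# z_mu and frac_largest would come from the pebble-game analysis above;
# the arrays below are placeholders, not data from the letter.
z_mu = np.linspace(2.8, 3.2, 41)
frac_largest = logistic(z_mu, 3.0, 0.02)

(z_c, w), _ = curve_fit(logistic, z_mu, frac_largest, p0=(3.0, 0.05))
print(f"Z_c = {z_c:.3f}, width W = {w:.4f}")
# Repeating the fit at several shear rates and fitting W(gamma_dot) to a
# power law tests whether W -> 0 (a discontinuous jump) as gamma_dot -> 0.
```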
We also calculate \(P_{\infty}\), the probability that a given disk belongs to a system spanning (percolating) rigid cluster, which is shown in SM section IV. The \(P_{\infty}\) curves become progressively step-like with decreasing shear rate. Thus, we conclude that the appearance of a continuous transition is associated with the finite shear rates and the absence of force/torque balance, rather than being an indication of the intrinsic nature of the shear jamming transition, or the presence of friction.

To underscore our conclusions, we next consider quasistatic shearing of frictional disks, which is performed by applying an affine transformation and relaxing the system using DEM till the system reaches force balance. As noted before, the relaxation near the jamming transition is very slow and therefore it is hard to generate force-balanced configurations near the jamming transition [26, 30]. Given configurations that are fully relaxed, we define rattlers as particles that do not have finite forces acting on them. Disks with a single contact cannot sustain a non-zero force on that contact, and we remove them recursively. In addition, given a friction coefficient \(\mu\), disks with two contacts can be in force balance with finite forces only if the angle \(\theta\) between the two contacts is large enough. If \(\mu<\tan(\frac{\pi}{2}-\frac{\theta}{2})\), these contacts cannot carry forces (see SM section III), and are thus also removed recursively. These configurations are analyzed using the (\(k=3,l=2\)) pebble game, and the results are shown in Fig. 5. As \(Z_{\mu}\) crosses the isostatic value 3, the largest rigid cluster encompasses the whole system, exhibiting a striking similarity with the behavior found for the frictionless case (Fig. 2). This observation is even more remarkable when one considers the behavior of the contact forces or pressure _vs._ \(Z_{\mu}\), which show a more rounded change, as a result of the difficulty of converging to force balanced configurations, as indicated by the non-monotonic behavior of the net forces acting on the disks.

Figure 5: **Rigidity analysis of quasi-statically shear jammed frictional disks.** The size of the largest rigid cluster discontinuously jumps to equal the system size as \(Z_{\mu}\) crosses 3, the isostatic value. The contact forces and pressure show a more gradual change, but the behavior of the net force on the disks reveals this to be a result of incomplete convergence, as indicated by the average of the net force on individual disks. \(P\), \(\langle|\vec{f}_{contact}|\rangle\) and \(\langle|\vec{f}_{total}|\rangle\) shown are average values computed from all configurations having a given value of \(Z_{\mu}\); \(N_{largest}/N\) is a scatter plot from all trajectories.
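The recursive pruning of force-free disks used above can be sketched as follows; the contact bookkeeping and the `angle_between` helper are assumptions of ours, not the actual analysis code:

```python
import numpy as np

def prune_rattlers(contacts_of, angle_between, mu):
    """Recursively remove disks that cannot carry force in a balanced
    frictional packing: disks with at most one contact, and disks with
    two contacts whose opening angle theta satisfies
    mu < tan(pi/2 - theta/2). `contacts_of` maps disk id -> neighbor list."""
    removed = True
    while removed:
        removed = False
        for disk, nbrs in list(contacts_of.items()):
            drop = len(nbrs) <= 1
            if len(nbrs) == 2:
                theta = angle_between(disk, nbrs[0], nbrs[1])
                drop = mu < np.tan(np.pi / 2 - theta / 2)
            if drop:
                for n in nbrs:               # remove both ends of each contact
                    contacts_of[n].remove(disk)
                del contacts_of[disk]
                removed = True
    return contacts_of
```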
Before closing, we briefly compare our results and conclusions with previous work mentioned earlier. While the conclusions in [16] differ from ours, the sharpening of the rigidity transition has also been noted in [16]. In [19], shear was applied to frictionless disk assemblies before friction was included in the rigidity analysis. While this procedure captures many features of sheared frictional disks, like the anisotropy and the emergence of a contact network that supports jamming in the presence of friction, subtle but important differences in the organization of contacts exist. Specifically, using the procedure of [19], the fraction of redundant bonds rises continuously from below the isostatic contact number, as shown in the SM Section VII, whereas they are strictly zero below the frictional jamming point. The absence of redundant bonds before the rigidity transition is a characteristic feature of jamming, as compared to rigidity percolation in spring networks and other systems [15]. Our results differ from the analysis of experimentally sheared disk packings in [18], for which we do not have a ready explanation, since the experimental protocol should be expected to closely agree with the quasistatic shear we employ, an inconsistency that needs to be further investigated.

In summary, our results unambiguously demonstrate that the rigidity transition associated with shear jamming in both frictionless and frictional disk packings is discontinuous in nature when conditions of force and torque balance are met. Thus, the nature of the emergence of rigidity is the same for isotropic and shear jamming. Features that suggest a continuous transition are associated with partial relaxation of unbalanced forces, as our results for finite shear rate demonstrate, but such behavior approaches a discontinuous change as the shear rate vanishes. Our results thus establish a key additional element in the shared phenomenology of isotropic and shear jamming.

We thank Sumantra Sarkar, Sanat Kumar, Karen Daniels and Silke Henkes for useful discussions. We acknowledge support from the Thematic Unit of Excellence on Computational Materials Science (TUE-CMS) and the National Supercomputing Mission facility (Param Yukti) at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR) for computational resources. D.B. acknowledges support from the National Science Foundation (grant no. DMR-2046683) and the Alfred P. Sloan Foundation. S.S. acknowledges support through the JC Bose Fellowship (Grant No. JBR/2020/000015) from the Science and Engineering Research Board, Department of Science and Technology, India.
2304.08927
Multitenant Containers as a Service (CaaS) for Clouds and Edge Clouds
Cloud computing, offering on-demand access to computing resources through the Internet and the pay-as-you-go model, has marked the last decade with its three main service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The lightweight nature of containers compared to virtual machines has led to the rapid uptake of another model in recent years, called Containers as a Service (CaaS), which falls between IaaS and PaaS regarding control abstraction. However, when CaaS is offered to multiple independent users, or tenants, a multi-instance approach is used, in which each tenant receives its own separate cluster, which reimposes significant overhead due to employing virtual machines for isolation. If CaaS is to be offered not just at the cloud, but also at the edge cloud, where resources are limited, another solution is required. We introduce a native CaaS multitenancy framework, meaning that tenants share a cluster, which is more efficient than the one tenant per cluster model. Whenever there are shared resources, isolation of multitenant workloads is an issue. Such workloads can be isolated by Kata Containers today. Moreover, our framework respects application requirements that compel complete isolation and a fully customized environment. Node-level slicing empowers tenants to programmatically reserve isolated subclusters where they can choose the container runtime that suits application needs. The framework is publicly available as liberally-licensed, free, open-source software that extends Kubernetes, the de facto standard container orchestration system. It is in production use within the EdgeNet testbed for researchers.
Berat Can Senel, Maxime Mouchet, Justin Cappos, Olivier Fourmaux, Timur Friedman, Rick McGeer
2023-04-18T12:07:50Z
http://arxiv.org/abs/2304.08927v2
# Multitenant Containers as a Service (CaaS) for Clouds and Edge Clouds ###### Abstract Cloud computing, offering on-demand access to computing resources through the Internet and the pay-as-you-go model, has marked the last decade with its three main service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The lightweight nature of containers compared to virtual machines has led to the rapid uptake of another model in recent years, called Containers as a Service (CaaS), which falls between IaaS and PaaS regarding control abstraction. However, when CaaS is offered to multiple independent users, or tenants, a multi-instance approach is used, in which each tenant receives its own separate cluster, which reimposes significant overhead due to employing virtual machines for isolation. If CaaS is to be offered not just at the cloud, but also at the edge cloud, where resources are limited, another solution is required. We introduce a native CaaS multitenancy framework, meaning that tenants share a cluster, which is more efficient than the one tenant per cluster model. Whenever there are shared resources, isolation of multitenant workloads is an issue. Such workloads can be isolated by Kata Containers today. Moreover, our framework respects application requirements that compel complete isolation and a fully customized environment. Node-level slicing empowers tenants to programmatically reserve isolated subclusters where they can choose the container runtime that suits application needs. The framework is publicly available as liberally-licensed, free, open-source software that extends Kubernetes, the de facto standard container orchestration system. It is in production use within the EdgeNet testbed for researchers. Edge computing Cloud computing Containers as a Service Multitenancy Federation Kubernetes ## 1 Introduction Multitenancy is what makes cloud computing economical. From a single bare metal machine, a cloud provider can offer resources to multiple tenants, where each tenant is a customer that contracts for cloud services on behalf of one or more users. These resources are, for example, virtual machines in the Infrastructure as a Service (IaaS) service model, or tools for application development and deployment in the Platform as a Service (PaaS) model. Tenants that are prepared to accept less than perfect isolation from other tenants benefit from the lower prices that providers can offer thanks to more efficient use of the providers' hardware. But, despite the greater efficiency of containers as compared to virtual machines, and despite recent improvements in ensuring isolation between containers, the cloud industry does not yet propose a multitenant Containers as a Service (CaaS) offering that takes advantage of these advances. What passes for CaaS today is in fact multiple side-by-side instances of single-tenant clusters of compute nodes, each cluster having its own container orchestration control plane and its own data plane, and isolated from other clusters through the use of virtual machines. For example, automated services such as AWS Fargate1 and Google Autopilot2 that manage cluster capacity on behalf of a user who is deploying containers to the cloud do not do away with virtual machine overhead and do not improve control plane efficiency.3 In brief, although CaaS ought to offer greater efficiency than IaaS,4 it does not yet do so.
Footnote 1: Amazon Web Services’ Fargate [https://aws.amazon.com/fargate](https://aws.amazon.com/fargate) Footnote 2: Google Cloud’s Autopilot [https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview) Footnote 3: Google Cloud documentation: Cluster Architecture [https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#nodes](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#nodes) With the emergence of the edge cloud, such efficiency will take on greater importance because resources will typically be more constrained than in the cloud. As part of the vision for 5G, it is projected that mobile network operators will become edge cloud providers, offering up compute resources from servers that are colocated with their wireless base stations [25, 41], at what is being termed the ‘service provider edge’ [4, 5, 7]. These operators are also expected to offer resources from their peering sites, or the ‘regional edge’ [6]. Such edge cloud instances will be data centers that are geographically dispersed to be closer to the users of cloud services or to edge devices than are the centralized data centers that dominate the present-day cloud.5 With fewer resources, an edge cloud will not scale as elastically as a cloud, yet it must be prepared to receive a large number of workloads that have been deployed to serve local users and devices. Footnote 4: We make the assumption that IaaS is offered through virtual machines, which is commonly the case [28]. Footnote 5: To be clear, we do not include low-powered IoT devices that are unable to run cloud-like workloads (the ‘constrained device edge’) [4] in our conception of the edge cloud that we anticipate for CaaS. The problem that we aim to resolve is how to move CaaS multitenancy away from a high-overhead multi-instance model to a more efficient one that will be suitable for the resource-constrained edge cloud. In the solution that we propose, multiple tenants share a single instance of the control plane, which is used to deploy containers that coexist within a single instance of a shared cluster, while still allowing tenants to enjoy isolation from each other as well as the opportunity to customize their resources. Our multitenancy solution has the particularity that it is designed to work in a federated environment. Today, a cloud customer typically deploys their workloads to a single cloud provider, but if they want to extend those workloads to be close to users and edge devices, a customer will also need to obtain resources from multiple edge cloud providers [18].6 Doing so will be easiest for a customer if those providers are federated, meaning that the customer will be able to contract with just one cloud or edge cloud provider and the customer will be able to deploy its workloads through a single interface offered by that provider [60, 36], and the provider will manage the propagation of the workloads to the other providers. Accordingly, our multitenancy solution ensures that each cloud provider can accept tenant workloads that originate from other providers. Footnote 6: In addition, a customer might bring resources to bear from its own ‘user edge’.
As we use the term, a _multitenancy framework_ consists of a set of rules that govern how a cloud provider offers resources to its tenants such that each tenant can use their portion of the resources and configure those resources to meet their needs without regard for the presence of the other tenants. The rules address the creation of isolated environments, resource sharing, and user permission management. They determine which rights over resources are given to which tenants, under which conditions, and how those rights affect the relationships of other tenants with the same resources. The term equally well refers to the set of entities that are coded to enforce these rules. In this paper, we describe our framework, argue for it, and show how we have implemented it in EdgeNet, a production edge cloud.7 What we henceforth refer to as the _EdgeNet multitenancy framework_ is part of the larger EdgeNet code base,8 which is free, liberally-licensed, and open source software that enables CaaS deployments to the edge cloud. It is designed as a set of extensions to the Kubernetes container orchestration system,9 which is itself free, liberally-licensed, and open source. Our reasoning in building upon Kubernetes is that cloud customers will want to continue using this familiar system, which is today's de facto industry standard container orchestration tool. Footnote 7: The EdgeNet testbed [https://edge-net.org/](https://edge-net.org/) Footnote 8: The EdgeNet software [https://github.com/EdgeNet-project/edgenet](https://github.com/EdgeNet-project/edgenet) Footnote 9: Kubernetes [https://github.com/kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) As Kubernetes does not natively support multitenancy, others have identified the need for such an extension and have developed their own Kubernetes multitenancy frameworks. (See Table 2 for details.) We will show that the existing frameworks, while no doubt fine for the cloud, will not be suitable for CaaS in the edge cloud. There are a few prior studies concerning these frameworks [62, 33, 27], but this is the first paper to situate them, and EdgeNet, within the existing scientific literature on cloud multitenancy. Our contributions, and the sections of the paper that address them, are as follows: * We look at Kubernetes multitenancy frameworks through the lens of the scientific literature on cloud multitenancy and, in Sec. 3.1, we provide a novel classification of these frameworks into three main approaches: multi-instance through multiple clusters, multi-instance through multiple control planes, and single-instance native. * Based upon our analysis of the literature, we distill out four features that we believe will promote a future in which CaaS can thrive, in particular at the network edge, and we describe how we have incorporated these features into the EdgeNet multitenancy framework: consumer and vendor tenancy in Sec. 3.3, tenant resource quota for hierarchical namespaces in Sec. 3.4, variable slice granularity in Sec. 3.5, and federation support in Sec. 3.6. * We have implemented the EdgeNet multitenancy framework as a free and open-source extension to Kubernetes, and have put it into production as the EdgeNet testbed, as described in Sec. 5. * Our EdgeNet multitenancy framework constitutes a prototype for the federation of clouds and edge clouds, and we provide a vision in Sec. 5.2.4 for the future development of a full federation framework.
* We benchmark the three multitenancy framework approaches using a representative implementation for each approach, and we reveal their pros and cons from a tenancy-centered edge computing perspective in Sec. 6. The paper is structured as follows. Sec. 2 provides background on cloud multitenancy, the challenges that it presents, and the ways in which those challenges have been addressed for edge computing. Sec. 3 describes related work in the specific area of Kubernetes multitenancy frameworks. Sec. 4 discusses design principles for a CaaS multitenancy framework, and Sec. 5 presents the architecture of the EdgeNet multitenancy framework that we have developed. In Sec. 6, we benchmark our framework against representative frameworks for two alternate approaches, and we point to our future work in Sec. 7. ## 2 Rationale We envisage a future in which tenants deploy services on a continuum of computing resources from cloud to edge cloud, about which we make the following assumptions: * Edge clouds are ubiquitous, scattered across the world [26]. * Compute and storage resources are constrained in the edge cloud, making it harder to scale tenant workloads there than in the cloud. * Tenants value the ability to easily move their workloads from one edge cloud cluster to another and between the edge cloud and the cloud. * Each tenant's user database is maintained by that tenant. User management is not a functionality provided by the compute clusters. * Tenants and their users are unreliable. They may purposely or accidentally harm each other, or the compute cluster, or themselves. We conceive of our proposed architecture based on these assumptions, for which we provide rationale in the following subsections: the necessity of a novel Kubernetes CaaS multitenancy framework (Sec. 2.1) that takes container-specific security and performance considerations into account (Sec. 2.2), and that enables federation across edge clouds and control over slice granularity at the edge (Sec. 2.3). ### Multitenancy It is an often-repeated commonplace that cloud computing is not just "using someone else's computer", as the cloud goes beyond this to promise more flexible, convenient, and cost-effective access to computing resources. Multitenancy is required to realize this promise. The _NIST Definition of Cloud Computing_[45] mentions resource pooling as one of the "five essential characteristics" of cloud computing, saying that: _The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand._ And in their _Defining Multi-Tenancy_ paper from 2014 [38], Kabbedijk et al. state: _Multi-tenancy is a property of a system where multiple customers, so-called tenants, transparently share the system's resources, such as services, applications, databases, or hardware, with the aim of lowering costs, while still being able to exclusively configure the system to the needs of the tenant._ Multitenancy is a standard feature of the three established cloud service models, Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [48, 14]. If CaaS is to provide the promised benefits of the cloud and the edge cloud at scale, then it requires an efficient multitenancy model as well. We further discuss why such efficiency is required for CaaS to run for clouds and edge clouds in Sec. 3.1, and the results of our experiments in Sec. 
6 support our contention. Multitenancy has a broad meaning and can be enabled at different cloud abstraction layers using different techniques to share resources among multiple customers. This paper discusses multitenancy in the context of CaaS and methods for accomplishing it. CaaS offerings are mostly based upon Kubernetes [62], so we focus on the ways in which it can serve multiple customers using multitenancy. To be clear with respect to the discussion of multitenancy in the Kubernetes documentation,10 which describes how a tenant can deploy an application in a Kubernetes cluster to serve its multiple customers using a multi-tenant model: that is also multitenancy, but at the application layer, and more precisely at the SaaS layer; it is not multi-tenant CaaS, which is what this paper considers. Footnote 10: Kubernetes documentation: _Multi-tenancy_ [https://kubernetes.io/docs/concepts/security/multi-tenancy/](https://kubernetes.io/docs/concepts/security/multi-tenancy/) ### Security and Performance While multitenancy is an essential cloud feature, it raises security issues that researchers have been considering for over a decade [16], notably with respect to the IaaS service model [23]. For example, potential users are concerned about the security of their data when multiple tenants share the same infrastructure [12], and the resulting lack of trust can hamper cloud adoption [48]. Virtualization is used to isolate tenants from one another, but containers tend to offer weaker isolation [13], which introduces new concerns for multitenant container platforms [49], such as information leakage between colocated containers [30]. In general, Sultan et al. [51] have identified four categories of threat in containerized environments: malicious applications within containers, one container harming another, a container harming its host, and a container within an untrustworthy host. In Kubernetes, container security must be considered in the context of the _pod_, which is that system's smallest deployable unit, consisting of a set of one or more containers. The Kubernetes pod security standards define three profiles, Privileged, Baseline, and Restricted.11 However, these standards address a single-tenant environment, and so overlook some of the multitenant security issues mentioned above. Footnote 11: Kubernetes documentation: _Pod Security Standards_ [https://kubernetes.io/docs/concepts/security/pod-security-standards/](https://kubernetes.io/docs/concepts/security/pod-security-standards/) We therefore see the need for a solution that diminishes the security risks of running colocated containerized workloads. In order to be of interest for CaaS, such a solution needs to maintain the performance advantage of containers over virtual machines. ### Edge Computing, Federation, and Slicing As described in the Linux Foundation's 2021 _State of the Edge_ report [5], cloud-like infrastructure is being developed at the network edge in order to serve edge devices that produce bandwidth-intensive and/or latency-sensitive workloads. ETSI's multi-access edge computing (MEC) architecture [3] provides a standard structure for making servers at cellular operators' radio access networks available for the deployment of such workloads by third parties. That is, the emerging edge cloud will be a multitenant cloud [11].
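In such a multitenant cloud, colocated containers depend on pod-level hardening of the kind codified by the Restricted profile mentioned above. A sketch using the official Kubernetes Python client, in which the helper name is ours and the security fields follow the Restricted profile's published requirements:

```python
from kubernetes import client

def restricted_pod(name, image, namespace="default"):
    """A pod spec aligned with the Kubernetes 'Restricted' pod security
    profile: non-root user, no privilege escalation, all capabilities
    dropped, and the runtime default seccomp profile."""
    container = client.V1Container(
        name=name,
        image=image,
        security_context=client.V1SecurityContext(
            allow_privilege_escalation=False,
            run_as_non_root=True,
            capabilities=client.V1Capabilities(drop=["ALL"]),
            seccomp_profile=client.V1SeccompProfile(type="RuntimeDefault"),
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        spec=client.V1PodSpec(containers=[container]),
    )
```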
Since the MEC architecture anticipates that workloads may be containerized, we argue that there is a need for a multitenant CaaS framework that meets the specific requirements of the network edge. The prime edge requirements that we identify are federation and variable slice granularity. MEC facilities will be provided by multiple operators. Just as a mobile phone user is able to roam from one regional operator to another today, a mobile edge device will need to be able to connect to different operators and find its containerized edge services spun up near each base station to which it connects. And ETSI describes a requirement for edge devices to be able to engage in low-latency interactions with each other when they are near each other, even if they are connected to different operators' base stations. ETSI uses the term _federation_ to describe such interoperability scenarios. To enable federation, we argue, a CaaS framework must support the deployment of third parties' containers across multiple operators' edge clouds. That is, the framework will not just be multitenant, it will also be multi-provider, with providers furnishing geographically dispersed heterogeneous resources. Those who deploy CaaS services to a multi-provider environment will be in need of a unified interface that simplifies the task of moving workloads between remote clusters that are owned by different providers [60]. In addition, as anticipated by the Next Generation Mobile Networks Alliance in 2016 [1], operators will have to support third party services that put a much more heterogeneous set of requirements on their networks than is currently the case. Extreme requirements are incompatible with a one-size-fits-all approach. The way that MEC handles this is through _slicing_[41, 2, 61], which allows network and compute resources to be allocated and custom-configured to meet the specific needs of individual services. In the CaaS context, we argue that no single slice granularity will meet the full range of needs. The standard CaaS sub-node-level slicing, in which containers are provided from a shared resource pool on individual nodes, while no doubt appropriate for many services, will not be appropriate for those that are the most sensitive to performance variation. For those services, node-level slice granularity will be needed. ## 3 Related Work Someone who wishes to deploy containerized services to the cloud has a choice of open source container orchestration systems with which to do so, four of the most prominent being [10]: Apache Mesos,12 Docker Swarm,13 Kubernetes,14 and Rancher's Cattle.15 We focus on Kubernetes, as it has in recent years become the de facto industry standard. All of the major cloud providers offer Kubernetes-based CaaS to their customers (see Table 1). And Datadog, a company that provides cloud monitoring and security services, reports [22] that nearly 50% of their customers that deploy containers use Kubernetes to do so, this having increased about 10 percentage points over the past three years. Footnote 12: Apache Mesos [https://github.com/apache/mesos](https://github.com/apache/mesos) Footnote 13: Docker Swarm [https://github.com/docker/swarmkit](https://github.com/docker/swarmkit) Footnote 14: Kubernetes [https://kubernetes.io/](https://kubernetes.io/) Footnote 15: Rancher’s Cattle [https://github.com/rancher/cattle](https://github.com/rancher/cattle) In the commercial cloud offerings, each customer gets their own Kubernetes cluster, which is a straightforward form of multitenancy.
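Returning to the node-level slice granularity identified above as an edge requirement: it can be approximated in stock Kubernetes with taints and labels. A hedged sketch using the Python client, in which the `tenant` key and helper names are illustrative assumptions, not EdgeNet's actual API:

```python
from kubernetes import client, config

def reserve_node_for_tenant(node_name, tenant):
    """Node-level slicing sketch: taint and label a node so that only pods
    which tolerate the tenant's taint (and select the matching label) are
    scheduled onto it."""
    config.load_kube_config()
    core = client.CoreV1Api()
    body = {
        "metadata": {"labels": {"tenant": tenant}},
        "spec": {"taints": [{"key": "tenant", "value": tenant,
                             "effect": "NoSchedule"}]},
    }
    core.patch_node(node_name, body)

def tenant_pod_spec(tenant, container):
    """Pods of the tenant opt in to the reserved node."""
    return client.V1PodSpec(
        containers=[container],
        node_selector={"tenant": tenant},
        tolerations=[client.V1Toleration(key="tenant", operator="Equal",
                                         value=tenant, effect="NoSchedule")],
    )
```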
Some providers add on more advanced features. For example, an Amazon EKS customer can use a service called Fargate16 to manage the capacity of their Kubernetes cluster, adding and removing nodes as they need to. Similarly, a Google Cloud customer can hand over control of their cluster capacity management to a service called Autopilot,17 to do the same thing for them automatically. Footnote 16: Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) documentation: _Fargate_ [https://docs.aws.amazon.com/eks/latest/userguide/fargate.html](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) Footnote 17: Google Cloud documentation: _Create an Autopilot cluster_ [https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-autopilot-cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-autopilot-cluster) While Kubernetes multitenancy in this form might be fine for large centralized data center clouds, there are drawbacks when looking to an edge cloud future. Setting up a separate cluster for each tenant is far from the most efficient approach, as we will show in Sec. 6. Resources are liable to be underused, which will be of particular concern in the smaller data centers that we can anticipate at the edge. And when tenants need to be repeatedly instantiated as their workloads migrate, for instance at one roadside cabinet after another to serve vehicles that are moving along a highway, spinning up an entire cluster for each arrival of a tenant risks taking too much time. We anticipate that lighter forms of multitenancy will be needed: ones that allow more efficient resource sharing, even at some cost in workload isolation, and that allow more rapid creation and deletion of tenants. \begin{table} \begin{tabular}{l l l} \hline \hline **Cloud provider** & **Kubernetes-based CaaS offering** & **URL** \\ \hline Amazon Web Services & Elastic Kubernetes Service & https://aws.amazon.com/eks/ \\ Microsoft Azure & Azure Kubernetes Service & https://azure.microsoft.com/en-us/products/kubernetes-service/ \\ Google Cloud Platform & Google Kubernetes Engine & https://cloud.google.com/kubernetes-engine/ \\ Alibaba Cloud & Alibaba Cloud Container Service for Kubernetes & https://www.alibabacloud.com/product/kubernetes \\ Oracle Cloud & Oracle Container Engine for Kubernetes & https://www.oracle.com/cloud/cloud-native/container-engine-kubernetes/ \\ IBM Cloud & IBM Cloud Kubernetes Service & https://www.ibm.com/cloud/kubernetes-service/ \\ Tencent Cloud & Tencent Kubernetes Engine & https://www.tencentcloud.com/products/tke \\ OVHcloud & Free Managed Kubernetes & https://www.ovhcloud.com/en/public-cloud/kubernetes/ \\ DigitalOcean & DigitalOcean Kubernetes & https://www.digitalocean.com/products/kubernetes \\ Linode & Linode Kubernetes Engine & https://www.linode.com/products/kubernetes/ \\ \hline \hline \end{tabular} \end{table} Table 1: Major cloud providers’ Kubernetes-based containers-as-a-service (CaaS) offerings Furthermore, proprietary systems for enabling multitenancy risk being a hindrance in a federated environment, in which a single customer might deploy their workloads to many edge
clouds, each owned by a different operator. If all of the operators use a common open-source multitenancy framework, it will promote interoperability. Starting in 2019, as Table 2 shows, a fair number of open-source Kubernetes multitenancy frameworks have been developed. Some, such as Virtual Kubelet [56] and frameworks that are derived from that code, take the same starting point as the commercial services, which is each tenant having its own cluster. But others offer worker nodes to tenants out of a shared cluster, which is more resource efficient. And some of these serve multiple tenants out of a shared control plane, which is yet more efficient. The Kubernetes community has recognized the importance of developing such frameworks, as evidenced by the fact that one of the Kubernetes working groups, of which there are just five,18 is devoted to multitenancy.19 Both of the frameworks that this working group supports take the shared cluster approach. VirtualCluster (VC) [57] offers a separate control plane to each tenant, while the control plane is shared among tenants by the Hierarchical Namespace Controller (HNC) [54]. These two frameworks, along with the others shown in Table 2, comprise the essential related work for our own EdgeNet framework. Footnote 18: Kubernetes working groups [https://github.com/kubernetes/community/blob/master/sig-list.md](https://github.com/kubernetes/community/blob/master/sig-list.md) We look at six aspects of Kubernetes multitenancy frameworks when comparing EdgeNet to the related work: the multitenancy approach (Sec. 3.1), the customization approach (Sec. 3.2), support for consumer and vendor modes (Sec. 3.3), management of tenant resource quotas (Sec. 3.4), support for variable slice granularities (Sec. 3.5), and support for federation (Sec. 3.6). ### Multitenancy Approach The scientific literature describes two approaches to enabling CaaS multitenancy: multi-instance [34], and single-instance native [37]. We ourselves further distinguish between multi-instance through multiple clusters and multi-instance through multiple control planes, making three approaches altogether, as shown in Table 2. The approaches are illustrated in Fig. 1 and we describe them as follows: #### 3.1.1 Multi-instance through multiple clusters Fig. 1(a) illustrates the multi-instance through multiple clusters approach, in which each tenant receives its own cluster. The proprietary commercial CaaS offerings (see Table 1) are structured in this way, but there is no open-source framework to enable precisely this form of multitenancy, spinning up and spinning down full Kubernetes clusters on demand for different tenants. Existing open-source tools for deploying Kubernetes clusters, such as RKE20 and Kubespray,21 do not address multitenancy.

Figure 1: **Multitenancy Approaches.** The multi-instance approaches provide each tenant with its own instance of the control plane (or, at the least, of certain control plane components) and, optionally, its own set of worker nodes, ensuring better isolation between tenants. The single-instance native approach caters to multiple tenants through a single control plane, while having them share the resources of a single set of worker nodes, thereby providing improved performance.
Footnote 20: RKE [https://rke.docs.rancher.com/](https://rke.docs.rancher.com/) Footnote 21: Kubespray [https://kubespray.io](https://kubespray.io) There is, however, a set of open-source Kubernetes frameworks that do address multitenancy for the case in which there are already multiple tenants, each of which possesses one or more of their own clusters, even if these frameworks do not spin up or spin down the clusters on demand. These frameworks, based on the code of **Virtual Kubelet**[56], a sandbox project of the Cloud Native Computing Foundation, are designed to allow workloads from one cluster to be deployed to another cluster. Their primary focus is on cross-cluster deployment in general, and multitenancy arises only in the specific case of clusters belonging to different tenants, but since they do enable this sort of multitenancy, we examine the advantages and disadvantages of doing so.

Table 2: Open-source Kubernetes multitenancy frameworks compared: EdgeNet (v1.0.0-alpha.5, 2023-04), HNC (v1.0.0, 2022-04), Capsule (v0.3.1, 2023-03), kiosk (v0.2.11, 2021-11), Arktos (v1.0, 2022-03), VirtualCluster (v0.1.0, 2021-06), k3v (v0.0.1, 2019-07), vcluster (v0.15.0-alpha.1, 2023-03), Kamaji (v0.2.1, 2023-02), and Virtual Kubelet and its derivatives (VK\(\star\), VK v1.8.0, 2023-03). The frameworks are compared along six dimensions: multitenancy approach (multi-instance through multiple clusters, multi-instance through multiple control planes, or single-instance native), customization approach (full control plane view, tenant-wise abstraction, flat or hierarchical namespaces, and data plane customization via SSH access to worker nodes), consumer and vendor modes, tenant resource quota, variable slice granularity (node-level slicing, sub-node-level slicing, automated selection), and federation support.

As illustrated in Fig. 2, Virtual Kubelet establishes a connection from one cluster to another by leveraging Kubernetes' _kubelet_22 API. A kubelet is the agent that runs on each node of a Kubernetes cluster in order to manage the life cycles of pods, which are groups of containers associated with a workload. By implementing the kubelet API, a virtual kubelet masquerades as the kubelet of an individual node, but is in reality a stand-in for the remote cluster. It, in turn, uses the remote cluster's control plane API to deploy and manage workloads on that cluster.
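A conceptual sketch of this masquerading pattern, using the Kubernetes Python client: pods that the local scheduler binds to the virtual node are re-created on the remote cluster through its API. The real Virtual Kubelet implements the full kubelet surface (node status, logs, exec), none of which is shown here, and the context and node names are assumptions:

```python
from kubernetes import client, config, watch

def forward_pods(virtual_node="virtual-kubelet", remote_context="remote"):
    """Watch pods bound to the virtual node in the local cluster and
    re-create them on the remote cluster via its control plane API."""
    local = client.CoreV1Api(config.new_client_from_config())
    remote = client.CoreV1Api(config.new_client_from_config(context=remote_context))
    w = watch.Watch()
    for event in w.stream(local.list_pod_for_all_namespaces,
                          field_selector=f"spec.nodeName={virtual_node}"):
        pod = event["object"]
        if event["type"] == "ADDED":
            pod.metadata.resource_version = None   # strip local bookkeeping
            pod.metadata.uid = None
            pod.spec.node_name = None              # let the remote scheduler place it
            remote.create_namespaced_pod(pod.metadata.namespace, pod)
```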
Footnote 22: Kubernetes documentation: _kubelet_ [https://kubernetes.io/docs/concepts/overview/components/#kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet) Although we might think of this as a small scale form of federation, the Virtual Kubelet authors expressly say that it "is not intended to be an alternative to Kubernetes federation," by which we understand a full-featured and scalable federation. Similarly, as we have mentioned, Virtual Kubelet is not primarily designed for multitenancy. By contrast, EdgeNet is designed precisely for federation and multitenancy. While similar to Virtual Kubelet in the sense that EdgeNet introduces agents to transfer workloads from one cluster to another, EdgeNet avoids the overhead associated with each tenant having its own cluster. This is because, in EdgeNet, it is the cloud and edge cloud providers that possess the clusters. Provider ownership of the clusters also means that an EdgeNet tenant can rely upon a provider to ensure the privacy of its workloads, rather than relying upon another tenant to do so. **Liqo**[55], **Admiralty**[53] and **tensile-kube**[52] are all based on the Virtual Kubelet code. Liqo is one of the few frameworks to date to be the subject of a peer-reviewed scientific paper [35]. The authors are careful to state that some of the issues that arise from multitenancy, such as the manner in which the workloads of different tenants in the same cluster are isolated from each other, remain to be addressed.23 Sec. 4.2 describes our proposed resolution for this problem. Footnote 23: From the Liqo paper [35]: “Specifically, we foresee a _shared security responsibility model_, with the provider responsible for the creation of well-defined sandboxes and the possible provisioning of additional security mechanisms (e.g., secure storage) negotiated at peering time.” #### 3.1.2 Multi-instance through multiple control planes In the multi-instance through multiple control planes approach, all tenants are supported by a single cluster, but each tenant acquires its own control plane within that cluster, as illustrated by Fig. 1(b). One or more nodes are dedicated to supporting the tenant control planes, and, within each control plane node, containers, or containers grouped into pods, isolate one tenant's control plane from another's. (Isolating control planes from each other through containers imposes lower overhead than doing so with VMs.) There are variants to this approach, in which some control plane components, like the scheduler, are shared among tenants, while others, such as the API server and database, are duplicated so as to provide one instance to each tenant. In any case, this approach gives each tenant a full view of its own control plane, which it can use for customizing its own environment. Frameworks that follow this approach differ in how they isolate tenant workloads from each other. If tenants share a common set of worker nodes, as they do in VirtualCluster, k3v, and vcluster, the degree of isolation will depend upon the container runtime used to run the containers. If each tenant acquires its own dedicated set of worker nodes, as happens in Kamaji, then there is better isolation.

Figure 2: **Multitenancy through Virtual Kubelet.** The virtual kubelet masquerades as the kubelet of a node in a cluster, but in reality deploys workloads from that cluster to other clusters via those clusters’ APIs. When a local cluster belongs to one tenant and a remote cluster belongs to another tenant, this results in a distinctive form of multitenancy, with clusters that, while otherwise belonging to one tenant, host pods from other tenants.
**VirtualCluster**[57] is one of the two open-source frameworks incubated by the Kubernetes Multi-Tenancy Working Group. It virtualizes the control plane components per tenant, with the exception of the scheduler. For isolation between the worker nodes of different tenants, it uses Kata Containers [47]. A drawback of VirtualCluster is the cost of providing separate control plane components per tenant. In a peer-reviewed scientific paper [62], the VirtualCluster authors state that this cost is a blocking point when more than a thousand tenants are in the cluster. By contrast, EdgeNet's shared control plane approach allows far more tenants to be allowed into a given cluster, and allows more tenants to arrive within a short period of time, as we show in Sec. 6. In a federated edge cloud environment, where we anticipate limited resources, large numbers of workloads, and the rapid propagation of workloads from one cluster to another, the shared control plane approach has a clear advantage. In fairness to VirtualCluster, it is designed for a different sort of environment. In Rancher's **k3v**[46], the control plane is virtualized on a per-tenant basis, similar to VirtualCluster, but it does not provide data plane isolation, as VirtualCluster does. Exceptionally among the frameworks, k3v does not provide a mechanism for managing tenant resource quotas, as we mention in Sec. 3.4. **vcluster**[43], not to be confused with VirtualCluster, is one of two open-source frameworks developed by Loft, the other being kiosk, which takes the single-instance native approach. In the control plane, each vcluster has a separate API server and data store. Workloads created on a vcluster are copied into the namespace of the underlying cluster to be deployed by the shared scheduler. **Kamaji**[21] is one of two open-source frameworks developed by Clastix Labs, the other being Capsule, which takes the single-instance native approach. Kamaji enables Kubernetes multitenancy by running tenant control planes as pods on a common cluster, known as the admin cluster. Each tenant receives its own dedicated worker nodes. Isolation between worker nodes on the same machine is enabled through VMs, which introduces much higher overhead than would isolation through containers. #### 3.1.3 Single-instance native In the single-instance native approach, all tenants share a single control plane and a common set of worker nodes, as illustrated in Fig. 1(c). Control plane isolation is ensured through a logical entity, such as Kubernetes namespaces, that introduces negligible overhead, but provides less control plane isolation compared to a multi-instance approach. Workload isolation depends upon the container runtime, as it does for the multi-instance through multiple clusters approach and for the multi-instance through multiple control planes approach. This approach demands significant coding work to give each tenant an experience akin to using their own separate cluster. The single-instance native approach's scaling advantage is illustrated by a scenario examined by Guo et al. in which it supported thousands of tenants, as opposed to just dozens for a multi-instance approach [34]. It also has lower operational costs [19].
And it is lighter weight for workload mobility, allowing containers to be spun up and spun down with less overhead than in a multi-instance approach, as we show through benchmarking in Sec. 6. For these reasons, we have adopted the single-instance native approach for EdgeNet. The Hierarchical Namespace Controller (**HNC**) is one of the two open-source frameworks incubated by the Kubernetes Multi-Tenancy Working Group, the other being VirtualCluster. HNC takes the single-instance native approach, whereas VirtualCluster takes the multi-instance through multiple control planes approach. HNC uses a hierarchical namespace structure in order to enable multitenancy.24 Functionalities such as policy inheritance that allow objects to be replicated across namespaces are built upon this hierarchy. Footnote 24: Kubernetes Multi-tenancy Working Group documentation: _HNC: Concepts_ [https://github.com/kubernetes-sigs/hierarchical-namespaces/blob/master/docs/user-guide/concepts.md](https://github.com/kubernetes-sigs/hierarchical-namespaces/blob/master/docs/user-guide/concepts.md) Aspects of this work that have inspired our own multitenancy framework are its hierarchical namespace structure and the terminology that it employs. We have also designed our own framework to avoid what we perceive to be its defects: * HNC does not enforce unique names for namespaces, opening the possibility for namespace conflicts. * HNC's quota management system is not aligned with the hierarchical namespace structure so as to limit a child's quota based upon its parent's quota, though community documentation states25 that work is underway to enable this. Footnote 25: Kubernetes Multi-tenancy Working Group documentation: _HNC: Policy inheritance and object propagation_ [https://github.com/kubernetes-sigs/hierarchical-namespaces/blob/master/docs/user-guide/concepts.md#policy-inheritance-and-object-propagation](https://github.com/kubernetes-sigs/hierarchical-namespaces/blob/master/docs/user-guide/concepts.md#policy-inheritance-and-object-propagation) * HNC's quota management system allows namespaces without quota to coexist alongside namespaces that have quotas, which puts those quotas at risk (see Fig. 4b and discussion in Sec. 3.4). **Capsule**[20] is one of two open-source frameworks developed by Clastix Labs, the other being Kamaji, which, as we have seen, takes the multi-instance through multiple control planes approach. Capsule is one of two frameworks that adopt flat namespaces (see Sec. 3.2) as their customization approach, the other being kiosk. It gives a tenant the possibility of creating resources that can be replicated across a collection of namespaces of the tenant, and it provides the cluster administrator with the possibility to copy resources among namespaces of various tenants. Although this approach facilitates the management of the multiple namespaces that belong to a tenant, thereby easing management complexity, it may not be fully scalable for extensive tenant settings, as we discuss in the following subsection. Capsule aims at allowing an organization to share a single cluster efficiently, hence not accounting for the needs of the envisaged edge computing infrastructure. **kiosk**[42] is one of two open-source frameworks developed by Loft, the other being vcluster, which takes the multi-instance through multiple control planes approach. kiosk uses the flat namespaces approach for customization, as does Capsule.
A tenant is represented by an abstraction called an _account_, and an account can create a namespace through an entity called a _space_. Each space is strictly tied to only one namespace. This framework permits the preparation of templates that can be employed during namespace creation, facilitating the automated provisioning of resources as defined within these templates in the designated namespaces. Despite alleviating management complexity, this approach still shares Capsule's limitations stemming from flat namespaces. Multi-cluster tenant management is listed on their roadmap, but the project does not seem to be under active development, as the latest commit in its main branch was around a year ago. Centaurus's **Arktos**[17] takes the single-instance native approach to multitenancy. As discussed in Sec. 3.2, it is the only framework that takes a tenant-wise abstraction approach to enabling customization. Arktos achieves this through API modifications,26 which may require a significant amount of effort to keep aligned with the upstream Kubernetes control plane code. Its architecture primarily consists of three main software entities: an _API gateway_ that receives tenant requests, a _Tenant Partition (TP)_ that gives the illusion of each tenant acquiring an individual cluster, and a _Resource Partition (RP)_ that operates on resources like nodes [27]. Although not all of its features are precisely presented, based upon our reading of their documentation, we consider that this solution addresses some federation aspects, such as scalability and cloud-edge communication. They provide a vision of consolidating 300,000 nodes belonging to different resource partitions into a single regional control plane. However, the main branch of their project repository has not received commits for around a year, implying that it may not be currently undergoing active development. Footnote 26: Arktos documentation: _Multi-tenancy Overview_[https://github.com/CentaurusInfra/arktos/blob/master/docs/design-proposals/multi-tenancy/multi-tenancy-overview.md#api-server](https://github.com/CentaurusInfra/arktos/blob/master/docs/design-proposals/multi-tenancy/multi-tenancy-overview.md#api-server) ### Customization Approach Containers-as-a-service cannot scale to a large number of tenants if the mechanism by which each tenant obtains the environments in which to deploy its workloads, and configures each environment to meet the needs of its workload, requires manual intervention at every stage by the cloud administrator. Each tenant should have a degree of autonomy to: create and delete the environments in which its workloads can be deployed; obtain resource quotas and assign them to those environments; and designate users for the environments, assign roles to those users, and grant permissions based upon those roles. Some combination of automation of these processes and delegation of administrative responsibility is needed to enable that autonomy. In Table 2, we call the way in which a multitenancy framework does this its _Customization Approach_. By giving each tenant its own control plane, which the tenant's administrator can use to configure its environments as they wish, the multi-instance frameworks provide the greatest flexibility. We call this approach the **Full Control Plane View**. 
As Table 2 shows, it is offered by the frameworks that follow the multi-instance through multiple clusters approach (the Virtual Kubelet based frameworks), since each cluster has its own control plane, and, of course, by the multi-instance through multiple control planes approach (VirtualCluster, k3v, vcluster, and Kamaji). Some of these frameworks (Kamaji and, partially, the Virtual Kubelet based frameworks) allow additional server environment configuration to take place by enabling SSH access to worker nodes, and this is noted as **Data Plane** customization in the comparison table. In Virtual Kubelet based frameworks, administrators of a tenant that owns a cluster can typically access the worker nodes in that cluster by SSH, but not the ones in other clusters, and this is classified as **Partial** in the comparison table. In frameworks that follow the single-instance native multitenancy approach, some extensions to Kubernetes are required in order to safely enable customization. This is because in standard Kubernetes, giving a tenant's administrator the permissions necessary to configure their own environments means giving them the ability to configure other tenants' environments as well. Since there is no control plane isolation mechanism other than namespaces, an administrator who has permission to create, modify, and delete namespaces can do so freely across the board. Rather than hand out such permissions, a single-instance customization approach needs to provide one or more custom resources that a tenant's administrator can access, and the controllers of those resources will ensure safety while configuring the tenant environment on the administrator's behalf. Among the single-instance frameworks, Arktos employs the most elaborate customization approach: that of introducing a new abstraction, beyond namespaces, by which to isolate tenants from one another in the control plane. As this abstraction is meant to capture the notion of a tenant, we refer to it in Table 2 as a **Tenant-wise Abstraction**. Our concern about this approach is the amount of development work that it might entail, both to develop this new abstraction and to maintain its compatibility with Kubernetes' upstream version of the control plane code. 
Instead of introducing an entirely new abstraction, frameworks can build on Kubernetes' existing control plane isolation mechanism: namespaces. We identify two ways of doing so. The simpler one, followed by Capsule and kiosk, is to follow the standard Kubernetes approach, in which each namespace exists independently of every other namespace. This is described as **Flat Namespaces** in Table 2. Another way, but one that requires more development work, is to provide controllers that keep track of the relationships between namespaces, such as several namespaces all belonging to the same tenant. Since the two frameworks that do this, EdgeNet and HNC, do so by maintaining a hierarchical structure through which to track the relationships, we identify this approach as **Hierarchical Namespaces** in Table 2. Fig. 3 compares the two namespace structures. The hierarchy captures relationships between the namespaces: \(a\) and \(b\) are the core namespaces belonging to two tenants, whereas the others belong to sub-trees of those core namespaces. For example, \(aa\) and \(ab\) are subnamespaces of \(a\); they belong to the same tenant as \(a\) and may inherit a portion of that tenant's resource quota, user roles, and the permissions that accompany those roles. Likewise, \(aba\) and \(abb\) belong to the same tenant as \(ab\) and may inherit from it. Management tasks such as the approval of new namespaces and the modification of quotas, users, etc., can be delegated to each tenant's administrators and, further down the hierarchy, to sub-tree administrators. The flat structure does not express these relationships. For example, no mechanism provides for \(aa\) to inherit from \(a\); if the two are to share configuration parameters, this must be expressly requested by their common administrator. There are efforts to solve this issue through configuration templates applied to multiple namespaces. Nevertheless, as the number of namespaces belonging to a tenant grows, the result is management complexity for that tenant's root administrator, who finds it challenging to keep track of independent namespaces. Figure 3: **Customization Approach: Hierarchical versus Flat Namespaces.** The same seven namespaces organized into a hierarchy (left) and without a hierarchy (right), in each case under a root environment \(r\), which is not itself a namespace. In short, a hierarchical structure permits configurations to be inherited and allows configuration tasks to be delegated, offloading work from administrators at the top of the hierarchy to administrators further down. The prime disadvantage of a flat namespace structure is that, even with automation, tenant root administrators remain heavily solicited. EdgeNet adopts a hierarchical namespace structure, which is implemented by the architecture described in Sec. 5.1 and Sec. 5.2. ### Consumer and vendor modes Cloud services generally support two types of tenancy: **Consumer Mode**, in which the tenant is the end user of the resources; and **Vendor Mode**, in which the tenant can resell access to the resources to others. The type of tenancy affects the visibility that the manager of a tenant has into that tenant's isolated environments. For a consumer tenant, these environments are generally termed _workspaces_, and they are created to be used by the members of that tenant's group or organization. A manager of a set of workspaces needs visibility into who the users of each workspace are, and needs fine-grained control over the rights of those users with respect to those workspaces. But a vendor tenant manages a set of subtenant environments that are destined for its own customers. A customer expects a certain level of privacy, with the users and user rights of their subtenant environment remaining hidden from the vendor. As shown in Table 2, all of the CaaS multitenancy frameworks that we have studied support consumer tenancy, but only EdgeNet and the Virtual Kubelet based frameworks support vendor tenancy. We expect that the same commercial logic that has driven other cloud service models towards both forms of tenancy will lead to support for vendor tenancy being generalized for containers-as-a-service. In order to enable any sort of tenancy, a system must support authorization and isolation mechanisms. It requires greater expressiveness to support both consumer and vendor tenancy than it does to support consumer tenancy alone. Such expressiveness, for example, allows a tenant to create a subtenant for the purpose of reselling its own allocated resources. This can be done in different ways depending upon the multitenancy approach: * _Multi-instance through multiple clusters:_ A tenant who owns a cluster can open this cluster for use by one of its subtenants. 
Because of the ease of doing so, we indicate the Virtual Kubelet based frameworks as offering support for a vendor mode, even though their documentation does not explicitly mention this. However, since such an approach requires a cluster per tenant, it introduces high overhead, as our benchmarking shows in Sec. 6. * _Multi-instance through multiple control planes:_ A tenant could create a subtenant whose control plane instance runs on top of the tenant's own control plane instance. None of the frameworks that we have studied currently does this. * _Single-instance native:_ A tenant can create a subtenant and assign it private namespaces that the tenant alone is authorized to remove. EdgeNet, having adopted the single-instance native approach to multitenancy, builds consumer and vendor modes on top of its hierarchical namespace structure. The implementation is described in Sec. 5.2.1 and illustrated in Fig. 10. ### Tenant resource quota allocation Resource quotas are popular in commercial settings, where they provide a basis for providers to bill their customers. In situations where resources are constrained, quotas are also a simple means by which to ensure an equitable allocation of those resources. Quotas are commonly used in the cloud, and Kubernetes supports them by providing a mechanism for allocating quotas to namespaces.27 The Kubernetes mechanism is conceived for the relatively small-scale scenario of a single organization using a cluster, with an administrator who manually sets resource quotas per namespace so as to share out the resources among the organization's different teams. A multitenancy framework that is built on Kubernetes needs to automate this process to enable it to scale. Footnote 27: Kubernetes documentation: _Resource Quotas_ [https://kubernetes.io/docs/concepts/policy/resource-quotas/](https://kubernetes.io/docs/concepts/policy/resource-quotas/) As Table 2 shows, all of the Kubernetes multitenancy frameworks that we have studied offer a mechanism for managing tenant resource quotas, with the exception of k3v. We classify k3v in this way because we consider its mechanism to be incomplete: in that framework, which is no longer under active development, a cluster administrator can set a resource quota in the host namespace of a virtual cluster, but the tenant will not be aware of it. In the edge cloud, we can expect resources to be more constrained than in the cloud, and so the need for a quota allocation mechanism is even stronger. Since our EdgeNet framework is designed for the edge cloud as well as the cloud, such a mechanism is a required feature of the framework. Having made the design decision to use a hierarchical namespace structure, our quota management system needs to follow that structure. This means building in dependencies between quotas: as shown in Fig. 4a, at each node in the namespace tree, quota must be shared out between the parent namespace located at that node and the sub-trees rooted at the children of that node. EdgeNet's quota implementation is more thoroughly described in Sec. 5.3. The only other framework that uses hierarchical namespaces, HNC, also allows quota to be shared out hierarchically. The mechanism employed in doing so relies on Google Cloud's Hierarchy Controller28 as its foundation. But since it does not require that a quota be attributed to each namespace, it can end up constraining some namespaces while not constraining others, opening the possibility for a sub-tree to not enjoy the full resource quota that has been allocated to it, as shown in Fig. 4b. In EdgeNet, quotas apply either to the entire tenant namespace hierarchy or not at all, so this problem cannot arise. Figure 4: **Hierarchical allocation of resource quotas.** Examples of a quota of 100 being divided up among the sub-trees of a hierarchical namespace rooted at \(r\). The tenant of the sub-tree rooted at \(a\) has been allocated a quota of 60, from which it reserves 20 for its core namespace and allocates 25 and 15 to the sub-trees rooted at \(aa\) and \(ab\), respectively. In EdgeNet (a), the quota of 15 must also be distributed within the sub-tree rooted at \(ab\): here, 3 is reserved for the namespace \(ab\) itself, and 8 and 4 are allotted to the sub-trees rooted at \(aba\) and \(abb\), respectively. Likewise, quota must be allocated to the sub-tree rooted at \(b\) and distributed within that sub-tree. HNC (b), on the other hand, allows portions of the hierarchy to be free of quotas: in this example, the administrator of namespace \(ab\) has, perhaps inadvertently, not set quotas for its subnamespaces, and likewise for the tenant administrator of \(b\). If the workloads in \(aba\) and \(abb\) were to exceed a resource consumption of 12, or the workloads at \(b\) were to consume resources exceeding 40, other namespaces might not be able to fully enjoy the resource quotas that had been reserved for them. Resource quotas can be wasteful if they are not fully used, while best-effort distribution of resources is more efficient but provides no guarantees. None of the Kubernetes multitenancy frameworks provides an intermediate solution. Providing such a solution is on the EdgeNet development road map. ### Variable slice granularity We use the term _slicing_ to refer to a mechanism that enables multitenancy by dividing a larger pool of resources into smaller portions, each portion being for the exclusive use of one of the tenants. For CaaS, the larger pool is a compute cluster that consists of nodes, which may be either physical servers or virtual machines. But what size should a smaller portion be: a full node, or a subset of the resources of a node? A subset can be acquired through the use of containers, sandboxed to a greater or lesser degree, as Sec. 4.2 will describe. Fig. 5 depicts the different possible node and slice granularities. In our estimation, neither of the slicing granularities is ideal for all use cases; a multitenancy framework should offer both and automate the ability to switch between them. **Node-level Slicing** (Figs. 5a and 5b). Slicing at this granularity, which is offered by all of the frameworks that we have studied, provides a tenant with one or more entire nodes, so that isolation of a tenant workload is ensured at the level of the node in which it runs. By this means, it offers greater freedom in choosing a container runtime to support a particular containerized workload. And it can better ensure stable access to resources. Reserving an entire physical server (Fig. 5a) can be valuable, in particular, for a tenant that needs to meet an unusual requirement, such as guaranteed access to GPU resources. However, when entire nodes are reserved for tenants, some nodes might be under-utilized. **Sub-node-level Slicing** (Figs. 5c and 5d). Sub-node-level slicing improves a cluster's ability to use its resources efficiently. It is enabled through containers, with each container on a node taking a portion of that node's resources. 
Isolation between multi-tenant workloads on the same host is provided at the level of containers, so it is weak. Better isolation can be ensured through container runtimes that provide sandboxes for containers, but this approach restricts tenant autonomy in selecting a container runtime, as only a few such runtimes are available. As Table 2 shows, all of the CaaS multitenancy frameworks that we have studied offer node-level slicing, and all but Kamaji offer sub-node-level slicing. When it is available, sub-node-level slicing is the default; upon the request of a tenant, a cluster administrator can manually configure node-level slicing. The EdgeNet framework is the only one for which the process of switching granularity is automated. Sec. 5.4 describes how we implement this. It might seem that the node-level slicing that we thereby enable suffers from all of the inefficiency of the multi-instance CaaS model that we have critiqued (see Sec. 1), but this is not so, as our architecture preserves the single-instance efficiency of a single control plane. Figure 5: **Node and slice granularities.** Dashed vertical lines indicate how a cluster's resources are sliced so as to make those resources available to tenants. A node in a cluster can be a physical server (left illustrations) or a VM (right illustrations), presented as node granularities. Slicing can be performed so as to make an entire node available to a tenant (top illustrations) or so as to make a subset of a node's resources available to a tenant (bottom illustrations). Different node and slice granularities can coexist within a cluster (e.g., the scenarios shown in all four illustrations could appear simultaneously in a single cluster). Our EdgeNet multitenancy framework automates the process of varying the slice granularity, allowing a node to be reserved for a tenant, or returning a reserved node to the pool of nodes available to be subdivided. ### Federation support CaaS multitenancy frameworks have to date generally been aimed at the use case of a single cluster operator offering its resources to its own tenants. However, a tenant that wishes to provide its edge cloud based services to large numbers of end-users will generally require the resources of several operators from different regions or countries. Such a tenant might prefer to be the customer of just one operator and, through that operator, gain access to the others. We anticipate that operators will see a commercial interest in federation, which will allow them to more broadly commercialize access to their clusters. We also anticipate that operators will want to lower the barrier to entry for those who deploy services by allowing them to orchestrate their containers across multiple clusters with a single tool. Many edge cloud services, such as cognitive services [26, 32], are expected to involve workloads that are spread across the cloud and the edge cloud [8], with workloads moving back and forth between the two, so there are voices in industry that argue [60], and we are convinced, that a unified, single interface for users is a necessity. As a first step towards this goal, the EdgeNet multitenancy architecture presents an essential first brick in such a federation architecture: the ability to generate object names that are universally unique to cluster and tenant. Such uniqueness avoids name collisions during the propagation of objects across clusters. The details of our implementation are found in Sec. 5.2.4. 
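As a preview of the mechanism detailed in Secs. 5.2.3 and 5.2.4, the sketch below shows the general shape of such a universally unique name on a propagated namespace; the name, UID prefix, and hash value are illustrative assumptions, not EdgeNet's exact output.

```yaml
# Minimal sketch of a namespace propagated to a remote cluster; values are made up.
apiVersion: v1
kind: Namespace
metadata:
  # Shape: <cluster-uid prefix>-<subnamespace-name>-<hash>, where the prefix is the
  # kube-system UID of the originating cluster and the hash covers the parent
  # namespace, the subnamespace name, and the cluster UID (Secs. 5.2.3-5.2.4).
  name: 5f1b2c3d-team-backend-7a9e4f
```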
Besides our EdgeNet framework, five of the frameworks that we study support scaling up the infrastructure that multiple tenants share, and four of them, the Virtual Kubelet based frameworks, do so through federation. Even for their main purpose of enabling the deployment of workloads to multiple clusters, Virtual Kubelet based frameworks suffer from a significant drawback: Kubernetes' automatic scaling of workloads up and down to meet demand gets lost in remote clusters. This is because the Kubernetes objects that get deployed through a virtual kubelet are pods, rather than the _Deployment_ or _StatefulSet_ workload resources that manage pod life cycles on a user's behalf, and the Kubernetes Horizontal Pod Autoscaling mechanism29 in each cluster works on those sorts of objects, not on individual pods. Footnote 29: Kubernetes documentation: _Horizontal Pod Autoscaling_ [https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) Like Virtual Kubelet, EdgeNet enables the deployment of workloads from local clusters on remote clusters, but EdgeNet handles this through an intermediate cluster between the local and remote clusters, called the _Federation Manager_. When a tenant, using its local cluster, makes a deployment in federation scope, the Federation Manager creates the deployment on the remote cluster on behalf of the tenant, as we describe briefly in Sec. 5.2.4. Some of Liqo's extensions to Virtual Kubelet start to tackle some of the concerns that would arise in a multi-tenant federation, such as collisions between the names of namespaces generated in local clusters and in remote clusters. Liqo's solution is a naming scheme that ensures that the name of a namespace used by a workload will be unique in the remote cluster in which it is deployed.30 However, the same workload risks running in namespaces with different names in different clusters, which can itself lead to problems. EdgeNet, by contrast, generates globally unique names that avoid collisions, and a workload runs in namespaces that carry the same name on all clusters to which it is deployed. Footnote 30: Liqo documentation: _Namespace Offloading_ [https://docs.liqo.io/en/v0.7.0/usage/namespace-offloading.html](https://docs.liqo.io/en/v0.7.0/usage/namespace-offloading.html) The other framework that provides for cloud-edge communication and significant scaling is Arktos, but we have been unable to determine whether federation is involved. Its stated aim is to achieve a single regional control plane to manage 300,000 nodes that multiple tenants will share.31 Footnote 31: Arktos documentation: _Large Scalability_ [https://github.com/CentaurusInfra/arktos#large-scalability](https://github.com/CentaurusInfra/arktos#large-scalability) ## 4 Design Decisions Our vision for EdgeNet's multitenancy framework is to promote a future in which the CaaS service model can thrive, particularly at the network edge. We have made nine design decisions, listed below, to support this vision. The first six were discussed in relation to related work in the previous section, and the latter three are discussed in this section. The implementation details are provided in the Architecture section that follows (Sec. 5). * **Multitenancy approach.** EdgeNet obtains the lower overhead offered by a _single-instance native_ approach to multitenancy, compromising on the isolation that would be offered by a _multi-instance_ one (Sec. 3.1). 
* **Customization approach.** We mitigate the customization limitations that stem from the single-instance approach through the use of hierarchical namespaces (Sec. 3.2). * **Consumer and vendor tenancy.** We design EdgeNet to support both the _consumer_ and _vendor_ forms of tenancy (Sec. 3.3). * **Tenant resource quota.** EdgeNet incorporates a control mechanism to manage the allocation of resource quotas in a hierarchical tenancy structure, allowing tenants to grant quotas to their subtenants and recoup those quotas from them (Sec. 3.4). * **Variable slice granularity.** Considering that there is no ideal granularity at which to slice a compute cluster in order to deliver resources to tenants, we allow an EdgeNet cluster to be sliced into individual compute nodes or at a sub-node-level granularity (Sec. 3.5). * **Federation support.** Our framework allows each EdgeNet cluster to receive the workloads of tenants from other EdgeNet clusters with which it is federated, while avoiding name collisions by generating object names that are unique to cluster and tenant (Sec. 3.6). * **Kubernetes custom resources.** For ease of integration into existing systems and ease of adoption by users, we implement EdgeNet using the Kubernetes _custom resources_ feature, rather than creating a wrapper around Kubernetes or forking the Kubernetes code (Sec. 4.1). * **Lightweight hardware virtualization.** We compensate for the loosened isolation of workloads in the single-instance native approach through the use of lightweight hardware virtualization that is optimized for running containers (Sec. 4.2). * **External authentication.** In a federated multitenancy environment, users will need to authenticate with remote clusters, and for that reason EdgeNet adopts an authentication method that is external to any individual cluster (Sec. 4.3). ### Kubernetes custom resources Kubernetes' custom resource feature32 allows new entities to be added that, by the fact of their presence, extend the standard Kubernetes API, thereby maintaining backward compatibility with tools and interfaces that are familiar to users. By building our EdgeNet framework in this way, instead of as a wrapper around Kubernetes or as a separate system that interacts with Kubernetes, we increase the chances that the framework will be compatible with a variety of Kubernetes distributions. For example, we have successfully tested and run the EdgeNet framework as an extension of k3s,33 a lightweight certified Kubernetes distribution for IoT and edge computing. Footnote 32: Kubernetes documentation: _Custom Resources_ [https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) Footnote 33: K3s [https://k3s.io/](https://k3s.io/) We have containerized the EdgeNet extensions and we provide them in the form of public Docker images and configuration files. The core Kubernetes code remains untouched, and there is no need to recompile any existing code that runs a cluster. Any cluster administrator can deploy the extensions to their cluster with a single _kubectl apply_ command, without the need to bring down the cluster or interrupt its work in any way. Aside from the choice of Kubernetes and of Kubernetes custom resources, all of our other design decisions should, in principle, apply to enabling multitenancy in any other container orchestration tool. 
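To give a concrete sense of what this looks like in practice, here is a minimal sketch of a tenant declared as a custom resource; the apiVersion and field names are illustrative assumptions based on the repository layout cited above, not EdgeNet's exact schema.

```yaml
# Hypothetical sketch of an EdgeNet Tenant custom resource; field names are assumptions.
apiVersion: core.edgenet.io/v1alpha1
kind: Tenant
metadata:
  name: acme-org                 # must be unique among the cluster's namespaces (Sec. 5.1)
spec:
  fullName: ACME Organization    # illustrative organization field (see Sec. 5.7)
  admin: alice@example.com       # illustrative owner field
  clusterNetworkPolicy: true     # enable the cluster-level network policy (Sec. 5.1)
```

Because this is an ordinary API object, it is created with the same `kubectl apply -f tenant.yaml` workflow as any built-in resource, which is exactly the backward compatibility that custom resources are meant to preserve.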
### Lightweight hardware virtualization In the context of edge computing, the choice of virtualization technology, between hypervisors, which provide the best isolation, and containers, which are lightweight [59], is a longstanding discussion. We prioritize lower overhead; in so doing, we favor enhanced performance over delivering the best isolation [31]. A native framework with operating-system-level virtualization satisfies these requirements, but it presents security concerns having to do with containers sharing the same kernel. We want to offer each tenant the security of its own guest kernel, which hardware virtualization provides, but without going so far as to adopt a multi-instance approach that would negate the performance advantages of containers over VMs. Fortunately, this is possible through the use of lightweight virtual machines, which offer the isolation benefits of hardware virtualization while offering near-container-level performance. Our multitenancy framework therefore adopts a single-instance native approach with lightweight hardware virtualization. We follow earlier work [29, 47] that has recommended the Kata runtime34 for providing isolation between containers in a multitenant environment [24, 39, 9, 58]. Kata spawns a lightweight VM that is optimized to run containers, delivering near-container-level performance [24, Fig. 5] and better isolation than OS-level virtualization. Footnote 34: Kata Containers [https://katacontainers.io/](https://katacontainers.io/) Fig. 6 depicts three methods for workload isolation: virtual machines, Docker containers, and Kata containers. For each method, we consider a single workload per isolation unit, and weigh the isolation each method provides against the overhead that it introduces. One workload per virtual machine provides the best isolation among the three while introducing high overhead. Containerization, with one workload per container, lowers this overhead, although it diminishes the isolation. The Kata method falls between VMs and containers in terms of isolation and overhead, as a containerized single workload runs in a lightweight virtual machine. Tenants who require better isolation and performance at the same time can obtain these using the _slice_ software entity in our framework. As described in Sec. 5.4, this entity provides a tenant with the option of selecting container runtimes on an isolated subcluster, so that the tenant can select one that meets its application requirements. ### External Authentication A tenant's users must authenticate themselves in order to access the resources that they are authorized to access. For multitenant CaaS to run at scale, it is not feasible to require users to have individual accounts at every different cluster location where they will deploy their workloads [15]. Instead, authentication should be managed by an integrated identity management system. For example, an identity federation that consists of multiple identity providers, using OpenID Connect (OIDC)35 running on top of OAuth 2.036 as the authentication method, can support large-scale federations. With this in mind, EdgeNet uses this type of authentication (see Sec. 5.8). Footnote 35: OpenID Connect [https://openid.net/connect/](https://openid.net/connect/) Footnote 36: OAuth 2.0 [https://oauth.net/2/](https://oauth.net/2/) 
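As a hedged illustration of how such external authentication commonly surfaces on the client side, the kubeconfig fragment below delegates token acquisition to the community kubelogin plugin; the issuer URL and client ID are placeholders, and EdgeNet's production setup may differ.

```yaml
# Sketch of an OIDC user entry in a kubeconfig; all values are placeholders.
users:
- name: alice@example-idp
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login                                   # kubelogin plugin
      - get-token
      - --oidc-issuer-url=https://idp.example.com    # placeholder identity provider
      - --oidc-client-id=edgenet                     # placeholder client ID
```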
Footnote 37: EdgeNet multitenancy software entities: Principal custom controllers [https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/controller/core/v1alpha](https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/controller/core/v1alpha), Assistant custom controllers [https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/controller/registration/v1alpha](https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/controller/registration/v1alpha), Admission control webhook [https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/admissioncontrol](https://github.com/EdgeNet-project/edgenet/tree/v1.0.0-alpha.5/pkg/admissioncontrol) ## 5 Architecture Our EdgeNet architecture has been conceived around the design decisions articulated in Sec. 4, with the aim of introducing as little overhead as possible while making Kubernetes ready for the edge. As a reminder, our main design decision has been to take a single-instance native approach, meaning that tenants share a cluster's control plane components and compute nodes, rather than each tenant acquiring its own control plane components and compute nodes. To compensate for the diminished isolation that comes with sharing the same cluster, EdgeNet uses lightweight VMs to isolate workloads while retaining low overhead. The architecture of our EdgeNet multitenant CaaS framework is illustrated in Fig. 7. It is designed as a set of custom resources and custom controllers that extend Kubernetes from within. The framework consists of six principal new entities:37 * _Tenant_ is the fundamental entity that isolates a tenant from other tenants (Sec. 5.1). * _Subsidiary Namespace_ is an isolated environment created by a tenant (Sec. 5.2). * _Tenant Resource Quota_ controls a tenant's use of resources (Sec. 5.3). * Two entities, _Slice_ and _Slice Claim_, allow sub-clusters isolated from multitenant workloads to be dynamically reserved, which we call node-level slicing (Sec. 5.4). * _Admission Control Webhook_ enforces custom policies38 such as employing Kata Containers for multitenant workloads (Sec. 5.5). Footnote 38: Kubernetes documentation: _Dynamic Admission Control_ [https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) These are assisted by new entities that facilitate cluster and tenant management: _Role Request_ (Sec. 5.6), and _Tenant Request_ and _Cluster Role Request_ (Sec. 5.7). Our architecture also covers user authentication via existing mechanisms (Sec. 5.8). Aside from these, it provides cluster operators with configuration files in YAML format that can be carefully customized, which define runtime class39 and predefined role resources. Footnote 39: Kubernetes documentation: _Runtime Class_ [https://kubernetes.io/docs/concepts/containers/runtime-class/](https://kubernetes.io/docs/concepts/containers/runtime-class/) ### Tenant In the context of the namespace structure maintained by the EdgeNet framework, the _Tenant_ entity is a controller that acts at the top level of the hierarchy: creating, updating, and deleting the core namespaces of cluster-scoped tenants, which are the ones that are admitted into the cluster by the cluster's administrator. Here, we describe the _Tenant_ entity, while Sec. 5.2 describes the _Subsidiary Namespace_ entity, which acts lower down in the hierarchy, on the subtenants that are admitted either by top-level tenants or, recursively, by subtenants. The roles of these two controllers are shown in Fig. 8. 
Figure 6: **Methods for isolating workloads.** The dashed vertical lines distinguish one tenant from another. Each thick blue horizontal line designates a node. Figure 7: Architectural overview of the EdgeNet multitenant CaaS framework implemented on Kubernetes. Much of the architecture is built upon native Kubernetes (blue) and third-party software components (green). Our innovation is in the control plane, which consists of resources and controllers (yellow). This allows multiple tenants (left) to make use of the same control plane (center) via the API server to create workloads (right). What is more, our contributions enable the creation of two types of worker nodes: shared (top right) and reserved to a node-level slice (bottom right). Upon inclusion in a slice, a worker node's type switches to reserved, reverting to shared after slice termination. Multitenant workloads (top right) share the compute resources of shared worker nodes, yet each is isolated from the others through hardware virtualization: lightweight virtual machines that are optimized for running containers, called Kata Containers. Every pod has its own lightweight VM-based sandbox for isolation, and the container(s) defined in a pod specification run in the virtual machine tied to that pod. This single-instance shared approach eliminates the overhead that would be introduced by employing conventional virtual machines to isolate single-tenant clusters from each other, an overhead related not only to the worker nodes but also to the control plane. That being said, worker nodes in a slice that is dynamically created by a tenant (bottom right) are isolated from multitenant workloads, hence providing the tenant with container runtime selection. In this way, the tenant can make the most of the advantages of containerization, such as lower overhead and shorter creation and startup times. At the bottom right of the figure, several worker nodes are subclustered in Tenant A's slice, each hosting that tenant's workloads, for which the tenant employs Kata Containers as well as another container runtime like runC. The _Tenant_ entity handles the creation of a tenant environment in a cluster. It starts by generating a namespace for this tenant, using a tenant name supplied by the tenant and checked for uniqueness among the cluster's namespaces. Because the tenant will be able to create its own hierarchy of namespaces rooted at this namespace, we distinguish this one, which will be at the root of any sub-tree that the tenant creates, by calling it the tenant's _core namespace_. The controller also applies four labels to the namespace: * _kind=<namespace-type>_, which is _core_ in this case; * _tenant=<tenant-name>_, the name supplied by the tenant; * _tenant-uid=<tenant-uid>_, a locally-generated unique identifier for the tenant; * _cluster-uid=<cluster-uid>_, the UID of the _kube-system_ namespace. 
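Concretely, the resulting core namespace might look like the following sketch, in which the tenant name and the UID values are, of course, illustrative:

```yaml
# Sketch of a tenant core namespace as created by the Tenant controller;
# the name and UID values are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: acme-org                                        # the name supplied by the tenant
  labels:
    kind: core                                          # namespace type
    tenant: acme-org                                    # tenant name
    tenant-uid: 3a1f9c2e-7d42-4b2a-9c3e-8f1d2a6b4c01    # locally-generated tenant UID
    cluster-uid: 5f1b2c3d-0e6a-47b8-b1c4-2d9e8a7f6c53   # UID of the kube-system namespace
```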
UIDs are defined in Kubernetes as being 128-bit-long universally unique identifiers [40],40 and the Kubernetes community suggests using the UID of the _kube-system_ namespace as a cluster identifier.41 The labels allow the tenant namespaces to be consumed locally by policies and other entities. This labeling model is also required by the inter-cluster object propagation mechanism. Footnote 40: Kubernetes documentation: _Object Names and IDs: UIDs_ [https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids) Footnote 41: Cluster ID API discussion in the Kubernetes Architecture SIG mailing list [https://groups.google.com/g/kubernetes-sig-architecture/c/mVGobfD4rP/ym/uJEVvsinAAJ](https://groups.google.com/g/kubernetes-sig-architecture/c/mVGobfD4rP/ym/uJEVvsinAAJ) Each tenant has an owner who has control over the tenant and its resources, including any subnamespaces that the tenant might create. Having created the core namespace, the _Tenant_ entity uses the Kubernetes role-based access control (RBAC) mechanism to grant this control, while at the same time limiting the tenant owner's control to the scope of its core namespace, so that it may not interfere with other tenants' namespaces. The _Subsidiary Namespace_ entity will be responsible for extending the scope of the owner's control to the subnamespaces. With their control over the core namespace, the owner can manage the tenant by, among other things: admitting users; granting roles, which are sets of permissions, to those users; and deploying workloads. Kubernetes' network policies allow pod communication to be confined to a namespace or set of namespaces by using labels. In our multitenancy framework, the policies consume the UID labels, as specified above, attached to tenant namespaces. Since tenants have complete authorization over their network policies, an authorized user can, wittingly or not, misconfigure network policies in a namespace, resulting in security threats. To overcome this vulnerability, we let a tenant enable or disable a cluster-level network policy in the tenant specification, which confines the tenant's namespaces thanks to VMware's Antrea.42 Footnote 42: Antrea [https://antrea.io/](https://antrea.io/) Figure 8: **Namespace Hierarchy in EdgeNet.** EdgeNet's multitenancy framework provides two principal controllers for managing its namespace hierarchy. The _Tenant_ controller creates, updates, and deletes the tenant core namespaces at the top level of the hierarchy, while the _Subsidiary Namespace_ controller handles all namespaces further down in the hierarchy. In this example, \(a\) and \(b\) are tenant core namespaces, directly under the root of the hierarchy, \(r\), which is not itself a namespace; the subsidiary namespaces are \(aa\), \(ab\), \(aba\), \(abb\), and \(ba\). Kubernetes' initial namespaces, default (\(d\)), kube-node-lease (\(knl\)), kube-public (\(kp\)), and kube-system (\(ks\)), are not included in the hierarchy and are not managed by these controllers. ### Subsidiary Namespaces Authorizations are issued hierarchy-wise, establishing a chain of accountability. In other words, the permissions of a tenant owner to use the system are granted in the tenant's core namespace, applying to all of its hierarchical namespaces. Each individual user in a tenant, in turn, is authorized by the tenant owner. According to the permissions granted, the owner can create different roles in different subnamespaces as needed. For example, an owner can grant some users administrative rights to approve other users in core and subnamespaces. As their permissions are limited to their own hierarchy tree, tenants cannot interfere with other tenants' environments. A tenant, at the same time, can use the system as if it had the authorization to create namespaces directly, thus having a relatively customizable environment. 
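To make this permission model concrete, the sketch below shows the standard Kubernetes RBAC shape of such a grant; the user and role names are illustrative assumptions, not EdgeNet's exact definitions.

```yaml
# Sketch of a namespace-scoped grant of tenant-owner rights; names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-owner
  namespace: acme-org                 # the tenant's core namespace
subjects:
- kind: User
  name: alice@example.com             # the tenant owner
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edgenet:tenant-owner          # hypothetical predefined role
  apiGroup: rbac.authorization.k8s.io
```

Although the roleRef points at a ClusterRole, a RoleBinding grants that role's permissions only within its own namespace, which is precisely the confinement to the core namespace described above.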
The _subsidiary namespaces_ custom resource, also known as _subnamespaces_, is a software entity through which tenants can create Kubernetes namespaces without having the authorization to do so directly. Subnamespaces are indispensable for realizing the key features of our framework that are described in this paper's introduction, as we see in our discussion of Modes, Inheritance, Naming Convention, and Federation, below. #### 5.2.1 Consumer and Vendor Tenancy The subnamespace entity relies on the parent-child relationship between the namespaces, starting from the core namespace of a tenant. Each subsidiary namespace can be both a parent and a child at the same time. The entity exists in one of two modes, _workspace_ or _subtenant_, corresponding to the two forms of tenancy: consumer and vendor. The sequence diagram in Fig. 9 sketches out how the workspace and subtenant modes differ in creating a child namespace. The hierarchical namespaces approach allows an organization to isolate workspaces for different products by generating a subnamespace per product. Each child namespace assigned to a product can, in the same way, be used to isolate its teams, for example, backend and frontend. Another real-world example consists in using subnamespaces to create isolated environments for multiple groups of students working on a laboratory experiment. Fig. 10 demonstrates a tree of namespaces in which a parent is blind to information about a child's namespace and its children. This shielding assists a tenant in subleasing the desired amount of resources to its customers, thus becoming a vendor. Accordingly, the customer of a vendor becomes a _subtenant_. A vendor can remove any of its subtenants when the customer-vendor relationship comes to an end. Figure 9: Sequence diagram of the _Subnamespace_ entity performing namespace and permission creation. It highlights how the workspace and subtenant modes differ in doing so. A key characteristic of subnamespaces is enabling the choice of either mode, workspace or subtenant, at any depth in the hierarchy. By extension, subnamespaces allow one subtenant to be created in a child namespace with the workspace mode and another to be created with the subtenant mode, as shown in Fig. 10. Not only can these two modes co-exist in the same subtree, but they also reinforce each other's benefits. Last but not least, a subsidiary namespace can also be formed so as to be propagated across federated clusters. If so, it generates object names that are unique to the originating cluster and tenant, to prevent name collisions during object propagation across the federation. Sec. 5.2.4 describes how our federation architecture functions. #### 5.2.2 Inheritance In the subnamespace specification, an authorized user can declare which objects are passed by inheritance from parent to child. The Kubernetes resource kinds that can currently be inherited are as follows: * Role-based access control (RBAC) objects: Roles and Role Bindings, which together adjust the permissions of users. 
* Network policies, which restrict a namespace to defined ingress/egress rules. * Limit ranges, which set resource constraints per pod. * Secrets, which keep sensitive information, such as credentials, to be consumed by pods. * Config Maps, which hold configuration to be used by pods. * Service Accounts, entities that allow applications and services to authenticate with the Kubernetes API. If RBAC objects are not inherited, the specification must include the owner of the subnamespace for management purposes. Further, it is possible to declare continuous inheritance, in which case the controller constantly syncs objects from a parent to its child. Note that a resource quota is not an entity subject to inheritance, so as to avoid overconsumption by a tenant, which could otherwise get around quotas by generating subnamespaces at will. The logic ensures that the aggregated child resource quotas cannot exceed their parent's initial resource quota, including for the core namespace. Each subnamespace creation taxes its parent's resource quota, so that the aggregation of resource quotas in the parent and child namespaces remains the same. In other words, a tenant's resource quota is a cake to be shared out, and each subnamespace gets a piece of the cake from its parent's cut. Figure 10: **Consumer and Vendor Tenancy in EdgeNet, showing workspace (w) and subtenant (s) modes.** EdgeNet uses its hierarchical namespace structure to build consumer and vendor tenancy. In this example, the namespace \(a\) belongs to a consumer tenant and the namespace \(b\) belongs to a vendor tenant that is reselling containers-as-a-service to its own customers. The _Subnamespace_ controller creates workspaces rooted at \(aa\) and \(ab\) for the **consumer tenant** by placing those namespaces into _workspace_ mode; the consumer tenant has visibility into those workspaces. The controller creates subtenants rooted at \(ba\) and \(bb\) for the **vendor tenant** by placing those namespaces into _subtenant_ mode; the vendor tenant does not have visibility into its subtenants. Note that the subtenant that owns the sub-tree rooted at \(bb\) does have visibility into its own workspaces at \(bba\) and \(bbb\). #### 5.2.3 Naming Convention The naming convention has been conceived so as to enable federation deployments. As mentioned in Sec. 5.1, a core namespace shares the same name as its tenant. Independent of its depth, a subnamespace follows the pattern _<subnamespace-name>-<hash>_, where we feed the hash function with the parent namespace and the subnamespace name. This naming convention reduces the chance of name collisions when creating subnamespaces. If a collision nonetheless occurs, the subnamespace object enters a failure state, indicating a collision status. This is vital to the interoperability of multiple clusters, since tenants or namespaces holding the same name are likely to occur in different clusters. Consequently, conflicts would inevitably arise while propagating objects, were there no adjustment mechanism such as the one described here. #### 5.2.4 Federation In our federation vision, each cluster, even before it is federated, is a multitenant cluster, making its worker nodes available to multiple tenants, and federation further opens the cluster to the workloads of tenants from other clusters. (As we have discussed in Sec. 3, this differs from the approach of the Liqo framework, based on Virtual Kubelet, in which clusters only achieve multitenancy by federating.) We have developed a proof-of-concept federation architecture with a prototype implementation, which works jointly with our multitenancy framework. The source code of the prototype is publicly accessible via our repository. We see each tenant gaining access to a federated set of clusters via what we might term a home cluster or local cluster. 
For example, a company that has developed an application that serves vehicles in several countries might need to deploy its workloads to the edge clusters of mobile operators in each of those countries, and it can do so via a cluster in its home country that is federated with these other clusters. To obtain access to a local cluster, it might contract with a cloud provider that has a commercial presence in its home country, leaving the cloud provider to manage the commercial relationships with the other providers in the federation. Information regarding the identity of the company and its contract with its local provider remains local, while only the workload-related objects necessary for the deployment of the application get propagated to remote clusters. Propagating as few objects as possible has three significant benefits: (1) it avoids replication of tenant information across clusters, thus reducing bandwidth consumption and unnecessary traffic; (2) it enhances data privacy and sovereignty and mitigates security risks; and (3) it significantly reduces the overhead that could stem from running a control plane or worker nodes per tenant at the scale of a federation. In EdgeNet, the deployment scope of any subnamespace can be set to either _federated_ or _local_. If federated, the subnamespace controller adds the UID of the kube-system namespace as a prefix to the namespace name, and this cluster UID is also fed into the hash function described just above (Sec. 5.2.3). This ensures the uniqueness of each name across all of the federated clusters. In our prototype federation, a tenant deploys its workloads to remote clusters by creating a _Selective Deployment_ [50] that targets the remote clusters using affinities, such as locations and connected devices.45 A manager entity, called the _Federation Manager_, is informed by the local cluster of federation-scoped Selective Deployments. When it receives one, it searches for remote clusters that satisfy the affinities, in order to deploy the workload there on behalf of the tenant. To move towards a production federation architecture, issues such as caching and scheduling will need to be tackled. Footnote 45: This is an extension to the _Selective Deployment_ mechanism of our previous work [50]. There, workloads could be deployed to remote nodes within a single geographically dispersed cluster. Now, workloads can be deployed to entire remote clusters within a federation of clusters. ### Tenant Resource Quota As described in Sec. 3.4, Kubernetes provides the ability to associate resource quotas with namespaces, but in the context of independent namespaces. Since our multitenancy framework extends Kubernetes namespaces to work in a hierarchical fashion, we need to extend the quota mechanism to take into account the dependency of each namespace on the namespaces above and below it in the hierarchy. The EdgeNet quota mechanism is designed to allow a given resource to be shared out between a namespace and its child namespaces, and the parent namespace to recoup each child's portion when it is relinquished. Child namespaces can in turn share out their quota with their children, and so on, recursively. Our framework covers the following resources: CPU, memory, local storage, ephemeral storage, and bandwidth, each accounted for individually.44 Footnote 44: Tenant resource quotas will be expanded to include other resources in the future, such as namespaces, pods, and configmaps. 
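For reference, the native mechanism that EdgeNet extends is the ResourceQuota object, which in stock Kubernetes constrains a single, independent namespace; the amounts below are illustrative.

```yaml
# A standard Kubernetes ResourceQuota applied to a tenant's core namespace;
# the amounts are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: acme-org
spec:
  hard:
    requests.cpu: "8"          # aggregate CPU requests allowed in the namespace
    requests.memory: 16Gi      # aggregate memory requests
    limits.cpu: "16"           # aggregate CPU limits
    limits.memory: 32Gi        # aggregate memory limits
```

EdgeNet's contribution, formalized next, is the controller logic that keeps a whole tree of such per-namespace quotas consistent with the tenant-level total.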
We model tenant resource quotas by representing the tree of a hierarchical namespace as a graph \(T=(V,E)\) composed of vertices \(V\) and parent-to-child edges \(E\). For our purposes, each vertex \(v\in V\) is a namespace, except for the root node. The tenant of a core namespace \(v\) is entitled to construct a subtree \(T_{v}\) rooted at that namespace. Denote by \(q(T)\) the resource quota of a tree \(T\); each namespace \(v\in V\) has a resource quota \(q(v)\). For simplicity, we assume here that there is a single quota covering all types of resources; in fact, different quotas can be set for different resources, such as CPU and memory. Let \(\sigma(v)=\{w_{1},w_{2},\ldots\}\subset V\) represent the subnamespaces of \(v\), and likewise let \(\sigma(w)=\{z_{1},z_{2},\ldots\}\subset V\) represent the subnamespaces of a namespace \(w\in\sigma(v)\). The hierarchical resource quota problem here is twofold. First, we must ensure that a tenant resource quota \(q(T_{v})\) is equal to the aggregated resource quota across all of the tenant's namespaces, which in this three-level example means \(q(T_{v})=q(v)+\sum_{w\in\sigma(v)}\bigl(q(w)+\sum_{z\in\sigma(w)}q(z)\bigr)\). Second, we must guarantee that the resource quota allocated to a subtree rooted at a namespace \(w\) is likewise equal to the aggregated resource quota across the namespaces of that subtree: \(q(T_{w})=q(w)+\sum_{z\in\sigma(w)}q(z)\). We solve this problem by partitioning resource quotas among parents and their children while keeping with the container orchestration tool's declarative approach. A tenant resource quota works by applying an identical resource quota, a Kubernetes resource, to the tenant's core namespace. Then, each subsidiary namespace in the core namespace takes its portion from that resource quota, as shown in Fig. 4a. As mentioned above, when resources are constrained, ensuring a fair share of them is essential. Static allocation of quotas, however, may lead to inefficient use of the resources. There are two sides to this problem. Quotas allocated to tenants may result in suboptimal utilization of a cluster's compute resources when some tenants consume less than their quotas. Likewise, quotas allocated statically by tenants to subnamespaces may lead to less-than-ideal use of the tenant resource quotas when some subnamespaces consume less than their quotas. Even though our system allows temporary additions to and removals from tenant resource quotas, as well as manual updating of subnamespace quotas, this solution cannot scale when there are many clusters. Sec. 7 introduces how we plan to address this problem. ### Slice and Slice Claim Two software entities enable node-level slicing: _slice_ and _slice claim_. A slice, a cluster-scoped entity, forms a subcluster of nodes, as its name signifies. A slice isolates the nodes within it from multitenant workloads once it is established. These nodes are chosen via a selector composed of fields that denote labels, the number of nodes, and the desired resources. A slice claim, on the other hand, is a namespaced entity that tenants may create for their subnamespaces. Nodes in a slice remain in the pre-reserved status until a subnamespace uses that slice. Once a subnamespace is bound to a slice, the multitenant workloads that run on the nodes in this slice are terminated within a grace period of a minute. That is to say, workloads created in that subnamespace are isolated from those of other tenants. 
Thus, the container runtime configuration within such subnamespaces becomes available to tenants.45 Regarding the termination grace period, we have set it to one minute by default, twice Kubernetes' default grace period of 30 seconds. However, providers can adjust this termination grace period according to their requirements. Footnote 45: Resource-constrained environments may compel CaaS to operate on bare metal. We will, therefore, assess the performance of Kata with a specific experiment setup described in Sec. 7. A slice claim has two working modes: _dynamic_ and _manual_. The dynamic mode permits a tenant to automatically create a slice if the resource quota in the slice claim's namespace is sufficient. In contrast, the manual mode prevents a slice claim from generating a slice even if the slice claim's namespace has an adequate resource quota; in this case, a cluster administrator must satisfy the tenant's request. This behavior can be desirable when nodes in a cluster are scarce. Fig. 11 depicts how a tenant can receive node-level isolation. We discuss the need for a daemon to improve isolation in Sec. 7. Figure 11: Sequence diagram of a tenant acquiring node-level isolation. For the dynamic mode, we assume the tenant has enough quota in the namespace. ### Admission Control Webhook An admission control webhook is a software entity that allows custom policies to be enforced. It can mutate and validate users' object operation requests. Such mutating and validating operations are critical to ensuring that users adhere to framework-specific policies. We enforce custom policies for subnamespaces, slices and slice claims, role requests, tenant requests, and cluster role requests, as well as pods. Kubernetes, by default, lets users pick the runtime classes that they desire for their pods. Likewise, in our framework, a tenant can select the container runtime for the containers running on the nodes in its slice. However, this is not the preferred behavior unless a tenant acquires entire nodes through node-level slicing. For the purpose of better isolation, we therefore constantly mutate the runtime class through admission control so as to employ Kata (see Sec. 4.2) for multitenant workloads. ### Role Request This feature facilitates permission management at namespace scope. Thanks to its design, this entity provides granular control over tenant users. A user can request a specific role in a core namespace or any subnamespace of a tenant. This role can be one of the cluster roles offered by the cluster provider or a role in that namespace. Once a request is made, an authorized user in that namespace can approve or deny it. That is to say, tenant owners and admins can delegate responsibilities to team leaders in child namespaces. When a tenant represents a large organization, delegation becomes crucial to facilitate management. ### Other Entities There are two more assistant entities. A _tenant request_ stands for tenant registration. A central administrator or a trust strategy, for example, credit card verification, can approve the establishment of tenants and their owners. In our implementation, a cluster admin may approve or deny a request. Another option, as mentioned above, is for a provider to integrate a credit-card-verification-like mechanism with our framework, avoiding manual cluster administration and thus supporting CaaS operation at the scale of many clusters. 
There are four pieces of information in the request: the organization, the owner, the tenant resource quota, if desired, and whether or not to apply a cluster-level network policy. A _cluster role request_ is an entity that allows a user to request a role at the cluster scope. This entity eases the shaping of a cluster administration team and encourages platform users to ask for the roles that they need. ### Authentication Our general design approach is to build, wherever possible, upon what is already available for Kubernetes, as we do by adopting OpenID Connect (OIDC)46 running on top of OAuth 2.047 as our authentication method. A feature that is still under development is to extend OIDC with Pinniped48 so as to access resources across clusters. This allows a user to authenticate once in order to access the namespaces and objects for which the user has access rights in all of the clusters to which the objects have propagated. Footnote 46: OpenID Connect [https://openid.net/connect/](https://openid.net/connect/) Footnote 47: OAuth 2.0 [https://oauth.net/2/](https://oauth.net/2/) Footnote 48: Pinniped [https://pinniped.dev/](https://pinniped.dev/) ## 6 Benchmarking This section analyzes the performance of our EdgeNet single-instance native Kubernetes multitenancy framework. One of our goals is to assess to what extent the native and multi-instance approaches are suitable for edge computing use cases. To this end, we compare our framework to single-cluster-per-tenant offerings, with the help of Rancher Kubernetes Engine (RKE)49 to automate cluster creation, and to the VirtualCluster [62] code, which realizes a multi-instance-based multitenancy framework. That is to say, to represent the multi-instance through multiple clusters approach, we pick RKE, which is widely known for installing Kubernetes; to represent the multi-instance through multiple control planes approach, we pick VirtualCluster, a Kubernetes working group framework that is described in the scientific literature [62]; and our own EdgeNet framework is single-instance. Footnote 49: RKE [https://rancher.com/products/rke](https://rancher.com/products/rke) Both RKE and VirtualCluster perform well when compute resources are nearly unlimited or when scalability with regard to the number of tenants is less of a concern. Compared to RKE, VirtualCluster is well adapted to address the issues of the single-cluster-per-tenant solution, such as high overhead. However, as we shall see, there is a tradeoff between performance and isolation, which means that existing solutions are not ideal for edge computing. We used the GENI infrastructure [44] to spawn four Ubuntu 20.04 LTS virtual machines with 8 CPUs and 16 GB of memory in order to conduct experiments with EdgeNet and VirtualCluster. Using these virtual machines, we created a Kubernetes v1.21.9 cluster consisting of one control plane node and three worker nodes. The control plane node is completely isolated from any workloads. For the VirtualCluster experiments, we reserved a worker node for running the manager, syncer, and agent components. Likewise, the per-VirtualCluster-tenant entities, which are the apiserver, etcd, and controller-manager, are deployed on a dedicated worker node. For the EdgeNet experiment, an isolated worker node was sufficient to run the entities. A separate worker node hosted monitoring tools in both cases. We used the default configuration settings for both frameworks, including the number of workers that process concurrently and the execution period that triggers the controller. 
We compared the frameworks' performance for tenant creation and for pod creation. For VirtualCluster tenant creation, inter-arrival times of 0, 8, 16, and 32 seconds were used for creating 2, 4, 8, 16, 32, and 64 tenants. For EdgeNet, inter-arrival times of 0, 2, 4, 8, 16, and 32 seconds were used for creating up to 10,000 tenants. (We discuss the reasons for the disparity in the number of tenants below.) For both frameworks, 1,250, 2,500, 5,000, and 10,000 pods were created. The timeout was two minutes for tenant creation and for pod creation separately. To measure the performance of the cluster-per-tenant method, the resources reserved for tenant entities (a virtual machine with 8 CPUs and 16 GB of memory) were divided evenly among four Ubuntu 20.04 LTS virtual machines with 2 CPUs and 4 GB of memory on GENI. Two CPUs were chosen because cluster provisioning repeatedly failed with VMs having a single CPU. We repeated measurements at least three times for each case.

### Tenant Creation

As discussed throughout the paper, besides security, overhead is a noteworthy factor in qualifying a multitenancy framework, especially for edge clouds. Our experiments measure a framework implementation's ability to handle simultaneous creation requests; the time it takes to create a tenant; the entities' resource consumption; and the consumption per tenant, if it exists. Each request is considered successful if the framework returns a success status within two minutes after the control plane receives the request.

#### 6.1.1 VirtualCluster

The experiments show a correlation between request inter-arrival time and tenant creation success rate. For example, with a 32 s inter-arrival time for 32 creation requests, the number of successfully created tenants ranges from 26 to 32; when the inter-arrival time is lowered to 8 s, the successes decrease to between 13 and 18, as shown in Fig. 12(a). It is possible that VirtualCluster's difficulties in handling simultaneous requests stem from an implementation issue that starves tenants of the compute resources necessary to establish their control planes in these circumstances. Similarly, as seen in Fig. 12(b), decreasing the request inter-arrival time increases the tenant creation time. At a 32 s inter-arrival time, the median creation time is 76 s; put another way, it would take more than an hour to create 128 tenants. Furthermore, as the figure shows, the creation time fluctuates more widely as the inter-arrival time decreases. The most critical scaling weakness for VirtualCluster is that every tenant introduces additional overhead in terms of memory and CPU usage due to the per-tenant isolation of control plane components: apiserver, etcd, and controller manager. Fig. 12(c) presents the memory usage for 2, 4, 8, 16, and 32 tenants. For example, a thousand tenants would consume around 300 GB of memory just to be present in the cluster. This limitation ultimately affected our experiment, which could not reach a high number of tenants on the single node that we had reserved for tenant components; the maximum number of tenants that we could create stably was approximately 40. Moreover, once a tenant starts using the cluster, resource consumption increases further. We also noticed that a successful status message for the tenant control plane does not imply that all its components are present and functioning properly. Therefore, we only considered the cases where all control plane components per tenant were created successfully.
#### 6.1.2 EdgeNet

As opposed to VirtualCluster, EdgeNet supported the creation of 128 tenants simultaneously with an almost zero failure rate across experiments. It also scaled well beyond this number, stably generating 2,560 and 10,000 tenants when the request inter-arrival time was set to 2 s and 4 s respectively, as shown in Fig. 12(a). This is as far as one can go before running into Kubernetes' maximum namespace threshold50 of 10,000 in a cluster; if tenants are allowed to have around ten namespaces each, the number of tenants per cluster is limited to around 1,000.

Figure 12: Experiment results for VirtualCluster and EdgeNet.

Footnote 50: Kubernetes Scalability SIG documentation: _Kubernetes Scalability thresholds_ [https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md)

When requests arrive simultaneously, the median time for EdgeNet to create a tenant object in the control plane increases with the number of tenants: 38 ms, 48 ms, 63 ms, 68 ms, 106 ms, 175 ms, 216 ms, and 270 ms for 2, 4, 8, 16, 32, 64, 128, and 256 tenants, respectively. Another pattern of results is obtained with an inter-arrival time of 2 s: creation times are 11 ms for 1,280 tenants, 11 ms for 2,560 tenants, and we tested as far as 5,120 tenants, also clocking in at a median of 12 ms. For 10,000 tenants, the median value is still 12 ms when the inter-arrival time is set to 4 s. However, the maximum values increase as a function of the number of requests. This suggests that concurrent or numerous requests moderately saturate the shared API server, controller manager, and etcd. Thus, when arrivals are simultaneous, the average time to fully establish a tenant increases as follows: 500 ms for 2 tenants, going up to 937 ms for 128 tenants. But Fig. 12(b) reveals that the time to fully establish a tenant drops when requests are spread out in time. For 32 tenants, the median times are 11.5 s for simultaneous arrivals, 271 ms for 8 s, 274 ms for 16 s, and 274 ms for 32 s. EdgeNet obtains good results because it configures the state of the cluster rather than replicating components; it therefore generates no per-tenant overhead, as shown in Fig. 12(c). Given that the resource consumption of controllers is negligible, it is fair to state that there is no significant overhead in our framework. It takes EdgeNet approximately 1 min 41 s to create 128 tenants. Furthermore, EdgeNet's creation time can be shortened if needed by adjusting the number of workers and the running period. By default, the tenant controller uses two workers with a running period of 1 s, and the client's query-per-second (QPS) rate and burst size are set to 5 and 10, respectively. We tried altering the setup to have ten workers with a 500 ms running period, setting QPS and burst to 1,000,000 each. With these settings, it takes just 17 s to fully create 128 tenants, as seen in Fig. 13(a). The same figure shows that EdgeNet can handle simultaneous requests even when a cluster already holds around 1,000 tenants. The time it takes to establish all tenants eventually converges towards two minutes for both settings, thereby satisfying the success criterion we described at the beginning of Sec. 6.1. However, we noticed that it surpasses two minutes when there are more than 1,280 simultaneous requests. We presume that this may be due to client or control plane saturation resulting in the API server receiving delayed requests, which we need to investigate further.
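The QPS and burst values mentioned above are client-side rate-limiter settings. A hedged sketch of the token-bucket mechanism they control shows why the defaults throttle bursts of requests; the class below is an illustration of the general mechanism, not EdgeNet's or client-go's actual implementation.

```python
import time

class TokenBucket:
    """Client-side rate limiter of the kind the QPS/burst settings tune:
    each request consumes a token; tokens refill at `qps` per second,
    capped at `burst`."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, up to the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens < 1:
            # Block until the next token becomes available (approximate).
            time.sleep((1 - self.tokens) / self.qps)
            self.tokens = 1.0
        self.tokens -= 1

# With the default qps=5, burst=10, the 11th request of a burst already
# stalls for about 0.2 s; qps=burst=1,000,000 effectively disables the limiter.
limiter = TokenBucket(qps=5, burst=10)
for _ in range(11):
    limiter.acquire()
```

This is why raising QPS and burst, together with more workers and a shorter running period, removes the client as the bottleneck and shifts the limit to the control plane itself.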
Fig. 13(b) shows that EdgeNet with default settings can scale up to 10,000 tenants when the inter-arrival time is set to 4 s, but this takes more than ten hours in total.

#### 6.1.3 Comparison

Our findings on tenant creation at least hint that the better isolation provided by the multi-instance approach comes at the cost of a performance loss. What can be clearly seen is that EdgeNet surpasses VirtualCluster on scalability and speed. The peak number of tenants in a cluster is 10,000 for EdgeNet but around 40 for VirtualCluster, even with longer inter-arrival times. VirtualCluster offers a separate control plane per tenant, meaning an increase in base resource consumption, which is one of its major limitations. In contrast, EdgeNet can scale up to the cluster namespace threshold thanks to the native approach discussed in Sec. 3.1. Scalability is only one aspect of evaluating a framework's performance, especially for edge-specific workloads. Speed, stability, and overall reliability are also important. EdgeNet is considerably faster than VirtualCluster at tenant establishment for all inter-arrival times. Fig. 13(a) shows how optimizing the number of workers, running period, QPS, and burst can further improve EdgeNet's performance. Furthermore, when arrivals are not simultaneous, EdgeNet handles each request in milliseconds, whereas VirtualCluster takes seconds, even minutes. Speed is an important contributing factor to establishing many tenants concurrently or in sequence, but stability and reliability are also critical. VirtualCluster cannot adequately address simultaneous requests or requests with a short inter-arrival time, even if there are not many of them. Because of this issue, we observe a marked fall in the success rate of tenant establishment in such cases. We speculate that an implementation issue might be provoking resource starvation in tenant control planes. The time it takes to finish establishing all tenants is significantly more deterministic for EdgeNet than for VirtualCluster; EdgeNet exhibits almost no variation, irrespective of whether 128 tenants or 2,560 tenants are being created. However, EdgeNet's performance is tied to the control plane capacity as well. When many requests with little time between arrivals oversaturate the control plane, it has difficulty establishing all tenants properly. Nonetheless, EdgeNet can process 1,000 simultaneous requests, allowing each tenant to use around ten namespaces, as discussed above. The multi-instance approach limits VirtualCluster's scalability since the base resource consumption increases as tenant numbers grow; providing one control plane per tenant costs about 285 MB of memory each. This is a large memory footprint, especially for edge computing use cases, and it would also increase cloud computing expenses per tenant. Such overhead is not present in EdgeNet. VirtualCluster, with its multi-instance approach, falls on the isolation side of the isolation-performance trade-off, so some performance degradation is expected; keeping this in mind is crucial to interpreting the results correctly. Table 3 shows how much better VirtualCluster performs than a single-cluster-per-tenant system. Regardless, our framework produced a more robust outcome with a significant performance advantage. Based on the results, EdgeNet is better suited for edge computing, as well as for cloud-edge collaborations.
### Pod Creation

To examine the effect of VirtualCluster's syncer on performance, we measure the time that it takes to create a representation of a pod as an object. The syncer gathers pod objects from the tenant control plane and creates them in the host control plane, called a _supercluster_. The time that we measure is the time that it takes for pods to show a pending status in the supercluster. We focus on pods instead of containers because that reveals the frameworks' capabilities: container creation depends on many factors, such as available resources on the host, container runtime, and volume type. There are already many papers that evaluate these aspects for different container runtimes.

#### 6.2.1 VirtualCluster

Request inter-arrival time affects the number of successfully created pods similarly to how it affects the number of successfully created tenants. This is because, as described above, created tenants struggle to enter a healthy state when the inter-arrival time is short. Also, increasing the number of tenants that create pods degrades pod creation performance. One reason for this is the increased resource use in tenants' control planes. Pod creation success, for 16 or 32 tenants, is 100% for 1,250 and 2,500 pods; performance drops slightly for the creation of 5,000 pods; for 10,000 pods, a median of 9,431 are successfully created. The time it takes to create pods also increases with the number of pods created.

#### 6.2.2 EdgeNet

Up to 10,000 pods can be created simultaneously for up to 300 tenants in a deterministic manner. Performance for 10,000 pods started to degrade at 384 tenants due to saturation of the Kubernetes API server in processing requests from different core namespaces. Pod creation time does not spike, as there is no intermediate layer syncing the objects. A linear relationship is observed between the median creation times and the number of pods in Table 4.

Figure 13: Effect of the number of workers, running period, QPS, and burst on EdgeNet's performance, including comparison with the effect of time between arrivals.

#### 6.2.3 Comparison

In VirtualCluster, the syncer is an intermediate layer between the supercluster and tenant control planes that syncs pod objects. The disadvantage of this approach is that every pod operation introduces synchronization overhead, on both the supercluster and tenant sides. We should emphasize that every synchronization process delays a pod's becoming up and running. This may raise concerns about running VirtualCluster at scale; however, it can be mostly overcome by providing more computing resources to the framework, leading to higher costs. In contrast, EdgeNet allows tenants to directly make use of the same control plane to create pods. Its performance is directly related to the capabilities of the control plane. Thus, EdgeNet produces superior results: VirtualCluster takes at least three times as much time as EdgeNet to create 1,250, 2,500, 5,000, and 10,000 pods, respectively. Table 4 shows that VirtualCluster's synchronization of objects between the supercluster and tenant control planes causes significant delays, the price of its better isolation.

Table 3: Quick comparison of native and multi-instance approaches.

| | Max Tenants | Creation Time (s): Median | Creation Time (s): Max | Per Tenant Overhead (MB) |
| --- | --- | --- | --- | --- |
| EdgeNet | 10,000 | 0.616 | 1.741 | None |
| VC | 40 | 82 | 93 | 285 |
| RKE | 4 | 343 | 395 | 711 |

Max Tenants is the maximum number of successfully established tenants that can be stably reached with respect to the resources allocated for tenant creation. Creation time is the time it takes to establish a tenant for four simultaneous requests. Per-tenant overhead refers to the fixed amount of resources each tenant consumes on average, regardless of activity. VC consumption was measured using the pods that deliver control planes to tenants, and RKE consumption through the containers that provide clusters to tenants. Traditional VM-based overhead is not included for RKE.

Table 4: Time in seconds, median values, to create a representation of a pod as an object in the host control plane. The number of tenants used for the experiments is set to 32 for both VirtualCluster and EdgeNet.

| Number of pods | 1,250 | 2,500 | 5,000 | 10,000 |
| --- | --- | --- | --- | --- |
| VirtualCluster | 12 | 22.5 | 46 | 134 |
| EdgeNet | 2.5 | 6 | 12 | 27 |
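As an aside on methodology, time-to-pending measurements of this kind can be taken with a watch on the (super)cluster's API server. The sketch below is an illustration under our own assumptions (kubeconfig access, a single namespace), not the harness used for the experiments above.

```python
import time

from kubernetes import client, config, watch

def time_to_pending(namespace: str, expected: int, timeout_s: int = 120):
    """Watch a namespace and record when each pod first reports any phase,
    i.e., when it exists as an object in the control plane."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    seen = {}
    start = time.monotonic()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace,
                          timeout_seconds=timeout_s):
        pod = event["object"]
        if pod.metadata.name not in seen and pod.status.phase:
            seen[pod.metadata.name] = time.monotonic() - start
        if len(seen) >= expected:
            w.stop()
    return seen  # pod name -> seconds until it first appeared
```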
## 7 Future Work

Although the work presented in this paper goes a long way toward establishing a Kubernetes multitenancy framework that is suitable for the edge cloud, there is still considerable room for improvement. We describe areas for future work below.

**Resource Quota Optimization.** We plan to develop an optimization algorithm that distributes, in a best-effort fashion, underutilized tenant resource quotas among the tenants that consume all of their quotas, and surplus subnamespace resource quotas among subnamespaces of the same tenant that hit their quotas.

**Sub-node-level VIP Slicing.** In order for tenants to receive guaranteed access to resources that are both available and dedicated to them, node-level slicing is currently the only option. By adding a new point to the slice spectrum, it will become possible to do so at sub-node-level granularity. We will deploy a pod that consumes almost no real resources on a node to ensure that resources are secured. Priority classes will enable the reservation mechanism for pods.

**Storage.** Sharing storage among containers securely at the edge is a challenge due to the security issues discussed in the Rationale section (see Sec. 2.2). We plan to develop an agent that runs on every node and is ready, upon tenant demand, to prepare a disk partition that the tenant can use as a storage volume for its Kata containers.

**Security.** We plan to encrypt each tenant's data separately, across its namespaces and cluster-scoped resources. In this way, even if a tenant's data leaks, another tenant will not be able to read it.

**Customization.** Tenants cannot currently create cluster-scoped resources independently. We plan to develop a namespace-scoped custom resource that allows users to dynamically create cluster-scoped resources. This entity will use the namespace name as a prefix when generating cluster-scoped resource names, to avoid collisions.

**Subnamespaces.** A user may want to attach labels to subnamespaces. There is a risk, however, of a malicious actor breaching another tenant's network policies if labels are defined independently. For example, one can launch a brute-force attack to correctly guess the namespace labels used in a tenant's network policies. By using the name of the subsidiary namespace as a prefix, we plan to solve this issue.
Inheritance will then allow labels to be passed down from parent to child.

**Container Isolation.** Based on the reasons outlined at the end of our discussion of lightweight hardware virtualization (see Sec. 4.2), we will use a specific experiment setup to assess how Kata, gVisor,51 and runC perform. We will examine a setup in which Kata and gVisor run on a physical server while runC runs on a virtual machine created on that server.

Footnote 51: gVisor [https://gvisor.dev/](https://gvisor.dev/)

**Federation.** Containerized workloads need to move between edge clouds and clouds seamlessly, without any user intervention. By leveraging local authorities such as hierarchical federation managers, we aim to address issues of clusters trusting one another. We will develop a thoroughgoing federation architecture and will implement a fully developed federation framework that works in concert with our multitenancy framework. Once the implementation is done, we will assess its performance.

**Isolation Daemon.** Kubernetes garbage collection removes unused images. However, our slicing feature provides on-demand node-level isolation, so we need to instantly clean multitenant pods and container images off the node. We also consider clearing up _iptables_ rules during this process. An isolation daemon that runs on each node will be further developed to fulfill these operations.

**Additional Experiments.** Due to time and resource constraints, we compare a limited number of systems in the benchmarking section. Likewise, some variables of interest could not be studied. For the generalizability of our findings, we will conduct additional measurements addressing these two limitations.

## 8 Conclusion

We have presented EdgeNet, a Kubernetes-based multitenancy framework for Containers as a Service (CaaS) that, because it is native, i.e., serves all tenants through a single control plane and a single data plane per cluster, is a more efficient alternative to the current multi-instance manner in which cloud providers offer CaaS. Our benchmarking results demonstrated good scalability and response times for EdgeNet as compared to a leading multi-instance alternative. Although tenants in our framework are not isolated into separate control planes, their containers nonetheless receive the high level of isolation provided by Kata containers. For edge computing to succeed, we believe that security and isolation must be handled natively in software so that workloads can be moved between distant clusters within short delays. There are, of course, still many questions to be answered. What are the best ways to establish a robust CaaS federation composed of ubiquitous clusters offered by numerous providers? In order for clusters to join and leave such a federation seamlessly and securely, what trust mechanisms must be in place? How can users get reliable and transparent billing systems in such an environment? Anyone may avail themselves of our liberally-licensed, free, open-source code to enable multitenancy in a Kubernetes cluster. It is already in production use in the EdgeNet edge cloud testbed, whose tenants are research groups around the world, and it is particularly suited for edge clouds, where resources are limited, as well as for the cloud. Because of its federation features, we see this framework as paving the way for tenants to deploy their services across edge clouds operated by many different operators worldwide.
## 9 Acknowledgements EdgeNet got its start thanks to an NSF EAGER grant, and now benefits from VMware Academic Program grants via CAF America and the Fondation Sorbonne Universite, as well as a French Ministry of Armed Forces cybersecurity grant.
2307.11926
PartDiff: Image Super-resolution with Partial Diffusion Models
Denoising diffusion probabilistic models (DDPMs) have achieved impressive performance on various image generation tasks, including image super-resolution. By learning to reverse the process of gradually diffusing the data distribution into Gaussian noise, DDPMs generate new data by iteratively denoising from random noise. Despite their impressive performance, diffusion-based generative models suffer from high computational costs due to the large number of denoising steps. In this paper, we first observed that the intermediate latent states gradually converge and become indistinguishable when diffusing a pair of low- and high-resolution images. This observation inspired us to propose the Partial Diffusion Model (PartDiff), which diffuses the image to an intermediate latent state instead of pure random noise, where the intermediate latent state is approximated by the latent of diffusing the low-resolution image. During generation, Partial Diffusion Models start denoising from the intermediate distribution and perform only a part of the denoising steps. Additionally, to mitigate the error caused by the approximation, we introduce "latent alignment", which aligns the latents of low- and high-resolution images during training. Experiments on both magnetic resonance imaging (MRI) and natural images show that, compared to plain diffusion-based super-resolution methods, Partial Diffusion Models significantly reduce the number of denoising steps without sacrificing the quality of generation.
Kai Zhao, Alex Ling Yu Hung, Kaifeng Pang, Haoxin Zheng, Kyunghyun Sung
2023-07-21T22:11:23Z
http://arxiv.org/abs/2307.11926v1
# PartDiff: Image Super-resolution with Partial Diffusion Models

###### Abstract

Denoising diffusion probabilistic models (DDPMs) have achieved impressive performance on various image generation tasks, including image super-resolution. By learning to reverse the process of gradually diffusing the data distribution into Gaussian noise, DDPMs generate new data by iteratively denoising from random noise. Despite their impressive performance, diffusion-based generative models suffer from high computational costs due to the large number of denoising steps. In this paper, we first observed that the intermediate latent states gradually converge and become indistinguishable when diffusing a pair of low- and high-resolution images. This observation inspired us to propose the Partial Diffusion Model (PartDiff), which diffuses the image to an intermediate latent state instead of pure random noise, where the intermediate latent state is approximated by the latent of diffusing the low-resolution image. During generation, Partial Diffusion Models start denoising from the intermediate distribution and perform only a part of the denoising steps. Additionally, to mitigate the error caused by the approximation, we introduce 'latent alignment', which aligns the latents of low- and high-resolution images during training. Experiments on both magnetic resonance imaging (MRI) and natural images show that, compared to plain diffusion-based super-resolution methods, Partial Diffusion Models significantly reduce the number of denoising steps without sacrificing the quality of generation.

Image super-resolution, Deep learning, Generative models, Diffusion models, Magnetic Resonance Imaging.

## 1 Introduction

Single image super-resolution [1], which aims to generate a high-resolution (HR) image consistent with a low-resolution (LR) input, is an important problem in computer vision and image processing. However, it is an ill-posed and challenging task because a specific low-resolution input may correspond to multiple high-resolution outputs. Many recent studies employ powerful deep neural networks (DNNs) for image super-resolution, and promising performance has been achieved. Early DNN-based super-resolution methods train feed-forward DNNs to learn the mapping between low- and high-resolution images [2, 3] from substantial training pairs. Although feed-forward networks can achieve impressive results at low upsampling factors, they cannot reproduce high-fidelity details at high upsampling factors because of the highly complex distribution of output images conditioned on the input images. Deep generative models, including Generative Adversarial Networks (GANs) [4] and Variational Auto-encoders (VAEs) [5], have shown impressive results in data generation across various modalities, including image [4, 6, 7], video [8, 9], and audio [10, 11]. GANs in particular have been applied to various conditional image generation tasks, including image super-resolution [12, 13]. Though generating striking images, GANs often suffer from instability in model optimization and mode collapse. Recently, Denoising Diffusion Probabilistic Models (DDPMs) [14] have shown striking performance on image generation tasks and have been applied to image super-resolution [15, 16]. By modeling the reverse of the process that gradually diffuses the data distribution into Gaussian noise, diffusion models generate new data by iteratively denoising from random Gaussian noise.
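The core PartDiff idea can be summarized in a short sketch: instead of denoising from pure noise at step \(T\), the sampler forward-diffuses the upsampled low-resolution image to an intermediate step \(t^{*}\) and runs only the remaining reverse steps. The PyTorch sketch below is our hedged illustration of this mechanism under standard DDPM assumptions; the denoiser interface (including the `cond` keyword), the noise schedule handling, and the omission of latent alignment are our simplifications, not the paper's exact implementation.

```python
import torch

@torch.no_grad()
def partial_diffusion_sr(denoiser, lr_up, betas, t_star):
    """Sample a super-resolved image by denoising from an intermediate latent.

    lr_up:  low-resolution input upsampled to the target size
    betas:  1-D tensor of forward-process variances beta_1..beta_T
    t_star: intermediate step where reverse sampling begins (t_star << T)
    """
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    # Forward-diffuse the upsampled LR image to step t_star:
    #   q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)
    a_bar = alphas_bar[t_star]
    x = a_bar.sqrt() * lr_up + (1 - a_bar).sqrt() * torch.randn_like(lr_up)

    # Standard DDPM ancestral updates, but only t_star + 1 steps instead of T.
    for t in reversed(range(t_star + 1)):
        eps = denoiser(x, torch.tensor([t]), cond=lr_up)  # predicted noise
        mean = (x - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x
```

The savings follow directly from the observation above: if the LR and HR diffusion trajectories are indistinguishable beyond \(t^{*}\), the \(T - t^{*}\) noisiest steps can be skipped without changing the distribution being sampled.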
SR3 [15] and SRDiff [16] adopt diffusion models for image super-resolution and have shown their effectiveness.

Fig. 1: Super-resolution results of T2-weighted prostate MRI (top), natural (middle), and facial images (bottom) at several upscaling factors. The prostate cancer lesion is marked with the red arrow.
2302.05155
TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
This paper proposes a novel batch normalization strategy for test-time adaptation. Recent test-time adaptation methods heavily rely on the modified batch normalization, i.e., transductive batch normalization (TBN), which calculates the mean and the variance from the current test batch rather than using the running mean and variance obtained from the source data, i.e., conventional batch normalization (CBN). Adopting TBN that employs test batch statistics mitigates the performance degradation caused by the domain shift. However, re-estimating normalization statistics using test data depends on impractical assumptions that a test batch should be large enough and be drawn from i.i.d. stream, and we observed that the previous methods with TBN show critical performance drop without the assumptions. In this paper, we identify that CBN and TBN are in a trade-off relationship and present a new test-time normalization (TTN) method that interpolates the statistics by adjusting the importance between CBN and TBN according to the domain-shift sensitivity of each BN layer. Our proposed TTN improves model robustness to shifted domains across a wide range of batch sizes and in various realistic evaluation scenarios. TTN is widely applicable to other test-time adaptation methods that rely on updating model parameters via backpropagation. We demonstrate that adopting TTN further improves their performance and achieves state-of-the-art performance in various standard benchmarks.
Hyesu Lim, Byeonggeun Kim, Jaegul Choo, Sungha Choi
2023-02-10T10:25:29Z
http://arxiv.org/abs/2302.05155v2
# TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation

###### Abstract

This paper proposes a novel batch normalization strategy for test-time adaptation. Recent test-time adaptation methods heavily rely on the modified batch normalization, _i.e.,_ transductive batch normalization (TBN), which calculates the mean and the variance from the current test batch rather than using the running mean and variance obtained from source data, _i.e.,_ conventional batch normalization (CBN). Adopting TBN, which employs test batch statistics, mitigates the performance degradation caused by the domain shift. However, re-estimating normalization statistics using test data depends on impractical assumptions that a test batch should be large enough and be drawn from an i.i.d. stream, and we observed that the previous methods with TBN show a critical performance drop without those assumptions. In this paper, we identify that CBN and TBN are in a trade-off relationship and present a new _test-time normalization_ (TTN) method that interpolates the standardization statistics by adjusting the importance between CBN and TBN according to the _domain-shift sensitivity_ of each BN layer. Our proposed TTN improves model robustness to shifted domains across a wide range of batch sizes and in various realistic evaluation scenarios. TTN is widely applicable to other test-time adaptation methods that rely on updating model parameters via backpropagation. We demonstrate that adopting TTN further improves their performance and achieves state-of-the-art performance on various standard benchmarks.

## 1 Introduction

When we deploy deep neural networks (DNNs) trained on the source domain into test environments (_i.e.,_ target domains), the model performance on the target domain deteriorates due to the domain shift from the source domain. For instance, in autonomous driving, a well-trained DNN may exhibit significant performance degradation at test time due to environmental changes, such as camera sensors, weather, and region (Choi et al., 2021; Lee et al., 2022; Kim et al., 2022b). Test-time adaptation (TTA) has emerged to tackle the distribution shift between source and target domains during test time (Sun et al., 2020; Wang et al., 2020). Recent TTA approaches (Wang et al., 2020; Choi et al., 2022; Liu et al., 2021) address this issue by 1) (re-)estimating normalization statistics from the current test input and 2) optimizing model parameters in an unsupervised manner, using, for example, entropy minimization (Grandvalet & Bengio, 2004; Long et al., 2016; Vu et al., 2019) and self-supervised losses (Sun et al., 2020; Liu et al., 2021). In particular, the former focuses on the weakness of conventional batch normalization (CBN) (Ioffe & Szegedy, 2015) under domain shift at test time. As described in Fig. 1(b), when standardizing target feature activations using source statistics, which are collected from the training data, the activations can be transformed into an unintended feature space, resulting in misclassification. To this end, the TTA approaches (Wang et al., 2020; Choi et al., 2022; Wang et al., 2022) have heavily depended on the direct use of test batch statistics to fix such an invalid transformation in BN layers, called transductive BN (TBN) (Nado et al., 2020; Schneider et al., 2020; Bronskill et al., 2020) (see Fig. 1(c)). The approaches utilizing TBN showed promising results but have mainly been assessed in limited evaluation settings (Wang et al., 2020; Choi et al., 2022; Liu et al., 2021).
For instance, such evaluation settings assume large test batch sizes (_e.g.,_ 200 or more) and a single stationary distribution shift (_i.e.,_ a single corruption). Recent studies suggest more practical evaluation scenarios based on small batch sizes (Mirza et al., 2022; Hu et al., 2021; Khurana et al., 2021) or continuously changing data distributions during test time (Wang et al., 2022). We show that the performance of existing methods drops significantly once the impractical assumptions of their evaluation settings are violated. For example, as shown in Fig. 1(d), TBN (Nado et al., 2020) and TBN-applied methods suffer from a severe performance drop when the test batch size becomes small, while CBN is unaffected by the test batch size. We identify that CBN and TBN are in a trade-off relationship (Fig. 1), in the sense that each shows its strength where the other falls apart. To tackle this problem, we present a novel _test-time normalization (TTN)_ strategy that controls the trade-off between CBN and TBN by adjusting the importance of source and test batch statistics according to the _domain-shift sensitivity_ of each BN layer. Intuitively, we linearly interpolate between CBN and TBN so that TBN has a larger weight than CBN if the standardization needs to be adapted toward the test data. We optimize the interpolating weight after pre-training but before test time, which we refer to as the post-training phase. Specifically, given a pre-trained model, we first estimate the channel-wise sensitivity of the affine parameters in BN layers to domain shift by analyzing the gradients from the back-propagation of two input images: a clean input and its augmented counterpart (simulating an unseen distribution). Afterward, we optimize the interpolating weight using the channel-wise sensitivity, replacing BN with TTN layers. It is noteworthy that none of the pre-trained model weights are modified; we only train the newly added interpolating weight. We empirically show that TTN outperforms existing TTA methods in realistic evaluation settings, _i.e.,_ with a wide range of test batch sizes for single, mixed, and continuously changing domain adaptation, through extensive experiments on image classification and semantic segmentation tasks. TTN as a stand-alone method shows comparable results to the state-of-the-art methods, and combining TTN with the baselines boosts their performance in all scenarios. Moreover, TTN-applied methods flexibly adapt to new target domains while sufficiently preserving the source knowledge. No action other than computing per-batch statistics (which can be done simultaneously with inference) is needed at test time; TTN is compatible with other TTA methods without requiring additional computation cost. Our contributions are summarized as follows:

* We propose a novel domain-shift aware test-time normalization (TTN) layer that combines source and test batch statistics using channel-wise interpolating weights, considering the sensitivity to domain shift, in order to flexibly adapt to new target domains while preserving the well-trained source knowledge.

Figure 1: **Trade-off between CBN & TBN.** In conceptual illustrations (a), (b), and (c), the depicted standardization only considers making the feature distribution have a zero mean, disregarding making it have unit variance. When the source and test distributions are different, and the test batch size is large, (b) test features can be wrongly standardized when using CBN (Ioffe and Szegedy, 2015), but (c) TBN (Nado et al., 2020) can provide a valid output. (d) Error rates (\(\downarrow\)) on shifted domains (CIFAR-10-C). TBN and TBN-applied (TENT (Wang et al., 2020), SWR (Choi et al., 2022)) methods suffer from a severe performance drop when the batch size becomes small, while TTN (Ours) improves overall performance.

* To show the broad applicability of our proposed TTN, which does not alter training or test-time schemes, we show that adding TTN to existing TTA methods significantly improves their performance across a wide range of test batch sizes (from 200 to 1) and in three realistic evaluation scenarios: stationary, continuously changing, and mixed domain adaptation.
* We evaluate our method through extensive experiments on image classification using CIFAR-10/100-C and ImageNet-C (Hendrycks and Dietterich, 2018) and on semantic segmentation using Cityscapes (Cordts et al., 2016), BDD-100K (Yu et al., 2020), Mapillary (Neuhold et al., 2017), GTAV (Richter et al., 2016), and SYNTHIA (Ros et al., 2016).

## 2 Methodology

In this section, we describe our method, the _Test-Time Normalization_ (TTN) layer, whose design is suited to test-time adaptation (TTA) in practical usage, outside the large-batch-size and i.i.d. assumptions at test time. We first define the problem setup in Section 2.1 and present our proposed TTN layers in Section 2.2. Finally, we discuss how we optimize TTN layers in Section 2.3.

### Problem Setup

Let the train and test data be \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\) and the corresponding probability distributions be \(P_{S}\) and \(P_{T}\), respectively, where \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\) share the output space, _i.e.,_ \(\{y_{i}\}\sim\mathcal{D}_{S}=\{y_{i}\}\sim\mathcal{D}_{T}\). The covariate shift in TTA is defined as \(P_{S}(x)\neq P_{T}(x)\) where \(P_{S}(y|x)=P_{T}(y|x)\) (Quinonero-Candela et al., 2008). A model \(f_{\theta}\), with parameters \(\theta\), is trained with a mini-batch \(\mathcal{B}^{S}=\{(x_{i},y_{i})\}_{i=1}^{|\mathcal{B}^{S}|}\) from source data \(\mathcal{D}_{S}\), where \(x_{i}\) is an example and \(y_{i}\) is the corresponding label. During the test, \(f_{\theta}\) encounters a test batch \(\mathcal{B}^{T}\sim\mathcal{D}_{T}\), and the objective of TTA is to correctly manage test batches drawn from a different distribution. To simulate more practical TTA, we mainly consider two modifications: (1) various test batch sizes \(|\mathcal{B}^{T}|\), where a small batch size implies low latency when handling the test data online, and (2) multiple (\(N\)) target domains, \(\mathcal{D}_{T}=\{\mathcal{D}_{T,i}\}_{i=1}^{N}\). Under this setting, each test batch \(\mathcal{B}^{T}\) is drawn from one of the target domains in \(\mathcal{D}_{T}\), where \(\mathcal{D}_{T}\) may consist of a single target domain, multiple target domains, or a mixture of target domains.

### Test-Time Normalization Layer

We denote an input of a BN layer as \(\mathbf{z}\in\mathbb{R}^{BCHW}\), forming a mini-batch of size \(B\).
The mean and variance of \(\mathbf{z}\) are \(\mu\) and \(\sigma^{2}\), respectively, which are computed as follows:

\[\mu_{c}=\frac{1}{BHW}\sum_{b}^{B}\sum_{h}^{H}\sum_{w}^{W}\mathbf{z}_{bchw},\quad\sigma_{c}^{2}=\frac{1}{BHW}\sum_{b}^{B}\sum_{h}^{H}\sum_{w}^{W}(\mathbf{z}_{bchw}-\mu_{c})^{2}, \tag{1}\]

where \(\mu\) and \(\sigma^{2}\) are in \(\mathbb{R}^{C}\), and \(C\), \(H\), and \(W\) stand for the number of channels, the dimension of height, and that of width, respectively. Based on \(\mu\) and \(\sigma^{2}\), the source statistics \(\mu_{s},\sigma_{s}^{2}\in\mathbb{R}^{C}\) are usually estimated with an exponential moving average over the training data.

Figure 2: **Method overview.** (a) We introduce an additional training phase between pre-training and test time, called the (a-1) post-training phase. (b) Our proposed TTN layer combines per-batch statistics and frozen source statistics with an interpolating weight \(\alpha\), which is (b-1) optimized in the post-training phase and (b-2) fixed at test time.

In BN layers, input \(\mathbf{z}\) is first standardized with statistics \(\mu\) and \(\sigma^{2}\) and then scaled and shifted with learnable parameters \(\gamma\) and \(\beta\) in \(\mathbb{R}^{C}\). The standardization uses the current input batch statistics during training and the estimated source statistics \(\mu_{s}\) and \(\sigma_{s}^{2}\) at test time (Fig. 2(b)). To address domain shifts at test time, we adjust the source statistics by combining the source and the test mini-batch statistics (Singh and Shrivastava, 2019; Summers and Dinneen, 2019) with a learnable interpolating weight \(\alpha\in\mathbb{R}^{C}\) ranging over \([0,1]\). Precisely, TTN standardizes a feature with

\[\tilde{\mu}=\alpha\mu+(1-\alpha)\mu_{s},\quad\tilde{\sigma}^{2}=\alpha\sigma^{2}+(1-\alpha)\sigma_{s}^{2}+\alpha(1-\alpha)(\mu-\mu_{s})^{2}, \tag{2}\]

while using the same affine parameters, \(\gamma\) and \(\beta\). Note that we have a different mixing ratio \(\alpha_{c}\) for every layer and channel.
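A minimal PyTorch sketch of the TTN layer described by Eqs. 1 and 2 follows. It is our illustration rather than the authors' released code: the layer reuses the frozen source statistics and affine parameters of a pre-trained BatchNorm layer and adds only the per-channel interpolating weight \(\alpha\) (initialized here to zero; in practice it would be initialized from the prior \(\mathcal{A}\) described below).

```python
import torch
import torch.nn as nn

class TTN2d(nn.BatchNorm2d):
    """Test-time normalization layer (Eq. 2): standardization statistics
    interpolate between frozen source statistics (running_mean/var) and
    current test-batch statistics, with a per-channel weight alpha."""

    def __init__(self, num_features: int):
        super().__init__(num_features)
        self.alpha = nn.Parameter(torch.zeros(num_features))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        a = self.alpha.clamp(0, 1).view(1, -1, 1, 1)
        mu = z.mean(dim=(0, 2, 3), keepdim=True)                  # Eq. 1, batch mean
        var = z.var(dim=(0, 2, 3), unbiased=False, keepdim=True)  # Eq. 1, batch variance
        mu_s = self.running_mean.view(1, -1, 1, 1)                # frozen source stats
        var_s = self.running_var.view(1, -1, 1, 1)
        mu_t = a * mu + (1 - a) * mu_s                            # Eq. 2
        var_t = a * var + (1 - a) * var_s + a * (1 - a) * (mu - mu_s) ** 2
        z_hat = (z - mu_t) / (var_t + self.eps).sqrt()
        # The affine transform reuses the pre-trained gamma and beta unchanged.
        return self.weight.view(1, -1, 1, 1) * z_hat + self.bias.view(1, -1, 1, 1)
```

With \(\alpha = 1\) the layer reduces to TBN and with \(\alpha = 0\) to CBN, which is why both can be viewed as special cases of TTN.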
### Post Training

Like Choi et al. (2022), we introduce an additional training phase, the post-training (after pre-training but before testing), to optimize the mixing parameters \(\alpha\) in Eq. 2 (Fig. 2(a)). Note that all parameters except \(\alpha\) are frozen, and we have access to the labeled source data during post-training. We first obtain prior knowledge \(\mathcal{A}\) of \(\alpha\) by identifying which layers and their channels are sensitive to domain shifts. Then, we optimize \(\alpha\) with the prior knowledge and an additional objective term. The overall procedure is depicted in Fig. 3, and the pseudocode is provided in appendix A.3.

**Obtain Prior \(\mathcal{A}\).** To identify which BN layers and corresponding channels are sensitive to domain shifts, we simulate the domain shifts by augmenting1 the clean image, _i.e.,_ the original training data, and make a pair of (clean \(x\), domain-shifted \(x^{\prime}\)) images, where the semantic information is shared. To analyze in which layer and channel the standardization statistics should be corrected, we consider the standardized features \(\hat{z}^{(l,c)}\) of \(z^{(l,c)}\), for a channel index \(c\) at a layer \(l\), whose input is the clean \(x\). We compare \(\hat{z}^{(l,c)}\) to that of the domain-shifted one, \(\hat{z}^{\prime(l,c)}\) from \(x^{\prime}\). Since the pre-trained CBN uses the same \(\mu_{s}^{(l,c)}\) and \(\sigma_{s}^{(l,c)}\) for both inputs, the difference between \(\hat{z}^{(l,c)}\) and \(\hat{z}^{\prime(l,c)}\) is caused by the domain discrepancy between \(x\) and \(x^{\prime}\). We argue that if the difference is significant, the parameter at \((l,c)\) is sensitive to the domain shift, _i.e.,_ intensely affected by it, and hence the standardization statistics at \((l,c)\) should be adapted towards the shifted input.

Footnote 1: It is noteworthy that the post-training phase is robust to the choice of data augmentation types. Ablation study results and discussions are provided in the appendix B.4.

Drawing inspiration from Choi et al. (2022), we measure the domain-shift sensitivity by comparing gradients. Since the standardized feature \(\hat{z}\) is scaled and shifted by \(\gamma\) and \(\beta\) in each BN layer, we compare the gradients of the affine parameters \(\gamma\) and \(\beta\), \(\nabla_{\gamma}\) and \(\nabla_{\beta}\), respectively, to measure the dissimilarity of \(\hat{z}\) and \(\hat{z}^{\prime}\). As described in Fig. 3(a-1), we collect \(\nabla_{\gamma}\) and \(\nabla_{\beta}\) using the cross-entropy loss, \(\mathcal{L}_{\text{CE}}\). To this end, we introduce a _gradient distance score_ \(d^{(l,c)}\in\mathbb{R}\) for each channel \(c\) at layer \(l\) as follows:

\[s=\frac{1}{N}\sum_{i=1}^{N}\frac{g_{i}\cdot g_{i}^{\prime}}{\|g_{i}\|\,\|g_{i}^{\prime}\|}, \tag{3}\]

\[d^{(l,c)}=1-\frac{1}{2}\big(s_{\gamma}^{(l,c)}+s_{\beta}^{(l,c)}\big), \tag{4}\]

where \((g,g^{\prime})\) is \((\nabla_{\gamma}^{(l,c)},\nabla_{\gamma^{\prime}}^{(l,c)})\) and \((\nabla_{\beta}^{(l,c)},\nabla_{\beta^{\prime}}^{(l,c)})\) for \(s_{\gamma}^{(l,c)}\) and \(s_{\beta}^{(l,c)}\), respectively, \(N\) is the number of training data, and the resulting \(d^{(l,c)}\in[0,1]\). Once we obtain \(s_{\gamma}\) and \(s_{\beta}\) from Eq. 3, we conduct min-max normalization over all \(s_{\gamma}^{(l,c)}\) and \(s_{\beta}^{(l,c)}\) before computing Eq. 4. To magnify the relative difference, we take the square as a final step and denote the result as the prior \(\mathcal{A}\) (Fig. 3(a-2)):

\[\mathcal{A}=[d^{(1,\cdot)},d^{(2,\cdot)},\ldots,d^{(L,\cdot)}]^{2}, \tag{5}\]

where \(d^{(l,\cdot)}=[d^{(l,c)}]_{c=1}^{C_{l}}\).

Figure 3: **Two stages in the post-training phase.** (a) Given a pre-trained model, which uses CBN, and its training data, we obtain prior knowledge of each BN layer. (a-1) We first compute gradients of the affine parameters in each BN layer from clean \(x\) and augmented input \(x^{\prime}\) and obtain the gradient distance score (Eq. 4). (a-2) For BN layers with a larger distance score, we put more importance on current batch statistics than on source statistics (_i.e.,_ large \(\alpha\)), and we define the prior \(\mathcal{A}\) accordingly (Eq. 5). (b) After obtaining the prior \(\mathcal{A}\), we substitute BN layers from CBN to TTN. (b-1) Initializing \(\alpha\) with the prior \(\mathcal{A}\), (b-2) we optimize \(\alpha\) using CE and MSE losses (Eq. 6) with augmented training data \(x^{\prime}\).
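The prior computation of Eqs. 3-5 can be sketched as follows. This is a hedged illustration: since the per-channel gradients of \(\gamma\) and \(\beta\) are scalars, the cosine similarity in Eq. 3 reduces to the sign of their product, and a loader with batch size 1 matches the per-example sum of Eq. 3; device handling and the augmentation choice are our assumptions.

```python
import torch
import torch.nn.functional as F

def compute_prior_A(model, loader, augment):
    """Eqs. 3-5 (sketch): average the cosine similarity between BN affine
    gradients from clean and augmented inputs, convert to a distance score,
    min-max normalize, and square to obtain the prior A."""
    bns = [m for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)]
    s_gamma = [torch.zeros(m.num_features) for m in bns]
    s_beta = [torch.zeros(m.num_features) for m in bns]
    n = 0
    for x, y in loader:  # batch_size=1 matches the per-example sum in Eq. 3
        pair = []
        for inp in (x, augment(x)):  # clean / simulated-domain-shift pair
            model.zero_grad()
            F.cross_entropy(model(inp), y).backward()
            pair.append([(m.weight.grad.detach().clone(),
                          m.bias.grad.detach().clone()) for m in bns])
        for l, ((g, b), (gp, bp)) in enumerate(zip(*pair)):
            s_gamma[l] += torch.sign(g * gp)  # per-channel cosine of scalars
            s_beta[l] += torch.sign(b * bp)
        n += 1

    def minmax(t):
        return (t - t.min()) / (t.max() - t.min() + 1e-12)

    s_g = minmax(torch.cat(s_gamma) / n)  # normalize over all (l, c)
    s_b = minmax(torch.cat(s_beta) / n)
    d = 1 - 0.5 * (s_g + s_b)             # Eq. 4
    return d ** 2                          # Eq. 5: prior A, flattened over layers
```

The returned vector concatenates \(d^{(l,c)2}\) over all layers; in the subsequent step it initializes \(\alpha\) and anchors the MSE term of the post-training loss (Eq. 6).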
**Optimize \(\alpha\).** The goal of optimizing \(\alpha\) is to make the combined statistics correctly standardize the features when the input is sampled from an arbitrary target domain. After obtaining the prior \(\mathcal{A}\), we replace CBN with TTN layers while keeping the affine parameters. Then, we initialize the interpolating weights \(\alpha\) with \(\mathcal{A}\), which represents in which layers and channels the standardization statistics need to be adapted using test batch statistics (see Fig. 3(b-1)). To simulate distribution shifts, we use augmented training data. Expecting the model to make consistent predictions given either clean or augmented inputs, we use the cross-entropy loss \(\mathcal{L}_{\text{CE}}\). Furthermore, to prevent \(\alpha\) from moving too far from the initial value \(\mathcal{A}\), we use a mean-squared error loss \(\mathcal{L}_{\text{MSE}}\) between \(\alpha\) and the prior \(\mathcal{A}\), _i.e.,_ \(\mathcal{L}_{\text{MSE}}=\|\alpha-\mathcal{A}\|^{2}\), as a constraint. The total loss can be written as \(\mathcal{L}=\mathcal{L}_{\text{CE}}+\lambda\mathcal{L}_{\text{MSE}}\) (Eq. 6), where \(\lambda\) is a weighting hyperparameter (details are provided in appendices A.1 & B.1).

## 3 Experiments

In image classification, we evaluate TTN for corruption robustness in realistic evaluation settings, _i.e.,_ where the test batch size can vary and where the target domain can be stationary, continuously changing, or mixed with multiple domains. Additionally, we further validate TTN on domain generalization benchmarks incorporating natural domain shifts (_e.g.,_ changes in camera sensors, weather, time, and region) in semantic segmentation.

### Experimental Setup

Given models pre-trained on clean source data, we optimize the TTN parameter \(\alpha\) with the augmented source data in the post-training phase. Afterward, we evaluate our post-trained model on the corrupted target data. **Implementation details are provided in the appendix A.1.**

**Datasets and models.** We use the corruption benchmark datasets CIFAR-10/100-C and ImageNet-C, which consist of 15 types of common corruptions at five severity levels (Hendrycks and Dietterich, 2018). Each corruption is applied to test images of the clean datasets (CIFAR-10/100 and ImageNet). We use the training set of the clean dataset for post-training and the corrupted dataset for evaluation. As backbone models, we used WideResNet-40-2 (Hendrycks et al., 2019) trained on CIFAR-10/100, and ResNet-50 (He et al., 2016) trained on ImageNet. To validate our method in semantic segmentation, we conduct experiments on the Cityscapes (Cordts et al., 2016), BDD-100K (Yu et al., 2020), Mapillary (Neuhold et al., 2017), GTAV (Richter et al., 2016), and SYNTHIA (Ros et al., 2016) datasets, in accordance with the experimental setup for domain generalization proposed in RobustNet (Choi et al., 2021).

**Baselines.** To demonstrate that TTN successfully controls the trade-off between CBN and TBN, we compare TTN with (1) AdaptiveBN (Schneider et al., 2020), (2) \(\alpha\)-BN (You et al., 2021), and (3) MixNorm (Hu et al., 2021), which combine or take the moving average of the source and the test batch statistics with a pre-defined hyperparameter (_i.e.,_ a constant \(\alpha\)). The following baselines are suggested on top of TBN (Nado et al., 2020): (4) TENT (Wang et al., 2020) optimizes BN affine parameters via entropy minimization. (5) SWR (Choi et al., 2022) updates the entire model parameters considering the domain-shift sensitivity. (6) CoTTA (Wang et al., 2022) ensembles the outputs of augmented test inputs, updates the entire model parameters using a consistency loss between student and teacher models, and stochastically restores the pre-trained model.
We refer to TBN, (1), (2), and (3) as _normalization_-based methods (Norm), the others as _optimization_-based methods (Optim.), and denote the pre-trained model using CBN as 'source'.

**Evaluation scenarios.** To show that TTN performs robustly across various test batch sizes, we conduct experiments with test batch sizes of 200, 64, 16, 4, 2, and 1. We evaluate our method in three evaluation scenarios: _single_, _continuously changing_, and _mixed_ domain adaptation. In the single domain adaptation, the model is optimized for one corruption type and then reset before adapting to the subsequent corruption, following the evaluation setting from TENT and SWR. In the continuously changing adaptation (Wang et al., 2022), the model is continuously adapted to 15 corruption types (without resetting), which is more realistic because it is impractical to precisely indicate when the data distribution has shifted in the real world. Finally, to simulate a non-stationary target domain where various domains coexist, we evaluate methods in the mixed domain adaptation setting, where a single batch contains multiple domains. We use a severity level of 5 (Hendrycks and Dietterich, 2018) for all experiments. It is noteworthy that we use a single checkpoint of the TTN parameter \(\alpha\) for each dataset across all experimental settings.

### Experiments on Image Classification

Tables 1, 2, and 3 show error rates on corruption benchmark datasets in three different evaluation scenarios: single domain, continuously changing, and mixed domain adaptation, respectively. Note that the performance of normalization-based methods in the single (Table 1) and continuously changing (Table 2) settings is identical. Tables 4 and 5 show the adaptation performance on the source and class imbalanced target domains, respectively. **More results and discussions are provided in appendix B, importantly including results on ImageNet-C (B.5).**

**Robustness to practical settings.** In Tables 1, 2, and 3, TTN and TTN-applied methods show robust performance across test batch sizes from 200 to 1.
Compared with the normalization-based baselines, we demonstrate that TTN, which uses channel-wise optimized combining rates \(\alpha\), shows better results than defining \(\alpha\) as a constant hyperparameter, which can be considered a special case of TTN; TBN and \(\alpha\)-BN correspond to \(\alpha=1\) and \(\alpha=0.1\), respectively. More comparisons with different constant \(\alpha\) are provided in appendix B.2. It is noteworthy that TTN as a stand-alone method compares favorably with the optimization-based baselines in all three scenarios.

Table 1: **Single domain adaptation on corruption benchmark.** Error rate (\(\downarrow\)) averaged over 15 corruptions with severity level 5 using WideResNet-40-2 as a backbone, for each test batch size. We used the reported results of MixNorm with fixed parameters from the original paper, denoted with \(*\). In appendix B.3, we provide variants of TTN, which show stronger performance for small test batches.

Table 2: **Continuously changing domain adaptation on corruption benchmark.** Error rate (\(\downarrow\)) averaged over 15 corruptions with severity level 5 using WideResNet-40-2 as a backbone, for each test batch size. We omit the 'Norm' methods' results in this table since they are equal to those of Table 1. (Left block: CIFAR-10-C; right block: CIFAR-100-C.)

| Method | 200 | 64 | 16 | 4 | 2 | 1 | Avg. | 200 | 64 | 16 | 4 | 2 | 1 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source (CBN) | 18.27 | 18.27 | 18.27 | 18.27 | 18.27 | 18.27 | 18.27 | 46.75 | 46.75 | 46.75 | 46.75 | 46.75 | 46.75 | 46.75 |
| **Ours (TTN)** | 11.67 | 11.80 | 12.13 | 13.93 | 15.83 | 17.99 | 13.89 | 35.58 | 35.84 | 36.73 | 41.08 | 46.67 | 57.71 | 42.27 |
| CoTTA | 12.46 | 14.60 | 21.26 | 45.69 | 58.87 | 90.00 | 40.48 | 39.75 | 42.20 | 52.94 | 73.69 | 87.66 | 98.99 | 65.87 |
| TENT | 12.54 | 13.52 | 15.69 | 26.23 | 35.57 | 97.00 | 32.29 | 36.11 | 37.90 | 43.78 | 58.71 | 81.76 | 90.91 | 59.55 |
| TENT + **Ours (TTN)** | 11.44 | 11.60 | 12.08 | 16.14 | 18.36 | 22.40 | 15.33 | 43.50 | 37.60 | 38.28 | 44.60 | 45.29 | 80.63 | 49.82 |
| SWR | 11.04 | 11.53 | 13.90 | 23.99 | 34.02 | 90.00 | 30.75 | 34.16 | 35.79 | 40.71 | 85.18 | 80.55 | 99.03 | 62.56 |
| SWR + **Ours (TTN)** | 10.09 | 10.51 | 11.28 | 14.29 | 16.67 | 84.12 | 24.49 | 33.09 | 34.07 | 36.15 | 42.41 | 53.63 | 93.08 | 48.74 |
**Adopting TTN improves other TTA methods.** We compare optimization-based methods with and without TTN layers. Since TENT, SWR, and CoTTA optimize model parameters on top of using TBN layers, they also suffer from performance drops when the test batch size becomes small. Adopting TTN reduces the dependency on a large test batch size, _i.e.,_ makes them robust to small batch sizes, and even improves their performance when using large test batches. Furthermore, in the continual (Table 2) and mixed domain (Table 3) adaptation scenarios, TENT and SWR show higher error rates than in single domain (Table 1) adaptation. We interpret this as follows: because they update the model parameters based on the current output and predict the next input batch using the updated model, the model will not perform well if consecutive batches have different corruption types (_i.e.,_ mixed domain adaptation). Moreover, the error from the previous input batch propagates to the future input stream, and thus they may fall apart rapidly once they receive a strongly wrong signal, which can happen in continual adaptation (_i.e.,_ long-term adaptation without resetting). Applying TTN significantly improves their performance regardless of the evaluation scenario.
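Adopting TTN in an existing method amounts to replacing its BatchNorm layers while keeping every pre-trained weight. The sketch below illustrates this, reusing the hypothetical `TTN2d` class from the earlier sketch and a dict mapping BN module names to their slice of the prior \(\mathcal{A}\); it is our illustration, not the authors' conversion code.

```python
import torch.nn as nn

def convert_to_ttn(model: nn.Module, priors: dict) -> nn.Module:
    """Swap every BatchNorm2d for a TTN2d layer, copying the frozen source
    statistics and affine parameters and initializing alpha from the prior."""
    for name, module in list(model.named_modules()):
        if not isinstance(module, nn.BatchNorm2d) or isinstance(module, TTN2d):
            continue
        ttn = TTN2d(module.num_features)
        # Carry over gamma/beta and the source running statistics unchanged;
        # strict=False tolerates the extra 'alpha' parameter of TTN2d.
        ttn.load_state_dict(module.state_dict(), strict=False)
        ttn.alpha.data.copy_(priors[name])
        # Replace the module on its parent.
        parent = (model.get_submodule(name.rsplit(".", 1)[0])
                  if "." in name else model)
        setattr(parent, name.rsplit(".", 1)[-1], ttn)
    return model
```

A method such as TENT would then run unchanged on the converted model, optimizing the affine parameters while the TTN layers supply the interpolated standardization statistics.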
**TTN preserves knowledge on source domain.** In practice, data drawn from the source domain (or a merely different domain) can be re-encountered at test time. We used clean-domain test data in the single domain adaptation scenario to show how TTN and other TTA methods adapt to seen source-domain data (but unseen instances). As shown in Table 4, all baseline methods using TBN layers show performance drops even with large batch sizes. We can conclude that it is still better to rely on source statistics collected from the large training set than to use only current input statistics, even if the batch is large enough to obtain reliable statistics (_i.e.,_ 200). However, since TTN utilizes source statistics while leveraging the current input, TTN itself and TTN-adopted methods preserve the source knowledge well compared to the TBN-based methods. With a batch size of 200, we observe that combining the source and current test batch statistics outperforms the source model (see the 3rd row of Table 4).

**TTN is robust to the class imbalanced scenario.** Heavily depending on current test batch statistics is especially vulnerable when the class labels are imbalanced (Boudiaf et al., 2022; Gong et al., 2022). To simulate this situation, we sorted test images in class label order and then sampled test batches following the sorted order. In Table 5, we observe that TTN is more robust to the class imbalanced scenario than utilizing only test batch statistics (_i.e.,_ TBN).

Table 4: **Source domain adaptation.** Error rate (\(\downarrow\)) on the clean (source) test data for each test batch size.

Table 5: **Class imbalanced target domain.** Error rate (\(\downarrow\)) averaged over 15 corruptions of CIFAR-10-C with severity level 5 using WideResNet-40-2.
[Table 3: Mixed domain adaptation on corruption benchmark. Error rate (\(\downarrow\)) of mixed domains with severity level 5, using WideResNet-40-2 as a backbone, for each test batch size on CIFAR-10-C and CIFAR-100-C. We used the reported results of MixNorm with fixed parameters from the original paper and denoted them as \(*\).]

As explained in Section 3.5, we put more importance on CBN than on TBN in deeper layers, where semantic information is mainly handled; we can thus understand why TTN is less impacted by a skewed label distribution. ### Experiments on Semantic Segmentation We additionally conduct experiments on domain generalization (DG) benchmarks (Choi et al., 2021; Pan et al., 2018) for semantic segmentation, including natural domain shifts (_e.g.,_ Cityscapes\(\rightarrow\)BDD-100K), to demonstrate the broad applicability of TTN. Table 6 shows the results of evaluating the ResNet-50-based DeepLabV3+ (Chen et al., 2018) model trained on the Cityscapes training set, using the validation sets of real-world datasets such as Cityscapes, BDD-100K, and Mapillary, and of synthetic datasets including GTAV and SYNTHIA. We employ a test batch size of 2 for test-time adaptation in semantic segmentation. We observe that even when exploiting test batch statistics for standardization in BN layers (TBN), or updating the model parameters on top of TBN (TENT, SWR), does not improve model performance (_i.e.,_ performs worse than the source model), adopting TTN helps the model make good use of the strength of the test batch statistics. Implementation details and additional results are provided in the appendix A.1 and B.7, respectively. ### Ablation Studies **Prior \(\mathcal{A}\) regularizes \(\alpha\) to be robust to overall test batch sizes.** We conduct an ablation study on the importance of each proposed component, _i.e.,_ initializing \(\alpha\) with the prior \(\mathcal{A}\) and optimizing \(\alpha\) using the CE and MSE losses; the results are shown in Table 7.
Using \(\mathcal{A}\) for initialization and the MSE loss aims to optimize \(\alpha\) following the intuition we discussed in Section 2.3. Optimizing \(\alpha\) using the CE loss improves the overall performance, but without regularizing with the MSE loss, \(\alpha\) may overfit to large batch sizes (rows 2 & 3). Whether or not we initialize with \(\mathcal{A}\) does not make a significant difference, but \(\mathcal{A}\) provides a better starting point than random initialization, as seen by comparing the left and right of the 2nd row. We observe that when using the MSE loss, regardless of initialization with \(\mathcal{A}\), the optimized \(\alpha\) sufficiently reflects our intuition, resulting in a low error rate across all batch sizes (row 3). ### Visualization of \(\alpha\) Fig. 4 shows the visualization of the optimized \(\alpha\) for CIFAR-10 using WideResNet-40-2. We observe that \(\alpha\) decreases from shallow to deep layers (left to right), which means CBN is more active in deeper layers and TBN in shallower ones. As shown in Tables 4 and 6, CBN employing source statistics is superior to TBN when the distribution shift between source and target domains is small. Assuming that the \(\alpha\) we obtained is optimal, we can conjecture that CBN is more active (_i.e.,_ \(\alpha\) closer to 0) in deeper layers because the domain information causing the distribution shift has been diminished there. In contrast, TBN has a larger weight (_i.e.,_ \(\alpha\) closer to 1) in shallower layers, since the domain information induces a large distribution shift. This interpretation is consistent with the observations of previous studies (Pan et al., 2018; Wang et al., 2021; Kim et al., 2022) that style information mainly exists in shallower layers, whereas only content information remains in deeper layers.

\begin{table} \begin{tabular}{l|c|c|c|c|c} \hline **Method** (Cityscapes\(\rightarrow\)) & BDD-100K & Mapillary & GTAV & SYNTHIA & Cityscapes \\ \hline Source (Chen et al., 2018) & 43.50 & 54.37 & 43.71 & 22.78 & 76.15 \\ TBN & 43.12 & 47.61 & 42.51 & 25.71 & 72.94 \\ **Ours (TTN)** & **47.40** & **56.92** & **44.71** & **26.68** & **75.09** \\ TENT & 43.30 & 47.80 & 43.57 & 25.92 & 72.93 \\ + **Ours (TTN)** & **47.89** & **57.84** & **46.18** & **27.29** & **75.04** \\ SWR & 43.40 & 47.95 & 42.88 & 25.97 & 72.93 \\ + **Ours (TTN)** & **48.85** & **59.09** & **46.71** & **29.16** & **74.89** \\ \hline \end{tabular} \end{table} Table 6: **Adaptation on DG benchmarks in semantic segmentation.** mIoU (\(\uparrow\)) on four unseen domains with a test batch size of 2, using ResNet-50 based DeepLabV3+ as a backbone.

[Table 7: Ablation on each proposed component (initialization with the prior \(\mathcal{A}\); optimizing \(\alpha\) with the CE and MSE losses), reporting error rates across test batch sizes.]

## 4 Related Work Test-time adaptation/training (TTA) aims to adapt models towards test data to overcome the performance degradation caused by distribution shifts (Sun et al., 2020; Wang et al., 2020). There are other related problems: unsupervised domain adaptation (UDA) (Sun and Saenko, 2016; Ganin et al., 2016) and source-free domain adaptation (SFDA) (Liang et al., 2020; Huang et al., 2021; Liu et al., 2021).
Both UDA and SFDA have access to sufficiently large unlabeled target datasets, and their objective is to achieve high performance on that particular target domain. Unlike UDA and SFDA, TTA utilizes test data in an online manner. There are two key ingredients in recent approaches: adapting the standardization statistics in normalization layers and adapting the model parameters. **Normalization in Test Time.** Nado et al. (2020) suggested prediction-time BN, which uses test batch statistics for standardization, and Schneider et al. (2020) introduced adapting BN statistics by combining source and test batch statistics, taking the test batch size into account, to mitigate the intermediate covariate shift. In this paper, we refer to the former method as TBN. Similarly, You et al. (2021) and Khurana et al. (2021) mixed the statistics using a predefined hyperparameter. Also, Mirza et al. (2022) and Hu et al. (2021) adapted the statistics using a moving average while augmenting the input to form a pseudo test batch when only a single instance is given. The primary difference from the existing approaches is that we consider the channel-wise domain-shift sensitivity of BN layers to optimize the interpolating weights between CBN and TBN. Concurrently, Zou et al. (2022) proposed to adjust the standardization statistics using a learnable calibration strength and showed its effectiveness focusing on the semantic segmentation task. **Optimization in Test Time.** TENT (Wang et al., 2020), SWR (Choi et al., 2022), and CoTTA (Wang et al., 2022) updated model parameters while using TBN. TENT optimized the affine parameters in BN layers using entropy minimization while freezing the others. To maximize adaptability, SWR updated the entire set of model parameters, minimizing the entropy loss based on the domain-shift sensitivity. To stabilize adaptation in continuously changing domains, CoTTA used a consistency loss between student and teacher models and stochastically restored random parts of the pre-trained model. Liu et al. (2021) and Chen et al. (2022) suggested updating the model through contrastive learning. We focus on correcting the standardization statistics using the domain-shift-aware interpolating weight \(\alpha\). Similar to Choi et al. (2022), we measure the domain-shift sensitivity by comparing gradients. The principal difference is that we use channel-wise sensitivity when optimizing \(\alpha\) in post-training, while SWR used layer-wise sensitivity regularizing the entire model update in test time. ## 5 Conclusions This paper proposes TTN, a novel domain-shift-aware batch normalization layer, which combines the benefits of CBN and TBN, two approaches in a trade-off relationship. We present a strategy for mixing CBN and TBN based on the interpolating weight derived from an optimization procedure utilizing the sensitivity to domain shift, and show that our method significantly outperforms other normalization techniques in various realistic evaluation settings. Additionally, our method is highly practical because it can complement other optimization-based TTA methods. The oracle mixing ratio between CBN and TBN can vary depending on the domain gap. However, our proposed method employs a fixed mixing ratio during test time, where the mixing ratio is optimized before model deployment. If we could find the optimal mixing ratio according to the distribution shift during test time, we could expect further performance improvements. We consider this as future work.
In this regard, our efforts encourage this field to become more practical and inspire new lines of research. Figure 4: **Optimized \(\alpha\).** x- and y-axes indicate all channels in order from shallow to deep layers and the interpolating weight \(\alpha\), respectively. \(C_{l}\) denotes the channel size of layer \(l\). ## Reproducibility Statement To ensure the reproducibility of our method, we provide the experimental setup in Section 3.1. Moreover, the details on implementation and evaluation settings can be found in the appendix A.1 and A.2, respectively. The pseudocode for the overall training and testing scheme is provided in the appendix A.3. Together with related references and publicly available code, we believe our paper contains sufficient information for reimplementation. ## Acknowledgement We would like to thank Kyuwoong Hwang, Simyung Chang, and Seunghan Yang of the Qualcomm AI Research team for their valuable discussions.
2305.01394
An intermediate phase between jammed and un-jammed amorphous solids
A significant amount of attention was dedicated in recent years to the phenomenon of jamming of athermal amorphous solids by increasing the volume fraction of the microscopic constituents. At a critical value of the volume fraction, pressure shoots up from zero to finite values with a host of critical exponents discovered and discussed. In this letter, we advance evidence for the existence of a second transition, within the jammed state of two-dimensional granular systems, that separates two phases characterized by different mechanical screening regimes. Explicitly, highly packed systems are quasi-elastic with quadrupole-screening, and more loosely jammed systems exhibit anomalous mechanics with dipole screening. Evidence is given for a clear transition between these two regimes, reminiscent of the intermediate hexatic phase of crystal melting in two-dimensional crystals.
Yuliang Jin, Itamar Procaccia, Tuhin Samanta
2023-05-02T13:13:26Z
http://arxiv.org/abs/2305.01394v1
# An intermediate phase between jammed and un-jammed amorphous solids ###### Abstract A significant amount of attention was dedicated in recent years to the phenomenon of jamming of athermal amorphous solids by increasing the volume fraction of the microscopic constituents. At a critical value of the volume fraction, pressure shoots up from zero to finite values with a host of critical exponents discovered and discussed. In this letter, we advance evidence for the existence of a second transition, within the jammed state of two-dimensional granular systems, that separates two phases characterized by different mechanical screening regimes. Explicitly, highly packed systems are quasi-elastic with quadrupole screening, and more loosely jammed systems exhibit anomalous mechanics with dipole screening. Evidence is given for a clear transition between these two regimes, reminiscent of the intermediate hexatic phase of crystal melting in two-dimensional crystals. The concepts of rigidity and jamming in granular systems are topics of intense study for the community of amorphous solids [1; 2; 3]. For athermal hard sphere systems, the transition from zero to infinite pressure upon jamming is clearly defined in terms of a critical isostatic number of contacts \(Z^{*}\)[4; 5; 6]. This mathematically clean phenomenology becomes somewhat blurred when temperature or friction is included in the relevant physics of amorphous solids. Notably, even the packing fraction at which jamming is observed is not well defined, since it can depend on details of the preparation or compression protocols [7]. The difficulty in identifying a sharp transition between solids and liquids is not particular to amorphous solids. In fact, it is well known that between perfect crystals in two dimensions and their fluid phase there exists an intervening hexatic phase in which the material exhibits properties intermediate between solid and liquid [8]. Motivated by the celebrated theory of the hexatic phase transition, we have recently investigated the anomalous mechanics that appears between elastic amorphous solids and their fluid analogs [9; 10; 11; 12; 13]. In this Letter we present evidence that the transition between an elastic phase and the novel (rigid) phase in which elasticity is anomalous is in fact sharp, at least in two dimensions. We emphasize that this transition is different from the jamming or rigidity transition that has been studied in the literature [1; 2]. The different phases are characterized by different responses to non-uniform strains, such that the resulting displacement field is sensitive to different mechanical screening mechanisms, involving either quadrupolar or dipolar elastic charges. The concept of screening in modern physics arises naturally in the electrostatics of continuous media, where microscopic mechanisms are available to damp the externally imposed electric field. A famous example of non-electric screening is the Kosterlitz-Thouless (KT) transition [14], where vortex pairs dissociate to produce a Coulomb-gas phase in which monopoles (unbound vortices) are available to screen external fields, similar to the Debye screening in the electrostatic analog. In the field of mechanics, the nucleation of structural defects in response to external loads is the basis for theories of crystal melting [15; 16], where solids are supplemented with screening quadrupoles (dislocation pairs), hexatics with screening dipoles (unbound dislocations), and liquids with screening monopoles (unbound disclinations).
While thermal melting and the hexatic phase transition necessitate finite-temperature statistical mechanics, recent research on a variety of _athermal_ systems pointed out the existence of a similar transition in cellular tissue models [17] and vibrated granular matter [18; 19; 20]. These observations were fundamentally based on structural characteristics, including short- and quasi-long-range translational and orientational order, observed via correlation functions. In contrast to these thermal or mechanically agitated examples of hexatic phases, we present here theoretical and numerical indications for the existence of a transition to a dipole-screened phase within the _jammed regime_ of _athermal_ amorphous granular systems. This finding is based on recent research, in which it was discovered that the prevalence of plastic events in amorphous solids results in screening phenomena that are akin to, but richer than, screening effects in electrostatics [9; 10; 11; 12; 13]. Plastic events, which are typically quadrupoles in the displacement field, can act as screening charges. It was shown that when the density of plastic quadrupoles is low, their effect is limited to renormalizing the elastic moduli, and the structure of (linear) elasticity theory remains intact. This is analogous to dipole screening in dielectrics. On the other hand, when the nucleation cost of quadrupoles drops, the quadrupole density becomes high, and the nucleation of effective dipoles, defined by the gradients of the quadrupole density, cannot be neglected. The presence of effective dipoles changes the analytic form of the response to strains, in ways that clash fundamentally with standard elasticity theory. It was concluded that one needs to consider a new theory, and this emergent theory was confirmed by comparing its predictions to results of extensive experiments and simulations [9; 10; 11; 12; 13]. While dipole screening was observed in both two and three dimensions, in this letter we focus on two-dimensional systems, in which one can demonstrate a clear transition as stated above. Once it is realized that _gradients of quadrupole density_ act as effective dipoles, it becomes evident that the vast majority of experiments and numerical simulations that study the mechanics of amorphous solids are not using the most revealing strain protocols. Indeed, in most studies researchers employ simple and pure shear, or tensile compression or extension. To expose the unusual and interesting mechanical properties of amorphous solids it is advisable to employ non-uniform strains. To this aim we have employed, in both experiments [10] and simulations [9; 10; 12], a circular geometry in two dimensions and a spherical one in three dimensions [13]. In the former case we then inflate a central disk and observe the resulting radial component of the displacement field \(d_{r}\). Theoretically we expect that in a phase where only quadrupole screening is dominant, an inflation \(r_{\rm in}\to r_{\rm in}+d_{0}\) of a disk of initial radius \(r_{\rm in}\) in a system of outer radius \(r_{\rm out}\) will result in the radial displacement \[d_{r}(r)=d_{0}\frac{r_{\rm in}\left(r^{2}-r_{\rm out}^{2}\right)}{r\left(r_{\rm in}^{2}-r_{\rm out}^{2}\right)},\quad\mbox{in two dimensions}.
\tag{1}\] On the other hand, in a phase that is governed by dipole screening we theoretically expect the same radial component of the displacement field to obey \[d_{r}(r)=d_{0}\frac{Y_{1}(r\kappa)J_{1}(r_{\rm out}\kappa)-J_{1}(r\,\kappa)Y_{1}(r_{\rm out}\kappa)}{Y_{1}(r_{\rm in}\kappa)J_{1}(r_{\rm out}\kappa)-J_{1}(r_{\rm in}\kappa)Y_{1}(r_{\rm out}\kappa)}. \tag{2}\] Here \(J_{1}\) and \(Y_{1}\) are the circular Bessel functions of the first and second kind, respectively. The parameter \(\kappa\) has the dimension of inverse length and is referred to as the screening parameter. When \(\kappa\to 0\) the expression (2) tends to Eq. (1). Obviously, the difference between Eqs. (1) and (2) is striking. The first exhibits a monotonic decay of an outward displacement until it vanishes at the outer boundary, whereas the latter allows oscillations, and even negative (inward-pointing) displacements, although the imposed inflation points outwards. The main question that we raise here is whether there exists a clear transition, as a function of an intensive parameter in a given athermal amorphous system, separating material phases in which the mechanical response jumps from Eq. (1) to Eq. (2) with a finite value of \(\kappa\). We show next that in two dimensions the answer is affirmative, the intensive parameter for a granular jammed system is the pressure, and the transition is indeed clear. To demonstrate the transition we investigate frictionless assemblies of small disks that are at mechanical equilibrium, prepared with a desired target pressure \(P\) and confined in a circular two-dimensional area with a fixed outer circular wall. Open-source codes (LAMMPS [21]) are used to perform the simulations. Every simulation begins with a configuration of \(N=80000\) bi-disperse disks of mass \(m=1\), placed randomly in a circular area with a radius \(r_{\rm out}=172\) in dimensionless units. Half of the disks have a radius \(R_{1}=0.45\) and the other half a radius \(R_{2}=0.65\). One larger disk is not placed randomly, but rather fixed to the center of coordinates. To reach a desired pressure we begin with a chosen packing fraction, and the system is relaxed to mechanical equilibrium by solving Newton's second law of motion with damping. This process is carried out until the desired target pressure is reached and forces are minimized to values smaller than \(10^{-6}\). The normal contact force is Hertzian with force constant \(k_{n}=2\times 10^{5}\), following the Discrete Element Method of Ref. [22]. The tangential contact force is zero as the system is frictionless. After achieving a mechanically stable configuration at a target pressure, we inflate the central disk by 25%. The displacement field is denoted \(\mathbf{d}(r,\theta)\) and the radial component is obtained as an angle average, \(d_{r}(r)\equiv(2\pi)^{-1}\oint_{0}^{2\pi}\mathbf{d}\cdot\hat{r}d\theta\), where \(\hat{r}\equiv\mathbf{r}/r\). The displacement field exhibits a qualitatively different appearance at high and low pressures, as exemplified in Fig. 1. At high pressures the displacement field is centered around the inflated disk, as is expected from Eq. (1). In contrast, at low pressure the displacement field is spread out throughout the system, in correspondence with Eq. (2). A quantitative comparison is provided by plotting the radial component \(d_{r}(r)\), cf. Fig. 2.
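The two predicted profiles are easy to compare numerically. Below is a short Python sketch; the values of \(d_{0}\), \(r_{\rm in}\), and \(\kappa\) are illustrative assumptions chosen to mimic the geometry above, not parameters taken from the simulations:

```python
import numpy as np
from scipy.special import j1, y1

# illustrative parameters (assumed, not fitted to simulation data)
d0, r_in, r_out, kappa = 1.0, 5.0, 172.0, 0.023

def d_quasi_elastic(r):
    """Eq. (1): quadrupole-screened (quasi-elastic) radial displacement."""
    return d0 * r_in * (r**2 - r_out**2) / (r * (r_in**2 - r_out**2))

def d_dipole_screened(r):
    """Eq. (2): dipole-screened radial displacement (Bessel form)."""
    num = y1(r * kappa) * j1(r_out * kappa) - j1(r * kappa) * y1(r_out * kappa)
    den = (y1(r_in * kappa) * j1(r_out * kappa)
           - j1(r_in * kappa) * y1(r_out * kappa))
    return d0 * num / den

r = np.linspace(r_in, r_out, 2000)
# Eq. (1) decays monotonically to zero at r_out and never points inward,
# whereas Eq. (2) can oscillate and even turn negative (inward displacement)
print(d_quasi_elastic(r).min(), d_dipole_screened(r).min())
```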
While the upper panel shows the typical decay of an elastic solution, the lower panel presents _negative_ radial displacements that result from screening and the Bessel functions in Eq. (2). The simulations indicate a clear transition from quasi-elastic to anomalous response. Figure 1: Maps of the magnitude of the displacement field after an inflation of 25% in the radius of the inner disk, \(N=80000\). Upper panel: high pressure \(P=29.35\). Lower panel: low pressure \(P=0.394\). The best way to demonstrate the transition is to measure the screening parameter \(\kappa\) as a function of the pressure. In Fig. 3 we present the measured screening parameter as a function of \(\ln(P^{-1})\). The screening parameter was measured in two independent ways. In one we simply fitted the measured radial component of the displacement field to Eq. (2); see for example the lower panel of Fig. 2. The second method relied on a direct measurement of the presence of dipoles. This will connect the observed transition to the well-known hexatic phase transition which appears in two-dimensional melting. It should be stressed that in the present context the existence of dipoles, or in fact of a dipole density \(\mathbf{\mathcal{P}}(r,\theta)\), does not refer to the material structure, but rather to the presence of dipoles in the displacement field. To this aim we refer to the theory presented in Ref. [11]. It was shown that the dipole density can be measured directly by the following line integral: \[\oint_{\partial\Omega}\mathbf{\mathcal{P}}(r,\theta)\cdot\mathbf{n}\,\mathrm{dl}=\oint_{\partial\Omega}(\nabla(\nabla\cdot\mathbf{d}))\cdot\mathbf{n}\,\mathrm{dl}. \tag{3}\] The line integral can be taken around any closed loop. Due to obvious conservation laws we expect that when the loop encircles the whole system, the net dipole included should be zero, whereas with a loop enclosing a part of the system the integral will be nonzero **only when there is dipole density in the enclosed area \(\Omega\)**. In the case of our circular systems with radial symmetry both sides of this equation can be evaluated analytically. The final result is [11] \[\oint_{\partial\Omega}\mathbf{\mathcal{P}}(r,\theta)\cdot\mathbf{n}\,\mathrm{dl}=-\kappa^{2}\oint_{\partial\Omega}\mathbf{d}(r,\theta)\cdot\mathbf{n}\,\mathrm{dl}. \tag{4}\] In Fig. 4 we present the function \(\mathcal{P}(r)\), computed as a function of the radius \(r\) of the loop integral for our system with \(N=80000\) disks. The function \(\mathcal{P}(r)\) is computed in two ways. In the upper panel the simulation data was used according to Eq. (4), whereas in the lower panel Eq. (4) was applied to the analytic formulae Eqs. (1) or (2) (with the measured value of \(\kappa\)). At pressures with quasi-elastic response, \(\mathcal{P}(r)\) vanishes for every value of \(r\). In the anomalous regime \(\mathcal{P}(r)\) is not zero, in very good agreement between the two methods of computation, showing that the transition is tightly associated with the appearance of dipole densities in the displacement field under these conditions. Finally, the value of the screening parameter \(\kappa\) was determined by taking the ratio of the two integrals in Eq. (4), giving us \(\kappa^{2}\). The values of the screening parameter \(\kappa\) as a function of inverse pressure are shown on linear-log scales in Fig. 3. The values shown were obtained as an average between the two methods of measurement. For pressure \(P\geq 3.5\pm 0.3\) the response is quasi-elastic with \(\kappa=0\).
For pressure \(P\leq 3.5\pm 0.3\) the response is anomalous. The scatter in the values of \(\kappa\) in the anomalous regime is typical of the considerable sample-to-sample fluctuations in the values of the screening parameter. We should note that once the screening parameter differs from zero it appears quite independent of pressure. To understand the transition and the apparent constancy of \(\kappa\) as a function of pressure, we need to determine when we can expect an avalanche of plastic events that can span a region of size \(\kappa^{-1}\). Figure 3: The screening parameter \(\kappa\) as a function of the logarithm of the inverse pressure. A transition between material phases with quasi-elastic response and with anomalous response is clearly observed. Figure 2: Green dots: the radial component of the displacement field that corresponds to the data in Fig. 1, averaged over the angles. The dashed line is the fit to theory. Upper panel: high pressure \(P=29.35\). Lower panel: low pressure \(P=0.394\), \(\kappa=0.023\). Start with estimating the maximal size of a blob that can become unstable and go through a plastic deformation. In our amorphous configurations of disks interacting via Hertzian forces, the pressure depends on the excess coordination number \(\Delta Z\equiv Z-Z^{*}\) according to [3; 23] \[p\sim(\phi-\phi_{J})^{3/2}\sim\Delta Z^{3}\,, \tag{5}\] where \(Z^{*}=4\) is the coordination number at jamming. When \(\Delta Z=0\), breaking any contact will render the system unstable. On the other hand, when \(\Delta Z>0\) one can afford to break more than one bond; in fact one can break a whole circumference of bonds of length \(\ell^{d-1}\) as long as [24] \[Z\ell^{d-1}=\Delta Z\ell^{d}. \tag{6}\] If the region is smaller than this \(\ell\), it is unstable to such breaking, and if larger, the region is always stable and rigid. We then interpret this length as the maximal blob size that can participate in an avalanche of plastic events. This length scale depends on pressure like \[\ell\sim\frac{Z}{\Delta Z}\sim p^{-1/3}. \tag{7}\] This is represented by the black line in Fig. 5. The next question is the pressure dependence of \(\kappa\). From Ref. [13] one reads \[\frac{\mu_{1}^{2}}{\mu_{2}\left(\lambda+2\mu\right)}=\kappa^{2}. \tag{8}\] Here \(\mu_{1}\) and \(\mu_{2}\) are new moduli associated with the quadrupole and dipole terms in the energy function. The combination of Lamé coefficients \((\lambda+2\mu)=B+G\), where \(B\) and \(G\) are the bulk and shear moduli respectively. This combination is dominated by \(B\sim(\phi-\phi_{J})^{\alpha-2}\) because \(G\sim(\phi-\phi_{J})^{\alpha-3/2}\) with \(\alpha=5/2\). The coefficients \(\mu_{1}\) and \(\mu_{2}\) are second derivatives of the energy with respect to strain, either directly or through the quadrupole field \(Q\). Thus they are both expected to scale like \(B\). Therefore \(\kappa\) should be independent of pressure. However, screening over a region of the order of \(\kappa^{-1}\) can develop only when the scale \(\ell\) exceeds \(\kappa^{-1}\). Accordingly we predict that the _observed_ value of \(\kappa\) will be zero for high pressures and constant for small pressures, with a jump when \(\kappa\approx\ell^{-1}\). This is the red line in Fig. 5. We propose that the sketch presented in Fig. 5 rationalizes the numerical results shown in Fig. 3. We should note, however, that the pressure at which the transition is observed can depend on the magnitude of the inflation of the central disk and on the microscopic properties of the amorphous material.
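For radially symmetric data, the ratio of integrals in Eq. (4) gives a direct numerical estimator of \(\kappa\). The following sketch demonstrates this on a synthetic screened profile; the parameter values are illustrative assumptions, and in practice the angle-averaged simulation data would replace the synthetic \(d_{r}\):

```python
import numpy as np
from scipy.special import j1, y1

# synthetic screened profile standing in for the angle-averaged data
# (kappa_true, r_in, r_out are assumed test values)
kappa_true, r_in, r_out = 0.023, 5.0, 172.0
r = np.linspace(r_in + 2.0, r_out - 2.0, 4000)
d_r = (y1(r * kappa_true) * j1(r_out * kappa_true)
       - j1(r * kappa_true) * y1(r_out * kappa_true))

# With radial symmetry, each loop integral in Eq. (4) reduces to 2*pi*r
# times the radial component of its integrand, so Eq. (4) becomes
#   d/dr[(1/r) d(r d_r)/dr] = -kappa^2 d_r .
div_d = np.gradient(r * d_r, r) / r         # divergence of the field
kappa_sq = -np.gradient(div_d, r) / d_r
# discard points near zeros of d_r, where the ratio is ill-conditioned
mask = np.abs(d_r) > 0.1 * np.abs(d_r).max()
print(np.sqrt(np.median(kappa_sq[mask])))   # recovers ~0.023
```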
**Acknowledgments**: IP thanks Michael Moshe for very useful discussions. This work has been supported in part by the joint grant between the Israel Science Foundation and the National Science Foundation of China, and by the Minerva Foundation, Munich, Germany. Figure 4: The dipole density included in a radius of size \(r\), as a function of \(r\), for the system with \(N=80000\), for different pressures. Upper panel: calculation using the simulation data. Lower panel: calculation based on the analytic formula (4), with \(d(r)\) taken from Eqs. (1) or (2) (with the measured value of \(\kappa\)). Figure 5: A sketch of the expected transition in the observed value of the screening parameter \(\kappa\).
2308.07719
The coherent measurement cost of coherence distillation
Quantum coherence is an indispensable resource for quantum technological applications. It is known to be distillable from a noisy form using operations that cannot create coherence. However, distillation exacts a hidden coherent measurement cost, whose extent has not previously been estimated. Here we show that this cost (quantified by an equivalent number of Hadamard measurements) is related to what we call the irretrievable coherence: the difference between the coherence of formation and the distillable coherence. We conjecture (and make partial progress towards proving) that when distilling from many copies of a given noisy coherent state, the coherent measurement cost scales extensively in the number of copies, at an asymptotic rate exactly equalling the input's irretrievable coherence. This cost applies to any application whereof coherence distillation is an incidental outcome (e.g. incoherent randomness extraction), but the implications are more dramatic if pure coherence is the only desired outcome: the measurement cost may often be higher than the distilled yield, in which case coherence should rather be prepared afresh than distilled from a noisy input.
Varun Narasimhachar
2023-08-15T11:52:35Z
http://arxiv.org/abs/2308.07719v1
# The coherent measurement cost of coherence distillation ###### Abstract Quantum coherence is an indispensable resource for quantum technological applications. It is known to be distillable from a noisy form using operations that cannot create coherence. However, distillation exacts a hidden coherent _measurement_ cost, whose extent has not previously been estimated. Here we show that this cost (quantified by an equivalent number of Hadamard measurements) is related to what we call the _irretrievable coherence_: the difference between the coherence of formation and the distillable coherence. We conjecture (and make partial progress towards proving) that when distilling from many copies of a given noisy coherent state, the coherent measurement cost scales extensively in the number of copies, at an asymptotic rate exactly equalling the input's irretrievable coherence. This cost applies to any application whereof coherence distillation is an incidental outcome (e.g. incoherent randomness extraction), but the implications are more dramatic if pure coherence is the only desired outcome: the measurement cost may often be higher than the distilled yield, in which case coherence should rather be prepared afresh than distilled from a noisy input. ## I Introduction Coherence is a cornerstone of quantum mechanics, playing a central role in the wavelike interference effects that epitomize quantum phenomena. It is also a valuable resource, powering transformative quantum technologies such as quantum computing, quantum communication, and quantum metrology [1]. A central concept in quantum information science is that of _resource distillation_: the conversion of a resource (like coherence) from an impure form into a standard, pure form. This paper concerns certain hidden costs of coherence distillation, which have not been probed before. In the following sections, we will provide a brief background and motivation of our research problem, followed by a summary of our main results. ### Resource theories of coherence At a high level, coherence refers to the presence of superposition in wavefunctions, i.e. vectors in the Hilbert space of quantum-mechanical states. But of course, this begs the question "a superposition of what sort of entities?" Indeed, coherence can be given different formal definitions based on what we consider the "unsuperposed" objects; examples include eigenstates of conserved quantities [2], orthogonal subspaces induced by measurements [3; 4], and even mutually-nonorthogonal elements, such as the classical states of bosonic modes [5]. These and other formalizations of the concept of coherence--each well-motivated within its own operational context--have been explored under the broad umbrella of _resource theories_; for a general exposition on coherence resource theories, we direct the reader to [1]. Here we will provide a brief introduction to certain specific resource theories of coherence, adequate for our purposes. A resource theory formalizes the study of a quantum resource by identifying the operational capabilities required to prepare or proliferate it. Such capabilities are axiomatically forbidden, leaving only certain constrained actions that can be performed, called the "free operations". The theory then endeavours to chart out what can and cannot be done using only the free operations--spiritually akin to determining, say, the plane figures that can be constructed using only a compass and a straightedge1.
Typically, the free operations in quantum resource theories are a class of quantum (sub)channels, including the preparation of so-called "free states". All states that are not free are called resource states. Footnote 1: We are indebted to Gilad Gour for this evocative analogy. In the resource theories we will consider, the resource is coherence relative to a fixed orthogonal basis of the Hilbert space of a given quantum system2. This basis is variously termed computational, canonical, classical, etc.; we will simply call it the _incoherent basis_. Furthermore, we will work in the paradigm where the incoherent basis of a composite system is just the tensor product of its subsystems' incoherent bases. This notion of coherence falls under what has been called _speakable coherence_ in the literature [2]. It is operationally relevant for, e.g., gate-based quantum computing, where every elementary system has a "computational basis" and where tensor products of computational-basis states are easy to prepare. Footnote 2: In the remainder we will use the term "coherence", without qualifiers, to mean this specific notion of coherence. The free states in these coherence resource theories are the incoherent states, i.e. the states whose density matrices are diagonal in the incoherent basis; we will refer to all other states--the resource states--as _coherent_. In the resource-theoretic spirit, the free operations are constrained to be incapable of creating or increasing coherence. As it turns out, there are diverse ways to choose families of free operations obeying this constraint, spawning a veritable zoo of distinct coherence resource theories. Amongst these, we will focus on the resource theory whose free operations are the so-called _incoherent operations3_ (IO) [6]. Informally, an IO is a quantum process that can be implemented using components that _may detect_ coherence (i.e., measure relative to coherent bases) but _must not create_ coherence when acting on incoherent inputs. Footnote 3: We will adhere to the term "incoherent operations" established in the literature, notwithstanding its regrettable inspecificity. ### Distillation of coherence-resource The main motivation for this work comes from the resource-theoretic concept of _distillation_: the task of converting an arbitrary resource state to a standard form. Resource distillation is often an essential part of applications [7]; for example, coherence distillation is closely related to the task of randomness extraction using incoherent measurements [8]. Beyond this direct value, the study of resource distillation also offers valuable insight into the structure of a resource theory. In all resource theories of coherence, the standard form of coherence-resource is a pure state containing a uniform superposition of some number of incoherent basis elements, e.g. \(\left|\Psi_{M}\right\rangle:=M^{-1/2}\sum_{m\in\left[M\right]}\left|m\right\rangle\). This choice is justified by several factors. Firstly, these states are usually optimal for applications (e.g. phase estimation) requiring coherence. Secondly, the free operations can produce any required state from a single copy of one of these. A third justification comes from the _asymptotic_ or _independent and identically-distributed_ (i.i.d.) limit of the resource theory, where the free operations act on large numbers of independent copies of identical states.
In this limit, the standard coherent states of different \(M\) values admit reversible4 "currency exchange" at a rate proportional to \(\log_{2}M\), which is the equivalent number of standard coherent bits (or _cobits_) \(\left|\Psi_{2}\right\rangle\). The cobit thus functions as a convenient unit for quantifying coherence. Footnote 4: Note that this reversibility is to leading order in the number of copies; in this work we will not consider higher-order effects [9]. The asymptotic limit of the IO resource theory affords an added feature: copies of _any_ coherent state--pure or mixed--can be converted (albeit _not_ reversibly--more on this later) by IO to cobits at a rate that is maximal in a resource-theoretic sense [10]. In other words, coherence is asymptotically _universally distillable_ by IO. But at the heart of this universal distillability lies the central question that motivates our work. ### What powers coherence distillation? Recall that IO can be implemented using components that do not create coherence, but may nevertheless detect it. _Strictly incoherent operations_ (SIO) are the subclass of IO that use only components that _cannot even detect_ coherence [11]. This restriction ends up breaking the asymptotic universal distillability seen under IO. Indeed, SIO exhibit a particularly severe form of non-distillable, or "bound", coherence: any number--however large--of copies of certain coherent states cannot be converted, even approximately, to even a _single_ cobit [12]. In summary, the _unbounded_ coherence-detecting power of IO enables universal distillation, while the strictly coherence-non-detecting SIO are too constrained to distill universally. But what lies between these two extremes? Our paper is an attempt to understand this intervening operational landscape, by answering questions such as: 1. How much coherence-detecting power (quantified in a way that will be discussed later) is necessary to recover the full extent of asymptotic distillability afforded by IO--i.e., to distill at the _maximal_ asymptotic rate? How much is sufficient? 2. Given a coherent measurement budget less than the cost of maximal distillation, what (non-maximal) distillation rate can be attained? 3. How does this coherent measurement cost behave away from the asymptotic limit? We approach these questions using a construction that we call the _target witness_: an object associated with a given quantum process, containing information about the efficacy of the process at mapping arbitrary inputs to a desired target output, as well as about the coherence-detecting power of the process. Among other things, we show that the coherent measurement cost of distillation is bounded by a particular property of the target witness, which we will now discuss. ### Irretrievable coherence In a resource theory, a real-valued function of states is called a _resource measure_ if it satisfies the following two conditions: (1) It is a non-increasing monotone under the free operations; (2) It is _faithful_, i.e. takes nonzero values on all and only the resource (non-free) states. The answers to our central questions turn out to involve some important measures of coherence. Given a state \(\rho\), its _relative entropy of coherence_ is defined as \[C_{r}(\rho)=\min\left\{S\left(\rho\|\sigma\right):\;\Delta[\sigma]=\sigma \right\}, \tag{1}\] where \(S\left(\cdot\|\cdot\right)\) is the quantum relative entropy and \(\Delta(\cdot)\) denotes the diagonal part (in the incoherent-basis representation) of the argument. 
Thus, the minimization is over all diagonal states \(\sigma\)--in other words, the free states in the IO resource theory. Conveniently, the minimization evaluates to \(C_{r}(\rho)=S\left(\rho\|\Delta[\rho]\right)=S\left(\Delta[\rho]\right)-S(\rho)\), where \(S(\cdot)\) is the von Neumann entropy. Meanwhile, \(\rho\)'s _coherence of formation_ is given by the so-called _convex-roof extension_ of the restriction of \(C_{r}\) to pure states: \[C_{f}(\rho)=\min_{p_{x}\geq 0;\;\sum_{x}p_{x}|\phi_{x}\rangle\langle\phi_{x}|=\rho}\sum_{x}p_{x}C_{r}\left(|\phi_{x}\rangle\langle\phi_{x}|\right), \tag{2}\] where the minimization is over all convex decompositions of \(\rho\) into pure states. Notice that \(C_{r}(\psi)=C_{f}(\psi)=S\left(\Delta[\psi]\right)=H\left(\mathbf{p}\right)\) (where \(H\) is the Shannon entropy) for a pure state \(\psi\equiv|\psi\rangle\langle\psi|\)5 with incoherent-basis distribution \(\left|\langle i|\,\psi\rangle\right|^{2}=p_{i}\). In particular, \(C_{r}\left(\Psi_{M}\right)=C_{f}\left(\Psi_{M}\right)=\log_{2}M\) for the standard resources. Footnote 5: We will use this shorthand for rank-1 projectors, where it is possible without ambiguity. These measures have operational significance in the IO resource theory. Firstly, \(C_{r}(\rho)\) is the regularized asymptotic _distillable coherence_ under IO, defined as the maximum asymptotic rate at which cobits can be distilled from copies of \(\rho\) by IO. That is, \(C_{r}(\rho)\) is the largest \(r\in\mathbb{R}\) such that the transformation \(\rho^{\otimes n}\mapsto\Psi_{2}^{\otimes rn}\) can be achieved by IO to an arbitrarily good approximation as \(n\to\infty\). Likewise, \(C_{f}(\rho)\) is the regularized asymptotic _coherence cost_ under IO: the minimum asymptotic rate at which cobits must be _consumed_ to _prepare_ copies of \(\rho\) by IO, in an operational task called _resource dilution_ or _formation_--the opposite of distillation. Mathematically, \(C_{f}(\rho)\) is the smallest \(r\in\mathbb{R}\) such that the transformation \(\Psi_{2}^{\otimes rn}\mapsto\rho^{\otimes n}\) can be achieved arbitrarily well as \(n\to\infty\). For almost all states \(\rho\) (in a measure-theoretic sense), \(C_{f}\) is strictly larger than \(C_{r}\)[13]. Hence, the coherence distillable by IO from a given input is generically smaller than that required to prepare the same input. Thus, IO presents an instance of _irreversibility_ in resource theories, a topic of current interest [14; 15]. IO's is a particularly strong form of irreversibility, since it persists even in the asymptotic limit and is, moreover, already present in the lowest order (i.e., between the regularized distillation and formation rates). Nevertheless, as we alluded to above, the IO-_distillable_ coherence is in fact maximal under general resource-theoretic constraints [10]; therefore, the culprit behind the irreversibility is the inflated _cost_ of resource formation under IO. Incidentally (as the reader may have anticipated from our earlier statements), the SIO resource theory is even more irreversible--this additional disparity owing solely to SIO's inferior _distillable_ coherence compared to IO's, the two theories' coherence costs being equal! In our main results, these two coherence measures feature in the form of their difference \(\ell(\rho):=C_{f}(\rho)-C_{r}(\rho)\). Because of the operational meaning of this quantity vis-à-vis the asymptotic irreversibility of the IO resource theory, we christen it the _irretrievable coherence_.
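As a small numerical illustration (a sketch under our own assumptions, using the closed form for the qubit coherence of formation known from the intrinsic-randomness literature [16; 17]), the two measures and the irretrievable coherence can be computed as follows:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def C_r(rho):
    """Relative entropy of coherence: S(Delta[rho]) - S(rho)."""
    return vn_entropy(np.diag(np.diag(rho))) - vn_entropy(rho)

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x <= 0 or x >= 1 else -x*np.log2(x) - (1-x)*np.log2(1-x)

def C_f_qubit(rho):
    """Qubit closed form (cf. [16]): h2((1 + sqrt(1 - 4|rho_01|^2))/2)."""
    return h2((1 + np.sqrt(1 - 4 * abs(rho[0, 1])**2)) / 2)

plus = np.full((2, 2), 0.5)           # |+><+|
rho = 0.8 * plus + 0.2 * np.eye(2)/2  # a noisy coherent qubit
print(C_r(rho), C_f_qubit(rho))       # ~0.531 < ~0.722
print(C_f_qubit(rho) - C_r(rho))      # irretrievable coherence l(rho) > 0
```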
It has, in fact, been encountered (though not named) in the literature in a different operational context: it quantifies the difference between the quantum and the classical values of the so-called _intrinsic randomness_ of a state [16; 17]. It is worth noting that, while the irretrievable coherence is determined by two coherence measures and is itself a _signature_ of coherence--it is nonzero only for coherent states--it is _not_ a coherence measure in the resource-theoretic sense. It fails to be faithful, as can be seen from the case of pure states. But more importantly, it is not a monotone under IO or, indeed, any reasonable class of free operations. ### Clues in the literature Recall that distillation in the asymptotic limit is the task of converting many copies of a given input to an output close to a standard resource. Formally, for every \(n\in\mathbb{Z}_{+}\), the input is \(\varrho_{n}\equiv\rho^{\otimes n}\) and is to be mapped approximately to \(\Psi_{2}^{\otimes m_{n}}\) for some \(m_{n}\). "Asymptotic" refers to the limit \(n\to\infty\), and the asymptotic rate of distillation is the value \(r=\lim_{n\to\infty}\left(m_{n}/n\right)\). As we alluded to earlier, the highest achievable rate for a given \(\rho\) is \(r=C_{r}(\rho)\). Winter and Yang [13] constructed an IO protocol achieving this maximal distillation rate. A high-level examination of the protocol already hints at connections between asymptotic irreversibility and the object of our interest, viz. the coherent measurement cost of distillation. Crucially, the protocol consists (apart from some asymptotically-inconsequential measurements) of just a unitary transformation of the input followed by a partial trace. Considering the purity required of the output (distillate), the effect of the protocol before the final partial trace can be summarized approximately as \[\varrho_{n}^{\rm A}\stackrel{{\mathcal{U}}}{{\longmapsto}}\tau^{\rm S}\otimes{\Psi_{M}}^{\rm M}. \tag{3}\] Here the superscript A labels the input system, M the output system, and S the part that will be traced out. The question of how much coherent measurement the IO needs translates to how coherently this unitary channel \(\mathcal{U}\) must act. Since the unitary does not involve any additional systems, the systems' dimensionalities (which we denote by italicizing the corresponding labels) satisfy \(A=SM\). Let us now make some heuristic estimates for these numbers, appealing to (an extremely crude form of) asymptotic typicality [18]; for brevity, we will omit qualifiers like "approximate" and "typical part" in the following statements, but stress that these qualifications are implicit. Firstly, consider the input \(\varrho_{n}\equiv\rho^{\otimes n}\): its rank (and by unitarity, also the rank of \(\tau\)) is6 \(S_{0}:=\exp_{2}\left[n\,S(\rho)\right]\), using asymptotic equipartition. Applying the same argument on the diagonal part \(\Delta\left(\varrho_{n}\right)\), which is in fact \(\left[\Delta(\rho)\right]^{\otimes n}\), we see that the relevant dimensionality of the input (covering all the incoherent basis labels that occur with nonzero amplitudes) is \(A=\exp_{2}\left[n\,S\left(\Delta[\rho]\right)\right]\). Finally, the size of the maximal distillate is \(M=\exp_{2}\left[n\,C_{r}(\rho)\right]=\exp_{2}\left[n\left(S\left[\Delta(\rho)\right]-S[\rho]\right)\right]\). Notice that \(M=A/S_{0}\), and therefore, \(S_{0}=S\). Noting (again from equipartition) that the input's spectrum must be flat, we conclude that \(\tau\) must be maximally mixed.
In particular, this means that the subsystem S is discarded in an incoherent state, uncorrelated with the distillate M. Now let us view the protocol in reverse: we take \(\Psi_{M}\), append to it an auxiliary system S in the _incoherent_ state \(\tau\), and apply the unitary channel \(\mathcal{U}^{\dagger}\) to map the composite SM to \(\varrho_{n}^{\mathrm{A=SM}}\). How much coherence must \(\mathcal{U}^{\dagger}\) generate for this? If it had had to act on a fully incoherent input, it would have needed to create all of the coherence in \(\varrho_{n}\) from scratch; considering that it has the \(\Psi_{M}\) to begin with, it still needs to account for the deficit--a hint at the difference between the formation cost and the distillable coherence. To be sure, how much coherence an operation needs to generate to prepare a required resource is not an unambiguous concept: it depends on the class of operations considered. The Winter-Yang IO protocol's reversal, which we considered, is not itself an IO; nor is it in any of the other classes of incoherent operations defined in the literature. Besides, this maximal distillation protocol is but one possibility; in general, a protocol may use auxiliary systems, instead of acting unitarily on just the input. In this connection, a further difficulty is that there is not yet an operational understanding of IO in terms of their unitary implementations (or _dilations_): IO are only understood in terms of the abstract mathematical objects called Kraus operators. In contrast, SIO (for example) are understood through both their Kraus operators and their dilations. But another hint in the same direction--again from the near-maximally-mixed fate of S--is that IO distillation seems to entail fully "resolving" the space on which the input is supported, so as to retain in the distillate not only all of the input's coherence but also all of its _purity_. As such, the requisite coherent measurement of distillation translates to that of fully resolving the input's support. As a simple illustration of what we mean, consider the 2-dimensional subspace of a ququart (4-dimensional system) spanned by the vectors \[\left|v_{0}\right\rangle\propto\left|+\right\rangle+\left|2\right\rangle;\;\left|v_{1}\right\rangle\propto\left|-\right\rangle+\left|3\right\rangle, \tag{4}\] where \(\left|\pm\right\rangle\propto\left|0\right\rangle\pm\left|1\right\rangle\) as usual. An IO can distill one cobit from this subspace, e.g. using the Kraus operators \(K_{0}=\left|0\right\rangle\left\langle+\right|+\left|1\right\rangle\left\langle 2\right|\) and \(K_{1}=\left|0\right\rangle\left\langle-\right|+\left|1\right\rangle\left\langle 3\right|\). But to do so it must necessarily act coherently on the \(\left|\pm\right\rangle\) part; a small numerical check of this example is sketched below.
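The following Python sketch (our own check, not from the original text) verifies that the Kraus operators \(K_{0},K_{1}\) above form a valid incoherent operation, that they nevertheless measure coherently over \(\{\left|0\right\rangle,\left|1\right\rangle\}\), and that they map the subspace of Eq. (4) onto a cobit:

```python
import numpy as np

ket = lambda i: np.eye(4)[:, [i]]        # |i> as a 4x1 column vector
plus  = (ket(0) + ket(1)) / np.sqrt(2)   # |+>
minus = (ket(0) - ket(1)) / np.sqrt(2)   # |->

# Kraus operators from the text: K0 = |0><+| + |1><2|,  K1 = |0><-| + |1><3|
K0 = ket(0) @ plus.T  + ket(1) @ ket(2).T
K1 = ket(0) @ minus.T + ket(1) @ ket(3).T

# trace preservation: K0^dag K0 + K1^dag K1 = identity
assert np.allclose(K0.T @ K0 + K1.T @ K1, np.eye(4))

# each Kraus operator is incoherent: every incoherent basis vector is
# mapped to (a multiple of) a single basis vector, so no coherence is created
for K in (K0, K1):
    for a in range(4):
        assert np.count_nonzero(np.abs(K @ ket(a)) > 1e-12) <= 1

# ...yet the measurement is coherent: the row <0|K0 = <+| has support on
# two incoherent basis elements, a rank-2 coherent measurement over {|0>,|1>}
print(np.count_nonzero(np.abs(K0[0]) > 1e-12))  # 2

# any input in span{|v0>,|v1>} is mapped to the cobit (|0>+|1>)/sqrt(2)
v0 = (plus + ket(2)) / np.sqrt(2)
psi2 = (ket(0) + ket(1)) / np.sqrt(2)
rho_in = v0 @ v0.T
rho_out = K0 @ rho_in @ K0.T + K1 @ rho_in @ K1.T
assert np.allclose(rho_out, psi2 @ psi2.T)
```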
Let us now try to estimate the coherent measurement rank required to resolve (in this sense) the support of \(\varrho_{n}\). Asymptotically, the projector onto this support is close to \(\varrho_{n}\) itself, as the latter's spectrum flattens out. Let \(\varrho_{n}=\sum_{j}q_{j}\phi_{j}\) be some convex decomposition into pure components. Now consider the typical letter strings (i.e. incoherent basis indices) that occur in \(\varrho_{n}\) (or, equivalently, along its diagonal). As noted above, these are \(A=\exp_{2}\left[n\,S\left(\Delta[\rho]\right)\right]\) in number; and by asymptotic typicality, each of them carries a weight of \(A^{-1}\) in the overall input. On the other hand, the weight that any of these strings accrues by virtue of its occurrence in any single \(\phi_{j}\) is \(\lesssim\left(S\exp_{2}\left[nC_{f}(\rho)\right]\right)^{-1}\): the \(S\) factor is the dimensionality of the support, while \(\exp_{2}\left[nC_{f}(\rho)\right]\) lower-bounds the number of (approximately equally-superposed) typical strings in each pure component, as can be seen from the definition (2). Thus, in order to account for all of a string's weight in \(\varrho_{n}\), it must occur in no less than \[\frac{S\,2^{nC_{f}(\rho)}}{A}=\exp_{2}\left(n\left[C_{f}(\rho)-C_{r}(\rho)\right]\right)=\exp_{2}\left[n\ell(\rho)\right] \tag{5}\] distinct \(\phi_{j}\)'s. This bound applies alike to all of the typical strings. Therefore, a measurement that resolves the entire support of \(\varrho_{n}\) can be expected to involve coherence over blocks no smaller than this size. Though these hints are based on loose intuition and crude estimates, they proved helpful in our project, directing us towards more rigorous investigations and methods. In particular, they inspired us to consider a non-asymptotic idealization of the above "crudely-typicalized" case of maximal distillation, yielding a result (Theorem 1 below) that somewhat validates the hints and informs our conjectures 1 and 2 on the asymptotic case. ### Summary of results Our first contribution is a novel construction called the _target witness_ (Section 3). For any given quantum process (channel) \(\mathcal{E}\) and designated target state \(\left|\alpha\right\rangle\), we define an associated target witness \(T_{\mathcal{E}}^{\alpha}\), constructed to capture the requisite coherent measurement action in any implementation of \(\mathcal{E}\). We show that \(\mathrm{Tr}\left(\rho\,T_{\mathcal{E}}^{\alpha}\right)\) is the probability with which \(\mathcal{E}\) maps an arbitrary input \(\rho\) to the target \(\alpha\), hence motivating the term "target witness". We then use this result as the foundation to study the coherent measurement cost in several forms of the distillation task. Although the construction is motivated by our interest in the gap between SIO and IO, its applicability is general enough that all our results apply to the most general class of free operations in coherence resource theories, the _coherence-non-creating channels_ (established in the literature as "maximal incoherent operations", or MIO). As mentioned above, we first consider a certain single-shot (i.e., finite-sized and non-asymptotic) variant containing idealized versions of the typicality-related features encountered in maximal asymptotic distillation. **Theorem 1**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\mathrm{A}}\) to a standard coherent resource \(\Psi_{M}\) with \(M=A/S\) must involve coherent measurements over at least \(M^{-1}r_{C}\left(\tau_{\rho}\right)\geq M^{-1}\exp_{2}C_{f}\left(\tau_{\rho}\right)\geq\exp_{2}\ell\left(\tau_{\rho}\right)\) elements of \(\mathrm{A}\)'s incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\)._ Here, \(r_{C}\) denotes the _coherence rank_ of a state, defined for pure states as \(r_{C}(\psi)=\mathrm{rank}\,\Delta(\psi)\) and for mixed states as \[r_{C}(\rho)=\min_{p_{x}\geq 0;\;\sum_{x}p_{x}\phi_{x}=\rho}\max_{x}r_{C}\left(\phi_{x}\right). \tag{6}\] We prove Theorem 1 by showing that the target witness associated with any channel achieving such a transformation must be proportional to \(\tau_{\rho}\).
When we present our results in detail, we will see that the condition \(M=A/S\) is always associated with the distilled resource's being maximal. In general, if an IO can distill \(\Psi_{M}\) from a rank-\(S\) state in \(A\) dimensions, then \(M\leq A/S\). Our next result applies to exact _non-maximal_ distillation, again in the single-shot regime. **Theorem 2**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\mathrm{A}}\) to a standard coherent resource \(\Psi_{M}\) with \(M=pA/S\) must involve coherent measurements over at least \(M^{-1}r_{C}^{\{p\}}\left(\tau_{\rho}\right)\geq M^{-1}\exp_{2}C_{f}^{\{p\}}\left(\tau_{\rho}\right)\) elements of \(\mathrm{A}\)'s incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\) and_ \[C^{\{p\}}\left(\tau_{\rho}\right)=\min_{\tau_{\perp}:\;\mathrm{Tr}\left(\tau_{\rho}\tau_{\perp}\right)=0}C\left[p\tau_{\rho}+(1-p)\tau_{\perp}\right] \tag{7}\] _with \(C\) standing for \(r_{C}\) or \(C_{f}\)._ We arrive at this result by methods similar to those of Theorem 1: showing that the target witness of any viable channel is associated with a state \(\sigma\) containing a fraction \(p\) of \(\tau_{\rho}\) mixed with some state orthogonal thereto. Moving on, we have an _approximate_ version of the maximal case--a lower bound on the coherent measurement cost of mapping a given input to an output that is close enough (i.e., has high enough fidelity) to a near-maximal standard resource. **Theorem 3**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\mathrm{A}}\), satisfying \(r_{\min}\mathbb{1}^{\mathrm{A}}\leq\rho^{\mathrm{A}}\leq r_{\max}\mathbb{1}^{\mathrm{A}}\), to an output \(\sigma^{\mathrm{M}}\) such that \(F\left(\sigma,\Psi_{M}\right)\geq 1-\epsilon\) for \(M=A/\bar{S}\) must involve coherent measurements over at least_ \[M^{-1}\exp_{2}\left[C_{f}\left(\tau_{\rho}\right)-\delta\log_{2}A-(1+\delta)h\left(\frac{\delta}{1+\delta}\right)\right] \tag{8}\] _elements of \(\mathrm{A}\)'s incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\) and \(\delta:=\sqrt{2\left(1-\frac{S_{\rho,\epsilon}}{\sqrt{S\bar{S}}}\right)}\) with_ \[S_{\rho,\epsilon}:=\max\left\{\frac{1-\epsilon}{r_{\max}},\,S\left(1-\frac{\epsilon}{r_{\min}}\right)\right\}. \tag{9}\] For small \(\epsilon\) and a near-maximal distillate (i.e. \(\bar{S}\approx S\)), \(\delta\) becomes small and we recover Theorem 1 as a limiting case. Although we do not delve into approximate _non_-maximal distillation, our methods can be suitably extended to address this case. Theorem 3 follows by applying the asymptotic continuity of the coherence of formation [13] to approximate versions of Theorem 1's conditions. Next, we look at distillation in the asymptotic limit, where the input is of the form \(\varrho_{n}\equiv\rho^{\otimes n}\). Based on our result on approximate distillation, and on the fact that \(\varrho_{n}\) for large \(n\) is approximately maximally-mixed on a large part of its support (due to a phenomenon called asymptotic typicality--see Appendix C), we make the following conjecture. **Conjecture 1**.: _Suppose a sequence \(\mathcal{E}_{n}\) of MIO channels is maximally-distilling on copies of an input \(\rho\). That is, \(\mathcal{E}_{n}\) acting on \(\varrho_{n}\equiv\rho^{\otimes n}\) achieves \(F\left[\mathcal{E}_{n}\left(\varrho_{n}\right),\Psi_{M_{n}}\right]\geq 1-o(1)\) for \(\log_{2}M_{n}=n\left[C_{r}(\rho)-o(1)\right]\).
Then any Kraus operator decomposition of \(\mathcal{E}_{n}\) involves coherent measurements over at least \(L_{n}\) elements of the input's incoherent basis, where_ \[\log_{2}L_{n}\geq n\left[\ell(\rho)-o(1)\right]. \tag{10}\] _This rank bound applies both to the single most coherent measurement element and to the average (of the rank's logarithm) over all involved measurements under the distribution induced by the input._

We make some progress towards proving this conjecture. The proof sketch proceeds very similarly to the approximate single-shot case, with the approximation threshold dictated by \(n\)-dependent parameters associated with asymptotic equipartition. Essentially, we show that with increasing \(n\) the task gets closer to the idealized maximal instance of Theorem 1--a formalization of the observations we made in Section 1.2. But unfortunately, some subtleties of asymptotic typicality pose obstacles in completing our proof.

So far, we have shown or conjectured the _necessity_ of a certain coherent measurement cost for distillation. In the asymptotic limit, we conjecture that the cost scaling in Conjecture 1 is also _sufficient_.

**Conjecture 2**.: _For any \(\rho\), there exists a sequence \(\mathcal{E}_{n}\) of maximally-distilling \(\mathrm{IO}\) channels with all but an asymptotically-vanishing fraction of measurements individually attaining the bound of Conjecture 1, as well as attaining it on average._

This is motivated by certain special properties of the IO distillation protocol constructed by Winter and Yang [13]. We also make partial progress towards proving this conjecture, but it turns out to be rather more involved than the other direction, requiring putting together several pieces:

1. A construction for a decomposition of \(\varrho_{n}\) that
   * asymptotically approaches the defining bound (2) of the coherence of formation and
   * possesses some symmetries (thanks to \(\varrho_{n}\)'s asymptotic typicality properties), whereby the coherence of each component in the decomposition approaches the overall average value (i.e. \(\varrho_{n}\)'s coherence of formation).
2. A sequence of maximally-distilling (candidate) IO subchannels \(\mathcal{F}_{n}\) based on
   * filtering the above decomposition to further "typicalize" or "flatten" the coherence in each pure component,
   * truncating the remaining components to get rid of parts more coherent than a threshold that asymptotically scales favourably, and
   * adapting from Winter and Yang's IO distillation protocol [13] a certain "pooling" of the classical labels to construct potential maximally-distilling IO channels by connecting their target witnesses to approximate convex decompositions of \(\varrho_{n}\).
   By virtue of the above truncation, the resulting \(\mathcal{F}_{n}\) use measurement coherence bounded by the claimed scaling.
3. Showing that, despite the above filtering and truncation, the IO subchannel sequence \(\mathcal{F}_{n}\) asymptotically converges towards trace preservation, so that the maximal distillate it produces is asymptotically deterministic.
4. Finally, showing that the (asymptotically negligible but nonzero) trace deficit remaining can be fulfilled by completing each \(\mathcal{F}_{n}\) to a full channel \(\mathcal{E}_{n}=\mathcal{F}_{n}+\mathcal{G}_{n}\) using a subchannel \(\mathcal{G}_{n}\) that is also IO.

We encounter hurdles in the last couple of points--ones related to those of Conjecture 1, but also other, unrelated ones. 
Due to the added difficulties, we are inclined to place less confidence in Conjecture 2. We then build on these conjectures to speculate on the requisite coherent measurement cost of _non_-maximal distillation in the asymptotic limit, and on a possible tradeoff between the amount of coherent measurement used and the distilled yield.

Apart from the main results summarized above, we also make some progress in understanding cases where the target state is a nonuniform superposition (Section 4.2), the behaviour of certain SIO monotones under constrained IO (Appendix D), and connections between coherence distillation and certain linear-algebraic structures that we call _decoupling schemes_ (Appendix E). With this summary, let us now dive into our problem in full detail.

## 2 Technical preliminaries

Throughout this paper, we will use upright Roman symbols (e.g. A) for system labels; these labels will be rendered as superscripts where we deem them necessary, and omitted altogether where clear from context. For a system A, \(\mathcal{H}^{\text{A}}\) will denote its associated Hilbert space and \(A\) the dimensionality thereof. We will use \(a\) for the classical symbols labelling A's incoherent basis vectors \(\ket{a}\), and \(\mathcal{A}\) for the collection thereof (i.e., A's "classical alphabet"). For a generic subspace \(\mathcal{V}\subseteq\mathcal{H}^{\text{A}}\), we will denote the space of its linear automorphisms by \(\mathcal{L}\left(\mathcal{V}\right)\) and the projector onto \(\mathcal{V}\) by \(\mathbb{1}_{\mathcal{V}}\), but use the abbreviation \(\mathbb{1}^{\text{A}}\equiv\mathbb{1}_{\mathcal{H}^{\text{A}}}\) in the case of the entire Hilbert space associated with some system, \(\mathbb{1}_{\mathcal{I}}\) for the projector onto \(\text{span}\left\{\ket{a}:\;a\in\mathcal{I}\subseteq\mathcal{A}\right\}\), and \(\mathbb{1}_{\rho}\equiv\mathbb{1}_{\text{supp}\rho}\) in the case of the support of a positive-semidefinite operator. All vector spaces mentioned will be implicitly assumed to be finite-dimensional. We will use the term "basis" to mean specifically "orthonormal basis".

We will use the established notation \(S(\cdot)\) for the von Neumann entropy, \(H(\cdot)\) for the Shannon entropy, and \(S(\cdot\|\cdot)\) for the quantum relative entropy. Though we will also name some variables \(S\), this latter use of the symbol will be clear from context, with no scope for confusion. We will abusively express von Neumann (Shannon) entropies with their arguments _density operators_ (_distributions_) instead of the systems (random variables) distributed thereby; e.g., \(H(\mathbf{p})\equiv H(X)_{(p_{x})_{x}}\). For a Hermitian operator \(T\), we will use the notation \(\left\|T\right\|_{p}\) (with \(p\in\mathbb{R}\)) for its \(p\)-Schatten norm, defined as the \(L_{p}\) norm of the vector of its singular values.

In the foregoing, we used the term "(sub)channel" a few times. A channel is a quantum operation, or state transformation, induced locally on a quantum system by a unitary interaction with an auxiliary system with which it is initially uncorrelated. 
Mathematically, a channel mapping states of some system A to states of M can be identified with a completely-positive (CP) trace-preserving (TP) map \(\mathcal{E}:\,\mathcal{L}\left(\mathcal{H}^{\text{A}}\right)\to\mathcal{L}\left(\mathcal{H}^{\text{M}}\right)\); we will often identify the input and output systems with the shorthand \(\mathcal{E}^{\text{A}\to\text{M}}\), and further abbreviate as \(\mathcal{E}^{\text{A}}\equiv\mathcal{E}^{\text{A}\to\text{A}}\) when the systems are identical. A subchannel is a CP trace-_nonincreasing_ map, and corresponds with the action of a quantum operation conditioned on one of several possible outcomes, each occurring with some (initial state-dependent) probability; a channel is a special case with only one outcome, occurring deterministically.

Footnote 7: We will use M for the output system since we focus on distillation, a context where denoting the output M seems common practice.

Any subchannel can be specified operationally through a so-called _dilation_: a (non-unique) specification of an auxiliary system, a unitary interaction, and an auxiliary measurement that collectively implement it. Alternately, it can be described somewhat more abstractly through a set (also non-unique) of operators called _Kraus operators_. In the remainder, we will assume the reader has a basic background of these concepts; for a detailed introduction, see [18].

For our purposes, we will need the following definitions for classes of quantum operations. We will primarily have IO in consideration, but most of our results will apply to the largest class of free operations, MIO.

**Definition 1** (Coherence-non-creating channel).: \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\) is a _coherence-non-creating channel_ if it maps all incoherent inputs to incoherent outputs; that is, for all \(a\in\mathcal{A}\), \[\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\left(\left|a\right\rangle\left\langle a\right|\right)=\Delta^{\mathrm{M}}\circ\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\left(\left|a\right\rangle\left\langle a\right|\right). \tag{11}\] Since the class of such channels has been established in the literature as the _maximal (class of) incoherent operations_ (MIO), we will also use the abbreviation MIO.

**Remark 2.1**.: The channels in the MIO class have occasionally been called "maximally incoherent operations"--a maximally irresponsible misnomer, considering that MIO are arguably _minimally_ incoherent! To avoid such awkwardness, we take the liberty to call them coherence-non-creating channels.

**Definition 2** (Incoherent operation).: A (sub)channel \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\) is an _incoherent operation_ (IO) if it admits a Kraus operator decomposition \(\mathcal{E}(\cdot)=\sum_{c\in\mathcal{C}}K_{c}(\cdot)K_{c}^{\dagger}\) such that \(\forall\ c\in\mathcal{C}\) and \(a\in\mathcal{A}\), \[K_{c}\left|a\right\rangle\propto\left|m\equiv g_{c}(a)\right\rangle, \tag{12}\] where \(g_{c}:\,\mathcal{A}\rightarrow\mathcal{M}\) is a function.

**Definition 3** (Strictly incoherent operation).: An IO is a _strictly incoherent operation_ (SIO) if it admits a Kraus operator decomposition \(\mathcal{E}(\cdot)=\sum_{c\in\mathcal{C}}K_{c}(\cdot)K_{c}^{\dagger}\) that, in addition to satisfying (12), has every \(g_{c}\) invertible on its image. 
In other words, \(\forall\ c\in\mathcal{C}\) and \(m\in g_{c}\left(\mathcal{A}\right)\), \[K_{c}^{\dagger}\left|m\right\rangle\propto\left|a\equiv g_{c}^{-1}(m)\right\rangle, \tag{13}\] where \(g_{c}^{-1}(m)\) is unique and well-defined.

**Remark 2.2**.: Recall that we used \(\Delta(\cdot)\) earlier for the diagonal part of the argument in the incoherent basis. In fact, this mapping is a channel--the dephasing channel--realizable by SIO: \(\Delta(\cdot)=\sum_{a}\left|a\right\rangle\left\langle a\right|(\cdot)\left|a\right\rangle\left\langle a\right|\) (where \(a\) runs over the incoherent labels). Under our assumption that the incoherent basis of a composite system is the tensor product of the constituent systems' incoherent bases, the dephasing channel inherits this convenient multiplicativity: \(\Delta^{\mathrm{AB}}=\Delta^{\mathrm{A}}\otimes\Delta^{\mathrm{B}}\).

Finally, the formal definition of the quantity that figures in our main results:

**Definition 4** (Irretrievable coherence).: We define the _irretrievable coherence_ of a state \(\rho\) as \[\ell\left(\rho\right):=C_{f}\left(\rho\right)-C_{r}\left(\rho\right), \tag{14}\] where \(C_{r}(\rho):=\min_{\sigma:\,\Delta(\sigma)=\sigma}S\left(\rho\middle\|\sigma\right)=S\left[\Delta(\rho)\right]-S(\rho)\) is the _relative entropy of coherence_ and \[C_{f}(\rho)=\min_{p_{x}\geq 0;\,\sum_{x}p_{x}\phi_{x}=\rho}\sum_{x}p_{x}C_{r}\left(\phi_{x}\right) \tag{15}\] its convex-roof extension, the _coherence of formation_.

In addition to these coherence measures, we will also use the _coherence rank_ \(r_{C}\), defined for pure states as \(r_{C}(\psi)=\mathrm{rank}\Delta(\psi)\) and for mixed states as \[r_{C}(\rho)=\min_{p_{x}\geq 0;\,\sum_{x}p_{x}\phi_{x}=\rho}\max_{x}r_{C}\left(\phi_{x}\right). \tag{16}\] Note that \(\log_{2}r_{C}(\psi)\geq C_{r}(\psi)\) and therefore \(\log_{2}r_{C}(\rho)\geq C_{f}(\rho)\) in general.

## 3 The coherent measurement cost and target witnesses

Our first order of business is to specify what we mean by "coherent measurement cost".

**Remark 3.1**.: For a generic channel \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\) with a Kraus operator decomposition \(\mathcal{E}(\cdot)\equiv\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\), each of its Kraus operators can be expressed (in the incoherent basis) in terms of its rows \(\left|w_{c,m}\right\rangle^{\mathrm{A}}\) as \[K_{c}=\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}. \tag{17}\] In particular, IO Kraus operators have rows of the form \[\left|w_{c,m}\right\rangle=\sum_{a\in g_{c}^{-1}(m)}\xi_{ca}\left|a\right\rangle. \tag{18}\] Notice that, by virtue of the IO condition (12), the \(\left|w_{c,m}\right\rangle\) for any fixed \(c\) and distinct \(m\) values involve disjoint subsets of the incoherent basis: \[\mathrm{Tr}\left[\Delta\left(\left|w_{c,m}\right\rangle\left\langle w_{c,m}\right|\right)\,\Delta\left(\left|w_{c,m^{\prime}}\right\rangle\left\langle w_{c,m^{\prime}}\right|\right)\right]\propto\delta_{mm^{\prime}}. \tag{19}\] This also implies, of course, that \(\left\langle w_{c,m}\middle|w_{c,m^{\prime}}\right\rangle\propto\delta_{mm^{\prime}}\).

As we develop various technical tools and results below, we shall bear in mind our ultimate aim--namely, to estimate the minimum measurement coherence required for performing certain tasks. For a given channel in the general form (17), we shall identify this with the coherence (suitably quantified) of the vectors \(\left|w_{c,m}\right\rangle^{\mathrm{A}}\). 
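To make this row structure concrete, here is a minimal numerical sketch (ours, purely illustrative; the specific Kraus operators are an ad hoc choice, not a construction used elsewhere in this paper) that checks the IO condition (12), the row-disjointness (19), and the rows' coherence ranks for a two-outcome IO on a four-dimensional input. Incidentally, this channel maps \(\Psi_{4}\) deterministically to \(\Psi_{2}\), and each row \(\left|w_{c,m}\right\rangle\) has coherence rank 2, i.e. the implementation measures coherently over two incoherent basis elements at a time.

```python
import numpy as np

# Incoherent labels: A = {0,1,2,3}, M = {0,1};  g_0 = g_1 : {0,1}->0, {2,3}->1.
s = 1 / np.sqrt(2)
K0 = np.array([[s,  s, 0., 0.],     # rows are <w_{c,m}| in the incoherent basis
               [0., 0., s,  s]])
K1 = np.array([[s, -s, 0., 0.],
               [0., 0., s, -s]])

# Trace preservation: sum_c K_c^dag K_c = 1^A.
assert np.allclose(K0.T @ K0 + K1.T @ K1, np.eye(4))

# IO condition (12): each K_c sends every |a> to a multiple of a single |m>,
# i.e. each column has at most one nonzero entry.
for K in (K0, K1):
    assert all(np.count_nonzero(K[:, a]) <= 1 for a in range(4))

# Row-disjointness (19): for fixed c, distinct rows use disjoint labels.
for K in (K0, K1):
    assert np.allclose(np.abs(K[0]) * np.abs(K[1]), 0)

# Coherence rank of each row |w_{c,m}>: number of nonzero amplitudes.
print([int(np.count_nonzero(K[m])) for K in (K0, K1) for m in (0, 1)])  # [2,2,2,2]

# The channel distills Psi_2 from Psi_4: outcome c=0 occurs with probability 1.
psi4 = np.ones(4) / 2
print(K0 @ psi4, K1 @ psi4)  # ~ |Psi_2> = [s, s], and the zero vector
```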
While additional auxiliary measurements may be required, our method accounts for all of the coherent measurement _on the space of the input_ \(\mathrm{A}\). On this note, we also assume that \(\mathrm{A}\) is just big enough to contain the incoherent basis elements occurring with nonzero amplitudes in the input under consideration. This brings about no loss in generality, since any channel action outside this subspace is irrelevant for matters concerning the coherent measurement cost of distilling from the given input (except in possibly inflating the cost superfluously).

The following construction will prove useful in characterizing the coherence of the vectors \(\left|w_{c,m}\right\rangle\) and, thereby, the coherent measurement cost of the channel they implement.

**Definition 5** (Target witness).: Given some channel \(\mathcal{E}^{\mathrm{A}\to\mathrm{M}}(\cdot)\equiv\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\) with a set of Kraus operators \(K_{c}:=\sum_{m}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}\), and a "target" pure state \(\left|\alpha\right\rangle^{\mathrm{M}}=\sum_{m}\alpha_{m}\left|m\right\rangle\), define \[\left|w_{c}^{\alpha}\right\rangle^{\mathrm{A}}:=\sum_{m}\alpha_{m}\left|w_{c,m}\right\rangle^{\mathrm{A}}, \tag{20}\] and through these, the _target witness_ \[T_{\mathcal{E}}^{\alpha}:=\sum_{c}\left|w_{c}^{\alpha}\right\rangle\left\langle w_{c}^{\alpha}\right|. \tag{21}\]

In the IO case, as observed above, the various \(\left|w_{c,m}\right\rangle\) for a given \(c\) involve disjoint incoherent subbases. As such, their coherence is fully reflected in that of the \(\left|w_{c}^{\alpha}\right\rangle\), enabling us to use the latter to put lower bounds on various measures of the \(\left|w_{c,m}\right\rangle\)'s coherence. While this is no longer true for more general channels, lower bounds on coherence measures of the \(\left|w_{c}^{\alpha}\right\rangle\) place _some_ constraints on the collective structure of the \(\left|w_{c,m}\right\rangle\). In particular, the _coherence rank_ (i.e. the number of incoherent basis vectors with nonzero amplitudes) of \(\left|w_{c}^{\alpha}\right\rangle\) cannot be larger than the sum of those of the \(\left|w_{c,m}\right\rangle\)'s; this will allow us to put a lower bound on the average row coherence rank even under the more general MIO class of operations. The \(\left|w_{c}^{\alpha}\right\rangle\), in turn, constitute a convex decomposition of \(T_{\mathcal{E}}\), whereby convex-roof coherence quantifiers (e.g. \(C_{f}\)) applied on the latter yield bounds on the values that corresponding pure-state coherence quantifiers (e.g. \(C_{r}\)) take on the \(\left|w_{c}^{\alpha}\right\rangle\).

That explains why the operator constructed above is relevant to our problem of estimating the coherent measurement cost. But we have not explained the terms "target" and "target witness". Moreover, our notation \(T_{\mathcal{E}}^{\alpha}\) for this operator seems abusive: surely, we used the specific Kraus operators \(K_{c}\) in its construction? The following will clarify both of these matters.

**Observation 3.2**.: _Given a channel \(\mathcal{E}\) and a target \(\left|\alpha\right\rangle=\sum_{m}\alpha_{m}\left|m\right\rangle\),_ \[T_{\mathcal{E}}^{\alpha}=\mathcal{E}^{\dagger}\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right), \tag{22}\] _where \(\mathcal{E}^{\dagger}\) is the Hilbert-Schmidt adjoint map of \(\mathcal{E}\). 
Consequently, for any operator \(X\in\mathcal{L}\left(\mathcal{H}^{\mathrm{A}}\right)\),_ \[\mathrm{Tr}\left(XT_{\mathcal{E}}^{\alpha}\right)=\left\langle\alpha\right|\mathcal{E}\left(X\right)\left|\alpha\right\rangle. \tag{23}\] _In particular, by choosing \(X\) to be a density operator, we see that \(T_{\mathcal{E}}^{\alpha}\) is an observable whose expectation value under any given state quantifies the probability with which \(\mathcal{E}\) maps the state to the target--hence its name._

Proof.: Using the definition of \(K_{c}\) in terms of the \(\left|w_{c,m}\right\rangle\), \[T_{\mathcal{E}}^{\alpha} =\sum_{c}\left|w_{c}^{\alpha}\right\rangle\left\langle w_{c}^{\alpha}\right|\] \[=\sum_{c,m,m^{\prime}}\alpha_{m}\left|w_{c,m}\right\rangle\left\langle w_{c,m^{\prime}}\right|\alpha_{m^{\prime}}^{*}\] \[=\sum_{c,m,m^{\prime}}\left|w_{c,m}\right\rangle\left\langle m\right|\alpha_{m}\left|m\right\rangle\left\langle m^{\prime}\right|\alpha_{m^{\prime}}^{*}\left|m^{\prime}\right\rangle\left\langle w_{c,m^{\prime}}\right|\] \[=\sum_{c}K_{c}^{\dagger}\left|\alpha\right\rangle\left\langle\alpha\right|K_{c}=\mathcal{E}^{\dagger}\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right). \tag{24}\]

**Observation 3.3**.: _For any given channel \(\mathcal{E}\) and target \(\left|\alpha\right\rangle\), the target witness \(T_{\mathcal{E}}^{\alpha}\) has the following properties:_

1. \(0\leq T_{\mathcal{E}}^{\alpha}\leq\mathbb{1}^{\mathrm{A}}\)_._
2. _If_ \(\mathcal{E}\) _is an MIO, then_ \[\mathrm{Tr}\left(\Delta[X]T_{\mathcal{E}}^{\alpha}\right)=\mathrm{Tr}\left(\Delta[X]\mathcal{E}^{\dagger}\left[\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\right]\right)\] (25) _for any_ \(X\in\mathcal{L}\left(\mathcal{H}^{\mathrm{A}}\right)\)_. Consequently,_ \(\mathrm{Tr}\,T_{\mathcal{E}}^{\alpha}=\mathrm{Tr}\left(\mathbb{1}^{\mathrm{A}}\;T_{\mathcal{E}}^{\alpha}\right)=\mathrm{Tr}\,\mathcal{E}^{\dagger}\left[\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\right]\)_._

Proof.: Point 1 follows immediately from the target-attainment probability interpretation in Observation 3.2. To show 2, suppose \(\mathcal{E}\) is an MIO. Then, for any incoherent basis element \(\left|a\right\rangle^{\mathrm{A}}\), the output \(\mathcal{E}\left(\left|a\right\rangle\left\langle a\right|\right)\) is incoherent by the MIO condition (11), and therefore, by Observation 3.2, \[\mathrm{Tr}\left(\left|a\right\rangle\left\langle a\right|\;T_{\mathcal{E}}^{\alpha}\right) =\mathrm{Tr}\left[\left|a\right\rangle\left\langle a\right|\;\mathcal{E}^{\dagger}\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\right]\] \[=\mathrm{Tr}\left[\mathcal{E}\left(\left|a\right\rangle\left\langle a\right|\right)\;\left|\alpha\right\rangle\left\langle\alpha\right|\right]\] \[=\mathrm{Tr}\left[\mathcal{E}\left(\left|a\right\rangle\left\langle a\right|\right)\;\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\right]\] \[=\mathrm{Tr}\left[\left|a\right\rangle\left\langle a\right|\;\mathcal{E}^{\dagger}\circ\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\right]. \tag{26}\] Since any incoherent operator \(\Delta[X]\) is a linear combination of the \(\left|a\right\rangle\left\langle a\right|\), point 2 follows. 
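As a quick numerical sanity check of Observation 3.2 (and of point 1 of Observation 3.3), the following sketch, ours and purely illustrative, builds a generic channel from a random isometry dilation and verifies \(\mathrm{Tr}\left(\rho\,T_{\mathcal{E}}^{\alpha}\right)=\left\langle\alpha\right|\mathcal{E}(\rho)\left|\alpha\right\rangle\). Note that these two statements hold for arbitrary channels, not only MIO, so a random channel suffices here.

```python
import numpy as np

rng = np.random.default_rng(0)
A, M, E = 4, 2, 3                       # input, output, environment dimensions

# Generic channel from an isometry V : C^A -> C^M (x) C^E, with Kraus
# operators K_e = (1_M (x) <e|) V.  QR of a random matrix yields an isometry.
X = rng.normal(size=(M * E, A)) + 1j * rng.normal(size=(M * E, A))
V, _ = np.linalg.qr(X)                  # V has orthonormal columns
K = V.reshape(E, M, A)                  # K[e] is the e-th Kraus operator

assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(A))   # TP check

alpha = np.ones(M) / np.sqrt(M)         # target |alpha> = |Psi_M>
proj = np.outer(alpha, alpha.conj())

# Target witness (22): T = E^dag(|alpha><alpha|) = sum_e K_e^dag |a><a| K_e.
T = sum(k.conj().T @ proj @ k for k in K)

# Observation 3.3, point 1: 0 <= T <= 1.
eigs = np.linalg.eigvalsh(T)
assert eigs.min() > -1e-12 and eigs.max() < 1 + 1e-12

# Observation 3.2: Tr(rho T) = <alpha| E(rho) |alpha> for a random state rho.
G = rng.normal(size=(A, A)) + 1j * rng.normal(size=(A, A))
rho = G @ G.conj().T / np.trace(G @ G.conj().T).real
E_rho = sum(k @ rho @ k.conj().T for k in K)
assert np.isclose(np.trace(rho @ T).real, (alpha.conj() @ E_rho @ alpha).real)
```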
Since this is all we use to derive most of our results, the results are valid for all channels with this property (e.g. all unital channels), although we will mention only MIO for clarity of presentation.

**Remark 3.5**.: In all our subsequent use of this construction, the target \(\left|\alpha\right\rangle\) will be clear from the context, and therefore we will use the simplified notation \(\left|w_{c}\right\rangle\) and \(T_{\mathcal{E}}\).

## 4 Single-shot distillation

We are now ready to apply the tool of target witnesses to the study of distillation. We first consider the task of distilling in the single-shot regime, i.e. from finite-sized inputs away from the asymptotic limit. Let us start with exact distillation: an MIO \(\mathcal{E}\), acting on an input \(\rho^{\mathrm{A}}\), is required to map it exactly to a desired target \(\left|\alpha\right\rangle\left\langle\alpha\right|^{\mathrm{M}}\). Since the desired output is pure and the transformation is required to be exact, \(\mathcal{E}\) must in fact uniformly map the entire space \(\mathcal{L}(\mathcal{V})\), with \(\mathcal{V}\equiv\mathrm{supp}\rho\subseteq\mathcal{H}^{\mathrm{A}}\), to \(\left|\alpha\right\rangle\left\langle\alpha\right|\) (up to a scalar factor)8; as such, the only relevant property of \(\rho\) is the structure of its support \(\mathcal{V}\).

Footnote 8: We will hereafter abbreviate this condition, abusively, as \(\mathcal{E}\) mapping \(\mathcal{V}\) to \(\left|\alpha\right\rangle\).

**Lemma 4.1**.: _If a channel \(\mathcal{E}^{\mathrm{A}\to\mathrm{M}}\) maps a subspace \(\mathcal{V}\subset\mathcal{H}^{\mathrm{A}}\) exactly to a target \(\left|\alpha\right\rangle\), then the target witness \(T_{\mathcal{E}}\) has the structure_ \[T_{\mathcal{E}}=\mathbb{1}_{\mathcal{V}}+T_{\perp}, \tag{27}\] _where \(T_{\perp}\) is supported entirely on \(\mathcal{V}_{\perp}\subset\mathcal{H}^{\mathrm{A}}\), the subspace complementary to \(\mathcal{V}\)._

Proof.: Since \(\mathcal{E}\) is required to map all of \(\mathcal{V}\) to \(\left|\alpha\right\rangle\), Observation 3.2 implies that \(\left\langle v\right|T_{\mathcal{E}}\left|v\right\rangle=\left\langle v\middle|v\right\rangle\) for all \(\left|v\right\rangle\in\mathcal{V}\), whereby \(\mathbb{1}_{\mathcal{V}}T_{\mathcal{E}}\mathbb{1}_{\mathcal{V}}=\mathbb{1}_{\mathcal{V}}\). Furthermore, \(0\leq T_{\mathcal{E}}\leq\mathbb{1}^{\mathrm{A}}\) (Observation 3.3, point 1) then implies \(\mathbb{1}_{\mathcal{V}}T_{\mathcal{E}}\mathbb{1}_{\mathcal{V}_{\perp}}=0\).

All our main results will follow by applying the above simple result (or its variants) to progressively more complex cases of distillation.

### Maximal distillation

Let us start with the "idealized maximal distillation" case motivated by Section 1.5.

**Lemma 4.2**.: _If an MIO channel \(\mathcal{E}^{\mathrm{A}\to\mathrm{M}}\) maps an \(S\)-dimensional subspace \(\mathcal{V}\subset\mathcal{H}^{\mathrm{A}}\) exactly to the \(M\)-fold standard coherent state \(\Psi_{M}\) with \(S=A/M\) (the divisibility being assumed),_ \[T_{\mathcal{E}}=\mathbb{1}_{\mathcal{V}}. \tag{28}\]

Proof.: By virtue of Lemma 4.1, it remains only to show that \(T_{\perp}=0\). Applying Observation 3.3, point 2 to \(\left|\alpha\right\rangle\equiv\left|\Psi_{M}\right\rangle\), \[\mathrm{Tr}\,T_{\mathcal{E}}=\mathrm{Tr}\,\mathcal{E}^{\dagger}\left[\Delta\left(\Psi_{M}\right)\right]=\mathrm{Tr}\,\mathcal{E}^{\dagger}\left(\frac{\mathbb{1}^{\mathrm{M}}}{M}\right). 
\tag{29}\]

Since A contains no \(\mathcal{V}\)-extraneous incoherent labels (see Remark 3.1), and since all of \(\mathcal{V}\) must be mapped to \(\Psi_{M}\), the image of \(\mathcal{E}\) is supported on the \(M\)-dimensional space spanned by the incoherent components of \(\left|\Psi_{M}\right\rangle\). The TP property of \(\mathcal{E}\) implies that \(\mathcal{E}^{\dagger}\) is unital on this image: \(\mathcal{E}^{\dagger}\left(\mathbb{1}^{\mathrm{M}}\right)=\mathbb{1}^{\mathrm{A}}\). Therefore, \[\mathrm{Tr}\,T_{\mathcal{E}}=\mathrm{Tr}\,\frac{\mathbb{1}^{\mathrm{A}}}{M}=\frac{A}{M}=S. \tag{30}\] Thus, \(\mathrm{Tr}\,T_{\perp}=0\). Since \(T_{\mathcal{E}}\geq 0\), this implies \(T_{\perp}=0\).

By applying coherence measures on Lemma 4.2, we have the following precursor to our first main result.

**Proposition 4.3**.: _Let \(\mathcal{E}^{\mathrm{A}\to\mathrm{M}}(\cdot)\equiv\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\), with Kraus operators \(K_{c}:=\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}\), be an MIO channel that distills the standard resource \(\Psi_{M}\) from a subspace \(\mathcal{V}\subseteq\mathcal{H}^{\mathrm{A}}\) of dimensionality \(S=A/M\). Denoting \(t_{c,m}:=\left\langle w_{c,m}\middle|w_{c,m}\right\rangle\) and \(t_{c}:=M^{-1}\sum_{m}t_{c,m}\), define the normalized states \(\left|\phi_{c,m}\right\rangle:=t_{c,m}^{-1/2}\left|w_{c,m}\right\rangle\), \(\left|\phi_{c}\right\rangle:=(Mt_{c})^{-1/2}\sum_{m}\left|w_{c,m}\right\rangle=\sum_{m}\sqrt{\frac{t_{c,m}}{Mt_{c}}}\left|\phi_{c,m}\right\rangle\), and \(\tau_{\mathcal{V}}:=\mathbb{1}_{\mathcal{V}}/S\); note that \(\left(t_{c,m}/\left[Mt_{c}\right]\right)_{m}\) and \(\left(t_{c}/S\right)_{c}\) are normalized distributions. Then, the following must hold:_

1. \(\sum_{c}\frac{t_{c}}{S}C_{r}\left(\phi_{c}\right)\geq C_{f}\left(\tau_{\mathcal{V}}\right)\)_;_
2. \(\max_{c}r_{C}\left(\phi_{c}\right)\geq r_{C}\left(\tau_{\mathcal{V}}\right)\)_;_
3. \(\sum_{m}r_{C}\left(\phi_{c,m}\right)\geq r_{C}\left(\phi_{c}\right)\) _for all_ \(c\)_;_
4. \(\log_{2}M\leq C_{r}\left(\tau_{\mathcal{V}}\right)\)_._

_Consequently,_ \[\max_{c,m}r_{C}\left(\phi_{c,m}\right)\geq\frac{\max\limits_{c}\sum\limits_{m}r_{C}\left(\phi_{c,m}\right)}{M}\geq\frac{\max\limits_{c}r_{C}\left(\phi_{c}\right)}{M}\geq\left\{\begin{array}{c}\frac{r_{C}\left(\tau_{\mathcal{V}}\right)}{M}\\ \frac{2^{\sum_{c}\frac{t_{c}}{S}\log_{2}r_{C}\left(\phi_{c}\right)}}{M}\geq\frac{2^{\sum_{c}\frac{t_{c}}{S}C_{r}\left(\phi_{c}\right)}}{M}\end{array}\right\}\geq\frac{2^{C_{f}\left(\tau_{\mathcal{V}}\right)}}{M}\geq 2^{\ell\left(\tau_{\mathcal{V}}\right)}. \tag{31}\]

_Furthermore, if \(\mathcal{E}\) is also IO, then for all \(c\),_ \[\log_{2}M+\sum_{m\in\mathcal{M}}\frac{t_{c,m}}{Mt_{c}}C_{r}\left(\phi_{c,m}\right)\geq C_{r}\left(\phi_{c}\right), \tag{32}\] _which adds more detail to the inequality chains in (31)._

Proof.: Applying the appropriate normalization factors to (28), \(\tau_{\mathcal{V}}=\sum_{c}\left(t_{c}/S\right)\phi_{c}\), whence points 1 and 2 follow. Point 3 follows from the sub-additivity of the coherence rank under vector addition. For point 4, since \(\mathcal{E}\left(\tau_{\mathcal{V}}\right)=\Psi_{M}\) and the relative entropy of coherence is a monotone under MIO, \(C_{r}\left(\tau_{\mathcal{V}}\right)\geq C_{r}\left(\Psi_{M}\right)=\log_{2}M\).

Now suppose the \(K_{c}\) are IO Kraus operators, so that for any given \(c\), the distinct \(\left|\phi_{c,m}\right\rangle\) don't overlap in their classical symbols. 
Then, \(\Delta\left(\phi_{c}\right)\cong\bigoplus_{m}\frac{t_{c,m}}{Mt_{c}}\Delta\left(\phi_{c,m}\right)\) and therefore \[C_{r}\left(\phi_{c}\right) =S\left[\Delta\left(\phi_{c}\right)\right]\] \[=H\left[\left(\frac{t_{c,m}}{Mt_{c}}\right)_{m}\right]+\sum_{m\in\mathcal{M}}\frac{t_{c,m}}{Mt_{c}}S\left[\Delta\left(\phi_{c,m}\right)\right]\] \[\leq\log_{2}M+\sum_{m\in\mathcal{M}}\frac{t_{c,m}}{Mt_{c}}C_{r}\left(\phi_{c,m}\right). \tag{33}\]

**Remark 4.4**.: In fact, in the IO case, \(t_{c,m}=t_{c}\) for all \(\left(c,m\right)\) and, furthermore, \[C_{r}\left(\phi_{c}\right)=\log_{2}M+\sum_{m\in\mathcal{M}}M^{-1}C_{r}\left(\phi_{c,m}\right). \tag{34}\] To see this, note that since \(T_{\mathcal{E}}-\left|w_{c}\right\rangle\left\langle w_{c}\right|\geq 0\), \(\left|w_{c}\right\rangle\in\mathcal{V}\)\(\forall c\). Exploiting the action of the \(K_{c}\)'s on \(\mathcal{V}\), \[K_{c}\left|w_{c}\right\rangle \propto\left|\Psi_{M}\right\rangle\] \[\Rightarrow\left(\sum_{m_{1}}\left|m_{1}\right\rangle\left\langle w_{c,m_{1}}\right|\right)\left(\sum_{m_{2}}\left|w_{c,m_{2}}\right\rangle\right) \propto\sum_{m}\left|m\right\rangle\] \[\Rightarrow\sum_{m}\left|m\right\rangle\left\langle w_{c,m}\middle|w_{c,m}\right\rangle \propto\sum_{m}\left|m\right\rangle, \tag{35}\] the latter owing again to \(\left\langle w_{c,m_{1}}\middle|w_{c,m_{2}}\right\rangle\propto\delta_{m_{1}m_{2}}\). Consequently, \(\left\langle w_{c,m}\middle|w_{c,m}\right\rangle\) must be \(m\)-independent, which by normalization yields \(t_{c,m}=t_{c}\) for all \(\left(c,m\right)\). Hence, we can refine the observation preceding (33) to \(\Delta\left(\phi_{c}\right)\cong\bigoplus_{m}M^{-1}\Delta\left(\phi_{c,m}\right)\), in turn refining (33) to \[C_{r}\left(\phi_{c}\right) =H\left[\left(M^{-1}\right)_{m}\right]+\sum_{m\in\mathcal{M}}M^{-1}S\left[\Delta\left(\phi_{c,m}\right)\right]\] \[=\log_{2}M+\sum_{m\in\mathcal{M}}M^{-1}C_{r}\left(\phi_{c,m}\right). \tag{36}\]

Nevertheless, the essence of Proposition 4.3 is in bounding various measures of the collective coherence of the \(\phi_{c,m}\) in any IO Kraus operator representation of the channel in question. We will later prove a version of this result for more general cases, where the output is not a maximal \(\Psi_{M}\). In that context, we will see that a variant of the average coherence inequality (31) still holds, even though the properties discussed in this remark do not.

Note that the requisite rank of coherent measurements to implement the channel \(\mathcal{E}\) through the Kraus operators \(K_{c}\) is \[L:=\max_{c,m}r_{C}\left(\phi_{c,m}\right). \tag{37}\] With this operational interpretation, Proposition 4.3 immediately yields our first main result as a corollary, which we state without proof.

**Theorem 1**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\text{A}}\) to a standard coherent resource \(\Psi_{M}\) with \(M=A/S\) must involve coherent measurements over at least \(M^{-1}r_{C}\left(\tau_{\rho}\right)\geq M^{-1}\exp_{2}C_{f}\left(\tau_{\rho}\right)\geq\exp_{2}\ell\left(\tau_{\rho}\right)\) elements of A's incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\)._

**Remark 4.5**.: Proposition 4.3 is stronger than Theorem 1: it puts bounds not only on the maximal coherent measurement rank, but also on various collective or average coherence properties of the \(\left|\phi_{c,m}\right\rangle\). 
While the distribution governing this averaging is operationally appropriate only for an input satisfying \(\tau_{\rho}=\rho\) (i.e., one with a flat spectrum) and not otherwise, it is expected to approach the actual distribution of coherent measurement resources in the asymptotic limit (Section 5). As such, our bounds affect not only some outlying measurements occurring with small probabilities, but even the collective statistics of all involved measurements, especially in the asymptotic limit.

It is also notable that we can get variants of Proposition 4.3 by using different coherence quantifiers, although generic quantifiers may not admit a clean splitting of the coherence of each \(\phi_{c}\) into separate terms for \(\Psi_{M}\) and the \(\phi_{c,m}\). The essential structure is captured by Lemma 4.2, whereof each coherence quantifier illuminates certain specific facets. Nevertheless, we have explicated and highlighted the weaker form in Theorem 1 for two reasons. Firstly, applying Proposition 4.3 to an instance requires a detailed specification of all the Kraus operators, which may be arbitrarily numerous. On the other hand, Theorem 1 only requires specifying the largest coherent measurement rank used in implementing a channel. Secondly, we will see in Section 5 that in the asymptotic limit, the above-noted operational interpretation of the average coherence is expected to be not only necessary but also sufficient for maximal distillation.

We shall now extend the above results to more general output resources.

### Non-maximal distillation

Consider the distillation of a generic \(\left|\alpha\right\rangle=\sum_{m}\alpha_{m}\left|m\right\rangle\) (without loss of generality, we can assume that \(\alpha_{m}\in\mathbb{R}\) and \(\alpha_{m}>0\)). First, we have a counterpart to Lemma 4.2.

**Lemma 4.6**.: _Suppose \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\) is an MIO channel that distills \(\left|\alpha\right\rangle\left\langle\alpha\right|^{\mathrm{M}}\) from an \(S\)-dimensional subspace \(\mathcal{V}\subseteq\mathcal{H}^{\mathrm{A}}\). Let \(\tau_{\mathcal{V}}:=\mathbb{1}_{\mathcal{V}}/S\) as before. By normalizing the target witness \(T_{\mathcal{E}}\), define the density operator \(\tau_{\mathcal{E}}:=T_{\mathcal{E}}/\mathrm{Tr}T_{\mathcal{E}}\). Then, the following must hold:_

1. \(S_{\min}\leq\mathrm{Tr}\ T_{\mathcal{E}}\leq S_{\max}\)_, where_ \(S_{\min/\max}:=\alpha_{\min/\max}^{2}A\) _with_ \(\alpha_{\min/\max}:=\min/\max\alpha_{m}\)_;_
2. \(\tau_{\mathcal{E}}=p\tau_{\mathcal{V}}+\left(1-p\right)\tau_{\perp}\)_, with_ \(p\in\left[\frac{S}{S_{\max}},\frac{S}{S_{\min}}\right]\) _and_ \(\tau_{\perp}\) _a density operator supported on_ \(\mathcal{V}_{\perp}\)_._

Proof.: First, note that \(\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)=\sum_{m}\alpha_{m}^{2}\left|m\right\rangle\left\langle m\right|\). Thus, \[\alpha_{\min}^{2}\mathbb{1}^{\mathrm{M}}\leq\Delta\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)\leq\alpha_{\max}^{2}\mathbb{1}^{\mathrm{M}}, \tag{38}\] whence point 1 follows from the unitality condition \(\mathcal{E}^{\dagger}\left(\mathbb{1}^{\mathrm{M}}\right)=\mathbb{1}^{\mathrm{A}}\) and Observation 3.3, point 2. Then combining this with the result of Lemma 4.1, we obtain point 2.

This allows us to derive a non-maximal variant of Proposition 4.3, for which the following definition will be convenient. 
**Definition 6**.: Given \(\mathcal{P}\subseteq[0,1]\) and a density operator \(\tau^{\mathrm{A}}\), we define \[r_{C}^{\mathcal{P}}(\tau):=\inf_{p\in\mathcal{P},\,\mathrm{Tr}(\tau\tau_{\perp})=0}r_{C}\left[p\tau+\left(1-p\right)\tau_{\perp}\right]; \tag{39}\] \[C_{f}^{\mathcal{P}}(\tau):=\inf_{p\in\mathcal{P},\,\mathrm{Tr}(\tau\tau_{\perp})=0}C_{f}\left[p\tau+\left(1-p\right)\tau_{\perp}\right]; \tag{40}\] \[\ell^{\mathcal{P}}(\tau):=C_{f}^{\mathcal{P}}(\tau)-C_{r}(\tau), \tag{41}\] where \(\tau_{\perp}\) takes values in the set of density operators supported on \(\left(\mathrm{supp}\tau\right)_{\perp}\subset\mathcal{H}^{\mathrm{A}}\), the subspace complementary to \(\mathrm{supp}\tau\).

**Proposition 4.7**.: _Let \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}(\cdot)\equiv\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\), with \(K_{c}:=\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}\), be an MIO channel that distills \(\left|\alpha\right\rangle\) from an \(S\)-dimensional subspace \(\mathcal{V}\subseteq\mathcal{H}^{\mathrm{A}}\). Define \(\tau_{\mathcal{V}}\) and \(\tau_{\mathcal{E}}\) as in Lemma 4.6, and \(\tilde{S}:=\mathrm{Tr}\,T_{\mathcal{E}}\). Denoting \(t_{c,m}:=\left\langle w_{c,m}\middle|w_{c,m}\right\rangle\) and \(t_{c}:=\sum_{m}\alpha_{m}^{2}t_{c,m}\), define the normalized states \(\left|\phi_{c,m}\right\rangle:=t_{c,m}^{-1/2}\left|w_{c,m}\right\rangle\) and \(\left|\phi_{c}\right\rangle:=t_{c}^{-1/2}\sum_{m}\alpha_{m}\left|w_{c,m}\right\rangle=\sum_{m}\alpha_{m}\sqrt{\frac{t_{c,m}}{t_{c}}}\left|\phi_{c,m}\right\rangle\); note that \(\left(\alpha_{m}^{2}t_{c,m}/t_{c}\right)_{m}\) and \(\left(t_{c}/\tilde{S}\right)_{c}\) are now normalized distributions by Lemma 4.6. Finally, let \(\mathcal{P}:=\left[\frac{S}{S_{\max}},\frac{S}{S_{\min}}\right]\) with \(S_{\min/\max}:=\alpha_{\min/\max}^{2}A\). Then,_ \[\max_{c,m}r_{C}\left(\phi_{c,m}\right)\geq\frac{\max_{c}\sum_{m}r_{C}\left(\phi_{c,m}\right)}{M}\geq\frac{\max_{c}r_{C}\left(\phi_{c}\right)}{M}\geq\left\{\begin{array}{l}\frac{r_{C}^{\mathcal{P}}(\tau_{\mathcal{V}})}{M}\\ \frac{2^{\sum_{c}\frac{t_{c}}{\tilde{S}}\log_{2}r_{C}\left(\phi_{c}\right)}}{M}\geq\frac{2^{\sum_{c}\frac{t_{c}}{\tilde{S}}C_{r}\left(\phi_{c}\right)}}{M}\end{array}\right\}\geq\frac{2^{C_{f}^{\mathcal{P}}(\tau_{\mathcal{V}})}}{M}\geq 2^{\ell^{\mathcal{P}}(\tau_{\mathcal{V}})}. \tag{42}\]

_Furthermore, if \(\mathcal{E}\) is also IO, then for all \(c\),_ \[\log_{2}M+\sum_{m\in\mathcal{M}}\frac{\alpha_{m}^{2}t_{c,m}}{t_{c}}C_{r}\left(\phi_{c,m}\right)\geq C_{r}\left(\phi_{c}\right). \tag{43}\]

We omit the proof, since it follows exactly like that of Proposition 4.3. We leave further investigations concerning general \(\left|\alpha\right\rangle\)'s for future work. In the remainder of this paper, we will restrict ourselves to standard outputs \(\left|\alpha\right\rangle=\left|\Psi_{M}\right\rangle\), for which \(\alpha_{\min}=\alpha_{\max}=M^{-1/2}\) and so \(S_{\min}=S_{\max}=A/M\).

**Corollary 4.7.1**.: _Suppose \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}(\cdot)\equiv\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\), with \(K_{c}:=\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}\), is an MIO channel that distills \(\Psi_{M}\) from an \(S\)-dimensional subspace \(\mathcal{V}\subseteq\mathcal{H}^{\mathrm{A}}\). Let the operators \(\tau_{\mathcal{V}}\) and \(\tau_{\mathcal{E}}\) be as in Lemma 4.6, and define \(\tilde{S}:=\mathrm{Tr}\,T_{\mathcal{E}}=A/M\). 
Denoting \(t_{c,m}:=\left\langle w_{c,m}\middle|w_{c,m}\right\rangle\) and \(t_{c}:=M^{-1}\sum_{m}t_{c,m}\), define the normalized states \(\left|\phi_{c,m}\right\rangle:=t_{c,m}^{-1/2}\left|w_{c,m}\right\rangle\) and \(\left|\phi_{c}\right\rangle:=\left(Mt_{c}\right)^{-1/2}\sum_{m}\left|w_{c,m}\right\rangle=\sum_{m}\sqrt{\frac{t_{c,m}}{Mt_{c}}}\left|\phi_{c,m}\right\rangle\). Then, the inequality chains in (42) hold with \(\mathcal{P}=\left\{S/\tilde{S}\right\}\). Furthermore, if \(\mathcal{E}\) is also IO, then (32) holds for all \(c\)._

This immediately begets our next main result, whose proof mirrors that of Theorem 1 and is therefore omitted.

**Theorem 2**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\mathrm{A}}\) to a standard coherent resource \(\Psi_{M}\) with \(M=pA/S\) must involve coherent measurements over at least \(M^{-1}r_{C}^{\left\{p\right\}}\left(\tau_{\rho}\right)\geq M^{-1}\exp_{2}C_{f}^{\left\{p\right\}}\left(\tau_{\rho}\right)\geq\exp_{2}\ell^{\left\{p\right\}}\left(\tau_{\rho}\right)\) elements of \(\mathrm{A}\)'s incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\)._

**Remark 4.8**.: Our results on maximal distillation are in fact corollaries of the non-maximal versions. But we deemed the former to be of sufficient importance to warrant the order of presentation that we have chosen.

### Approximate distillation

Let us now consider the task of approximate distillation, where we only require an output that is close enough (as quantified by some parameter) to a standard resource. Formally, given an input \(\rho^{\mathrm{A}}\) and an error tolerance \(\epsilon\in[0,1]\), we shall require that the action of a channel \(\mathcal{E}\) satisfy \[F\left[\mathcal{E}(\rho),\Psi_{M}\right]\geq 1-\epsilon, \tag{44}\] where \(F(\sigma,\tau):=\left(\operatorname{Tr}\sqrt{\sqrt{\sigma}\,\tau\sqrt{\sigma}}\right)^{2}\) is the _Uhlmann-Jozsa fidelity_. Since \(\Psi_{M}\) is pure, the condition (44) simplifies as \(F\left[\mathcal{E}(\rho),\Psi_{M}\right]=\left\langle\Psi_{M}\right|\mathcal{E}(\rho)\left|\Psi_{M}\right\rangle=\operatorname{Tr}\left(\rho T_{\mathcal{E}}\right)\) (using Observation 3.2). Thus, \[\operatorname{Tr}\left(\rho T_{\mathcal{E}}\right)\geq 1-\epsilon. \tag{45}\]

Notably, unlike in exact distillation where only the space \(\operatorname{supp}\rho\) mattered, here the detailed structure of \(\rho\) must be taken into account. Say \(\rho=\sum_{s}r_{s}\psi_{s}\) is an eigendecomposition. Denoting \(r_{\max/\min}:=\max/\min_{s}r_{s}\), we have \(\mathbb{1}_{\rho}=\sum_{s}\psi_{s}\geq\sum_{s}\left(r_{s}/r_{\max}\right)\psi_{s}=\rho/r_{\max}\). Therefore, (45) implies \[\operatorname{Tr}\left(\mathbb{1}_{\rho}T_{\mathcal{E}}\right)\geq\frac{1-\epsilon}{r_{\max}}. \tag{46}\]

Meanwhile, another consequence of (45) is as follows: since each \(s\) term is weighted by an \(r_{s}\) factor, \(\left\langle\psi_{s}\right|T_{\mathcal{E}}\left|\psi_{s}\right\rangle\) can afford to be further away from the ideal value \(1\) for those \(s\) whose \(r_{s}\) are smaller. A lower bound on the smallest possible single \(\left\langle\psi_{s}\right|T_{\mathcal{E}}\left|\psi_{s}\right\rangle\) is \(F_{\min}\), defined through \(r_{\min}F_{\min}+\left(1-r_{\min}\right)\cdot 1=1-\epsilon\). This is solved by \(F_{\min}=1-\epsilon/r_{\min}\), and so \[\operatorname{Tr}\left(\mathbb{1}_{\rho}T_{\mathcal{E}}\right)\geq S\cdot\left(1-\frac{\epsilon}{r_{\min}}\right), \tag{47}\] where \(S=\operatorname{rank}\rho\) as before. 
Combining (46) and (47), we have \[\operatorname{Tr}\left(\mathbb{1}_{\rho}T_{\mathcal{E}}\right)\geq S_{\rho,\epsilon}, \tag{48}\] where \[S_{\rho,\epsilon}:=\max\left\{\frac{1-\epsilon}{r_{\max}},\,S\left(1-\frac{\epsilon}{r_{\min}}\right)\right\}. \tag{49}\]

When \(\rho\) is nearly maximally-mixed on its support, i.e. \(r_{\max}\approx r_{\min}\approx 1/S\) (as in the asymptotic case that we will soon take up), the first bound is tighter: \(\frac{1-\epsilon}{r_{\max}}\approx S\left(1-\epsilon\right)\), whereas \(S\left(1-\frac{\epsilon}{r_{\min}}\right)\approx S\left(1-S\epsilon\right)\). The second bound is more useful when \(\rho\) is far from maximally-mixed (i.e. \(r_{\max}\gg 1/S\)) and \(\epsilon\ll r_{\min}\): then, \(\frac{1-\epsilon}{r_{\max}}\ll S\), while \(S\left(1-\frac{\epsilon}{r_{\min}}\right)\approx S\).

These constraints, together with those of Observation 3.3, can be used to find lower bounds on the coherent measurement cost, as we did in the exact case. There we had \(T_{\mathcal{E}}^{\rho}\equiv\mathbb{1}_{\rho}T_{\mathcal{E}}\mathbb{1}_{\rho}=\mathbb{1}_{\rho}\), leading to the exact block structure (27). In the approximate case, for small enough \(\epsilon\) we should be able to bound the amplitude of the cross-block parts. We leave this line of inquiry and the pursuit of "good" bounds for future work, here contenting ourselves with a crude bound that suffices for our analysis of maximal asymptotic distillation (Section 5).

For this bound, we will show that the normalized density operator \(\tau_{\mathcal{E}}:=T_{\mathcal{E}}/\tilde{S}\) (where \(\tilde{S}:=\operatorname{Tr}T_{\mathcal{E}}=A/M\)) is "not too different from" \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\). Let \(\tau_{\mathcal{E}}^{\rho}:=\mathbb{1}_{\rho}\tau_{\mathcal{E}}\mathbb{1}_{\rho}\), and define its normalized version \(\tau_{\mathcal{E}}^{|\rho}:=\tau_{\mathcal{E}}^{\rho}/\operatorname{Tr}\tau_{\mathcal{E}}^{\rho}\). Note that \(\operatorname{Tr}\tau_{\mathcal{E}}^{\rho}\geq S_{\rho,\epsilon}/\tilde{S}\). Then, \[F\left(\tau_{\mathcal{E}},\tau_{\rho}\right)=\left(\operatorname{Tr}\sqrt{\sqrt{\tau_{\rho}}\tau_{\mathcal{E}}\sqrt{\tau_{\rho}}}\right)^{2}=\frac{1}{S}\left(\operatorname{Tr}\sqrt{\tau_{\mathcal{E}}^{\rho}}\right)^{2}\geq\frac{S_{\rho,\epsilon}}{S\tilde{S}}\left\|\sqrt{\tau_{\mathcal{E}}^{|\rho}}\right\|_{1}^{2}. \tag{50}\]

Since \(T_{\mathcal{E}}\leq\mathbb{1}\), also \(T_{\mathcal{E}}^{\rho}\leq\mathbb{1}\). Thus, \(\tau_{\mathcal{E}}^{|\rho}=T_{\mathcal{E}}^{\rho}/\operatorname{Tr}T_{\mathcal{E}}^{\rho}\leq\mathbb{1}/S_{\rho,\epsilon}\). Meanwhile, \(\left\|\sqrt{\tau_{\mathcal{E}}^{|\rho}}\right\|_{2}=\sqrt{\operatorname{Tr}\tau_{\mathcal{E}}^{|\rho}}=1\). Therefore, \[\left\|\sqrt{\tau_{\mathcal{E}}^{|\rho}}\right\|_{1}\geq\min_{\mathbf{x}\in\mathbb{R}^{S}}\left\{\left\|\mathbf{x}\right\|_{1}:\;\left\|\mathbf{x}\right\|_{2}=1,\,\left\|\mathbf{x}\right\|_{\infty}\leq\frac{1}{\sqrt{S_{\rho,\epsilon}}}\right\}\geq\sqrt{S_{\rho,\epsilon}}. \tag{51}\]

Combining this with the bound in (50) and expressing the result in terms of the Bures distance \(B(\sigma,\tau):=\sqrt{2\left(1-\sqrt{F(\sigma,\tau)}\right)}\), \[B\left(\tau_{\mathcal{E}},\tau_{\rho}\right)\leq\sqrt{2\left(1-\frac{S_{\rho,\epsilon}}{\sqrt{S\tilde{S}}}\right)}=:\delta. \tag{52}\] This \(\delta\) depends on \(\epsilon\), \(A\), \(M\), and \(\rho\); we suppress its dependencies to avoid clutter. 
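For a feel of the numbers, here is a small helper (ours, with illustrative parameter values) evaluating \(S_{\rho,\epsilon}\) of (49) and the Bures bound \(\delta\) of (52).

```python
import numpy as np

def S_rho_eps(S, r_min, r_max, eps):
    """The lower bound (49) on Tr(1_rho T_E)."""
    return max((1 - eps) / r_max, S * (1 - eps / r_min))

def bures_delta(S, S_tilde, r_min, r_max, eps):
    """The Bures-distance bound delta of (52) on B(tau_E, tau_rho)."""
    return np.sqrt(2 * (1 - S_rho_eps(S, r_min, r_max, eps) / np.sqrt(S * S_tilde)))

# Nearly maximally mixed input (r_min ~ r_max ~ 1/S) with eps << 1/S:
S = 64
print(bures_delta(S, S_tilde=S, r_min=1/S, r_max=1/S, eps=1e-4))  # ~0.014

# A larger error tolerance degrades the bound:
print(bures_delta(S, S_tilde=S, r_min=1/S, r_max=1/S, eps=1e-2))  # ~0.14
```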
Notice that \(\delta\) approaches \(0\) when \(SM\approx A\) and \(\epsilon\ll 1/S\), which is the case we will encounter in maximal asymptotic distillation. We will skip an approximate analog to Proposition 4.3 and proceed directly to an analog to Theorem 1, which follows by applying Lemma B.2 (the asymptotic continuity of the coherence of formation [13]) on (52) and repeating the arguments of Theorem 1.

**Theorem 3**.: _Any coherence-non-creating channel that deterministically maps a rank-\(S\) state \(\rho^{\mathrm{A}}\), satisfying \(r_{\min}\mathbb{1}^{\mathrm{A}}\leq\rho^{\mathrm{A}}\leq r_{\max}\mathbb{1}^{\mathrm{A}}\), to an output \(\sigma^{\mathrm{M}}\) such that \(F\left(\sigma,\Psi_{M}\right)\geq 1-\epsilon\) for \(M=A/\tilde{S}\) must involve coherent measurements over at least_ \[M^{-1}\exp_{2}\left[C_{f}\left(\tau_{\rho}\right)-\delta\log_{2}A-(1+\delta)h\left(\frac{\delta}{1+\delta}\right)\right] \tag{53}\] _elements of \(\mathrm{A}\)'s incoherent basis, where \(\tau_{\rho}:=\mathbb{1}_{\rho}/S\) and \(\delta:=\sqrt{2\left(1-\frac{S_{\rho,\epsilon}}{\sqrt{S\tilde{S}}}\right)}\) with_ \[S_{\rho,\epsilon}:=\max\left\{\frac{1-\epsilon}{r_{\max}},\,S\left(1-\frac{\epsilon}{r_{\min}}\right)\right\}. \tag{54}\]

This bound is good for the case where \(M\) is near-maximal, i.e. \(\tilde{S}\approx S\). We leave a more careful analysis of the non-maximal approximate case for future work. Moving on, we shall derive a slight variant that performs well for nearly-maximally-mixed \(\rho\); it will simplify our task in the asymptotic case. Our approach above was to show that \(\tau_{\mathcal{E}}\) is close to \(\tau_{\rho}\). When \(\rho\) is close to maximally-mixed on its support, we can use \(\rho\) itself instead of \(\tau_{\rho}\). Noting that \(\rho\geq r_{\min}\mathbb{1}_{\rho}\), we can modify (50) to \[F\left(\tau_{\mathcal{E}},\rho\right)\geq r_{\min}\left(\mathrm{Tr}\sqrt{\mathbb{1}_{\rho}\tau_{\mathcal{E}}\mathbb{1}_{\rho}}\right)^{2}\geq\frac{r_{\min}S_{\rho,\epsilon}}{\tilde{S}}\left\|\sqrt{\tau_{\mathcal{E}}^{|\rho}}\right\|_{1}^{2}\geq\frac{r_{\min}S_{\rho,\epsilon}^{2}}{\tilde{S}}. \tag{55}\]

Repeating the rest of the steps as above, we have:

**Lemma 4.9**.: _An MIO channel mapping a state \(\rho^{\mathrm{A}}\), satisfying \(r_{\min}\mathbb{1}^{\mathrm{A}}\leq\rho^{\mathrm{A}}\leq r_{\max}\mathbb{1}^{\mathrm{A}}\), to an output \(\sigma^{\mathrm{M}}\) such that \(F\left(\sigma,\Psi_{M}\right)\geq 1-\epsilon\) for \(M=A/\tilde{S}\) must involve coherent measurements over at least_ \[M^{-1}\exp_{2}\left[C_{f}\left(\rho\right)-\delta\log_{2}A-(1+\delta)h\left(\frac{\delta}{1+\delta}\right)\right] \tag{56}\] _elements of \(\mathrm{A}\)'s incoherent basis, where_ \[\delta:=\sqrt{2\left(1-S_{\rho,\epsilon}\sqrt{\frac{r_{\min}}{\tilde{S}}}\right)} \tag{57}\] _with \(S_{\rho,\epsilon}:=\left(1-\epsilon\right)/r_{\max}\)._

## 5 Asymptotic distillation

The asymptotic limit (see Section 1.2 for a brief background) is an important window into a resource theory. The behaviour of resource-theoretic quantities in this limit is often--aptly--compared with the classic laws of thermodynamics: asymptotic equipartition leads to certain near-universal features across diverse resource theories, such as extensivity of resource distillation yields and formation costs. 
We will now present some evidence suggesting the extensivity of the coherent measurement cost (quantified as the requisite number of elementary coherent gates such as the qubit Hadamard gate) of asymptotically maximal coherence distillation.

Chitambar [19] showed that the resource theory of coherence is asymptotically reversible under the class of free operations called dephasing-covariant operations (DIO), with the asymptotic rate of interconversion given by the relative entropy of coherence \(C_{r}\). A consequence of this fact is that the rate of distillation of copies of \(\Psi_{2}\) from those of a state \(\rho\) under even the most relaxed class of free operations--the coherence-non-creating operations MIO--is bounded above by \(C_{r}(\rho)\). Both DIO and IO, although strict subclasses of MIO, achieve this distillation rate [13]. For MIO achieving this maximal rate, we have the following conjectures.

**Conjecture 1**.: _Suppose a sequence \(\mathcal{E}_{n}\) of MIO channels is maximally-distilling on copies of an input \(\rho\). That is, \(\mathcal{E}_{n}\) acting on \(\varrho_{n}\equiv\rho^{\otimes n}\) achieves \(F\left[\mathcal{E}_{n}\left(\varrho_{n}\right),\Psi_{M_{n}}\right]\geq 1-o(1)\) for \(\log_{2}M_{n}=n\left[C_{r}(\rho)-o(1)\right]\). Then any Kraus operator decomposition of \(\mathcal{E}_{n}\) involves coherent measurements over at least \(L_{n}\) elements of the input's incoherent basis, where_ \[\log_{2}L_{n}\geq n\left[\ell(\rho)-o(1)\right]. \tag{58}\] _This rank bound applies both to the single most coherent measurement element and to the average (of the rank's logarithm) over all involved measurements under the distribution induced by the input._

**Conjecture 2**.: _For any \(\rho\), there exists a sequence \(\mathcal{E}_{n}\) of maximally-distilling IO channels with all but an asymptotically-vanishing fraction of measurements individually attaining the bound of Conjecture 1, as well as attaining it on average._

### Towards bounding the asymptotic cost

We will now attempt to prove Conjecture 1, essentially through a formalization of our crude typicality-based statements in Section 1.2. The crudeness, when examined deeper, turns out unfortunately to conceal some finer features of asymptotic typicality that thwart our efforts at completing the proof. Nevertheless, we hope that the following account of our proof attempt elicits a more complete treatment from the community, thereby either proving or refuting our conjecture.

**Remark 5.1**.: In the following, we will make use of the triangle inequality property of the _Fubini-Study metric_ (also called the _fidelity angle_) \(\theta(\rho,\sigma):=\arccos\sqrt{F(\rho,\sigma)}\); namely, \[\theta(\rho,\sigma)+\theta(\sigma,\tau)\geq\theta(\tau,\rho). \tag{59}\] In our calculations, the approximation parameters will be associated with \(1-F(\cdot,\cdot)\), whereby they are the squared _sines_ of the associated fidelity angles. For convenience in applying the angle triangle inequality in their terms, we will use the shorthand \[\epsilon\boxplus\delta:=\sin^{2}\left(\arcsin\sqrt{\epsilon}+\arcsin\sqrt{\delta}\right)=\left(\sqrt{\epsilon(1-\delta)}+\sqrt{\delta(1-\epsilon)}\right)^{2}. \tag{60}\] Note that \(\epsilon\boxplus\delta\leq\left(\sqrt{\epsilon}+\sqrt{\delta}\right)^{2}\leq 4\max\{\epsilon,\delta\}\) in general; but if \(\delta\to 0\) while \(\epsilon\) is held fixed, \(\epsilon\boxplus\delta\rightarrow\epsilon\). 
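The shorthand is easy to check numerically; the sketch below (ours, purely illustrative) verifies the closed form in (60) against the angle-addition form and the limits just quoted.

```python
import numpy as np

def boxplus(eps, delta):
    """Fidelity-angle addition (60): eps [+] delta."""
    return (np.sqrt(eps * (1 - delta)) + np.sqrt(delta * (1 - eps))) ** 2

eps, delta = 0.03, 0.01

# Agreement with sin^2(arcsin sqrt(eps) + arcsin sqrt(delta)):
angle_form = np.sin(np.arcsin(np.sqrt(eps)) + np.arcsin(np.sqrt(delta))) ** 2
assert np.isclose(boxplus(eps, delta), angle_form)

# The quoted inequalities:
assert boxplus(eps, delta) <= (np.sqrt(eps) + np.sqrt(delta)) ** 2 <= 4 * max(eps, delta)

# And the limit: eps [+] delta -> eps as delta -> 0.
print(boxplus(eps, 1e-12))  # ~0.03
```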
Proof sketch for Conjecture 1.: We shall build on Lemma 4.9, with "\(M\)"\(\equiv M_{n}=\exp_{2}\left(n\left[C_{r}(\rho)-\epsilon_{n}^{(0)}\right]\right)\). To estimate the other relevant parameters, we will use the quantum asymptotic equipartition property (AEP) reviewed in Appendix C. For some \(\delta_{\mathrm{S}}>0\), let \(\varrho_{n}^{\delta_{\mathrm{S}}}\) denote the unnormalized projection of \(\varrho_{n}\) onto its \(\delta_{\mathrm{S}}\)-weakly-typical subspace, and \(\varrho_{n}^{|\delta_{\mathrm{S}}}\) the normalized version thereof; the analog to "\(\rho\)" will be \(\varrho_{n}^{|\delta_{\mathrm{S}}}\). Applying Lemma C.4, \(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}}}\geq 1-\delta_{\mathrm{S}}\) and \(2^{-n[S(\rho)+\delta_{\mathrm{S}}]}\mathbb{1}_{\varrho_{n}^{\delta_{\mathrm{S}}}}\leq\varrho_{n}^{\delta_{\mathrm{S}}}\leq 2^{-n[S(\rho)-\delta_{\mathrm{S}}]}\mathbb{1}_{\varrho_{n}^{\delta_{\mathrm{S}}}}\), so that \[2^{-n[S(\rho)+\delta_{\mathrm{S}}]}\mathbb{1}_{\varrho_{n}^{\delta_{\mathrm{S}}}}\leq\varrho_{n}^{|\delta_{\mathrm{S}}}\leq\frac{2^{-n[S(\rho)-\delta_{\mathrm{S}}]}\mathbb{1}_{\varrho_{n}^{\delta_{\mathrm{S}}}}}{1-\delta_{\mathrm{S}}}. \tag{61}\]

The lower bound above functions as "\(r_{\mathrm{min}}\)"; we will presently define "\(r_{\mathrm{max}}\)", slightly differently from how we did in Lemma 4.9. Meanwhile, let us apply typicality on the classical sequences \(\mathbf{a}\equiv\left(a_{1}\ldots a_{n}\right)\) formed by the incoherent basis labels occurring in \(\varrho_{n}\), which are distributed according to \(\Delta\left(\varrho_{n}\right)\equiv\left[\Delta(\rho)\right]^{\otimes n}\). For any \(\delta_{\mathrm{A}}>0\) we can identify the \(\delta_{\mathrm{A}}\)-weakly-typical subalphabet \(\mathcal{A}_{n}^{\delta_{\mathrm{A}}}\). The corresponding system \(\mathrm{A}_{n}^{\delta_{\mathrm{A}}}\) will be the analog to Lemma 4.9's "A"; by the AEP, its dimensionality \[A_{n}^{\delta_{\mathrm{A}}}\leq\exp_{2}\left[n\left(S\left[\Delta(\rho)\right]+\delta_{\mathrm{A}}\right)\right], \tag{62}\] yielding the bound "\(\widetilde{S}\)"\(=A_{n}^{\delta_{\mathrm{A}}}/M_{n}\leq 2^{n\left[S(\rho)+\epsilon_{n}^{(0)}+\delta_{\mathrm{A}}\right]}\).

We will now work towards bounding "\(S_{\rho,\epsilon}\)". Let \(\varrho_{n}^{\delta_{\mathrm{A}}}\) denote the unnormalized projection of \(\varrho_{n}\) on this subalphabet, and \(\varrho_{n}^{\backslash\delta_{\mathrm{A}}}\) that on the complement thereof, so that \(\mathrm{Tr}\left(\varrho_{n}^{\delta_{\mathrm{A}}}+\varrho_{n}^{\backslash\delta_{\mathrm{A}}}\right)=1\). We shall extend this superscript notation to projections of any operator. Furthermore, we will use superscripts combining \(\delta_{\mathrm{S}}\), \(\delta_{\mathrm{A}}\), and \(\backslash\) to denote the results of successive projections from the inside out. For example, \(\backslash\delta_{\mathrm{S}},\delta_{\mathrm{A}}\) will denote a projection on the complement of the \(\delta_{\mathrm{S}}\)-typical subspace followed by one on the \(\delta_{\mathrm{A}}\)-typical subalphabet. As before, we will denote the corresponding normalized density operators by preceding the superscripts with "\(|\)".

With this notational arrangement, first note that \(\varrho_{n}^{\delta_{\mathrm{S}}}+\varrho_{n}^{\backslash\delta_{\mathrm{S}}}=\varrho_{n}\), since these projections are defined via \(\varrho_{n}\)'s eigenspaces. 
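As an aside, the AEP quantities entering here are easy to probe numerically for small \(n\); the sketch below (ours, purely illustrative, for a qubit with spectrum \((0.7,0.3)\)) computes the typical weight \(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}}}\) and checks the eigenvalue sandwich used in (61). The slow convergence of the typical weight towards \(1\) at fixed \(\delta_{\mathrm{S}}\) foreshadows the \(n\)-versus-\(\delta\) tension discussed below.

```python
import numpy as np
from itertools import product

p = np.array([0.7, 0.3])                 # spectrum of a qubit rho
S_rho = -(p * np.log2(p)).sum()          # S(rho) ~ 0.8813

n, dS = 12, 0.15
# Eigenvalues of rho^{(x)n} are n-fold products of single-copy eigenvalues.
lam = np.array([np.prod(t) for t in product(p, repeat=n)])
typical = np.abs(-np.log2(lam) / n - S_rho) <= dS   # weak typicality test

print("Tr rho_n^{dS} =", lam[typical].sum())        # ~0.63 here; -> 1 only slowly
print("AEP sandwich holds:",
      np.all(lam[typical] >= 2 ** (-n * (S_rho + dS))) and
      np.all(lam[typical] <= 2 ** (-n * (S_rho - dS))))
```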
Note also that the \(\delta_{\mathrm{A}}\)-projections commute with the dephasing channel \(\Delta\), so that \(\mathrm{Tr}\varrho_{n}^{\backslash\delta_{\mathrm{A}}}=\mathrm{Tr}\Delta\left(\varrho_{n}^{\backslash\delta_{\mathrm{A}}}\right)=\mathrm{Tr}\left[\Delta\left(\varrho_{n}\right)\right]^{\backslash\delta_{\mathrm{A}}}\leq\delta_{\mathrm{A}}\), the last inequality following from the properties of the \(\delta_{\mathrm{A}}\)-typical subalphabet. Using these facts, \[\mathrm{Tr}\left(\varrho_{n}^{\delta_{\mathrm{S}},\backslash\delta_{\mathrm{A}}}+\varrho_{n}^{\backslash\delta_{\mathrm{S}},\backslash\delta_{\mathrm{A}}}\right)=\mathrm{Tr}\,\varrho_{n}^{\backslash\delta_{\mathrm{A}}}\leq\delta_{\mathrm{A}}\] \[\Rightarrow\mathrm{Tr}\,\varrho_{n}^{\delta_{\mathrm{S}},\backslash\delta_{\mathrm{A}}}\leq\delta_{\mathrm{A}}\] \[\Rightarrow\mathrm{Tr}\,\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}=\mathrm{Tr}\left(\varrho_{n}^{\delta_{\mathrm{S}}}-\varrho_{n}^{\delta_{\mathrm{S}},\backslash\delta_{\mathrm{A}}}\right)\geq 1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}. \tag{63}\]

As a step towards determining "\(S_{\rho,\epsilon}\)", we shall now show that \(\varrho_{n}\) is close to the normalized version \(\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\) of the operator in the last line above. \[F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right) =\left(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)^{-1}F\left(\varrho_{n},\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\] \[=\left(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)^{-1}F\left(\varrho_{n}^{\delta_{\mathrm{A}}},\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\] \[\geq\left(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)^{-1}\left(\frac{\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}}{\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{A}}}}\right)^{2}\] \[\geq 1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}, \tag{64}\] where we again used \(\varrho_{n}=\varrho_{n}^{\delta_{\mathrm{S}}}+\varrho_{n}^{\backslash\delta_{\mathrm{S}}}\), this time to apply point 5 of Appendix A on \(\varrho_{n}^{\delta_{\mathrm{A}}}=\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}+\varrho_{n}^{\backslash\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\).

In a minor variation of the method of Lemma 4.9, we will make \(\left\|\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\|_{\infty}\) the analog of "\(r_{\mathrm{max}}\)"--which is possible since we identify Lemma 4.9's "A" with \(\mathrm{A}_{n}^{\delta_{\mathrm{A}}}\). To bound this quantity, we first note that \[\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\leq\varrho_{n}^{\delta_{\mathrm{S}}}\leq\exp_{2}\left(-n\left[S(\rho)-\delta_{\mathrm{S}}\right]\right)\mathbb{1} \tag{65}\] by the quantum AEP. Since \(\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\geq 1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}\) as noted in (63), this implies \[\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\leq\frac{\exp_{2}\left(-n\left[S(\rho)-\delta_{\mathrm{S}}\right]\right)\mathbb{1}}{1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}}. \tag{66}\]

By assumption, the sequence \(\mathcal{E}_{n}\) of channels achieves \(F\left[\mathcal{E}_{n}\left(\varrho_{n}\right),\Psi_{M_{n}}\right]\geq 1-\epsilon_{n}\). By (64), \(F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}\), so that (by contractivity and the fidelity-angle triangle inequality) \(F\left[\mathcal{E}_{n}\left(\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right),\Psi_{M_{n}}\right]\geq 1-\epsilon_{n}^{(1)}\) with \(\epsilon_{n}^{(1)}:=\epsilon_{n}\boxplus\left(\delta_{\mathrm{S}}+\delta_{\mathrm{A}}\right)\). The associated target witnesses \(T_{n}\equiv T_{\mathcal{E}_{n}}\) must therefore satisfy \(\mathrm{Tr}\left(T_{n}\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\epsilon_{n}^{(1)}\). Using an argument similar to Lemma 4.9's based on (66), \[\mathrm{Tr}\;T_{n}^{\delta_{\mathrm{A}},\delta_{\mathrm{S}}}\geq\frac{1-\epsilon_{n}^{(1)}}{\left\|\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\|_{\infty}}\geq\left[1-\epsilon_{n}^{(1)}\right]\left(1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}\right)\exp_{2}\left(n\left[S(\rho)-\delta_{\mathrm{S}}\right]\right). \tag{67}\] With \(\tau_{n}^{|\delta_{\mathrm{A}}}\) denoting the normalized \(\delta_{\mathrm{A}}\)-projection of the target witness, proceeding as in Lemma 4.9 then yields 
By (64), \(F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}\), so that (by contractivity and the fidelity-angle triangle inequality) \(F\left[\mathcal{E}_{n}\left(\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right),\Psi_{M_{n}}\right]\geq 1-\epsilon_{n}^{(1)}\) with \(\epsilon_{n}^{(1)}:=\epsilon_{n}\boxplus\left(\delta_{\mathrm{S}}+\delta_{\mathrm{A}}\right)\). The associated target witnesses \(T_{n}\equiv T_{\mathcal{E}_{n}}\) must therefore satisfy \(\mathrm{Tr}\left(T_{n}\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\epsilon_{n}^{(1)}\). Using an argument similar to Lemma 4.9's based on (66), \[\mathrm{Tr}\;T_{n}^{\delta_{\mathrm{A}},\delta_{\mathrm{S}}}\geq\frac{1-\epsilon_{n}^{(1)}}{\left\|\varrho_{n}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\|_{\infty}}\geq\left[1-\epsilon_{n}^{(1)}\right]\left(1-\delta_{\mathrm{S}}-\delta_{\mathrm{A}}\right)\exp_{2}\left(n\left[S(\rho)-\delta_{\mathrm{S}}\right]\right). \tag{67}\] Feeding this, together with the bounds on "\(r_{\mathrm{min}}\)" and "\(\widetilde{S}\)", into Lemma 4.9's fidelity estimate (with \(\tau_{n}:=T_{n}/\mathrm{Tr}T_{n}\) the normalized target witness) yields \[F\left(\tau_{n}^{|\delta_{\rm A}},\varrho_{n}^{|\delta_{\rm S}}\right)\geq\left[1-\epsilon_{n}^{(1)}\right]^{2}\left(1-\delta_{\rm S}-\delta_{\rm A}\right)^{2}\exp_{2}\left(-n\left[\epsilon_{n}^{(0)}+3\delta_{\rm S}+\delta_{\rm A}\right]\right)=:1-\epsilon_{n}^{(2)}. \tag{68}\] We have now arrived at our primary obstacle: for this \(\epsilon_{n}^{(2)}\) to be a vanishing sequence, we would need the exponents to all vanish. The \(n\epsilon_{n}^{(0)}\) is obviously menacing, as the definition of maximal distillation makes no stipulation whatsoever on how fast the \(\epsilon_{n}^{(0)}\) must decay. As for the \(n\delta\)'s: naively, we might expect that eventually taking the limits \(\delta_{\rm S},\delta_{\rm A}\to 0\) would at least get rid of these exponents. However, these parameters cannot be taken to zero _independently of \(n\)_: for any given \(\delta_{\rm S}\), the bounds in (61) are guaranteed only "for large enough \(n\)"; likewise for \(\delta_{\rm A}\) and (62). Indeed, the requisite \(n\) to validate these bounds scales as \(\delta^{-2}\), whereby \(\exp_{2}\left(-n\delta\right)\sim\exp_{2}\left(-\sqrt{n}\right)\). This puts a definitive end to any prospects of a bound like (68) succeeding. Let us now pretend we did not encounter the above problem, and continue our proof sketch as if \(\epsilon_{n}^{(2)}\in o(1)\). Using a projection-related property of the fidelity (point 4 of Appendix A), \(F\left(\varrho_{n},\varrho_{n}^{|\delta_{\rm S}}\right)={\rm Tr}\varrho_{n}^{\delta_{\rm S}}\geq 1-\delta_{\rm S}\). Combining this with (68) through the angle triangle inequality, \[F\left(\tau_{n}^{|\delta_{\rm A}},\varrho_{n}\right)\geq 1-\epsilon_{n}^{(3)}, \tag{69}\] where \(\epsilon_{n}^{(3)}:=\epsilon_{n}^{(2)}\boxplus\delta_{\rm S}\). Finally, "\(\delta\)", for which we can conveniently use the same symbol, is given by \[\delta:=\sqrt{2\left(1-\sqrt{1-\epsilon_{n}^{(3)}}\right)}. \tag{70}\] Thus, the coherent measurement rank for the action of \({\cal E}_{n}\) on \({\rm A}_{n}^{\delta_{\rm A}}\) is no less than \[L_{n}:=M_{n}^{-1}\exp_{2}\left[C_{f}\left(\tau_{n}^{|\delta_{\rm A}}\right)\right]\geq\frac{\exp_{2}\left[C_{f}\left(\varrho_{n}\right)-\delta\log_{2}A_{n}-(1+\delta)h\left(\frac{\delta}{1+\delta}\right)\right]}{M_{n}}, \tag{71}\] where \(A_{n}\equiv A^{n}\), with \(A:={\rm rank}\Delta(\rho)\), is the dimensionality of the entire Hilbert space where \(\varrho_{n}\) acts.
By the additivity of the coherence of formation under tensor products, \(C_{f}\left(\varrho_{n}\right)=nC_{f}\left(\rho\right)\), while \(\log_{2}M_{n}=n\left[C_{r}\left(\rho\right)-\epsilon_{n}^{(0)}\right]=n\left[C_{r}\left(\rho\right)-o(1)\right]\). We now take the limit as both \(\delta_{\rm S}\) and \(\delta_{\rm A}\) approach zero, whereupon \(\epsilon_{n}^{(1)}\rightarrow\epsilon_{n}\), \(\epsilon_{n}^{(2)}\leq 2\epsilon_{n}+n\epsilon_{n}^{(0)}\), and \(\epsilon_{n}^{(3)}\leq 2\epsilon_{n}+n\epsilon_{n}^{(0)}\) as well. Thus, if we resolve to ignore the \(n\epsilon_{n}^{(0)}\), we get \(\delta\leq\sqrt{2\left(1-\sqrt{1-2\epsilon_{n}}\right)}\leq\sqrt{2\epsilon_{n}}\in o(1)\), and so \[L_{n}\geq\exp_{2}\left(n\left[C_{f}\left(\rho\right)-C_{r}(\rho)-o(1)\right]-o(1)\right)\geq\exp_{2}\left(n\left[\ell\left(\rho\right)-o(1)\right]\right). \tag{72}\] Finally, in light of (69), we conclude that the bound applies asymptotically also to the _average_ logarithmic measurement coherence rank under the distribution induced by the input \(\varrho_{n}\). \(\Box\) We attempted to prove the conjecture via the asymptotic continuity of the coherence of formation, Lemma B.2. As such, we set ourselves the tall order of showing that the normalized target witness \(\tau_{n}\) is close in fidelity to \(\varrho_{n}\). In retrospect, the expectation that factors as large as \(2^{nS(\rho)}\) mutually cancel to leave something close to 1 was naively optimistic: the normalization involves exponential factors with the AEP-related \(\delta\)'s, not to mention the \(\epsilon_{n}^{(0)}\) from \(M_{n}\) that we already had to contend with. Indeed, even if \(\tau_{n}\) were exactly proportional to \(\mathbb{1}_{\varrho_{n}^{\delta_{\rm S}}}\)--which is the most complete characterization we got in the non-asymptotic case--we would then be required to show that the latter is close to \(\varrho_{n}^{\delta_{\rm S}}\). While AEP does guarantee that \(\varrho_{n}^{\delta_{\rm S}}\) gets rather flat in its spectrum, it is not nearly flat enough to have high fidelity with the maximally-mixed state on its support. A more direct approach might try to avoid having to precisely trade exponentially large factors. For example, there might be a modified "asymptotic continuity" property that holds for density operators that are not necessarily close in fidelity. Of course there can be no such property for fully general pairs of operators \(\sigma^{A_{n}}\) and \(\tau^{A_{n}}\). But in our context, the operators come from sequences of \(T_{n}\leq\mathbb{1}\) and \(\varrho_{n}=\rho^{\otimes n}\) that satisfy \({\rm Tr}\left(T_{n}\varrho_{n}\right)\to 1\). This possibly imposes additional structure that admits a modified notion of asymptotic continuity. For example, we can already observe from our proof sketch that \(F\left(\tau_{n},\varrho_{n}\right)\) can only decay sub-exponentially, which is not a generic property. We shall now make some speculative suggestions for how our problem might benefit from the tools and methods of smooth entropy calculus [20]. To this end, note that the \(C_{f}\) in our context enters as a lower bound to the average \(\log_{2}r_{C}\); in particular, in our analysis of the non-asymptotic approximate case, we have \[\sum_{c}q_{c}\log_{2}r_{C}\left(\phi_{c}\right)\geq\sum_{c}q_{c}C_{r}\left(\phi_{c}\right)\geq C_{f}\left(\tau_{\cal E}\right), \tag{73}\] where \(\sum_{c}q_{c}\phi_{c}=\tau_{\cal E}\approx\mathbb{1}_{\rho}/{\rm rank}\rho\propto\rho^{0}\).
For \(\alpha\in\mathbb{R}_{+}\), define \[C_{\alpha}(\phi):=H_{\alpha}\left[\Delta(\phi)\right] \tag{74}\] through the Renyi \(\alpha\)-entropy, and \[C_{\beta,\alpha}(\tau):=\min_{\sum_{c}q_{c}\phi_{c}=\tau^{\beta}/\mathrm{Tr}\tau^{\beta}}\sum_{c}q_{c}C_{\alpha}\left(\phi_{c}\right). \tag{75}\] Then, \[\min_{\sum_{c}q_{c}\phi_{c}=\mathbb{1}_{\rho}/\mathrm{rank}\rho}\sum_{c}q_{c}\log_{2}r_{C}\left(\phi_{c}\right)=C_{0,0}(\rho);\tag{76}\] \[C_{f}(\rho)=C_{1,1}(\rho). \tag{77}\] Both parameters function essentially as in the definition of the Renyi entropies: \(\alpha\) on the level of the amplitudes of pure components in the incoherent basis, and \(\beta\) on the level of the spectral decomposition of a mixed input. Building on the above quantities, we could define so-called "smoothed" counterparts \(C_{\beta,\alpha}^{\epsilon}\), where \(\epsilon\) is an additional small real parameter; since we are only speculating, we do not specify any details about how this smoothing should be done, but our own Definition 6 is an example. It is then possible that there is an asymptotic equipartition property (like the one in [20]) for this family of measures, i.e. something that (speculatively) might entail \[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{1}{n}C_{\beta,\alpha}^{\epsilon}\left(\rho^{\otimes n}\right)=C_{f}(\rho) \tag{78}\] for a range of \(\alpha,\beta\) values. If true, this could give us a way of relating our \(r_{C}\left(\tau_{n}\right)\) with \(C_{f}\left(\varrho_{n}\right)\)--which we know how to compute in terms of properties of \(\rho\). As a concrete example, our construction in Section 5.2 (specifically, point 3) could potentially be used to show this for the \(\left(\alpha,\beta\right)=\left(0,1\right)\) case. Of course, it is a different matter that all of these quantities would likely be hard to compute, since they involve optimizations over all convex decompositions. Nevertheless, there might still be a way to _formally_ establish an asymptotic equipartition property and relate all of these (computationally-intractable) measures evaluated on \(\varrho_{n}\) to the (also computationally-intractable but better-understood) \(C_{f}(\rho)\).

### How to attain the bounds

From putting a lower bound on the asymptotic coherent measurement cost, we now turn to attaining the bound (Conjecture 2). One way of proving Conjecture 2 would be to explicitly construct bound-attaining distillation channels. We now formulate some general guiding principles towards such constructions. Let us take (for example) the result of Proposition 4.7 and inspect the chain (42) of inequalities therein: \[\log_{2}M+\sum_{c,m}\frac{\alpha_{m}^{2}t_{c,m}}{\tilde{S}}C_{r}\left(\phi_{c,m}\right)\geq\sum_{c}\frac{t_{c}}{\tilde{S}}\left(H\left[\left(\frac{\alpha_{m}^{2}t_{c,m}}{t_{c}}\right)_{m}\right]+\sum_{m}\frac{\alpha_{m}^{2}t_{c,m}}{t_{c}}C_{r}\left[\phi_{c,m}\right]\right)=\sum_{c}\frac{t_{c}}{\tilde{S}}C_{r}\left(\phi_{c}\right)\geq C_{f}\left(\tau_{\mathcal{E}}\right)\geq C_{f}^{\mathcal{P}}\left(\tau_{\mathcal{V}}\right). \tag{79}\] For the last inequality to be saturated, we need \(\tilde{S}=S\)--i.e. our "maximal distillation" condition \(M=A/S\). For the one before, the decomposition \(\sum_{c}\left(t_{c}/\tilde{S}\right)\phi_{c}\) of \(\tau_{\mathcal{E}}\) must be one that attains the bound in the definition of \(C_{f}\left(\tau_{\mathcal{E}}\right)\)--a so-called optimal decomposition.
Finally, the inequality in the middle line is saturated when \(\left(\alpha_{m}^{2}t_{c,m}/t_{c}\right)_{m}\) is uniform for each \(c\). This is the case for a maximal uniform distillate, i.e. \(\left|\alpha\right\rangle=\left|\Psi_{M}\right\rangle\), as mentioned in Remark 4.4; we are not aware of weaker conditions where it still holds. These conditions would suffice, in principle, for the loosest bound of Proposition 4.7 (effectively Proposition 4.3, considering the above) to be attained without necessarily saturating the inequality in Theorem 1. The latter would further require

1. each of the \(C_{r}\left(\phi_{c}\right)\) to be equal to \(C_{f}\left(\tau_{\rho}\right)\);
2. each of the \(C_{r}\left(\phi_{c,m}\right)\) to be equal, in turn, to \(C_{f}\left(\tau_{\rho}\right)-\log_{2}M\); and moreover,
3. each \(\phi_{c,m}\) to be a uniform superposition.

Condition 1 occurs when \(\tau_{\rho}\) is a so-called _flat-roof point_ for the convex-roof function in question [21]. There does not seem to be any work in the literature on flat-roof points for the function \(C_{f}\). As for the further conditions 2 and 3, suppose

* 1 holds;
* the Kraus operators are IO;
* each \(\left(\alpha_{m}^{2}t_{c,m}/t_{c}\right)_{m}\) is uniform; and furthermore,
* each \(\phi_{c}\) is a uniform superposition.

This would automatically ensure both 2 and 3; once again, we are not aware of situations where these conditions could be met without those in the previous sentence. Finally, attaining the rank bound \(\ell\left(\tau_{\rho}\right)\) would necessitate \(C_{r}\left(\tau_{\rho}\right)=\log_{2}M\)--i.e., the distillate is also maximal in terms of \(C_{r}\). For each inequality in these chains, the condition for its saturation is essentially some sort of "flattening" of features. We are not aware of any noteworthy situations in which some of the bounds are tight while others are not; in particular, whether the "average coherence" bound of Proposition 4.3 can be attained in cases where the maximal rank bound of Theorem 1 cannot is an intriguing open question. There is one situation, though, where all manner of flattening tendencies collude: maximal asymptotic distillation. We use the above observations to attempt a systematic construction (summarized towards the end of Section 1.6) of a distillation protocol that asymptotically attains the \(\ell\)-based rank bound of Conjecture 1. The construction is summarized as follows:

1. In our proof sketch for Conjecture 1, we attempted to show that the target witness-based density operators \(\tau_{n}\equiv\tau_{\mathcal{E}_{n}}\) associated with any sequence of asymptotically maximally-distilling MIO channels \(\mathcal{E}_{n}\) must approach \(\varrho_{n}\) [see (69)]. Any given set of Kraus operators of \(\mathcal{E}_{n}\) correspond to the pure states in a certain convex decomposition of \(\tau_{n}\) (Definition 5), which would then be an approximate convex decomposition of \(\varrho_{n}\). For the converse, we adapt Winter and Yang's maximal IO distillation protocol [13] to derive conditions under which a given approximation sequence \(\tau_{n}\) corresponds to some maximally-distilling channel sequence \(\mathcal{E}_{n}\); we also show that _any_ convex decomposition of such a \(\tau_{n}\) yields _IO_ Kraus operators for \(\mathcal{E}_{n}\). We start our construction with \(\varrho_{n}\) itself, progressively working towards a viable \(\tau_{n}\).
2. If we choose an optimal decomposition attaining \(C_{f}\left(\varrho_{n}\right)\) from the convex roof of \(\varrho_{n}\), we already obtain Kraus operators that attain the measurement coherence bound in the _average_ sense of Proposition 4.3. It remains to flatten out further to attain the bound on a per-\(\left|\phi_{c,m}\right\rangle\) basis.
3. First, to flatten relative to \(c\), we construct a decomposition wherein all but an asymptotically-vanishing weight is carried by pure states whose individual \(C_{r}\) values are close to \(C_{f}\left(\varrho_{n}\right)\): simply take a decomposition \(\rho=\sum_{j}q_{j}\phi_{j}\) that is optimal for \(\rho\), and decompose (most of) \(\varrho_{n}\) into pure states of the form \(\left|\Phi_{\mathbf{c}=\mathbf{j}}\right\rangle\equiv\bigotimes_{k=1}^{n}\left|\phi_{j_{k}}\right\rangle\), where \(\mathbf{j}\equiv\left(j_{k}\right)_{k}\) is a strongly-typical sequence under \(\mathbf{q}^{\otimes n}\) (see Appendix C for background on asymptotic typicality)11. These pure components then have \(C_{r}\left(\Phi_{\mathbf{j}}\right)\approx C_{f}\left(\varrho_{n}\right)=nC_{f}(\rho)\). Footnote 11: This works except when \(\mathbf{q}\) is uniform. For this case, we can use this construction for arbitrarily small non-uniform perturbations of \(\mathbf{q}\), resulting in corresponding perturbations of \(\rho\). The properties of convex roofs ensure that the perturbed \(\mathbf{q}\) decomposition is optimal for the perturbed \(\rho\) [22].
4. To ensure that the final target witness is bounded as \(T_{n}\leq\mathbb{1}\) (as required for the associated map to be a valid subchannel), we project all remaining \(\left|\Phi_{\mathbf{j}}\right\rangle\) onto a strongly-typical subspace of \(\varrho_{n}\), resulting in \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}}}\right\rangle\).
5. To flatten relative to \(m\), we first show that each \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}}}\right\rangle\) can be made arbitrarily close to a near-uniform superposition \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\).
6. We then draw again on Winter-Yang's construction to show that we can discard an asymptotically-vanishing fraction of the remaining vectors to leave only ones almost entirely contained (with an asymptotically-vanishing error) in a subspace \(\mathcal{V}\) with the following property: for any \(\left|w\right\rangle\in\mathcal{V}\), there is a collection \(\left\{\left|w_{m}\right\rangle\right\}_{m}\), with \(\left\langle w_{m}|w_{m}\right\rangle=\left\langle w|w\right\rangle\), such that \(\left|w\right\rangle=M_{n}^{-1/2}\sum_{m}\left|w_{m}\right\rangle\). Moreover, there exists a disjoint partitioning \(\left\{\mathcal{I}_{m}\right\}_{m}\) of \(\mathcal{A}_{n}\equiv\mathcal{A}^{n}\) such that \(\left|w_{m}\right\rangle\in\mathrm{span}\left\{\left|\mathbf{a}\right\rangle:\;\mathbf{a}\in\mathcal{I}_{m}\right\}\) for any \(\left|w\right\rangle\in\mathcal{V}\). The latter property enables the vectors to be used in constructing IO Kraus operators, while the former ensures that they are weighted equally over \(m\).
Thus, we now have \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\approx M_{n}^{-1/2}\sum_{m}\left|\Phi_{\mathbf{j},m}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) with the norms \(\left\langle\Phi_{\mathbf{j},m}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\Phi_{\mathbf{j},m}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) close to a uniform distribution over \(m\).
7. We then show that we can delete a vanishing fraction of the \(\left|\Phi_{\mathbf{j},m}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) from each remaining \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) such that the remaining ones have logarithmic coherence rank tightly concentrated around \(C_{r}\left(\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)-\log_{2}M_{n}\approx n[C_{f}(\rho)-C_{r}(\rho)]=n\ell(\rho)\). We thus obtain IO Kraus operator fragments \(\left|w_{c,m}\right\rangle\propto\left|\Phi_{\mathbf{j},m}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) that individually approach the measurement coherence rank bound of Proposition 1.
8. Finally, we show that the "IO Kraus operators" (in quotes because the resulting map may fail to be trace-non-increasing) constructed using these fragments asymptotically output an approximation to \(\Psi_{M_{n}}\) near-deterministically. In the case that the map is a sub-channel, we show that it can be completed to a channel with an IO residual subchannel incurring a measurement coherence overhead asymptotically vanishing in relation to the rest.

We now go through the construction in detail. Note that the \(\mathbf{c}\)'s and \(\delta\)'s below are not the same as those in the proof sketch for Conjecture 1.

Proof sketch for Conjecture 2: We will try to construct a sequence of viable IO channels based on the one from Winter and Yang's maximal distillation protocol [13]. In our proof sketch for Conjecture 1, we tried to show that the target witness-based density operators \(\tau_{n}\) satisfy \[F\left(\tau_{n},\varrho_{n}\right)\geq 1-\epsilon_{n} \tag{80}\] with asymptotically-vanishing \(\epsilon_{n}\). Conversely, any sequence \(\tau_{n}\) satisfying (80) and some additional conditions can be used in constructing a maximally-distilling channel sequence. We will first derive these conditions. Each of Winter-Yang's channels12 \(\tilde{\mathcal{E}}_{n}\) has the following special property: it can be decomposed into IO Kraus operators \(\tilde{K}_{s}\) that take the form Footnote 12: Where there is scope for confusion, we use tildes to denote objects associated with Winter–Yang's construction. \[\tilde{K}_{s}=\sum_{m}\left|m\right\rangle\left\langle\tilde{w}_{s,m}\right|=\sqrt{M_{n}}\sum_{m}\left|m\right\rangle\left\langle\tilde{w}_{s}\right|\mathbb{1}_{\mathcal{I}_{m}}, \tag{81}\] where \(\left\{\mathcal{I}_{m}\right\}_{m}\) is a partitioning of \(\mathcal{A}_{n}\equiv\mathcal{A}^{n}\) into \(M_{n}=\exp_{2}\left(n\left[C_{r}(\rho)-\tilde{\epsilon}_{n}\right]\right)\) disjoint subalphabets, and \(\mathbb{1}_{\mathcal{I}_{m}}\) denotes the projector onto \(\mathrm{span}\left\{\left|\mathbf{a}\right\rangle:\,\mathbf{a}\in\mathcal{I}_{m}\right\}\). Importantly, the fragments \(\left|\tilde{w}_{s,m}\right\rangle\) for a given \(m\) and various \(s\) all lie within the subspace corresponding to \(\mathcal{I}_{m}\), independent of \(s\). Consider the target witness \(\tilde{T}_{n}=\sum_{s}\left|\tilde{w}_{s}\right\rangle\left\langle\tilde{w}_{s}\right|\).
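As an aside, the block structure of (81) is easy to see concretely. The following is a minimal numpy sketch with toy dimensions of our own choosing (an alphabet of size \(A=4\), target rank \(M=2\), and a single uniform "fragment source" \(|w\rangle\)); it illustrates the Kraus form, and is not Winter-Yang's actual construction.

```python
import numpy as np

A, M = 4, 2                                    # toy alphabet size and target coherence rank
blocks = [np.array([0, 1]), np.array([2, 3])]  # a disjoint partition {I_m} of the alphabet

w = np.ones(A) / np.sqrt(A)                    # |w>: a uniform superposition over the alphabet

# K = sqrt(M) * sum_m |m><w| 1_{I_m}  -- the block-structured form of Eq. (81)
K = np.zeros((M, A))
for m, I in enumerate(blocks):
    K[m, I] = np.sqrt(M) * w[I]                # row m is supported on I_m only

psi = np.ones(A) / np.sqrt(A)                  # maximally coherent input |Psi_A>
print(K @ psi)                                 # -> [0.7071, 0.7071], i.e. |Psi_M>

# Eq. (83)-type check: K^dag K = M * sum_m 1_{I_m} |w><w| 1_{I_m}, which must
# satisfy K^dag K <= identity for the map to be completable to a channel.
print(np.linalg.eigvalsh(K.T @ K).max() <= 1 + 1e-12)   # True
```

The slack \(\mathbb{1}-K^{\dagger}K\) in this toy example is what the residual-witness completion discussed next is meant to absorb.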
The particular Kraus operators in Winter-Yang's construction happen to involve \(\left|w_{c}\right\rangle\equiv\left|\tilde{w}_{s}\right\rangle\) that form a basis of \(\varrho_{n}\)'s eigenvectors. But thanks to the above-noted \(s\)-independent block structure, _any_ convex decomposition of \(\tilde{T}_{n}\) into arbitrary pure components \(\left|w_{c}\right\rangle\) can as well be used to construct a set of IO Kraus operators (implementing the same \(\tilde{\mathcal{E}}_{n}\)), since \(\left|w_{c,m}\right\rangle:=\sqrt{M_{n}}\mathbb{1}_{\mathcal{I}_{m}}\left|w_{c}\right\rangle\in\mathrm{span}\left\{\left|\mathbf{a}\right\rangle:\;\mathbf{a}\in\mathcal{I}_{m}\right\}\). More generally, we can even use some \(T_{n}\approx\tilde{T}_{n}\) and construct IO Kraus operators from arbitrary pure decompositions thereof: if \(T_{n}=\sum_{c}\left|w_{c}\right\rangle\left\langle w_{c}\right|\), we define the Kraus operators \[K_{c}:=\sum_{m}\left|m\right\rangle\left\langle w_{c,m}\right|:=\sqrt{M_{n}}\sum_{m}\left|m\right\rangle\left\langle w_{c}\right|\mathbb{1}_{\mathcal{I}_{m}} \tag{82}\] and the maps \(\mathcal{F}_{n}(\cdot):=\sum_{c}K_{c}(\cdot)K_{c}^{\dagger}\). Note that \[\sum_{c}K_{c}^{\dagger}K_{c}=M_{n}\sum_{m}\mathbb{1}_{\mathcal{I}_{m}}T_{n}\mathbb{1}_{\mathcal{I}_{m}}=:\mathbf{T}_{n}. \tag{83}\] Therefore, as long as \[M_{n}\mathbb{1}_{\mathcal{I}_{m}}T_{n}\mathbb{1}_{\mathcal{I}_{m}}\leq\mathbb{1}_{\mathcal{I}_{m}} \tag{84}\] for all \(m\), we can use the construction of (82) on \(T_{n}^{\backslash}:=M_{n}^{-1}\left(\mathbb{1}^{\mathrm{A}_{n}}-\mathbf{T}_{n}\right)\) to obtain IO subchannels \(\mathcal{G}_{n}\) such that \(\mathcal{E}_{n}:=\mathcal{F}_{n}+\mathcal{G}_{n}\) are channels. With these conditions met, the \(\mathcal{E}_{n}\) are candidates for maximally-distilling channels if \(\mathrm{Tr}T_{n}=2^{n\left[S(\rho)-\delta\right]}\) and \(\tau_{n}:=T_{n}/\mathrm{Tr}T_{n}\) satisfy (80), since for any \(\delta_{\mathrm{S}}\)-typical subspace of \(\varrho_{n}\), \[\mathrm{Tr}\left(T_{n}\varrho_{n}^{\delta_{\mathrm{S}}}\right)\geq\lambda_{\min}\left(\varrho_{n}^{\delta_{\mathrm{S}}}\right)\mathrm{Tr}\left(T_{n}^{\delta_{\mathrm{S}}}\right)\geq\left(1-\epsilon_{n}\right)\lambda_{\min}\left(\varrho_{n}^{\delta_{\mathrm{S}}}\right)\mathrm{Tr}\left(T_{n}\right)\stackrel{{\delta,\delta_{\mathrm{S}}\to 0}}{{\longrightarrow}}1-\epsilon_{n}. \tag{85}\] We stress that taking the \(\delta_{\mathrm{S}}\to 0\) limit without regard to \(n\) is problematic for the same reasons as in the proof sketch for Conjecture 1; here we are just carrying on in disregard of this problem. This concludes our formalization of point 1 above. As noted in point 2, we can choose any optimal decomposition of \(\varrho_{n}\) attaining \(C_{f}\left(\varrho_{n}\right)\) [possibly after some pruning to uphold (84)] to saturate the measurement coherence bound on average--but we can try to do better. As summarized in point 3, we will flatten relative to \(c\) by decomposing \(\varrho_{n}\) in a special way (see the numerical sketch below). Let \(\rho=\sum_{j}q_{j}\phi_{j}\) be an optimal decomposition attaining \(C_{f}(\rho)\); such a decomposition exists, by virtue of the finitude of \(A\). Since \(\varrho_{n}=\left(\sum_{j}q_{j}\phi_{j}\right)^{\otimes n}\), it can of course be decomposed convexly into pure states of the form \(\left|\Phi_{\mathbf{c}=\mathbf{j}}\right\rangle\equiv\bigotimes_{k=1}^{n}\left|\phi_{j_{k}}\right\rangle\), where \(\mathbf{j}\equiv\left(j_{k}\right)_{k}\).
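The strong-typicality step that follows can be previewed numerically. Here is a minimal sketch under assumed toy data: the weights \(q_j\) and per-component values \(C_r(\phi_j)\) below are made up for illustration. It samples \(\mathbf{j}\sim\mathbf{q}^{\otimes n}\) and checks that the letter frequencies concentrate and that \(C_r(\Phi_{\mathbf{j}})/n\) clusters around \(C_f(\rho)\), as used in (86) below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: an optimal decomposition rho = sum_j q_j phi_j, with
# made-up weights q_j and per-component relative entropies of coherence C_r(phi_j).
q  = np.array([0.6, 0.3, 0.1])
Cr = np.array([1.0, 0.5, 0.2])
Cf = q @ Cr                        # equals C_f(rho) when the decomposition is optimal

n, delta_J, trials = 2000, 0.02, 1000
typical, rates = 0, []
for _ in range(trials):
    j = rng.choice(len(q), size=n, p=q)            # a sequence j ~ q^{(x)n}
    f = np.bincount(j, minlength=len(q)) / n       # letter frequencies f_j(j)
    typical += np.max(np.abs(f - q)) <= delta_J    # strong delta_J-typicality test
    rates.append(f @ Cr)                           # C_r(Phi_j)/n by additivity

print(f"P[strongly typical] ~ {typical/trials:.3f}  (-> 1 as n grows, delta_J fixed)")
print(f"C_r(Phi_j)/n: mean {np.mean(rates):.4f}, spread {np.std(rates):.4f}; C_f = {Cf:.4f}")
```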
If we then take the strongly-\(\delta_{\mathrm{J}}\)-typical set \(\mathcal{T}_{n}^{\delta_{\mathrm{J}}}\) under the distribution \(\mathbf{Q}_{n}\equiv\mathbf{q}^{\otimes n}\) (see Definition C.1 and Lemma C.2), the letter frequencies in any \(\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}\) satisfy \(\left|f_{j}\left(\mathbf{j}\right)-q_{j}\right|\leq\delta_{\mathrm{J}}\). By the additivity of the relative entropy of coherence under tensor products, \[C_{r}\left(\Phi_{\mathbf{j}}\right)=\sum_{j}\left[f_{j}\left(\mathbf{j}\right)n\right]C_{r}\left(\phi_{j}\right)=n\sum_{j}\left[q_{j}+O\left(\delta_{\mathrm{J}}\right)\right]C_{r}\left(\phi_{j}\right)=n\left[C_{f}(\rho)+O\left(\delta_{\mathrm{J}}\right)\right] \tag{86}\] for any of these \(\mathbf{j}\). The part of \(\varrho_{n}\) composed thereof is \(\varrho_{n}^{\delta_{\mathrm{J}}}\equiv\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}\). By the AEP, it has weight \[\mathrm{Tr}\varrho_{n}^{\delta_{\mathrm{J}}}=Q_{n}^{\delta_{\mathrm{J}}}\geq 1-\delta_{\mathrm{J}}, \tag{87}\] where \(Q_{n}^{\delta_{\mathrm{J}}}:=\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}\left(\mathbf{j}\right)\). As before, letting \(\varrho_{n}^{|\delta_{\mathrm{J}}}\) denote the normalized version thereof, the behaviour of the fidelity under convex mixtures (point 5 of Appendix A) yields \[F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{J}}}\right)\geq\left(Q_{n}^{\delta_{\mathrm{J}}}\right)^{2}\geq 1-2\delta_{\mathrm{J}}. \tag{88}\] This concludes point 3. We will now subject the remaining \(\left|\Phi_{\mathbf{j}}\right\rangle\) to a minor modification to ensure that our final construction can achieve \(\mathrm{Tr}T_{n}\approx\exp_{2}\left[nS(\rho)\right]\). Let \(\mathcal{V}^{\delta_{\mathrm{S}}}\) be \(\varrho_{n}\)'s \(\delta_{\mathrm{S}}\)-strongly-typical subspace. Define \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}}}\right\rangle:=\mathbb{1}_{\mathcal{V}^{\delta_{\mathrm{S}}}}\left|\Phi_{\mathbf{j}}\right\rangle\) and \[\varrho_{n}^{\delta_{\mathrm{J}},\delta_{\mathrm{S}}}:=\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}}}. \tag{89}\] By the quantum AEP (Lemma C.4), \[2^{-n\left[S(\rho)+\eta(\delta_{\mathrm{S}})\right]}\mathbb{1}\leq\varrho_{n}^{\delta_{\mathrm{S}}}\leq 2^{-n\left[S(\rho)-\eta(\delta_{\mathrm{S}})\right]}\mathbb{1}, \tag{90}\] where \(\eta(\delta)\stackrel{{\delta\to 0}}{{\longrightarrow}}0\). Noting that \(\varrho_{n}^{\delta_{\mathrm{S}}}=\varrho_{n}^{\delta_{\mathrm{J}},\delta_{\mathrm{S}}}+\varrho_{n}^{\backslash\delta_{\mathrm{J}},\delta_{\mathrm{S}}}\), \[\varrho_{n}^{\delta_{\mathrm{J}},\delta_{\mathrm{S}}}\leq\varrho_{n}^{\delta_{\mathrm{S}}}\leq 2^{-n\left[S(\rho)-\eta(\delta_{\mathrm{S}})\right]}\mathbb{1}. \tag{91}\] Now expand each \(\left|\phi_{j}\right\rangle=\sum_{i}\chi_{ji}\left|r_{i}\right\rangle\) in an eigenbasis \(\left\{\left|r_{i}\right\rangle\right\}\) of \(\rho\), with \(r_{i}\) the corresponding eigenvalues. Then, \(\sum_{j}q_{j}\left|\chi_{ji}\right|^{2}=r_{i}\), and therefore every \(\mathbf{i}\) sequence that is strongly-typical _conditional_ on \(\mathbf{j}\) is also part of the _unconditional_ strongly-typical set that determines the typical subspace. As such, the distribution of \(\mathbf{a}\) conditioned on \(\mathbf{j}\) is unaffected except for a \(\delta_{\mathrm{S}}\) loss of total measure. We will therefore consider the distribution of \(\mathbf{a}\) in \(\left|\Phi_{\mathbf{j}}\right\rangle\). Each \(\left|\Phi_{\mathbf{j}}\right\rangle\) can be made arbitrarily close to a uniform superposition in the incoherent \(\left|\mathbf{a}\right\rangle\) basis.
Let \(\left|\phi_{j}\right\rangle=\sum_{a}\sqrt{\xi_{a|j}}e^{i\varphi_{j,a}}\left|a\right\rangle\) for \(\boldsymbol{\xi}_{|j}\equiv\left(\xi_{a|j}\right)_{a}\) a distribution and \(\varphi_{j,a}\in\mathbb{R}\); note that \(H\left(\boldsymbol{\xi}_{|j}\right)=C_{r}\left(\phi_{j}\right)\). Then, \(\left|\phi_{j}\right\rangle^{\otimes n_{j}}=\left(\sum_{a}\sqrt{\xi_{a|j}}e^{i\varphi_{j,a}}\left|a\right\rangle\right)^{\otimes n_{j}}\). We now apply (weak) typicality on the sequences \(\mathbf{a}_{j}\equiv\left(a_{k}\right)_{k=1}^{n_{j}}\) (where the subscript \(j\) in \(\mathbf{a}_{j}\) signifies that the latter takes values in \(\mathcal{A}_{n_{j}}\) and not \(\mathcal{A}_{n}\)) under the distribution \(\boldsymbol{\xi}_{|j}^{\otimes n_{j}}\). The number of \(\delta_{\mathrm{A}}\)-weakly-typical sequences is \(\leq 2^{n_{j}\left[C_{r}\left(\phi_{j}\right)+\delta_{\mathrm{A}}\right]}\), and they collectively garner an amplitude \(\geq\sqrt{1-\delta_{\mathrm{A}}}\); this holds alike for all \(j\). Note that the strongly-typical \(\left|\Phi_{\mathbf{j}}\right\rangle\) described above can be obtained by applying subsystem permutations (which are incoherent unitaries) on \(\bigotimes_{j}\left|\phi_{j}\right\rangle^{\otimes n_{j}}\), where \(n_{j}\equiv\left[q_{j}+O\left(\delta_{\mathrm{J}}\right)\right]n\). Therefore, if \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{A}}}\right\rangle\) denotes \(\left|\Phi_{\mathbf{j}}\right\rangle\) projected onto the span of the \(\mathbf{a}\in\mathcal{A}_{n}\) simultaneously \(\delta_{\mathrm{A}}\)-typicalized under all the \(\boldsymbol{\xi}_{|j}^{\otimes n_{j}}\), \[r_{C}\left(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{A}}}\right)\leq 2^{n\left[C_{f}\left(\rho\right)+\delta_{\mathrm{A}}+O\left(\delta_{\mathrm{J}}\right)\right]};\tag{92}\] \[\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{A}}}\Big|\Phi_{\mathbf{j}}\right\rangle\right|\geq\sqrt{\left(1-\delta_{\mathrm{A}}\right)^{J}}=1-O\left(\delta_{\mathrm{A}}\right), \tag{93}\] where \(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{A}}}\) denotes the normalized \(\Phi_{\mathbf{j}}^{\delta_{\mathrm{A}}}\) and \(J\) is the number of components in the optimal decomposition \(\rho=\sum_{j}q_{j}\phi_{j}\). Applying the same arguments to the subspace-typicalized \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\), \[r_{C}\left(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\leq 2^{n\left[C_{f}\left(\rho\right)+\delta_{\mathrm{A}}+O\left(\delta_{\mathrm{J}}\right)+O\left(\delta_{\mathrm{S}}\right)\right]};\tag{94}\] \[\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\Phi_{\mathbf{j}}\right\rangle\right|\geq 1-O\left(\delta_{\mathrm{S}}\right)-O\left(\delta_{\mathrm{A}}\right). \tag{95}\] Applying Lemma B.1 (the asymptotic continuity of the relative entropy of coherence [13]), we also have \[C_{r}\left(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=n\left[C_{f}\left(\rho\right)+\epsilon_{n}^{(1)}\right]. \tag{96}\] Now define \(\varrho_{n}^{\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}:=\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\) and let \(\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\) denote its normalized version.
The joint concavity of the square-root fidelity implies \[F\left(\varrho_{n}^{|\delta_{\mathrm{J}}},\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq\left(\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}\frac{Q_{n}\left(\mathbf{j}\right)}{Q_{n}^{\delta_{\mathrm{J}}}}\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\Phi_{\mathbf{j}}\right\rangle\right|\right)^{2}\geq 1-O\left(\delta_{\mathrm{S}}\right)-O\left(\delta_{\mathrm{A}}\right)\;\Rightarrow\;F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\left(2\delta_{\mathrm{J}}\boxplus\left[O\left(\delta_{\mathrm{S}}\right)+O\left(\delta_{\mathrm{A}}\right)\right]\right). \tag{97}\] For convenience, let \(Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\) denote the appropriate normalized measure such that \[\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}=\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}. \tag{98}\] This \(\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\) can be decomposed into the pure components \(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\), each of whose coherence rank is no larger than \(\exp_{2}\left(n\left[C_{f}\left(\rho\right)+\delta_{\mathrm{A}}+O\left(\delta_{\mathrm{J}}\right)+O\left(\delta_{\mathrm{S}}\right)\right]\right)\), thus accomplishing point 5. Next, as anticipated in point 6, we will show that the logarithm of the remaining coherence rank within each \(\mathcal{I}_{m}\) is tightly concentrated around the value \(n\ell(\rho)\). To this end, for any given \(\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}\), define \[\mathcal{M}_{n}^{\mathbf{j},\delta_{\mathrm{R}}}:=\left\{m:\;r_{C}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\leq 2^{n\left[\ell(\rho)+\tilde{\epsilon}_{n}+\delta_{\mathrm{R}}\right]}\right\}, \tag{99}\] where \(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}:=\mathbb{1}_{\mathcal{I}_{m}}\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\mathbb{1}_{\mathcal{I}_{m}}/\mathrm{Tr}\left(\mathbb{1}_{\mathcal{I}_{m}}\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\) and \(\delta_{\mathrm{R}}>0\). If \(\mu_{n}^{\mathbf{j},\delta_{\mathrm{R}}}:=\left|\mathcal{M}_{n}^{\mathbf{j},\delta_{\mathrm{R}}}\right|/M_{n}\), \[2^{n\left[C_{f}\left(\rho\right)+\delta_{\mathrm{A}}+O\left(\delta_{\mathrm{J}}\right)+O\left(\delta_{\mathrm{S}}\right)\right]}\geq r_{C}\left(\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=\sum_{m}r_{C}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq\left(1-\mu_{n}^{\mathbf{j},\delta_{\mathrm{R}}}\right)M_{n}\,2^{n\left[\ell(\rho)+\tilde{\epsilon}_{n}+\delta_{\mathrm{R}}\right]}, \tag{100}\] which, since \(M_{n}=2^{n\left[C_{r}(\rho)-\tilde{\epsilon}_{n}\right]}\) and \(\ell(\rho)+C_{r}(\rho)=C_{f}(\rho)\), yields \[1-\mu_{n}^{\mathbf{j},\delta_{\mathrm{R}}}\leq 2^{-n\left[\delta_{\mathrm{R}}-\delta_{\mathrm{A}}-O\left(\delta_{\mathrm{J}}\right)-O\left(\delta_{\mathrm{S}}\right)\right]}. \tag{101}\] Towards point 6's subspace \(\mathcal{V}\), note that Winter-Yang's channel can be realized by conjugation with a unitary followed by the discarding of a subsystem: \[\tilde{\mathcal{E}}_{n}^{\mathrm{A}_{n}\rightarrow\mathrm{M}_{n}}\left(\varrho_{n}\right)=\mathrm{Tr}^{\mathrm{S}_{n}}\left[\mathcal{U}^{\mathrm{A}_{n}\to\mathrm{S}_{n}\mathrm{M}_{n}}\left(\varrho_{n}\right)\right], \tag{102}\] where \(\mathrm{S}_{n}\) and \(\mathrm{M}_{n}\) are suitable subsystems of \(\mathrm{A}_{n}\) and \(\mathcal{U}_{n}(\cdot)\equiv U_{n}(\cdot)U_{n}^{\dagger}\) for some unitary \(U_{n}\).
Since \(\tilde{\mathcal{E}}_{n}\) are valid maximal distillation channels, there is an asymptotically-vanishing sequence \(\tilde{\epsilon}_{n}^{(1)}\) such that \[F\left[\tilde{\mathcal{E}}_{n}\left(\varrho_{n}\right),\Psi_{M_{n}}\right]\geq 1-\tilde{\epsilon}_{n}^{(1)}\;\Rightarrow\;\left\langle\Psi_{M_{n}}\right|\tilde{\mathcal{E}}_{n}\left(\varrho_{n}\right)\left|\Psi_{M_{n}}\right\rangle\geq 1-\tilde{\epsilon}_{n}^{(1)}\;\Rightarrow\;\mathrm{Tr}\left[\left(\mathbb{1}^{\mathrm{S}_{n}}\otimes\Psi_{M_{n}}^{\mathrm{M}_{n}}\right)\mathcal{U}_{n}\left(\varrho_{n}\right)\right]\geq 1-\tilde{\epsilon}_{n}^{(1)}. \tag{103}\] Let \(\mathcal{V}^{\prime}:=\left(\mathbb{1}^{\mathrm{S}_{n}}\otimes\Psi_{M_{n}}^{\mathrm{M}_{n}}\right)\left[\mathrm{supp}\;\mathcal{U}_{n}\left(\varrho_{n}\right)\right]=:\mathcal{W}^{\mathrm{S}_{n}}\otimes\left|\Psi_{M_{n}}\right\rangle^{\mathrm{M}_{n}}\); note that \(\mathcal{V}^{\prime}\) and \(\mathcal{W}\) are vector spaces by construction. For any \(\left|w\right\rangle\in\mathcal{W}\), \[\left|v\right\rangle:=U_{n}^{\dagger}\left(\left|w\right\rangle\otimes\left|\Psi_{M_{n}}\right\rangle\right)=M_{n}^{-1/2}\sum_{m}\left|v_{m}\right\rangle \tag{104}\] with \(\left|v_{m}\right\rangle:=U_{n}^{\dagger}\left(\left|w\right\rangle\otimes\left|m\right\rangle\right)\), whereby \(\left\langle v_{m}|v_{m}\right\rangle=\left\langle v|v\right\rangle\) for all \(m\). Moreover, since \(U_{n}\) induces the block structure discussed in point 1, \(\left|v_{m}\right\rangle\in\mathrm{span}\left\{\left|\mathbf{a}\right\rangle:\;\mathbf{a}\in\mathcal{I}_{m}\right\}\). The span of all such \(\left|v\right\rangle\)'s is \(\mathcal{V}:=U_{n}^{\dagger}\mathcal{V}^{\prime}\). By unitarity, (103) implies \[\mathrm{Tr}\left(\mathbb{1}_{\mathcal{V}}\varrho_{n}\right)\geq 1-\tilde{\epsilon}_{n}^{(1)}. \tag{105}\] Combining this with (97), we have \[\mathrm{Tr}\left(\mathbb{1}_{\mathcal{V}}\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq 1-\epsilon_{n}^{(2)}\;\Rightarrow\;\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\left\langle\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\Big|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right\rangle\geq 1-\epsilon_{n}^{(2)}, \tag{106}\] where \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right\rangle:=\mathbb{1}_{\mathcal{V}}\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) (unnormalized). For any \(\epsilon\in\left(\epsilon_{n}^{(2)},1\right]\), let \[\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon}:=\left\{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}}}:\;\left\langle\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\Big|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right\rangle\geq 1-\epsilon\right\} \tag{107}\] and \(Q_{n}^{\delta_{\mathrm{J}},\epsilon}:=\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\). Then, \[Q_{n}^{\delta_{\mathrm{J}},\epsilon}\cdot 1+\left(1-Q_{n}^{\delta_{\mathrm{J}},\epsilon}\right)\left(1-\epsilon\right)\geq 1-\epsilon_{n}^{(2)}\;\Rightarrow\;Q_{n}^{\delta_{\mathrm{J}},\epsilon}\geq 1-\frac{\epsilon_{n}^{(2)}}{\epsilon}. \tag{108}\]
In particular, choosing \(\epsilon\equiv\epsilon_{n}^{(3)}:=\sqrt{\epsilon_{n}^{(2)}}\), \[Q_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}\geq 1-\epsilon_{n}^{(3)}. \tag{109}\] Consequently, \[\mathrm{Tr}\left[\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right]\geq Q_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}\min_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}}\left\langle\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\Big|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right\rangle\geq\left[1-\epsilon_{n}^{(3)}\right]^{2}\geq 1-2\epsilon_{n}^{(3)}. \tag{110}\] Henceforth we shall only consider \(\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}\). Define \[\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(3)}}:=\frac{1}{Q_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}}\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}. \tag{111}\] Note that we have not projected the vectors onto \(\mathcal{V}\) in this definition: we have merely restricted the values of \(\mathbf{j}\) further. Due to (109), \[F\left(\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}}},\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(3)}}\right)\geq 1-2\epsilon_{n}^{(3)}, \tag{112}\] which implies via (97) that \[F\left(\varrho_{n},\varrho_{n}^{|\delta_{\mathrm{J}},\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(3)}}\right)\geq 1-\epsilon_{n}^{(4)}. \tag{113}\] Now define \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}},\mathcal{V}}\right\rangle\) by normalizing \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}};\mathcal{V}}\right\rangle\). Writing \(\alpha_{m}:=\left\|\mathbb{1}_{\mathcal{I}_{m}}\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right\|\) for the amplitudes this vector carries on the blocks \(\mathcal{I}_{m}\), and defining \(\left|\alpha\right\rangle:=\sum_{m}\alpha_{m}\left|m\right\rangle\), (114) begets \[\left|\left\langle\alpha|\Psi_{M_{n}}\right\rangle\right|^{2}\geq 1-\epsilon_{n}^{(3)}. \tag{117}\] Again applying Lemma B.1, \[C_{r}\left(\left|\alpha\right\rangle\left\langle\alpha\right|\right)=C_{r}\left(\Psi_{M_{n}}\right)+n\epsilon_{n}^{(5)}=\log_{2}M_{n}+n\epsilon_{n}^{(5)}=n\left[C_{r}(\rho)+\epsilon_{n}^{(6)}\right]. \tag{118}\]
Define \(\left|\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) as normalized versions of \(\mathbb{1}_{\mathcal{I}_{m}}\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\), and through these, the vector \[\left|\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle:=M_{n}^{-1/2}\sum_{m}\left|\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle. \tag{119}\] Although this vector is not necessarily in \(\mathcal{V}\), it has even \(m\) amplitudes and, moreover, satisfies \(\left\langle\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle=\left\langle\alpha|\Psi_{M_{n}}\right\rangle\). Thus, applying Lemma B.1 again and using (96), \[C_{r}\left(\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=n\left[C_{f}\left(\rho\right)+\epsilon_{n}^{(7)}\right]. \tag{120}\] Note that \[C_{r}\left(\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=C_{r}\left(\Psi_{M_{n}}\right)+M_{n}^{-1}\sum_{m}C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=n\left[C_{r}(\rho)-\tilde{\epsilon}_{n}\right]+M_{n}^{-1}\sum_{m}C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right). \tag{121}\] Thus, \[M_{n}^{-1}\sum_{m}C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)=n\left[\ell(\rho)+\epsilon_{n}^{(8)}\right]. \tag{122}\] Now recall (101), where we showed that for \(\delta_{\mathrm{R}}\gg\delta_{\mathrm{A}}+O\left(\delta_{\mathrm{J}}\right)+O\left(\delta_{\mathrm{S}}\right)\), the fraction of \(m\) values (under the uniform measure \(M_{n}^{-1}\)) with \(r_{C}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)>\exp_{2}\left(n\left[\ell(\rho)+\tilde{\epsilon}_{n}+\delta_{\mathrm{R}}\right]\right)\) is no larger than \(\exp_{2}\left(-n\delta_{\mathrm{R}}\right)\). Define \[\bar{\mathcal{M}}_{n}^{\mathbf{j},\delta_{\mathrm{R}}}:=\left\{m:\;C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\leq n\left[\ell(\rho)+\tilde{\epsilon}_{n}+\delta_{\mathrm{R}}\right]\right\}. \tag{123}\] Since \(C_{r}\leq\log_{2}r_{C}\), \[1-\frac{\left|\bar{\mathcal{M}}_{n}^{\mathbf{j},\delta_{\mathrm{R}}}\right|}{M_{n}}\leq\exp_{2}\left(-n\delta_{\mathrm{R}}\right). \tag{124}\] Thus, (122) implies \[M_{n}^{-1}\sum_{m\in\bar{\mathcal{M}}_{n}^{\mathbf{j},\delta_{\mathrm{R}}}}C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\geq n\left[\ell(\rho)+\epsilon_{n}^{(9)}\right]. \tag{125}\] As in the context of (110), this is again a situation where the average value of a function is close to an extremal value.
Choosing \(\delta_{\mathrm{R}}\approx\epsilon_{n}^{(10)}:=\sqrt{\epsilon_{n}^{(9)}}\) and making the other \(\delta\)'s small enough, we can apply similar arguments to get \[\frac{\left|\mathcal{M}_{n}^{\mathbf{j},\pm\epsilon_{n}^{(10)}}\right|}{M_{n}}\geq 1-\epsilon_{n}^{(10)}, \tag{126}\] where \[\mathcal{M}_{n}^{\mathbf{j},\pm\epsilon_{n}^{(10)}}:=\left\{m:\;C_{r}\left(\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right)\in n\left[\ell(\rho)-\epsilon_{n}^{(10)},\ell(\rho)+\epsilon_{n}^{(10)}\right]\right\}. \tag{127}\] Now defining \[\left|\bar{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\right\rangle:=M_{n}^{-1/2}\sum_{m\in\mathcal{M}_{n}^{\mathbf{j},\pm\epsilon_{n}^{(10)}}}\left|\Phi_{\mathbf{j},m}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle, \tag{128}\] it follows from (126) that \[\left|\left\langle\bar{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\Big|\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right|\geq 1-\epsilon_{n}^{(10)}. \tag{129}\] Similarly defining \(\left|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\right\rangle\) and then applying \[\left|\left\langle\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\tilde{\Phi}_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right|=\left|\left\langle\alpha|\Psi_{M_{n}}\right\rangle\right|\geq\sqrt{1-\epsilon_{n}^{(3)}} \tag{130}\] twice, we have \[\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\Big|\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right|\approx\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\Big|\tilde{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right|=\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\bar{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\right\rangle\right|\approx\left|\left\langle\tilde{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\Big|\bar{\Phi}_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\right\rangle\right|. \tag{131}\] Thus, \[\left|\left\langle\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}\Big|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\right|\geq 1-\epsilon_{n}^{(11)}. \tag{132}\] We now define \[\tau_{n}:=\mathrm{norm}\sum_{\mathbf{j}\in\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}}Q_{n}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\left(\mathbf{j}\right)\Phi_{\mathbf{j}}^{\delta_{\mathrm{S}},\delta_{\mathrm{A}},\epsilon_{n}^{(10)}}, \tag{133}\] where "norm" denotes normalization to unit trace. Each of the steps taking us from \(\varrho_{n}\) to this \(\tau_{n}\) was one of the following:

1. Restriction of a convex mixture to a subset of its components;
2. Unnormalized projection of a pure component in a convex decomposition;
3. Renormalization of the entire operator by a factor close to \(1\).

Therefore, \[\tau_{n}\leq\left(1+\epsilon_{n}^{(13)}\right)2^{-n[S(\rho)-\eta(\delta_{\mathrm{S}})]}\mathbb{1}. \tag{135}\] Defining \(\bar{S}_{n}:=\left(\left\|\tau_{n}\right\|_{\infty}\right)^{-1}\), \[T_{n}:=\bar{S}_{n}\tau_{n} \tag{136}\] satisfies \(T_{n}\leq\mathbb{1}\) and \(\text{Tr}T_{n}\approx\left(1-\epsilon_{n}^{(13)}\right)2^{n[S(\rho)-\eta(\delta_{\mathrm{S}})]}\). But this \(T_{n}\) may not satisfy condition (84), since in general \[M_{n}\mathbb{1}_{\mathcal{I}_{m}}T_{n}\mathbb{1}_{\mathcal{I}_{m}}\nleq\mathbb{1}_{\mathcal{I}_{m}}. \tag{137}\]
This adds to our growing list of obstacles against completing our proof. A possible remedy for this particular challenge might be to appeal to the symmetric and random nature of the choice of partitioning (see [13, Supplemental Material, Lemma 16]; [23, Prop. 2.4]) to prune out a small fraction of offending \(m\) values. Alternatively, in the step (111) where (as we then explicated) we only restricted \(\mathbf{j}\) to \(\mathcal{T}_{n}^{\delta_{\mathrm{J}},\epsilon_{n}^{(3)}}\), we could additionally project each pure component \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) onto \(\mathcal{V}\), which by its special structure automatically satisfies \[M_{n}\mathbb{1}_{\mathcal{I}_{m}}\mathbb{1}_{\mathcal{V}}\mathbb{1}_{\mathcal{I}_{m}}\leq\mathbb{1}_{\mathcal{I}_{m}}. \tag{138}\] If we therefore ensure that the overall \(T_{n}\) (at this projection step) satisfies \(T_{n}\lesssim\mathbb{1}_{\mathcal{V}}\), the subsequent modifications would still maintain \(M_{n}\mathbb{1}_{\mathcal{I}_{m}}T_{n}\mathbb{1}_{\mathcal{I}_{m}}\lesssim\mathbb{1}_{\mathcal{I}_{m}}\), even though the eventual \(T_{n}\) may not be supported on \(\mathcal{V}\). This would enable us to uphold (84) by applying a normalization factor close to \(1\) (as in the other steps). However, projecting \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) onto \(\mathcal{V}\) brings another problem: while we were able to approximate each \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}}}\right\rangle\) with a near-uniform superposition \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}}}\right\rangle\) by appealing to the former's tensor-product structure, no such structure is assured for \(\left|\Phi_{\mathbf{j}}^{|\delta_{\mathrm{S}},\delta_{\mathrm{A}},\mathcal{V}}\right\rangle\). Though we are able to show that this projected vector is close to the unprojected one, it is not close enough to assure the retention of the near-uniformity of the superposition. Here again, we speculate that the symmetry and randomness in the choice of partitioning might help argue for the existence of near-uniform approximations to the \(\mathcal{V}\)-projected vectors. Finally, if everything worked out, we would have a residual \(T_{n}^{\backslash}\) with trace exponentially smaller than that of \(T_{n}\). Thus, the additional coherent measurement cost from it, as well as its contribution to the average (measured in logarithmic rank), would be negligible. This concludes our survey of our attempted construction to prove Conjecture 2, and of the various challenges encountered therein. As one may have noticed, the overall difficulties here are much more involved than in the case of Conjecture 1. This suggests that Conjecture 2 may stand a bleaker chance of holding up.

## 6 Ramifications of our conjectures

If true, conjectures 1 and 2 would pin down the coherent measurement cost of asymptotically maximal distillation, up to subextensive terms. We shall now explore some implications that would then follow. We will first discuss the operational implications for maximal distillation, and then sketch some possibilities for an asymptotic tradeoff between the coherent measurement budget and the distilled yield. All statements in the remainder of this section will be premised on conjectures 1 and 2.
### Maximal distillation may be a net loss

According to Conjecture 1, the coherent measurements used in any maximal distillation protocol (on average, up to subextensive terms) would be equivalent to a number of cobits (or qubit Hadamard gates) no smaller than \(n\ell(\rho)\), where \(n\) is the number of copies of the input \(\rho\) used. Meanwhile, the distilled yield is equivalent to \(nC_{r}(\rho)\) cobits. Could the coherent measurement cost exceed the yield, i.e. \(\ell(\rho)>C_{r}(\rho)\)? Indeed, this does sometimes happen. While no general method is known for efficiently computing the coherence of formation, it does have a closed-form expression for states of a single qubit [16]: \[C_{f}(\rho)=h\left(\frac{1+\sqrt{1-4\left|\left\langle 0\right|\rho\left|1\right\rangle\right|^{2}}}{2}\right). \tag{139}\] Using this formula, we computed the excess coherence cost \(\ell(\rho)-C_{r}(\rho)\) for a representative sample of qubit states (Fig. 1); a minimal numerical sketch of this computation appears below, just before Definition 7. The results suggest that a nonzero measure of states incur an excess cost for maximal distillation, and moreover, that this excess cost can be an arbitrarily large multiple of the distilled yield! Coherence distillation is sometimes an instrumental or incidental outcome in a larger task, e.g. entanglement distillation using incoherent local operations [24]. But if the task is coherence distillation itself, then our results call into question its operational utility in situations where \(\ell(\rho)\geq C_{r}(\rho)\): if we are able to implement \(n\ell(\rho)\) Hadamard gates, we should rather use them to simply prepare as many fresh cobits than squander them in distilling only \(nC_{r}(\rho)\).

### Asymptotic cost-yield tradeoff

An obvious question that arises is what the cost of _non-_maximal distillation is. In this section we will make some conjectures on this question, restricting our consideration to IO. Since the coherence rank \(M_{n}\approx\exp_{2}\left[n\,C_{r}(\rho)\right]\) of the maximal distillate and that of the conjectured requisite measurement, \(L_{n}\approx\exp_{2}\left[n\,\ell(\rho)\right]\), both scale exponentially in the number of copies, one might naively expect that the best one can do with a measurement rank scaling \(\tilde{L}_{n}\approx\left(L_{n}\right)^{t}\) with \(0\leq t<1\) is to distill maximally from a fraction \(t\) of the input's copies, thereby achieving \(\tilde{M}_{n}\approx\left(M_{n}\right)^{t}\), corresponding to the distillation rate \(t\,C_{r}(\rho)\). However, this would only be a loose lower bound on the achievable rate, as can already be seen by putting Conjecture 2 together with past results on SIO distillation [12, 25]. Recall that SIO are just IO that don't use coherent measurements, i.e. the case \(t=0\). We know that the maximal SIO-distillable rate is nonzero for certain input instances whose maximal IO rate is strictly larger--thus, in these cases, although \(\log_{2}L_{n}\in\Omega(n)\) (assuming Conjecture 1) for maximal IO distillation, \(\tilde{L}_{n}=1\in O\left[\left(L_{n}\right)^{t=0}\right]\) is nevertheless able to achieve \(\Omega(n)\ni\log_{2}\tilde{M}_{n}\notin O\left[tn\,C_{r}(\rho)\right]\). Can we go further and find the exact asymptotic trade-off between the cost and the yield?
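As referenced in Section 6.1 above, here is a minimal numerical sketch of the qubit excess-cost computation (our own illustration; the sampling scheme is an assumption of this sketch, not the procedure behind Fig. 1). It draws states uniformly from the Bloch ball, computes \(C_f\) via (139), \(C_r=S[\Delta(\rho)]-S(\rho)\), and the irretrievable coherence \(\ell=C_f-C_r\):

```python
import numpy as np

def h(p):  # binary Shannon entropy in bits
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(1)
N = 100_000
v = rng.normal(size=(N, 3))                         # random directions...
v *= (rng.random(N) ** (1/3) / np.linalg.norm(v, axis=1))[:, None]  # ...uniform in the ball
x, y, z = v.T
r = np.linalg.norm(v, axis=1)                       # Bloch radius; rho = (1 + v.sigma)/2

off = np.sqrt(x**2 + y**2) / 2                      # |<0|rho|1>| in Bloch coordinates
C_f = h((1 + np.sqrt(1 - 4 * off**2)) / 2)          # coherence of formation, Eq. (139)
C_r = h((1 + z) / 2) - h((1 + r) / 2)               # C_r = S[Delta(rho)] - S(rho)
ell = C_f - C_r                                     # irretrievable coherence l(rho)

print(f"fraction with l(rho) > C_r(rho):   {np.mean(ell > C_r):.3f}")
print(f"fraction with l(rho) > 5*C_r(rho): {np.mean(ell > 5 * C_r):.3f}")
```

Consistent with the discussion above, a nonzero fraction of the sampled states shows \(\ell>C_{r}\), and the ratio \(\ell/C_{r}\) blows up near the incoherent states (where both quantities vanish).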
**Definition 7** (Feasible cost-yield pair): \([l,r]\in\mathbb{R}^{2}\) is a _feasible cost-yield pair_ for \(\rho\) if there exists a sequence \(\mathcal{E}_{n}\) of IO channels that respectively distill from \(\rho^{\otimes n}\) at the asymptotic rate \(r\) and coherent measurement cost \(l\), defined as \[l=\lim_{n\to\infty}\frac{\log_{2}L_{n}}{n} \tag{140}\] with \(L_{n}\) the cost of \(\mathcal{E}_{n}\) both on (logarithmic) average and on a per-measurement basis (for almost all measurements involved)--the two being identical under Conjecture 2. For example, \([l,r]=[\ell(\rho),C_{r}(\rho)]\) is a feasible pair for \(\rho\) according to Conjecture 2. Taking Theorem 2 and conjectures 1 and 2 as clues, we now make the informed guess that a quantity like the ones defined in Definition 6 would quantify the necessary coherent measurement cost of non-maximal asymptotic distillation. If so, the freedom we have in optimizing a distillation strategy would consist of choosing an appropriate density operator \(\tau_{\perp}\) orthogonal to \(\varrho_{n}\), such as to minimize the mixture's \(C_{f}\). Intuitively, for \(\tau_{\perp}\) to help reduce the final \(C_{f}\), its incoherent alphabet must overlap with \(\varrho_{n}\)'s as much as possible. One possibility suggested by this intuition is to pick \(\tau_{\perp}\) so that the mixture becomes \(\sigma^{\otimes n}\) for some state \(\sigma\) satisfying \(\Delta(\sigma)=\Delta(\rho)\). In particular, define the set \[\mathcal{D}(\rho):=\left\{\sigma:\;\Delta(\sigma)=\Delta(\rho),\;\rho\stackrel{{\text{SIO}}}{{\longmapsto}}\sigma\right\}. \tag{141}\] As per Conjecture 2, \([l,r]=[\ell(\sigma),C_{r}(\sigma)]\) for any \(\sigma\in\mathcal{D}(\rho)\) would also be a feasible pair for \(\rho\), since each copy of \(\rho\) could first be mapped to one of \(\sigma\) without requiring coherent measurements. A special case of this is when \(\sigma\) is Lami's \(\bar{\rho}\) [12, 25], the state consisting only of the pure diagonal blocks of \(\rho\)--we then get the SIO-feasible pair \([\ell\left(\bar{\rho}\right),C_{r}\left(\bar{\rho}\right)]=[0,Q\left(\rho\right)]\), where \(Q(\rho)\) is Lami's _quintessential coherence_. We can achieve any convex combination of a set of feasible pairs by performing the respective protocols achieving them on appropriate fractions of the total number of input copies. Finally, the feasibility of \([l,r]\) trivially implies that of any \([l,r_{1}<r]\). Thus, Conjecture 2 implies that any pair in the following set is feasible for \(\rho\): \[\mathcal{C}(\rho):=\operatorname{conv}\left\{\left[\ell(\sigma),r\right]:\;r\in[0,C_{r}(\sigma)]\,,\;\sigma\in\mathcal{D}(\rho)\right\}. \tag{142}\] The symmetry and typicality properties of the asymptotic limit suggest that there might be no nontrivial feasible pairs besides these. We discussed above why we believe it would not help to expand \(\mathcal{D}(\rho)\) to include states with a diagonal part different from the input's. As for the possibility of collective SIO pre-processing on many copies of \(\rho\), Lami's results on SIO distillation hint against the prospect of this producing new feasible pairs. There would of course be IO strategies that deviate from maximal distillation in ways other than an SIO pre-processing. But based on our preliminary studies of distillation in the dilation picture (Appendix E), we are inclined to believe that these variations do not expand the feasible set either.
Hence, we have the following: **Conjecture 3**.: _The \(\mathcal{C}(\rho)\) defined through (141) and (142) is the set of all feasible cost-yield pairs for \(\rho\)._ In the above line of reasoning we considered using the coherence budget \(l\) only in implementing an IO acting on the given input. But allowing the coherent action to be used possibly also in preparing fresh coherent states (as discussed in Section 6.1) is arguably more operationally meaningful. In that case, \([l,l]\) should also be considered a feasible pair for any \(l\geq 0\) and any \(\rho\). If we wish to include these in our reckoning, we could modify the definition (142) by replacing \(C_{r}(\sigma)\) with \(\max\left\{\ell(\sigma),C_{r}(\sigma)\right\}\). In general, the problem of computing the largest achievable rate \(r\) for a given coherent measurement budget \(l\), or inversely, that of computing the least \(l\) achieving a desired rate \(r\), is likely to be very hard (even if our conjectures hold)--after all, we have no known efficient way to compute \(\ell(\rho)\) itself, saying nothing of the difficulty of finding the set \(\mathcal{D}(\rho)\). But we expect there to be a non-trivial cost-yield tradeoff landscape, finding which would be both fundamentally and operationally important. ## 7 Conclusion We laid out a framework for quantifying the cost of coherent measurements involved in distilling coherent states from arbitrary inputs using coherence-non-creating channels (MIO). The framework uses a construction we call the _target witness_--a measurement effect that quantifies the probability with which a given channel maps an input to a desired target state. This object also doubles up as a quantifier of the requisite coherent measurement action in implementing the channel. We derived conditions on the target witness--and thereby, lower bounds on the coherent measurement cost--for exact (maximal and non-maximal) and approximate maximal coherence distillation from finite-sized inputs. Based on our result on the approximate case, we conjectured a scaling law for the coherent measurement cost of asymptotic distillation at the maximal rate--namely, that the necessary and sufficient cost (in an equivalent number of qubit Hadamard gates) is extensive in the number of input copies, with a rate given by the input's irretrievable coherence \(\ell(\rho)=C_{f}(\rho)-C_{r}(\rho)\). We went through detailed and rigorous proof sketches for both the necessity and the sufficiency of this cost, and discussed the difficulties we encountered in completing the proofs. We then discussed some implications of our conjectures. First, we noted that our conjectured coherent measurement cost exceeds the very distilled yield in a nonzero measure of qubit instances of maximal distillation. Thus, maximal coherence distillation as a standalone operational task may sometimes amount to a net attrition of coherence resource and should therefore be superseded by coherent state preparation. We then made some speculations on an asymptotic tradeoff between the coherent measurement budget and distilled yield. A question of possible interest for future work is whether the appearance of the irretrievable coherence--a signature of irreversibility--is accidental, or if, rather, there is a deeper connection between resource-theoretic irreversibility and ancillary (or otherwise hidden) costs of distillation. 
The fact that our results all apply to MIO, which do not exhibit irreversibility, supports the accident hypothesis, but there may yet be some subtleties that we have missed. Our results and conjectures have foundational significance in understanding the resource theory of coherence, but also operational implications in applications that use the resource. Further investigation on our conjectures may shed light on information-theoretic aspects of coherence manipulation, e.g. operational links between coherence distillation and encoding tasks, a possible asymptotic equipartition property of convex-roof extensions of entropic coherence quantifiers, etc. Developing methods and heuristics to actually compute such quantifiers would be a challenging project in itself. Other natural avenues for inquiry would be in situations involving the interplay of coherence with other resources, e.g. multipartite settings with local incoherent operations [24]. Besides the target witness-based approach detailed in this paper, we also attempted some alternative approaches towards quantifying the coherent measurement cost of distillation. In Appendix D, we describe an approach wherein we studied the behaviour of certain SIO monotones [25] under constrained IO; in Appendix E, we present our preliminary investigations on what we call decoupling schemes--certain linear-algebraic structures that emerge when distillation is framed in the dilation picture. These suggest yet other fruitful lines of inquiry, including possible connections with established notions of decoupling [26, 27, 28, 29, 30, 31]. ## Acknowledgements The author thanks Anurag Anshu, Kishor Bharti, Dagmar Bruss, Ian George, Ludovico Lami, Alessandro Luongo, Iman Marvian, Ryuji Takagi, Thomas Theurer and Benjamin Yadin for helpful discussions. Special thanks are due to Eric Chitambar, Mile Gu and Yunlong Xiao for particularly detailed and engaging discussions. This project was originally conceived during a collaboration with Francesco Buscemi and Bartosz Regula. The author expresses his sincerest gratitude for their help through intense discussions during the early phase and occasional consultations thereafter. ## Appendix A Useful properties of the fidelity In this paper we adhere to the _Uhlmann-Jozsa_ (i.e., square-root-free) definition of the fidelity, \(F(\sigma,\tau):=\left(\operatorname{Tr}\!\sqrt{\sqrt{\sigma}\tau\sqrt{\sigma}} \right)^{2}\), which we will apply on arbitrary positive-semidefinite arguments (not only density operators). We will make use of the following properties, referring to this section when we do so: 1. One or both arguments pure: \(F(\psi,\tau)=\left\langle\psi\right|\tau\left|\psi\right\rangle\). 2. Linearity in nonnegative scalar factors: \(F(t\sigma,\tau)=tF(\sigma,\tau)\) for \(t\geq 0\). 3. Joint concavity of the square-root fidelity: for \(0\leq p\leq 1\), \(\sqrt{F\left[p\sigma_{1}+(1-p)\sigma_{2},p\tau_{1}+(1-p)\tau_{2}\right]}\geq p \sqrt{F\left(\sigma_{1},\tau_{1}\right)}+(1-p)\sqrt{F\left(\sigma_{2},\tau_{2 }\right)}\). 4. 
Fidelity of an operator with a normalized projection thereof: for some space \(\mathcal{V}\), if \(\sigma^{\mathcal{V}}:=\mathbb{1}_{\mathcal{V}}\sigma\mathbb{1}_{\mathcal{V}}\) and \(\sigma^{|\mathcal{V}}:=\sigma^{\mathcal{V}}/\operatorname{Tr}\sigma^{\mathcal{V}}\), then
\[F\left(\sigma,\sigma^{|\mathcal{V}}\right)=F\left(\sigma^{\mathcal{V}},\sigma^{|\mathcal{V}}\right)=\left(\operatorname{Tr}\sigma^{\mathcal{V}}\right)F\left(\sigma^{|\mathcal{V}},\sigma^{|\mathcal{V}}\right)=\operatorname{Tr}\sigma^{\mathcal{V}}. \tag{125}\]
5. Fidelity of a density operator with a convex sub-component thereof: if \(\tau=p\tau_{1}+(1-p)\tau_{2}\) is a density operator with \(0\leq p\leq 1\) and \(\tau_{1/2}\) also density operators, then by joint concavity,
\[F\left(\tau,\tau_{1}\right)=F\left[p\tau_{1}+(1-p)\tau_{2},p\tau_{1}+(1-p)\tau_{1}\right]\geq\left[p\sqrt{F\left(\tau_{1},\tau_{1}\right)}+(1-p)\sqrt{F\left(\tau_{2},\tau_{1}\right)}\right]^{2}\geq\left[p\cdot 1+(1-p)\cdot 0\right]^{2}=p^{2}. \tag{126}\]

## Appendix B Useful results from the literature

In our work we will make use of the following results from [13], namely the asymptotic continuity of the relative entropy of coherence \(C_{r}\) and the coherence of formation \(C_{f}\).

**Lemma B.1** ([13, Supplemental Material, Lemma 12]).: _For two states \(\tau_{1}^{\mathrm{A}}\) and \(\tau_{2}^{\mathrm{A}}\) with \(\left\|\tau_{1}-\tau_{2}\right\|_{1}\leq\delta\),_
\[\left|C_{r}\left(\tau_{1}\right)-C_{r}\left(\tau_{2}\right)\right|\leq\delta\log_{2}A+2h\left(\frac{\delta}{2}\right), \tag{127}\]
_where \(h(t)=-t\log_{2}t-(1-t)\log_{2}(1-t)\) is the binary entropy function._

**Lemma B.2** ([13, Supplemental Material, Lemma 15]).: _For two states \(\tau_{1}^{\mathrm{A}}\) and \(\tau_{2}^{\mathrm{A}}\) with \(B\left(\tau_{1},\tau_{2}\right)\leq\delta\),_
\[\left|C_{f}\left(\tau_{1}\right)-C_{f}\left(\tau_{2}\right)\right|\leq\delta\log_{2}A+(1+\delta)h\left(\frac{\delta}{1+\delta}\right). \tag{128}\]

## Appendix C Asymptotic typicality

Here we give a brief self-contained review of the necessary background on asymptotic typicality for i.i.d. classical and quantum samples. For a detailed treatment, see [18].

### Classical sources and typical sets

We will denote classical variables with uppercase letters and specific values they take with lowercase. Suppose a classical i.i.d. stochastic source outputs a variable \(X\) taking values in a _finite_ alphabet \(\mathcal{X}\) and distributed according to \(\mathbf{p}\equiv\left[p(x)\right]_{x\in\mathcal{X}}\). Consider a length-\(n\) sample \(\mathbf{X}\equiv X_{1}X_{2}\ldots X_{n}\) from the source; it takes values of length-\(n\) sequences \(\mathbf{x}\equiv x_{1}x_{2}\ldots x_{n}\in\mathcal{X}^{n}\). By the source's i.i.d. property, the sample distribution is \(\mathbf{p}_{n}\left(\mathbf{X}\right)=\mathbf{p}^{\otimes n}\). Recall that \(H(\mathbf{p})\equiv H(X)_{\mathbf{p}}=-\sum_{x\in\mathcal{X}}p(x)\log_{2}p(x)\) denotes the Shannon entropy of the source.

**Definition C.1** (Typical sequences and sets).: For any real \(\delta>0\), a \(\delta\)-weakly-typical (or entropy-typical) sequence \(\mathbf{x}\) is one that satisfies
\[2^{-n[H(\mathbf{p})+\delta]}\leq p_{n}\left(\mathbf{x}\right)\leq 2^{-n[H(\mathbf{p})-\delta]}. \tag{129}\]
For any \(x\in\mathcal{X}\), let \(f_{x}\left(\mathbf{x}\right)\) denote the frequency of occurrences of \(x\) in \(\mathbf{x}\), i.e. \(f_{x}\left(\mathbf{x}\right)=\left|\left\{j:x_{j}=x\right\}\right|/n\).
A \(\delta\)-strongly-typical (or letter-typical) sequence \(\mathbf{x}\) is one that satisfies
\[\sum_{x\in\mathcal{X}}\left|f_{x}\left(\mathbf{x}\right)-p(x)\right|\leq\delta. \tag{130}\]
The set \(\bar{\mathcal{T}}_{n}^{\delta}\) of all \(\delta\)-weakly-typical sequences is called the \(\delta\)-weakly-typical set, and the set \(\mathcal{T}_{n}^{\delta}\) of all \(\delta\)-strongly-typical sequences the \(\delta\)-strongly-typical set.

**Lemma C.2** (Asymp. equipartition property [AEP]).: _For any \(\delta>0\) and large enough \(n\), the weakly-typical set \(\bar{\mathcal{T}}_{n}^{\delta}\) has the following properties:_
1. \(\sum_{\mathbf{x}\in\bar{\mathcal{T}}_{n}^{\delta}}p_{n}(\mathbf{x})\geq 1-\delta\);
2. \((1-\delta)2^{n[H(\mathbf{p})-\delta]}\leq\left|\bar{\mathcal{T}}_{n}^{\delta}\right|\leq 2^{n[H(\mathbf{p})+\delta]}\).
_Furthermore, there exists a continuous real-valued function \(\eta(\delta)\), such that \(\eta(\delta)\stackrel{\delta\to 0}{\longrightarrow}0\) and for large enough \(n\), the strongly-typical set \(\mathcal{T}_{n}^{\delta}\) has the following properties:_
1. \(2^{-n[H(\mathbf{p})+\eta(\delta)]}\leq p_{n}\left(\mathbf{x}\right)\leq 2^{-n[H(\mathbf{p})-\eta(\delta)]}\) _for all_ \(\mathbf{x}\in\mathcal{T}_{n}^{\delta}\). _In other words,_ \(\mathcal{T}_{n}^{\delta}\subset\bar{\mathcal{T}}_{n}^{\eta(\delta)}\);
2. \(\sum_{\mathbf{x}\in\mathcal{T}_{n}^{\delta}}p_{n}(\mathbf{x})\geq 1-\delta\);
3. \((1-\delta)2^{n[H(\mathbf{p})-\eta(\delta)]}\leq\left|\mathcal{T}_{n}^{\delta}\right|\leq 2^{n[H(\mathbf{p})+\eta(\delta)]}\).

Intuitively, we can understand AEP as follows: for a large enough sample from an i.i.d. source, the weakly- or strongly-typical set for any \(\delta>0\) supports all but \(\delta\) of the probability. The cardinality of this set scales exponentially, roughly as \(\exp_{2}\left[nH(\mathbf{p})\right]\); note that this is an exponentially small fraction of the cardinality \(\left|\mathcal{X}\right|^{n}\) of all \(n\)-length sequences! Finally, the distribution of sequences within this set is nearly uniform.

### Quantum sources and typical subspaces

As in the classical case, there is also a quantum AEP. Suppose a quantum source outputs i.i.d. copies of an elementary _finite-dimensional_ system A, each prepared in the state \(\rho^{\text{A}}\). Considering a length-\(n\) sample \(\varrho_{n}\equiv\rho^{\otimes n}\), the quantum counterparts to the classical sequences \(\mathbf{x}\) are directions in the Hilbert space of \(\text{A}_{n}\equiv\text{A}^{\otimes n}\), and the quantum counterparts to sequence probabilities are the directional densities induced by \(\varrho_{n}\). In particular, the eigenvectors of \(\varrho_{n}\), and their associated eigenvalues, embody an informationally-complete description of its structure. If \(\rho=\sum_{x\in[A]}r_{x}\psi_{x}\) is an eigendecomposition, then \(\left|\Psi_{\mathbf{x}}\right\rangle:=\bigotimes_{k=1}^{n}\left|\psi_{x_{k}}\right\rangle\), for \(\mathbf{x}\equiv(x_{k})_{k=1}^{n}\in[A]^{n}\), constitute a complete basis of eigenvectors of \(\varrho_{n}\), with associated eigenvalues \(R_{\mathbf{x}}:=\prod_{k}r_{x_{k}}\). Based on these, we can define quantum counterparts of typical sequences and sets:

**Definition C.3** (Typical eigenvectors and subspaces).: For any real \(\delta>0\), a \(\delta\)-weakly- (respectively, strongly-) typical eigenvector \(\left|\Psi_{\mathbf{x}}\right\rangle\) is one whose associated label sequence \(\mathbf{x}\) is \(\delta\)-weakly- (resp.
strongly-) typical under the distribution \(\mathbf{r}^{\otimes n}\), where \(\mathbf{r}\equiv(r_{x})_{x\in[A]}\) is the eigenvalue distribution. The \(\delta\)-weakly- (resp. strongly-) typical subspace \(\bar{\mathcal{V}}_{n}^{\delta}\) (resp. \(\mathcal{V}_{n}^{\delta}\)) is the span of the \(\delta\)-weakly- (resp. strongly-) typical eigenvectors. Note that these subspaces do not depend on the choice of eigenbasis \(\left\{\left|\psi_{x}\right\rangle\right\}_{x}\).

**Lemma C.4** (Quantum AEP).: _For any \(\delta>0\) and large enough \(n\), the weakly-typical subspace \(\bar{\mathcal{V}}_{n}^{\delta}\) has the following properties:_
1. \(\operatorname{Tr}\left(\mathbb{1}_{\bar{\mathcal{V}}_{n}^{\delta}}\varrho_{n}\right)\geq 1-\delta\);
2. \((1-\delta)2^{n[S(\rho)-\delta]}\leq\dim\bar{\mathcal{V}}_{n}^{\delta}\leq 2^{n[S(\rho)+\delta]}\).
_Again, there exists a continuous real-valued function \(\eta(\delta)\), such that \(\eta(\delta)\stackrel{\delta\to 0}{\longrightarrow}0\) and for large enough \(n\), the strongly-typical subspace \(\mathcal{V}_{n}^{\delta}\) has the following properties:_
1. \(2^{-n[S(\rho)+\eta(\delta)]}\mathbb{1}\leq\mathbb{1}_{\mathcal{V}_{n}^{\delta}}\varrho_{n}\mathbb{1}_{\mathcal{V}_{n}^{\delta}}\leq 2^{-n[S(\rho)-\eta(\delta)]}\mathbb{1}\);
2. \(\operatorname{Tr}\left(\mathbb{1}_{\mathcal{V}_{n}^{\delta}}\varrho_{n}\right)\geq 1-\delta\);
3. \((1-\delta)2^{n[S(\rho)-\eta(\delta)]}\leq\dim\mathcal{V}_{n}^{\delta}\leq 2^{n[S(\rho)+\eta(\delta)]}\).

## Appendix D Lami's monotones under rank-constrained IO

In order to tease out SIO-distillable coherence, Lami [25] introduced a family of functions \(\mu_{k}\) that are monotones under SIO but not under general IO. For any \(\rho^{\mathrm{A}}\), let
\[R^{\rho}:=\left[\Delta(\rho)\right]^{-1/2}\rho\left[\Delta(\rho)\right]^{-1/2}, \tag{114}\]
with the inverse defined on the support of \(\Delta(\rho)\). Then,
\[\mu_{k}(\rho):=\max_{\mathcal{I}\subseteq\mathcal{A}:\left|\mathcal{I}\right|=k}\log_{2}\left\|\mathbb{1}_{\mathcal{I}}R^{\rho}\mathbb{1}_{\mathcal{I}}\right\|_{\infty}. \tag{115}\]
One way of approaching the problem of quantifying the requisite coherent measurement cost of coherence distillation would be to study the behaviour of these SIO monotones under IO with constrained coherent measurement action.

**Definition D.1** (\(L\)-ary IO).: An IO \(\mathcal{E}\) is _\(L\)-ary_ if it can be decomposed in terms of IO Kraus operators each of whose rows has \(\leq L\) nonzero entries. We can call the \(L\) for a given IO its _arity_. For convenience, also define
\[\mathcal{P}_{k,L}:=\left\{\mathscr{I}\equiv\left\{\mathcal{I}_{m}\subseteq\mathcal{A}\right\}_{m\in[k]}:\;\left|\mathcal{I}_{m}\right|\leq L\;\&\;\left|\mathcal{I}_{m}\cap\mathcal{I}_{m^{\prime}}\right|\propto\delta_{mm^{\prime}}\;\forall m,m^{\prime}\in[k]\right\}. \tag{116}\]

**Remark D.2**.: A generic \(L\)-ary IO Kraus operator with an \(M\)-dimensional output has the form
\[K=\sum_{m}\left|m\right\rangle\left\langle v_{m}\right|, \tag{117}\]
where \(\left|v_{m}\right\rangle\in\operatorname{span}\left\{\left|i\right\rangle\right\}_{i\in\mathcal{I}_{m}}\) for some \(\mathscr{I}\in\mathcal{P}_{M,L}\).

We will now perform an \(L\)-ary construction analogous to the definition (44). For any given \(\mathscr{I}\in\mathcal{P}_{M,L}\), define
\[\Delta_{\mathscr{I}}(\cdot):=\sum_{m}\mathbb{1}_{\mathcal{I}_{m}}(\cdot)\mathbb{1}_{\mathcal{I}_{m}}, \tag{45}\]
and therewith, for any \(\rho\),
\[R^{\rho}_{\mathscr{I}}:=\left[\Delta_{\mathscr{I}}(\rho)\right]^{-1/2}\rho\left[\Delta_{\mathscr{I}}(\rho)\right]^{-1/2}. \tag{46}\]
It is worth noting that
\[\mathbb{1}_{\mathcal{I}_{m}}R^{\rho}_{\mathscr{I}}\mathbb{1}_{\mathcal{I}_{m}}=\mathbb{1}_{\rho_{\mathcal{I}_{m}}},\qquad\rho_{\mathcal{I}_{m}}:=\mathbb{1}_{\mathcal{I}_{m}}\rho\mathbb{1}_{\mathcal{I}_{m}}, \tag{47}\]
where \(\mathbb{1}_{\rho_{\mathcal{I}_{m}}}\) denotes the projector onto \(\operatorname{supp}\rho_{\mathcal{I}_{m}}\). Now define
\[\mu_{M,L}(\rho):=\max_{\mathscr{I}\in\mathcal{P}_{M,L}}\log_{2}\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}. \tag{48}\]

**Observation D.3**.: _Given an input \(\rho\),_
1. _For an individual \(L\)-ary IO Kraus operator \(K\) that partitions its input according to some \(\mathscr{I}\in\mathcal{P}_{M,L}\), let \(\sigma:=K\rho K^{\dagger}\). Then, \(\left\|R^{\sigma}\right\|_{\infty}\leq\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}\). By convexity, \(\mu_{M}\left[\mathcal{E}(\rho)\right]\leq\mu_{M,L}(\rho)\) for any \(L\)-ary IO \(\mathcal{E}\)._
2. _For each \(\mathscr{I}\in\mathcal{P}_{M,L}\), there exists an \(L\)-ary IO Kraus operator \(K\) partitioning by \(\mathscr{I}\) such that, for \(\sigma:=K\rho K^{\dagger}\), \(\left\|R^{\sigma}\right\|_{\infty}=\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}\)._
_Consequently,_
\[\max_{\text{$L$-ary IO $\mathcal{E}$}}\mu_{M}\left[\mathcal{E}(\rho)\right]=\mu_{M,L}(\rho). \tag{49}\]

Proof.: Let
\[K=\sum_{m}\left|m\right\rangle\left\langle v_{m}\right|, \tag{50}\]
where \(\left|v_{m}\right\rangle\in\operatorname{span}\left\{\left|a\right\rangle\right\}_{a\in\mathcal{I}_{m}}\) for some \(\mathscr{I}\in\mathcal{P}_{M,L}\). For every \(m\), define \(\left|u_{m}\right\rangle:=\sqrt{\rho_{\mathcal{I}_{m}}}\left|v_{m}\right\rangle\in\operatorname{supp}\left(\rho_{\mathcal{I}_{m}}\right)\). Then, for \(\sigma:=K\rho K^{\dagger}\),
\[\left\langle m_{1}\right|\sigma\left|m_{2}\right\rangle=\left\langle v_{m_{1}}\right|\rho\left|v_{m_{2}}\right\rangle=\left\langle v_{m_{1}}\right|\sqrt{\rho_{\mathcal{I}_{m_{1}}}}R^{\rho}_{\mathscr{I}}\sqrt{\rho_{\mathcal{I}_{m_{2}}}}\left|v_{m_{2}}\right\rangle=\left\langle u_{m_{1}}\right|R^{\rho}_{\mathscr{I}}\left|u_{m_{2}}\right\rangle \tag{51}\]
for all \(m_{1}\), \(m_{2}\). Therefore,
\[R^{\sigma}=\sum_{m_{1},m_{2}}\frac{\left|m_{1}\right\rangle\left\langle m_{1}\right|\sigma\left|m_{2}\right\rangle\left\langle m_{2}\right|}{\sqrt{\left\langle m_{1}\right|\sigma\left|m_{1}\right\rangle\left\langle m_{2}\right|\sigma\left|m_{2}\right\rangle}}=\sum_{m_{1},m_{2}}\frac{\left|m_{1}\right\rangle\left\langle u_{m_{1}}\right|R^{\rho}_{\mathscr{I}}\left|u_{m_{2}}\right\rangle\left\langle m_{2}\right|}{\sqrt{\left\langle u_{m_{1}}|u_{m_{1}}\right\rangle\left\langle u_{m_{2}}|u_{m_{2}}\right\rangle}}, \tag{52}\]
the last line following from (47). Now, for any \(\left|\phi\right\rangle\in\operatorname{supp}\left(R^{\sigma}\right)\), define
\[\left|\psi\right\rangle:=\sum_{m}\frac{\left|u_{m}\right\rangle\left\langle m|\phi\right\rangle}{\sqrt{\left\langle u_{m}|u_{m}\right\rangle}}. \tag{53}\]
\(\left\langle\psi|\psi\right\rangle=\left\langle\phi|\phi\right\rangle\) by construction, while \(\left\langle\psi\right|R^{\rho}_{\mathscr{I}}\left|\psi\right\rangle=\left\langle\phi\right|R^{\sigma}\left|\phi\right\rangle\) follows from (52). Thus, \(\left\|R^{\sigma}\right\|_{\infty}\leq\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}\).
Conversely, given any \(\mathscr{I}\in\mathcal{P}_{M,L}\) and
\[\left|\psi\right\rangle:=\sum_{m}\left|u_{m}\right\rangle \tag{54}\]
with \(\left|u_{m}\right\rangle\in\operatorname{supp}\left(\rho_{\mathcal{I}_{m}}\right)\), define \(\left|v_{m}\right\rangle:=\rho_{\mathcal{I}_{m}}^{-1/2}\left|u_{m}\right\rangle\) for each \(m\). Then, \(K:=\sum_{m}\left|m\right\rangle\left\langle v_{m}\right|\) is an \(L\)-ary IO Kraus operator such that, for \(\sigma:=K\rho K^{\dagger}\), (52) holds. Defining
\[\left|\phi\right\rangle:=\sum_{m}\sqrt{\left\langle u_{m}|u_{m}\right\rangle}\left|m\right\rangle, \tag{55}\]
we have \(\left\langle\phi|\phi\right\rangle=\left\langle\psi|\psi\right\rangle\) and \(\left\langle\phi\right|R^{\sigma}\left|\phi\right\rangle=\left\langle\psi\right|R^{\rho}_{\mathscr{I}}\left|\psi\right\rangle\). Thus, \(\left\|R^{\sigma}\right\|_{\infty}\leq\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}\) can be saturated by choosing \(\left|\psi\right\rangle\) appropriately. Finally, if \(\mathscr{I}\) is a partitioning that attains the maximum on the right side of (48), and \(K\) a Kraus operator constructed as above to saturate \(\left\|R^{\sigma}\right\|_{\infty}\leq\left\|R^{\rho}_{\mathscr{I}}\right\|_{\infty}\), we can complete this to an \(L\)-ary IO channel \(\mathcal{E}\) with other Kraus operators whose output spaces don't overlap with that of \(K\). This \(\mathcal{E}\) then attains the maximum.

This bound is tight when the maximization is over _all_ \(L\)-ary IO \(\mathcal{E}\). But what is relevant in the context of distillation is the \(\mu_{M}\) for \(M\) equalling the dimension of the channel's output. Clearly, this is a much more constrained quantity and would generically not attain the bound we have found. Specifically, the argument used in the last paragraph of the proof would fail when the output space is constrained to be \(M\)-dimensional. If we are to use this bound to pin down the distillation rate under \(L\)-ary IO, we need to understand the behaviour of \(\mu_{M,L}\) under tensor product copies (i.e. an analog of [25, Proposition 16]). By virtue of asymptotic typicality, we might be able to get by without having to incorporate the constraint mentioned above. We leave these points for future work.

## Appendix E Decoupling schemes

Here we will study a type of decoupling task, whereof coherence distillation is a special case. In particular, we will look at _exact deterministic decoupling of a pure output_. The input in this task is some state \(\rho\), and the goal is to apply a channel \(\mathcal{E}\) such that \(\mathcal{E}(\rho)=\left|\alpha\right\rangle\left\langle\alpha\right|\) for some specified pure state \(\left|\alpha\right\rangle:=\sum_{m}\alpha_{m}\left|m\right\rangle\). On the face of it, it may appear superfluous to view this as decoupling--for example, \(\mathcal{E}\) could just ignore the input and prepare the required output. But the utility of the decoupling perspective will become apparent presently. Recall that exactly producing a pure output \(\left|\alpha\right\rangle\left\langle\alpha\right|^{\mathrm{M}}\) from \(\rho^{\mathrm{A}}\) entails that the channel \(\mathcal{E}\) map the entire space \(\mathcal{L}\left(\mathcal{V}\equiv\operatorname{supp}\rho\right)\) to (scalar multiples of) \(\left|\alpha\right\rangle\left\langle\alpha\right|\). Let \(S:=\dim\mathcal{V}\), and let \(\left\{\left|v_{s}\right\rangle\right\}_{s\in\mathcal{S}}\) be a \(\mathcal{V}\) basis.
Then, \(\mathcal{E}\) must have a dilation \(V\) with the action
\[V^{\mathrm{A}\rightarrow\mathrm{CM}}\left|v_{s}\right\rangle^{\mathrm{A}}=\left|u_{s}\right\rangle^{\mathrm{C}}\left|\alpha\right\rangle^{\mathrm{M}} \tag{108}\]
with orthonormal \(\left\{\left|u_{s}\right\rangle\right\}_{s\in\mathcal{S}}\). If \(V\) can be completed to a unitary within \(\mathcal{H}^{\mathrm{A}}\), we then have some well-defined \(\left|v_{s,m}\right\rangle^{\mathrm{A}}:=V^{\dagger}\left(\left|u_{s}\right\rangle^{\mathrm{C}}\left|m\right\rangle^{\mathrm{M}}\right)\in\mathcal{H}^{\mathrm{A}}\). Again by unitarity, the collection \(\mathscr{V}_{m}:=\left\{\left|v_{s,m}\right\rangle\right\}_{s\in\mathcal{S}}\) for each \(m\in\mathcal{M}\) must be orthonormal; in fact, the entire collection \(\mathscr{V}_{\mathcal{M}}:=\bigcup_{m\in\mathcal{M}}\mathscr{V}_{m}\) must be orthonormal. The conditions above are already necessary for the existence of a unitary \(V^{\mathrm{A}\rightarrow\mathrm{CM}}\) that accomplishes the task; the requirement that it be an IO dilation will bring in further constraints. But we will ignore these for the time being and try to learn what we can about unitary decoupling in general.

**Definition E.1** (Decoupling scheme).: Given an \(S\)-dimensional subspace \(\mathcal{V}\) of a Hilbert space \(\mathcal{H}^{\mathrm{A}}\cong\mathbb{C}^{A}\) and a unit vector \(\alpha\equiv\left(\alpha_{m}\right)_{m\in\mathcal{M}\equiv[M]}\in\mathbb{C}^{M}\), a collection \(\mathscr{V}_{\mathcal{M}}\equiv\left\{\left|v_{s,m}\right\rangle^{\mathrm{A}}\in\mathcal{H}^{\mathrm{A}}\right\}_{m\in\mathcal{M},s\in\mathcal{S}}=\bigcup_{m}\left(\mathscr{V}_{m}\equiv\left\{\left|v_{s,m}\right\rangle^{\mathrm{A}}\right\}_{s}\right)\) of orthonormal vectors is an _\(\alpha\)-decoupling scheme for \(\mathcal{V}\) in \(\mathcal{H}^{\mathrm{A}}\)_ if \(\mathscr{V}\equiv\left\{\left|v_{s}\right\rangle=\sum_{m}\alpha_{m}\left|v_{s,m}\right\rangle\right\}_{s}\) is a basis for \(\mathcal{V}\).

**Observation E.2**.: _Given an \(\alpha\)-decoupling scheme \(\mathscr{V}_{\mathcal{M}}\) in \(\mathcal{H}^{\mathrm{A}}\) for \(\mathscr{V}\), let \(\left|\alpha\right\rangle:=\sum_{m}\alpha_{m}\left|m\right\rangle\), \(\mathcal{V}:=\operatorname{span}\mathscr{V}\), and \(\mathcal{V}_{\mathcal{M}}:=\operatorname{span}\mathscr{V}_{\mathcal{M}}\). For some system \(\mathrm{C}\) of dimensionality \(C\geq A/M\), define a unitary \(V^{\mathrm{A}\rightarrow\mathrm{CM}}\) whose action on \(\mathcal{V}_{\mathcal{M}}\) is given by_
\[V^{\mathrm{A}\rightarrow\mathrm{CM}}_{\mathcal{V}_{\mathcal{M}}}=\sum_{s\in\mathcal{S},m\in\mathcal{M}}\left|s\right\rangle^{\mathrm{C}}\left|m\right\rangle^{\mathrm{M}}\left\langle v_{s,m}\right|^{\mathrm{A}}. \tag{109}\]
_Then, for any \(\left|v\right\rangle\equiv\sum_{s}\xi_{s}\left|v_{s}\right\rangle\in\mathcal{V}\),_
\[V\left|v\right\rangle=\left(\sum_{s}\xi_{s}\left|s\right\rangle\right)^{\mathrm{C}}\otimes\left|\alpha\right\rangle^{\mathrm{M}}. \tag{110}\]

We omit a proof for the above observation, since it is straightforward. Nevertheless, it is instructive in explicating the operational motivation for our definition of the term "\(\alpha\)-decoupling scheme". It also suggests that the decoupling perspective on the problem has potential nontrivial utility when the space \(\mathcal{H}^{\mathrm{A}}\) within which the channel can be dilated is explicitly considered. So far we have restricted ourselves to cases where this can be done through unitary action within \(\mathcal{H}^{\mathrm{A}}\).
Such a unitary gave us a decoupling scheme consisting of orthogonal vectors \(\left|v_{s,m}\right\rangle^{\mathrm{A}}\). But in general, we need to consider decoupling isometries that may not be completable to a unitary within \(\mathcal{H}^{\mathrm{A}}\). The associated analogs to decoupling schemes could then be oblique and overcomplete. To understand these cases, let us once again examine the basic condition on an \(\alpha\)-decoupling isometry \(V\) on a space \(\mathcal{V}\):
\[V^{\mathrm{A}\rightarrow\mathrm{CM}}\left|v_{s}\right\rangle^{\mathrm{A}}=\left|u_{s}\right\rangle^{\mathrm{C}}\left|\alpha\right\rangle^{\mathrm{M}}, \tag{111}\]
where \(\left|v_{s}\right\rangle\) is a basis on \(\mathcal{V}\). Now let us expand \(\mathrm{A}\) to some \(\bar{\mathrm{A}}\), of dimensionality \(\bar{A}:=CM\), such that \(V^{\mathrm{A}\rightarrow\mathrm{CM}}\) can be completed to the unitary \(V^{\bar{\mathrm{A}}\rightarrow\mathrm{CM}}\). By assumption, \(V^{\dagger}\left|u_{s}\right\rangle^{\mathrm{C}}\left|\alpha\right\rangle^{\mathrm{M}}=\left|v_{s}\right\rangle^{\mathrm{A}}=\left|\bar{v}_{s}\right\rangle^{\bar{\mathrm{A}}}\). Indeed, if we complete \(\left\{\left|u_{s}\right\rangle\right\}_{s\in\mathcal{S}}\) to an \(\mathcal{H}^{\mathrm{C}}\) basis \(\left\{\left|u_{c}\right\rangle\right\}_{c\in\mathcal{C}}\) and thereby define \(\left|\bar{v}_{c}\right\rangle^{\bar{\mathrm{A}}}:=V^{\dagger}\left|u_{c}\right\rangle^{\mathrm{C}}\left|\alpha\right\rangle^{\mathrm{M}}\) for all \(c\in\mathcal{C}\), then by construction, \(\bar{\mathcal{V}}:=\operatorname{span}\left[\bar{\mathscr{V}}\equiv\left\{\left|\bar{v}_{c}\right\rangle^{\bar{\mathrm{A}}}\right\}_{c\in\mathcal{C}}\right]\) is an \(\alpha\)-decouplable subspace of \(\mathcal{H}^{\bar{\mathrm{A}}}\).

Recalling our ultimate aim to estimate the coherent measurement cost of an IO whose dilation is some \(V\) as above, let us inspect a set of Kraus operators that would implement the combined action of the embedding of \(\mathrm{A}\) in \(\bar{\mathrm{A}}\), then the unitary \(V\), and finally a \(\mathrm{C}\)-partial trace; since we have kept \(\mathrm{C}\) and \(\left|u_{c}\right\rangle^{\mathrm{C}}\) generic, we lose no generality in decomposing the partial trace in the canonical basis \(\left\langle c\right|^{\mathrm{C}}\), as in Remark 3.1. The Kraus operators, then, are
\[K^{\mathrm{A}\rightarrow\mathrm{M}}_{c}=\left\langle c\right|^{\mathrm{C}}V^{\bar{\mathrm{A}}\rightarrow\mathrm{CM}}\mathbbm{1}^{\mathrm{A}}=:\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}}. \tag{112}\]
Here \(\left|w_{c,m}\right\rangle^{\mathrm{A}}=\mathbbm{1}^{\mathrm{A}}\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}\), the latter defined through
\[\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}:=V^{\dagger}\left(\left|c\right\rangle^{\mathrm{C}}\left|m\right\rangle^{\mathrm{M}}\right). \tag{113}\]
Notice that \(\bar{\mathscr{W}}_{\mathcal{M}}\equiv\left\{\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}\right\}_{c,m}\) is, just like \(\bar{\mathscr{V}}_{\mathcal{M}}\), an \(\alpha\)-decoupling scheme for \(\bar{\mathcal{V}}\) in \(\mathcal{H}^{\bar{\mathrm{A}}}\); the schemes are associated, respectively, with the \(\bar{\mathcal{V}}\) bases \(\bar{\mathscr{W}}\equiv\left\{\left|\bar{w}_{c}\right\rangle^{\bar{\mathrm{A}}}:=\sum_{m}\alpha_{m}\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}\right\}_{c}\) and \(\bar{\mathscr{V}}\). We summarize the above observations in the following\(\ldots\)well, observation.
**Observation E.3**.: _If \(\mathcal{E}^{\mathrm{A}\rightarrow\mathrm{M}}\) is a channel (IO or otherwise) that deterministically maps a subspace \(\mathcal{V}\subset\mathcal{H}^{\mathrm{A}}\) to \(\left|\alpha\right\rangle\left\langle\alpha\right|^{\mathrm{M}}\), any Kraus operator decomposition of \(\mathcal{E}\) must involve operators of the form_
\[K_{c}=\sum_{m\in\mathcal{M}}\left|m\right\rangle^{\mathrm{M}}\left\langle w_{c,m}\right|^{\mathrm{A}} \tag{114}\]
_with \(\left|w_{c,m}\right\rangle^{\mathrm{A}}=\mathbbm{1}^{\mathrm{A}}\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}\) projections of an \(\alpha\)-decoupling scheme \(\bar{\mathscr{W}}_{\mathcal{M}}\equiv\left\{\left|\bar{w}_{c,m}\right\rangle^{\bar{\mathrm{A}}}\right\}_{c,m}\) for some \(\bar{\mathcal{V}}\supset\mathcal{V}\) in some \(\mathcal{H}^{\bar{\mathrm{A}}}\supset\mathcal{H}^{\mathrm{A}}\)._

Thus, the problem of finding the least-coherent IO channel(s) executing a given decoupling task involves optimizing over all possible decoupling schemes within arbitrarily large \(\bar{\mathrm{A}}\) and \(\bar{\mathcal{V}}\). Since \(\bar{\mathscr{W}}\) (introduced before Observation E.3) is a \(\bar{\mathcal{V}}\) basis, and since \(\mathcal{V}\subset\mathcal{H}^{\mathrm{A}}\), the concept of decoupling schemes has thus led us to the result of Observation 3.3, point 1 via a different route. More research into these structures may allow us to harness their properties to attack channel-related problems in the dilation picture.
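To make these structures concrete, the following numpy sketch (our own illustration, with arbitrarily chosen dimensions) builds a random \(\alpha\)-decoupling scheme and verifies the defining property (110), \(V\left|v\right\rangle=\left|\xi\right\rangle^{\mathrm{C}}\otimes\left|\alpha\right\rangle^{\mathrm{M}}\):

```python
import numpy as np

rng = np.random.default_rng(7)
S, M = 2, 3                      # S = dim(V), M = output dimension
A = S * M                        # minimal dilation: take C = S, so A-bar = C*M

# Normalized target amplitudes alpha
alpha = rng.normal(size=M) + 1j * rng.normal(size=M)
alpha /= np.linalg.norm(alpha)

# Orthonormal collection {|v_{s,m}>}: columns of a random unitary
Z = rng.normal(size=(A, A)) + 1j * rng.normal(size=(A, A))
Q, _ = np.linalg.qr(Z)
v = Q.reshape(A, S, M)           # v[:, s, m] = |v_{s,m}>

# Basis of the decoupled subspace: |v_s> = sum_m alpha_m |v_{s,m}>
v_basis = np.einsum('asm,m->as', v, alpha)

# Isometry V = sum_{s,m} |s>|m><v_{s,m}|, rows indexed by (s, m)
V = v.conj().transpose(1, 2, 0).reshape(S * M, A)

# Any |v> = sum_s xi_s |v_s> must map to |xi> (tensor) |alpha>
xi = rng.normal(size=S) + 1j * rng.normal(size=S)
xi /= np.linalg.norm(xi)
out = V @ (v_basis @ xi)
assert np.allclose(out, np.kron(xi, alpha))  # Eq. (110) holds
```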
2303.02513
Model-Agnostic Meta-Learning for Multilingual Hate Speech Detection
Hate speech in social media is a growing phenomenon, and detecting such toxic content has recently gained significant traction in the research community. Existing studies have explored fine-tuning language models (LMs) to perform hate speech detection, and these solutions have yielded significant performance. However, most of these studies are limited to detecting hate speech only in English, neglecting the bulk of hateful content that is generated in other languages, particularly in low-resource languages. Developing a classifier that captures hate speech and nuances in a low-resource language with limited data is extremely challenging. To fill the research gap, we propose HateMAML, a model-agnostic meta-learning-based framework that effectively performs hate speech detection in low-resource languages. HateMAML utilizes a self-supervision strategy to overcome the limitation of data scarcity and produces better LM initialization for fast adaptation to an unseen target language (i.e., cross-lingual transfer) or other hate speech datasets (i.e., domain generalization). Extensive experiments are conducted on five datasets across eight different low-resource languages. The results show that HateMAML outperforms the state-of-the-art baselines by more than 3% in the cross-domain multilingual transfer setting. We also conduct ablation studies to analyze the characteristics of HateMAML.
Md Rabiul Awal, Roy Ka-Wei Lee, Eshaan Tanwar, Tanmay Garg, Tanmoy Chakraborty
2023-03-04T22:28:29Z
http://arxiv.org/abs/2303.02513v1
# Model-Agnostic Meta-Learning for Multilingual Hate Speech Detection

###### Abstract

Hate speech in social media is a growing phenomenon, and detecting such toxic content has recently gained significant traction in the research community. Existing studies have explored fine-tuning language models (LMs) to perform hate speech detection, and these solutions have yielded significant performance. However, most of these studies are limited to detecting hate speech only in English, neglecting the bulk of hateful content that is generated in other languages, particularly in low-resource languages. Developing a classifier that captures hate speech and nuances in a low-resource language with limited data is extremely challenging. To fill the research gap, we propose HateMAML, a model-agnostic meta-learning-based framework that effectively performs hate speech detection in low-resource languages. HateMAML utilizes a self-supervision strategy to overcome the limitation of data scarcity and produces better LM initialization for fast adaptation to an unseen target language (i.e., cross-lingual transfer) or other hate speech datasets (i.e., domain generalization). Extensive experiments are conducted on five datasets across eight different low-resource languages. The results show that HateMAML outperforms the state-of-the-art baselines by more than 3% in the cross-domain multilingual transfer setting. We also conduct ablation studies to analyze the characteristics of HateMAML.

Hate speech detection, cross-lingual transfer, meta-learning.

## I Introduction

**Motivation.** Online hate speech that expresses hate or encourages violence towards a person or a group based on characteristics such as race, religion, or gender is a growing global concern. Such hateful content has broken the cohesiveness of online social communities and resulted in violent hate crimes1. Therefore, detecting and moderating hate speech in online social media is a pressing issue. Footnote 1: [https://thewire.in/law/hate-speech-what-it-is-and-why-it-matterers](https://thewire.in/law/hate-speech-what-it-is-and-why-it-matterers) The gravity of the situation has motivated social media platforms and academic researchers to propose traditional machine learning and deep learning solutions for detecting online hate speech automatically [1, 2, 3] and early [4, 5]. Among these solutions, large pre-trained language models (LMs), which have demonstrated their superiority in many NLP tasks [6], have also shown excellent performance on the hate speech detection task [7]. However, existing studies have predominantly focused on detecting hate speech in English and content from Western cultures. This neglects the large portion of online hate speech in other languages and cultures. Low-resource languages often have very limited or no training samples available, which makes it challenging to develop supervised classifiers. We need models that can adapt efficiently with little training data, or methods that can transfer knowledge from another dataset. In such cases, a high-resource language can augment the low-resource language through knowledge transfer. For example, Aluru et al. [8] proposed a method to fine-tune a multilingual BERT model for hate speech detection in low-resource languages that leverages cross-lingual transfer from high-resource pre-training. There are very few works that deal with multilingual hate speech detection. A viable approach is to fine-tune pre-trained LMs, which has been explored in existing studies [7, 8, 9].
The underlying intuition is that large LMs generate shared embeddings across many languages, enabling cross-lingual transfer from supervised training in _high-resource_ languages such as English. This cross-lingual transfer can be achieved by (i) training only in the source language and evaluating in the target language, known as 'zero-shot' transfer, or (ii) training in the target language only, or jointly on the source and target languages, known as 'few-shot' learning. In multilingual hate speech detection, English and a few high-resource languages are considered "sources", and low-resource languages of interest are considered "targets". A source language dataset is usually an order of magnitude larger than a target language dataset. However, fine-tuning of pre-trained LMs also has several limitations [10]. An obvious issue is data scarcity in the target low-resource languages; it is challenging to capture hate speech and its nuances with insufficient observations. The performance of cross-lingual transfer learning may also be adversely affected if the source and target languages are from distant language families [10, 11]. Hate speech datasets can come from many domains, with differences arising at two levels: the shift across languages, and the shift in content sources within each language. For instance, a dataset in one language may be collected from Twitter, while another could be collected from YouTube. Standard fine-tuning overfits the training languages and has difficulty adapting to a new domain or an unseen language. Conneau et al. [12] show that adding more languages improves the performance on low-resource languages but hurts the performance on high-resource languages. We adopt a meta-training strategy for low-resource hate speech detection to address these concerns. Our proposed model is based on a meta-learning approach that handles domain adaptation and prevents negative transfer. Meta-learning has been successful in low-resource settings and can learn a good initialization from which fine-tuning on a new task proceeds efficiently. **Research objectives.** In this paper, we aim to tackle the challenges of multilingual hate speech detection and address the limitations of existing fine-tuned pre-trained LMs in hate speech detection. We propose HateMAML, a model-agnostic meta-learning-based framework that effectively performs hate speech detection in low-resource languages. Unlike existing hate speech detection approaches that fine-tune LMs on the source and target only, we focus on resource maximization by using resources in other "_auxiliary_" languages beyond the source and target. More specifically, we adopt the model-agnostic meta-learning (MAML) [13] strategy, in which simulated training on few-shot meta-tasks produces a meta-learner that can learn a target task quickly. Recent studies have explored adopting MAML for low-resource cross-lingual adaptation in NLP tasks [14, 15]. The underlying intuition of HateMAML is to train on samples from auxiliary languages and adapt the model to detect hate speech in a target language for which few or no labeled datasets are available [15]. To address the diversity and nuances of hate speech across languages and domains, we also modify the meta-learning loss function in HateMAML and add a domain adaptation loss. Intuitively, each task-specific learner evaluates on its own training samples and on samples from another language.
This mimics the scenario we are interested in: training in one language while evaluating on an unseen language at test time. Such a training objective forces the meta-learner to generalize to a target language domain while learning from source-domain samples. Considering the data scarcity of non-English languages, we further propose a self-guided, meta-learning-based fine-tuning mechanism. This mechanism utilizes unlabeled examples in the low-resource target language and generates silver labels for fine-tuning on the target. We first train a multilingual predictor model using training data in a high-resource language, e.g., English. The predictor model labels unlabeled samples in the target language, e.g., Spanish. A small portion of high-confidence predictions is filtered out using a threshold value to gather silver labels and enhance the training data in both the source and target languages. Finally, we adapt HateMAML into an iterative self-refining loop that replaces and refines the current predictor model and improves the quality of the adapted model on the target. In early studies, cross-lingual meta-training was mostly limited to language pairs [15]. This setting is, however, not truly multilingual, since a separate model is needed for each target language. We study the applicability of meta-training when training data is available in more than one low-resource language. To this end, we investigate whether meta-training is better than standard multilingual fine-tuning in two scenarios: when training examples are available for only some languages, and when both high- and low-resource languages have sufficient training data. The first scenario is a stress test for cross-lingual domain adaptation, evaluating the trained model on a set of held-out low-resource languages. The second scenario explores the benefits of meta-training over standard fine-tuning. Though the early investigations were limited to one auxiliary and one target language, the current configuration allows us to investigate cross-lingual meta-training for hate speech detection at scale. In other words, we adopt meta-training as a proxy for general multilingual fine-tuning. Our results show that when training data is available from many languages, meta-training with HateMAML yields better performance than standard fine-tuning. **Contributions.** We present HateMAML, a meta-learning framework for multilingual hate speech detection with limited data at scale. Our major contributions are summarized below: 1. We propose HateMAML2, a domain-adaptive MAML algorithm for hate speech detection in low-resource languages. Our algorithm efficiently exploits the available resources, outperforming state-of-the-art cross-lingual fine-tuning baselines. One of the novelties of HateMAML lies in the efficient utilization of available samples in the validation set of the source language and the training data of auxiliary languages. Footnote 2: [https://github.com/bottle_shop/safe/HateMAML](https://github.com/bottle_shop/safe/HateMAML) 2. To mimic an extreme data-scarce scenario in the target language (zero-shot), we introduce a self-refining variant of HateMAML. We adopt an iterative meta-learning approach to generate silver labels from unlabeled data in the target language. 3. We conduct extensive experiments in zero-shot and fine-tuning settings across eight different low-resource languages. Our experimental results show that HateMAML consistently outperforms state-of-the-art methods by more than 3% in the cross-lingual transfer settings.
4. We perform ablation studies, in which we carefully examine the contribution of the auxiliary languages across each _auxiliary-target_ language pair. We also assess the algorithm's robustness by varying (i) the amount of meta-training samples, (ii) cross-domain adaptation, and (iii) the use of one model to support all language configurations. 5. We conduct cross-domain hate speech detection to address dataset diversity across languages and show that HateMAML achieves significant domain adaptation. Our experiments on meta-training over all available language training data suggest that meta-learning could be an alternative to standard fine-tuning that yields superior performance. ## II Related Work **Multilingual hate speech detection.** In the last few years, hate speech detection has gained significant attention, with the majority of work dedicated to monolingual hate speech [1, 2]. This skew in attention has naturally led to the majority of datasets being in English [16, 17, 18]. However, hate speech is a global phenomenon, and increased diversity in available resources is critical for developing automated systems. Recent efforts have addressed this issue, including multiple shared tasks (such as SemEval 2020 [19], HASOC 2020 [20], Evalita 2018 [21], and HatEval 2019 [22]). Such efforts have helped bring much-needed traction to multilingual hate speech research. Additionally, recent developments in transformer-based large multilingual LMs (mBERT [6], XLM-R [12]), pre-trained on more than 100 languages, have aided in developing state-of-the-art classifiers for resource-lean languages. Prior studies on multilingual hate speech detection have explored: (i) resource development, such as creating datasets [19, 23], shared tasks, and organizing workshops; (ii) cross-lingual transfer learning with multilingual shared embeddings and pre-trained LMs [8, 9]; (iii) utilization of additional features from relevant domains [24] (e.g., emotion, sentiment); and (iv) data augmentation techniques [25, 26, 9], including the use of external services (e.g., translation APIs) for supervised training. **Cross-lingual and multilingual transfer.** Knowledge transfer from a high-resource language (e.g., English) to a low-resource one (e.g., Hindi) has been shown to be an effective resource optimization technique. Ranasinghe and Zampieri [7] used a two-step training method: first training multilingual transformer models on English data, and then fine-tuning the resulting model on the low-resource target language. Such multi-phase training was found to provide a better model initialization for fine-tuning on the target. Aluru et al. [8] conducted an extensive study on state-of-the-art multilingual embeddings, including LASER [27] and MUSE [28]. Another line of studies [9, 30, 29] utilized translation-based solutions to alleviate the shortage of data in low-resource languages for hate speech detection. These models rely heavily on translation APIs, which might be susceptible to producing low-quality translations. Moreover, the translation of a large source corpus can be expensive. The majority of studies on cross-lingual and multilingual hate detection fall into pre-trained model fine-tuning [31, 29, 32]. However, Nozza [10] found that due to the domain shift of hate speech across languages, standard fine-tuning methods are less effective in both zero-shot and few-shot settings.
To address the limitations of low-resource language fine-tuning, _we develop HateMAML for fine-tuning hate classifiers with limited data, with an improved mechanism for domain-adaptive cross-lingual transfer._ **Meta-learning.** Meta-learning, also known as "learning to learn", is a technique geared toward learning few-shot tasks and fast adaptation. Unlike standard supervised fine-tuning on downstream tasks, which results in a _final_ classifier, meta-learning focuses on producing a rich initialization point from which an unseen target task can be learned quickly. Common meta-learning approaches include those that are (i) optimization-based [13, 33] and (ii) metric-based [34, 35]. Finn et al. [13] introduced model-agnostic meta-learning (MAML), an optimization-based meta-learning framework for few-shot tasks. Several recent studies have explored gradient-based meta-learning for few-shot text classification [36, 37, 38]. Meta-learning for domain generalization has also been studied to handle domain shift during training in diverse problems, including supervised learning and reinforcement learning [39]. Meta-learning has also been explored in zero-shot and few-shot cross-lingual settings. Gu et al. [14] successfully adapted MAML to low-resource NMT tasks using auxiliary languages, achieving competitive performance with only a fraction of the training data. X-MAML was then proposed by [15] for meta-training in cross-lingual NLU tasks, similar to our interest. X-MAML explores various auxiliary languages to identify the optimal composition for zero-shot cross-lingual transfer. Meta-learning has also been applied to the detection of offensive language in cross-lingual and code-mixed texts [40, 41] and of other harmful content such as multilingual rumours [42]. However, the limited availability of multilingual hate speech datasets, typically comprising only two or three languages, presents a challenge in finding an effective auxiliary language. **Data augmentation/self-training.** Data augmentation approaches can be divided into three categories: (i) rule-based [7], (ii) interpolation-based [43, 44, 45], and (iii) model-based [46, 47, 48]. In cross-lingual hate speech, translation techniques [9, 30, 29] are commonly used for data augmentation during training. Rather than collecting and annotating new hate speech data, bootstrapping on unlabeled samples to create pseudo-labeled data provides a way to augment data and fine-tune on low-resource languages. We draw inspiration from [26] to devise a bootstrapping-based self-training approach for HateMAML. **How is our approach different?** We explore cross-lingual meta-training in the domain of hate speech for both zero-shot and fine-tuning configurations. Our proposed zero-shot meta-training method, HateMAML, is a novel idea that can be further adapted for languages with no labeled data available. We focus on resource maximization and domain generalization while transferring task-specific knowledge to low-resource languages. We carry out a large-scale study of multilingual hate speech detection across diverse domains on the available hate speech datasets. ## III Methodology We devise a gradient-based meta-learning algorithm named HateMAML for multilingual hate speech detection. Our proposed algorithm is tailored to improve cross-lingual transfer to target low-resource languages. We create a set of meta-tasks from samples of both high- and low-resource languages and simulate episodic meta-training similar to MAML [13].
Our goal is to produce a better initialization of model parameters that can adapt to a low-resource target language task using (i) no labeled examples (i.e., zero-shot) or (ii) only a small number of them (i.e., few-shot). We select an auxiliary language to (meta-)train the model in the zero-shot setting. In the few-shot setting, we meta-train the model on the target language. We assume some labeled data (e.g., 2024 samples) is available for supervised fine-tuning on the target language. Hate speech datasets from different languages often comprise several domains and topics. For example, while many English datasets capture the domain of racism-related hate, hate speech datasets in Hindi are more religion/politics-oriented. Here, the term 'domain' is used loosely, referring to different datasets and languages. Therefore, the task of cross-lingual hate speech detection implicitly encompasses cross-domain adaptation. We aim to _fast-adapt to both target languages and domains_. To account for this, our meta-adaptation model includes a domain generalization loss. The idea is to produce a good initialization that can perform well on diverse domains. ### _Model Agnostic Meta-Learning (MAML)_ MAML assumes a distribution \(p(\mathcal{T})\) of tasks \(\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{m}\}\). A meta-model \(f_{\theta}(\cdot)\) with parameters \(\theta\) is learned through episodic meta-training over sampled tasks \(\mathcal{T}\sim p(\mathcal{T})\). MAML has two loops: (i) an inner loop for task-specific adaptation, and (ii) an outer loop to _meta-learn_ an initialization that fast-adapts to an unseen novel task. MAML fine-tunes the meta-model \(f_{\theta}\) to a particular task \(\mathcal{T}_{i}\) through gradient descent: \[\theta^{\prime}_{i}\leftarrow\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta}) \tag{1}\] where \(\alpha\) is the step size and \(\mathcal{L}\) is the classification loss. The task-specific training outcome is evaluated on the associated test set. The meta-learner's optimization objective is to minimize the _meta loss_ computed over all the training tasks: \[\min_{\theta}\sum_{i=1}^{m}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta^{\prime}_{i}})=\sum_{i=1}^{m}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta})}). \tag{2}\] The meta parameter \(\theta\) is then updated to: \[\theta=\theta-\beta\nabla_{\theta}\sum_{i=1}^{m}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta^{\prime}_{i}}) \tag{3}\] where \(\beta\) is the meta-learner's learning rate. We can accumulate multiple episodes of tasks before updating \(\theta\). Though MAML training involves second-order gradients, a first-order approximation is often used in practice. Unlike standard fine-tuning, meta-training does not result in a final model. However, it provides a reasonably good starting point (initialization) from which learning a new task can be executed quickly. Intuitively, training across a set of meta-tasks can be seen as **auxiliary**, and fast adaptation on the unseen **target** task is the main goal. A few recent studies build on this motivation [14, 15], where support from auxiliary languages is used in cross-lingual meta-training. This is also a key inspiration for our model. ### _Proposed Model: HateMAML_ The training of our proposed HateMAML model facilitates hate classifier adaptation and cross-lingual transfer to target languages while requiring only a small number of samples for model fine-tuning.
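HateMAML builds on the two-loop MAML update above. For concreteness, a minimal first-order sketch of Eqs. (1)-(3) is given below; this is our own PyTorch illustration, not the authors' released code, and `model` and `tasks` are placeholder names:

```python
import torch

def maml_step(model, tasks, alpha=1e-3, beta=1e-4):
    """One meta-update over a batch of tasks (first-order MAML sketch).

    Each element of `tasks` is ((xs, ys), (xq, yq)): support and query
    batches; `model` is any torch.nn.Module with a classification head.
    """
    criterion = torch.nn.CrossEntropyLoss()
    theta = [p.detach().clone() for p in model.parameters()]
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]

    for (xs, ys), (xq, yq) in tasks:
        # Inner loop (Eq. 1): one task-specific gradient step on the support set
        grads = torch.autograd.grad(criterion(model(xs), ys), list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= alpha * g
        # Evaluate the adapted parameters theta'_i on the query set (Eq. 2)
        grads_q = torch.autograd.grad(criterion(model(xq), yq), list(model.parameters()))
        for mg, g in zip(meta_grads, grads_q):
            mg += g
        # Restore theta before adapting to the next task
        with torch.no_grad():
            for p, t in zip(model.parameters(), theta):
                p.copy_(t)

    # Outer loop (Eq. 3): update theta with the accumulated query gradients
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= beta * mg
```

HateMAML modifies the outer-loop loss of this template, as described next, by adding a domain query term (Eq. 5).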
HateMAML can be used for training with or without labeled data in the target, only requiring access to training data in the source language for the initial fine-tuning. Any multilingual encoder with shared embeddings, such as mBERT or XLM-R, can be used as the base model. To further improve cross-lingual transfer, we use a few labeled samples in the target language (few-shot training), which helps quick adaptation. Additionally, HateMAML benefits from evaluating the task-specific training on a (virtual) domain test (query) set at each episode, which mimics target-domain evaluation during meta-training [39]. The intuition of the domain query set is to achieve training generalization on unseen target languages and domains. We discuss training dataset accumulation, meta-task generation, and the domain loss in HateMAML, and present the training procedure in detail below. A rough sketch of HateMAML can be found in Figure 1 (right). **HateMAML Training Data.** We need a set of support and query batches sampled from \(\mathcal{D}\), where \(\mathcal{D}\) refers to the available training data from the training languages, generally small in size. Suppose we have samples in the source \((\mathcal{D}_{src})\), auxiliary \((\mathcal{D}_{aux})\), and target \((\mathcal{D}_{tgt})\) languages. Our training data \(\mathcal{D}\) consists of the tuple (i) \((\mathcal{D}_{src},\mathcal{D}_{aux})\) in the zero-shot setting, and (ii) \((\mathcal{D}_{src},\mathcal{D}_{tgt})\) in the few-shot setting. To this end, we split the training set for each language into a support set (\(\mathcal{D}_{lang}^{support}\)) and a query set (\(\mathcal{D}_{lang}^{query}\)), where \(lang\in\{src,aux,tgt\}\). We randomly sample a (virtual) domain set \(\mathcal{D}^{domain-query}\) from either \(\mathcal{D}_{aux}^{query}\) or \(\mathcal{D}_{tgt}^{query}\). By doing so, we can mimic real train-test domain shifts, so that over many iterations we can train a model that generalizes better to the target language's **final** test set. **Meta-Tasks Generation.** HateMAML requires episodic training on meta-tasks containing support (meta-training) and query (meta-testing) sets. For each episode, three subsets of examples are sampled separately: the support set \(\mathcal{S}\), the query set \(\mathcal{Q}\), and the (virtual) domain query set \(\mathcal{Q}^{\prime}\): \[\begin{split}&\mathcal{S}\leftarrow\mathcal{S}\cup(X,Y)\sim\mathcal{D}\\ &\mathcal{Q}\leftarrow\mathcal{Q}\cup(X,Y)\sim\mathcal{D}\smallsetminus\mathcal{S}\\ &\mathcal{Q}^{\prime}\leftarrow\mathcal{Q}^{\prime}\cup(X,Y)\sim\mathcal{D}\smallsetminus(\mathcal{S}\cup\mathcal{Q})\end{split} \tag{4}\] We repeat this process multiple times. Suppose we have a total of \(D\) training samples in a given language, with \(K\) samples used for the support set and \(L\) for both query sets in each episode. In total, we will have \(D/(K+L)\) episodes. Note that \(\mathcal{Q}^{\prime}\) is virtual and selected randomly. **(Virtual) Domain Query Loss.** We add a query loss term to Equation 3 that accounts for domain generalization. We now compute the meta-update of Equation 3 as follows: \[\theta=\theta-\beta\nabla_{\theta}\sum_{i=1}^{m}\mathcal{L}_{\mathcal{T}_{i}}^{(\mathcal{Q}_{i},\mathcal{Q}^{\prime}_{i})}(f_{\theta^{\prime}_{i}}) \tag{5}\] This is a variation of standard MAML training that addresses supervised training limitations such as domain shift in low-resource languages. We compare our proposed training procedure with MAML to show its effectiveness. ### _HateMAML Algorithm_ HateMAML training requires a base classifier.
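To illustrate the triplet sampling of Equation 4, a simplified sketch is given below. It is our own approximation with hypothetical helper names: here \(\mathcal{Q}^{\prime}\) is drawn from a different language than \(\mathcal{S}\) and \(\mathcal{Q}\), mimicking the train-test domain shift described above rather than reproducing the paper's exact split bookkeeping:

```python
import random

def make_episodes(data, k=32, l=32, seed=0):
    """Sample (support, query, domain-query) triplets, per Eq. (4).

    `data` maps a language code to a list of (text, label) pairs.
    The domain query Q' is drawn from a different language than S
    and Q, approximating the paper's sampling from the auxiliary
    or target query splits.
    """
    rng = random.Random(seed)
    episodes, langs = [], list(data)
    for lang in langs:
        pool = data[lang][:]
        rng.shuffle(pool)
        step = k + l
        for i in range(0, len(pool) - step + 1, step):
            support = pool[i:i + k]
            query = pool[i + k:i + step]
            other = rng.choice([g for g in langs if g != lang] or [lang])
            domain_query = rng.sample(data[other], min(l, len(data[other])))
            episodes.append((support, query, domain_query))
    return episodes
```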
### _HateMAML Algorithm_ HateMAML training requires a base classifier, which can be any multilingual encoder, such as mBERT or XLM-R, fine-tuned on a source language such as English. Next, we decide between zero-shot or few-shot training based on the availability of training data. The training choice is passed to Algorithm 1 along with the base model parameters. For the zero-shot scenario, we select an auxiliary language for model training, and the resulting meta-model is evaluated on the selected target language. Meta-learner parameters \(\theta\) are initialized from a fine-tuned base model (_line 2_). Now, we sample a batch of pseudo-tasks \(\mathcal{T}_{i}\). Each task contains a triplet \((\mathcal{S},\mathcal{Q},\mathcal{Q}^{\prime})\) representing the support, query, and domain query sets for model training, respectively. We take one inner-loop gradient step using the training loss on \(\mathcal{S}\) (_line 10_) and adapt the model parameters to \(\theta^{\prime}\). Next, we evaluate the task-specific training outcome using \(\mathcal{Q}\) and \(\mathcal{Q}^{\prime}\), and save it for the outer-loop update (_line 12_). Each task has its own task-adaptive model parameters \(\theta^{\prime}_{i}\). At the end of the iterations over all tasks, we aggregate the saved gradients and update the meta-model \(\theta\) (_line 13_). At this step, we have completed one outer loop. We repeat _lines 8-13_ until model training is finished. After training completion, based on the earlier choice of zero-shot or few-shot, we evaluate the resulting model's performance on the target language's **final** test set. ### _Self-refinement and Zero-shot Learning_ We now present a semi-supervised self-training procedure, shown in Figure 1 (left), for hate speech classifier adaptation when no labeled target or auxiliary language data is available. Our proposed approach requires only labeled data in the source language. We first develop a 'base model' by fine-tuning on source language training data. We can then use the trained model to make predictions on unlabeled samples from the target language. We want to use these predicted samples for supervised training. Directly including all the predicted samples in training may not result in a good classifier, as the training samples are noisy and do not represent ground-truth labels. We filter out samples with low prediction confidence using a threshold value, keeping the label distribution balanced. We call the filtered outcomes _silver labels_ and prepare a new training dataset. We use HateMAML to carry out meta-training, since it requires few samples and can adapt quickly. As we use a small number of samples, noise injection is less than in standard fine-tuning. We randomly take filtered samples as silver labels and run HateMAML using few-shot training, shown in Algorithm 1. After completing one full training pass on the silver training data, the new classifier is expected to have better target-language knowledge. Making this work in practice requires multiple iterations of self-refinement. We repeat this procedure by treating the resulting model as the base for generating silver labels again, replacing the old base model. In our experiments, approximately \(5\) refinement iterations with HateMAML are enough to produce a quality classifier on the target test data.
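A compact sketch of this self-refinement loop (the helper functions, the 0.9 confidence threshold, and the sample cap below are our illustrative assumptions, not values from the paper):

```python
def self_refine(model, unlabeled_texts, n_rounds=5, threshold=0.9, n_keep=300):
    """Iterative zero-shot adaptation: label target-language data with the
    current model, keep confident predictions as silver labels, meta-train on
    them with HateMAML, then repeat with the refined model as the new base."""
    for _ in range(n_rounds):
        silver = []
        for text in unlabeled_texts:
            label, confidence = predict_with_confidence(model, text)  # assumed helper
            if confidence >= threshold:
                silver.append((text, label))
        silver = balance_and_subsample(silver, n_keep)   # assumed helper
        model = hatemaml_few_shot(model, silver)         # Algorithm 1, few-shot mode
    return model
```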
## IV Hate speech detection datasets We train and evaluate the baselines and HateMAML using five publicly-available multilingual hate speech datasets. These datasets contain a variety of low-resource languages. Table I summarizes the datasets. * **Founta**, an English Twitter dataset annotated with four labels - _normal_, _spam_, _abusive_, and _hateful_. For our study, we removed the tweets labeled _spam_. * **HatEval19**[22], released in the shared task of SemEval-2019, is a multilingual hate speech dataset in English and Spanish, specifically targeting hate against immigrants and women. * **HASOC20**, covering English, German, and Hindi. The authors first created an extensive archive of tweets and subsequently used weakly trained binary classifiers to extract potentially hateful tweets. The extracted tweets were then labeled by annotators. The main aim of the dataset was to create an unbiased multilingual hate speech corpus. * **HaSpeedDe20**, an Italian dataset collected from Twitter and news headlines. It consists of test sets from both Twitter and news. * **SemEval20**, covering English, Arabic, Danish, Greek, and Turkish. Native speakers annotated the dataset. Arabic, Greek, and Turkish were collected from Twitter, while the Danish dataset is from Facebook, Reddit, and a local newspaper. The task also produced a weakly-supervised English dataset. Fig. 1: Proposed HateMAML model sketch. (Left) Model training in zero-shot and few-shot setups. We use an auxiliary language validation set in zero-shot learning, and target language labeled data is used in few-shot. In both cases, we include a source language validation set and create a (virtual) domain set that helps us to mimic target domain performance evaluation during meta-training. We sample a batch of triplets \((\mathcal{S},\mathcal{Q},\mathcal{Q}^{\prime})\) and simulate episodic training. (Right) Self-training procedure with HateMAML, which produces silver labels from unlabeled data in the target language, trains the model on them, and replaces the base model; the procedure is repeated \(N\) times. ## V Experiments ### _Experiment Settings_ We consider three training setups: (i) _zero-shot_ with no fine-tuning in the target, (ii) _domain adaptation_, in which we train on a set of auxiliary languages and test on a held-out language set, and (iii) _full_ fine-tuning using the training samples from all languages. Our experimental setup also requires a _base model_, obtained by fine-tuning a pre-trained model on the source (English) language. For the cross-lingual analysis, we experiment on eight low-resource languages and one high-resource language, English. For the experiments on HASOC20 and HatEval19, we utilize the English samples in the training set to train the base model. For the experiments on SemEval2020 and HaSpeedDe20, we use Founta-EN, since these datasets do not provide English training data. The meta-training samples are retrieved from the source languages' _validation sets_ as well as the auxiliary and target languages' _training sets_. The source languages' _training sets_ are used to train the _base model_. Across all the experiments, we aggregate samples from each language and then generate meta-tasks to be used in meta-training. We sample task triplets with a specified number of shots, \(K=L\), where \(K=32\). We utilize two multilingual transformer-based encoders as base models: (i) **mBERT**[6] (bert-base-multilingual-uncased), and (ii) **XLM-R**[12] (xlm-roberta-base). For both models, the output from the _pooler_ layer is fed to a 2-layer FFN classification head.
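A sketch of this base-model architecture (the hidden size of the head is our illustrative assumption; the paper specifies only a 2-layer FFN on the pooler output):

```python
import torch.nn as nn
from transformers import AutoModel

class HateClassifier(nn.Module):
    """Multilingual encoder (mBERT or XLM-R) with a 2-layer FFN head
    on the pooler output; the 256-unit hidden layer is assumed."""
    def __init__(self, encoder_name="bert-base-multilingual-uncased", n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        dim = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, input_ids, attention_mask=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output)
```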
### _Baselines_ HateMAML is a model-agnostic algorithm for low-resource cross-lingual adaptation. Therefore, we explore the following baselines for our experiments: * **Fine-tuning**: Standard LM fine-tuning is adopted as a baseline for both zero-shot and few-shot scenarios. Here, we fine-tune **mBERT** and **XLM-R**. * **X-MAML**[15]: Cross-lingual episodic meta-training using compositions of auxiliary languages, including language pairs. We initialize the meta-learner from the base model obtained from English fine-tuning. The meta-learned model is then evaluated on the test set in the target language. Our implementation only considers one auxiliary language. ### _Variations of HateMAML_ Besides benchmarking the baselines, we formulate and evaluate different variations of HateMAML: * **HateMAML\({}_{zero}\):** Simulated meta-training on a batch of tasks created from aggregated samples from the source language's validation set and the auxiliary language's training set. It passes zero-shot as the training choice to Algorithm 1 (§ III-C). This is similar to X-MAML [15], but it uses both source and auxiliary languages for meta-training, while X-MAML only uses auxiliary language samples. * **HateMAML\({}_{self-training}\):** Choose a target language for self-training on silver data. First, we generate silver labels and keep only 300 samples. Next, we apply HateMAML in an iterative manner to the silver-labeled data from the target language, as explained in Section III-D. * **HateMAML\({}_{ft}\):** We generate meta-tasks from a set of available languages and apply HateMAML. This is very similar to fine-tuning on aggregated training examples from the selected language set. ### _Implementation Details_ For implementing the transformer-based models, we use the HuggingFace transformers library3. The chosen pre-trained model is initialized from pre-trained weights provided in the transformers library. The classification heads are implemented using pytorch4 linear layers, initialized randomly. For implementing the meta-learning features (for example, first-order approximation), we use the learn2learn library5. Footnote 4: [https://pytorch.org/](https://pytorch.org/) Footnote 5: [https://github.com/learnables/learn2learn](https://github.com/learnables/learn2learn) ## VI Results and analysis ### _Zero-shot Experiments_ Results in the zero-shot setting can be found in Table II. The mBERT initialization is denoted by the \(\clubsuit\) sign, and \(\spadesuit\) refers to XLM-R initialization. The reported results of all the experiments are average values across five random runs. We report the average and standard deviation of the **F1-score (macro)**. \(\texttt{HateMAML}_{zero}\) outperforms the baseline models, including X-MAML, with both initializations (\(\clubsuit\) and \(\spadesuit\)). We achieve the best overall score, with average improvements of 11% over mBERT, 7% over XLM-R, and 7% over X-MAML. The baseline models, mBERT and XLM-R, have poor _zero-shot_ performance. The use of an auxiliary language to obtain the best meta-learner produces rewarding outcomes across all the experiments. Even though X-MAML uses a cross-lingual, auxiliary-language-based meta-training setting similar to ours, HateMAML gives superior performance, likely due to its domain-adaptive training. To summarize, in most cases, HateMAML's auxiliary-to-target transfer improves the results of zero-shot learning substantially compared to the standard fine-tuning baselines. We observe that the XLM-R-initialized models perform better than mBERT in zero-shot prediction. Even the best-performing \(\clubsuit\)HateMAML model retains a small gap relative to XLM-R's zero-shot performance. One explanation is that XLM-R is trained on a relatively large corpus and a deeper transformer architecture, giving it a strong pre-training benefit for zero-shot transfer learning.
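For clarity, a minimal sketch of how these scores are aggregated across random runs (our illustrative helper, not from the released code):

```python
import numpy as np
from sklearn.metrics import f1_score

def report_macro_f1(runs):
    """Aggregate macro-F1 over multiple random runs, as in Table II.
    `runs` is a list of (y_true, y_pred) pairs, one per seed."""
    scores = [f1_score(y_true, y_pred, average="macro") for y_true, y_pred in runs]
    return float(np.mean(scores)), float(np.std(scores))
```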
We observe similar performance trends with HateMAML when initialized with XLM-R base models across all the datasets. For the SemEval20 Danish (DA) test set, we note that the XLM-R-based model performs best compared to the other meta-learning models. One reason could be that this dataset was created from Facebook, Reddit, and news sources, unlike the other datasets, which were created from Twitter. **Language similarity and cross-lingual transfer.** We carefully analyze the support that each target language receives from different auxiliary languages. The SemEval20 dataset, with the largest number of possible (auxiliary, target) language pairs, shows the benefit of meta-training on an auxiliary language for both the X-MAML and HateMAML\({}_{zero}\) models. Meta-training helps increase the zero-shot performance on SemEval20 from 0.473 avg. F1 (mBERT) to 0.592 F1 (\(\clubsuit\)HateMAML). X-MAML also shows substantial improvement for both initializations. We notice that SemEval20 Turkish (TR) tends to be the best auxiliary language dataset in almost all cases for both base models (see Table VI). For the HASOC20 experiments, we notice a small gain in F1-score when training on auxiliary samples. The languages German (DE) and Hindi (HI) do not seem to be a good mix, coming from two distant language families (details of the language families are provided in Table V). **Self-training improves zero-shot performance.** For datasets that have no auxiliary language, i.e., HatEval19 and HaSpeedDe20, self-training with HateMAML is a convenient algorithm to maximize performance. It removes the need for an auxiliary language or a labeled target-language training set. We find that HateMAML\({}_{self-training}\) produces performance comparable to meta-training on an auxiliary language with HateMAML\({}_{zero}\) (see Table II). The gain is achieved from meta-training on silver labels of the target over multiple iterations of self-refinement. We also utilize the source language validation set during self-training (HateMAML\({}_{self-training}\)). The intuition is to have some gold-labeled samples to balance between noise and ground truth. We observe a slight improvement when the source ground-truth labels are included in training. ### _Domain Adaptation Experiments_ Table III reports the results of the domain adaptation experiments. We first select two language families for training: Germanic and Romance. The languages in the selected language families, namely EN, DA, DE, ES, and IT, are considered auxiliary and used in the training set. The trained model is evaluated on the held-out languages. We train with increasing amounts of data to evaluate model performance as a function of data availability, varying each language's training set size over three values: \(1024\), \(2048\), and \(4096\). If a language does not contain the selected number of samples (e.g., DE has only \(2373\) training samples), we cap it at the number of available samples. It can be seen that HateMAML produces superior results on average compared to fine-tuning. We find that domain-adaptive meta-training in HateMAML\({}_{ft}\) (\(\clubsuit\) or \(\spadesuit\)) has a consistently better F1 score than fine-tuning. We achieve an overall improvement of \(3\%\) over mBERT and \(24\%\) over XLM-R when training with \(2048\) samples per language. Both baseline models suffer a performance drop on held-out languages, with XLM-R facing a particularly significant drop.
To summarize, standard fine-tuning does not generalize well to the held-out target languages. On the other hand, HateMAML's domain-adaptive training helps retain consistent performance on both auxiliary and target languages. ### _Full Fine-tuning vs Meta-training_ To make one model support all languages, we evaluate a meta-training strategy that trains on data available in eight languages. We train a model varying the training data size and evaluate performance under increasing data availability. We create a set of meta-tasks by gathering support and query sets from the eight languages. We also fine-tune on the aggregated training data. In Table IV, we show how HateMAML performs in comparison to mBERT and XLM-R fine-tuning. In general, we notice an upward trend as the amount of training data increases for all languages. As expected, the meta-learned \(\texttt{HateMAML}_{ft}\) models perform better than standard fine-tuning. \(\texttt{HateMAML}_{ft}\) outperforms the baselines on average across all training data sizes. We observe that \(\texttt{HateMAML}_{ft}\) gains in F1 score for all languages - 2.4% and 3% improvements over the baselines mBERT and XLM-R using \(4096\) samples. To improve performance further, we can increase the training data, if available. This suggests that meta-training can be an alternative to standard fine-tuning for hate speech detection. ## VII Conclusion Our extensive experiments show that \(\texttt{HateMAML}\) is able to perform well in both zero-shot and few-shot hate speech detection by improving cross-lingual transfer in target languages. \(\texttt{HateMAML}\) can be trained with or without labeled data. This model-agnostic approach supports multilingual encoders with shared embeddings, such as mBERT and XLM-R, as base learners. The proposed zero-shot self-refining technique adapts a base learner into an effective predictor in the target language, reducing the need for ground-truth labels in fine-tuning. To further improve cross-lingual transfer, we use a few labeled samples in the target language (few-shot training), which helps the model adapt fast and boosts fine-tuning performance. Additionally, our method benefits from task-specific learner evaluation on a (virtual) domain test (query) set at each episode that mimics target domain evaluation during cross-lingual meta-training. The intuition of the domain query set is to achieve training generalization in unseen target languages and domains. We also found that meta-learner adaptation effectively supports all available languages using a single predictor model, making it highly suitable for detecting hate speech in multiple languages and domains. In this paper, our major contributions are two-fold. (i) We proposed \(\texttt{HateMAML}\), a novel model-agnostic cross-lingual meta-training algorithm for hate speech detection that makes the most of limited training resources. We evaluated \(\texttt{HateMAML}\) on five benchmark datasets across eight languages and varying domains. We showed that cross-lingual meta-training outperforms state-of-the-art fine-tuning baselines in both zero-shot and fine-tuning settings. We also found that meta-adaptation is effective in cross-domain evaluation, making it highly suitable for detecting cross-domain hate speech. (ii) We devised a self-training procedure that aids hate speech detection in extreme data-scarce scenarios.
In the future, we aim to explore multi-class hate speech detection and meta-training on a set of heterogeneous tasks such as aggression, target identification, and code-mixing. We hope our work will motivate modeling fast adaptation for cross-lingual training and zero-shot hate speech detection.
2310.15157
A Realist Interpretation of Unitarity in Quantum Gravity
Unitarity is a difficult concept to implement in canonical quantum gravity because of state non-normalizability and the problem of time. We take a realist approach based on pilot-wave theory to address this issue in the Ashtekar formulation of the Wheeler-DeWitt equation. We use the postulate of a definite configuration in the theory to define a global time for the gravitational-fermionic system recently discussed in (Phys. Rev. D 106.10 (2022): 106012), by parameterizing a variation of a Weyl-spinor that depends on the Kodama state. The total Hamiltonian constraint yields a time-dependent Schrodinger equation, without semi-classical approximations, which we use to derive a local continuity equation over the configuration space. We implement the reality conditions at the level of the guidance equation, and obtain a real spin-connection, extrinsic curvature and triad along the system trajectory. We obtain quantum corrections to deSitter spacetime from the guidance equation. The non-normalizable Kodama state is naturally factored out of the full quantum state in the conserved current density, opening the possibility for quantum-mechanical unitarity. We also give a pilot-wave generalisation of the notion of unitarity applicable to non-normalizable states, and show the existence of equilibrium density for our system. Lastly, we find unitary states in mini-superspace by finding an approximate solution to the Hamiltonian constraint.
Indrajit Sen, Stephon Alexander, Justin Dressel
2023-10-23T17:56:28Z
http://arxiv.org/abs/2310.15157v4
# A Realist Interpretation of Unitarity in Quantum Gravity ###### Abstract Unitarity is a difficult concept to implement in canonical quantum gravity because of state non-normalizability and the problem of time. We take a realist approach based on pilot-wave theory to address this issue in the Ashtekar formulation of the Wheeler-DeWitt equation. We use the postulate of a definite configuration in the theory to define a global time for the gravitational-fermionic system recently discussed in (Phys. Rev. D 106.10 (2022): 106012), by parameterizing a variation of a Weyl-spinor that depends on the Kodama state. The total Hamiltonian constraint yields a time-dependent Schrodinger equation, without semi-classical approximations, which we use to derive a local continuity equation over the configuration space. We implement the reality conditions at the level of the guidance equation, and obtain a real spin-connection, extrinsic curvature and triad along the system trajectory. We obtain quantum corrections to deSitter spacetime from the guidance equation. The non-normalizable Kodama state is naturally factored out of the full quantum state in the conserved current density, opening the possibility for quantum-mechanical unitarity. We also give a pilot-wave generalisation of the notion of unitarity applicable to non-normalizable states, and show the existence of equilibrium density for our system. Lastly, we find unitary states in mini-superspace by finding an approximate solution to the Hamiltonian constraint. ## I Introduction Quantum gravity effects may become important in regimes where quantum fluctuations of the gravitational field and high curvature coincide, such as close to the classical big bang and black hole singularities. In such situations, and given the perturbative non-renormalizability of quantum gravity, a non-perturbative treatment is a desired option. A conservative approach to quantization is a Schrodinger quantization, such as the Wheeler-DeWitt (WdW) equation [1]. The WdW equation is non-polynomial in the metric variables and is difficult to solve. Progress was made with the Ashtekar variables, which rendered the WdW equation polynomial in the configuration variables [2]. A major leap in progress was made by Kodama, who solved the WdW equation of general relativity in terms of the Ashtekar connection [3]. This solution, called the Chern-Simons-Kodama (CSK) state, is an exact wavefunction that solves the quantum WdW equation for a positive cosmological constant. It was shown that this Chern-Simons-Kodama state consistently reduces to the Hartle-Hawking-Vilenkin state of de Sitter space, and contains a multitude of other solutions, including black-hole quantum spacetimes [4; 5]. Recently, an exact CSK state was found with the inclusion of fermions [6]. Despite this success, the CSK state, as well as other formulations of the WdW equation, is fraught with the conceptual and technical problems that all approaches to the WdW equation suffer [7; 8; 9]. Since time evolution is a gauge redundancy, the CSK state is timeless. Also, the Lorentzian CSK state is non-normalizable under the naive inner product, although a new non-perturbative inner product was recently proposed [10]. The twin problems of time and non-normalizability make the definition of unitarity murky. Another issue is that the non-normalizable part of the Kodama state, when linearized, yields gravitons with negative energy in its spectrum.
These problems are to be expected, since the CSK state is background independent and a proposed ground state. In this work, we address the interconnected problems of time, normalizability and unitarity by recasting the WdW equation in the Ashtekar formalism using pilot-wave theory [11; 12; 13; 14; 15], which is a realist formulation of quantum mechanics. Our approach introduces three new ideas to attack these problems. First, we use the postulate of a definite configuration in pilot-wave theory to define, for the first time, a real, relational time in terms of the variation of a massless fermionic field. This allows us to discuss time evolution of the quantum state, which is shown to follow a Schrodinger equation, without using semi-classical approximations. Second, we approach the question of unitarity by deriving a continuity equation over the configuration space, instead of using operator-valued reality conditions. This enables us to find a locally conserved current density on the configuration space and thereby discuss unitarity from a quantum-mechanical perspective. Third, we also generalize the notion of unitarity from a pilot-wave perspective, which allows us to discuss unitarity without imposing normalizability. The article is structured as follows. In section II we give an introduction to the Ashtekar formalism and the Kodama state. We give an introduction to field-theoretic pilot-wave theory with a brief discussion of the complex massive scalar field in section III. In section IV, we develop a pilot-wave formulation of the gravitational-fermionic system in [6], making use of some of the ideas developed in [16]. We first introduce a global time that parameterizes a particular variation of the fermionic field and then derive the continuity equation and corresponding current density in IV.1. We discuss the physical interpretation in IV.2, including normalizability in IV.2.1, the guidance equation for the Ashtekar connection in IV.2.2, and the reality conditions in IV.2.3. We discuss the notion of unitarity in our approach in section V. We discuss quantum-mechanical unitarity using our continuity equation in V.1, a generalized notion of unitarity using pilot-wave theory in V.2, and the existence of pilot-wave unitary states in mini-superspace in V.3.2. We discuss our results and future directions in VI. ## II The Ashtekar formalism and the CSK state In pursuit of a Wheeler-DeWitt quantization of gravity, it is instructive to understand how the Ashtekar connection and the resulting Hamiltonian, diffeomorphism, and gauge constraints emerge from a manifestly covariant 4D theory of gravity. In what follows we closely follow the derivation of the Ashtekar variables in the work of [6]. In the Ashtekar formalism [17; 2], gravitational dynamics on a four-dimensional manifold \(\mathcal{M}_{4}\) is described not by a metric \(g_{\mu\nu}\) but, rather, by a real-valued gravitational field \(e^{I}_{\mu}(x)\),1 mapping a vector \(v^{\mu}\) in the tangent space of \({\cal M}_{4}\) at the point \(x\) into Minkowski spacetime \(M_{4}\) [with metric \(\eta_{IJ}={\rm diag}(-1,1,1,1)_{IJ}\)]. Locally, the metric on \({\cal M}_{4}\) is \(g_{\mu\nu}=\eta_{IJ}e^{I}_{\mu}e^{J}_{\nu}\). Footnote 1: We use the mostly plus metric signature, i.e. \(\eta_{\mu\nu}=(-,+,+,+)\), in units of \(c=1\). We use boldface letters \({\bf x}\) to indicate 3-vectors, and we use \(x\) to denote 4-vectors. Conventions for curvature tensors, covariant and Lie derivatives are all taken from Carroll [18].
Greek indices \((\mu,\nu,\ldots)\) denote spacetime indices, Latin indices \((a,b,\ldots)\) denote spatial indices, and Latin indices \((I,J,\ldots)\) and \((i,j,\ldots)\) denote indices for the internal space, ranging from 0, \(\ldots\) 3 for the former and 1, \(\ldots\) 3 for the latter. The Lorentz connection \(\omega_{\mu I}{}^{J}\) is \(\omega_{I}{}^{J}\equiv\omega_{\mu I}{}^{J}\,{\rm d}x^{\mu}\), \({\rm d}\omega_{I}{}^{J}\equiv\partial_{\mu}\omega_{\nu I}{}^{J}\,{\rm d}x^{ \mu}\wedge{\rm d}x^{\nu}\) is the exterior derivative, and the curvature of \(\omega\) is \(R_{I}{}^{J}={\rm d}\omega_{I}{}^{J}+\omega_{I}{}^{K}\wedge\omega_{K}{}^{J}\). The action of self-dual gravity is (up to the gravitational constant \(8\pi G\)) \[S=\frac{1}{32\pi G}\int_{{\cal M}_{4}}\left[*(e^{I}\wedge e^{J})\wedge R_{IJ}+ ie^{I}\wedge e^{J}\wedge R_{IJ}-\frac{\Lambda}{6}\epsilon_{IJKL}e^{I}\wedge e^{J} \wedge e^{K}\wedge e^{L}\right], \tag{1}\] where \(*\) is the Hodge dual; the first term is the Hilbert-Palatini action and the second is the Holst term (proportional to the first Bianchi identities in the absence of torsion). Here we are interested in the Hamiltonian formulation in Ashtekar variables [2; 19]. In the gauge choice \(e^{0}_{\mu}=0\), it is convenient to define the densitized triad \(E^{a}_{i}=\epsilon_{ijk}\epsilon^{abc}e^{j}_{b}e^{k}_{c}\), which is conjugate to the self-dual connection \[A^{i}_{a}(x)\equiv-\frac{1}{2}\epsilon^{ij}{}_{k}\omega_{aj}{}^{k}-i\omega_{a 0}{}^{i}. \tag{2}\] As the Lorentz connection (and, in particular, the spin connection \(\Gamma^{i}_{a}\equiv-\frac{1}{2}\epsilon^{ij}{}_{k}\omega_{aj}{}^{k}\)) is real, \(A\) is complex-valued and obeys the reality conditions (for a discussion, see e.g., [20]) \[A^{i}_{a}+\overline{A^{i}_{a}}=2\Gamma^{i}_{a}[E]\,,\qquad E^{i}_{a}=\overline{E^{i} _{a}} \tag{3}\] where \(\overline{X}\) denotes the complex conjugate of \(X\) and the spin connection solves the equation \({\rm d}e+\Gamma[E]\wedge e=0\). The Poisson bracket of the elementary variables \(A\) and \(E\) is \[\left\{A^{i}_{a}({\bf x},t),E^{b}_{j}({\bf y},t)\right\}=i8\pi G\delta^{b}_{a }\delta^{i}_{j}\delta({\bf x}-{\bf y})\,. \tag{4}\] We also introduce the "magnetic" field and the gauge field strength \[B^{ai} \equiv \frac{1}{2}\epsilon^{abc}F^{i}_{bc}\,, \tag{5}\] \[F^{k}_{ab} = \partial_{a}A^{k}_{b}-\partial_{b}A^{k}_{a}+(8\pi G)\epsilon_{ij} ^{\phantom{ij}k}A^{i}_{a}A^{j}_{b}\,. \tag{6}\] Now, let us construct the CSK state by solving the Wheeler-DeWitt equation. Given the Hamiltonian \[\mathcal{H}_{WDW}=\epsilon_{ijk}E^{ai}E^{bj}\bigg{(}F^{k}_{ab}+\frac{\Lambda}{ 3}\epsilon_{abc}E^{ck}\bigg{)}, \tag{7}\] acting on some wave function \(\psi[A]\), we want to find the form of \(\psi[A]\) that is annihilated by (7). Applying the regular canonical quantization procedure, i.e. \[\hat{E}^{ai}\to 8\pi G\hbar\frac{\delta}{\delta A_{ai}}, \tag{8}\] the annihilation of the quantum state becomes \[\widehat{\mathcal{H}}_{WDW}\psi[A]=(8\pi G\hbar)^{2}\epsilon_{ijk}\frac{ \delta}{\delta A_{ai}}\frac{\delta}{\delta A_{bj}}\bigg{(}F^{k}_{ab}+\frac{8 \pi G\hbar\Lambda}{3}\epsilon_{abc}\frac{\delta}{\delta A_{ck}}\bigg{)}\psi[A ]=0. \tag{9}\] Setting the expression inside the brackets to zero, we get \[\epsilon_{abc}\frac{\delta\psi}{\delta A_{ck}}=-\frac{3}{8\pi G\hbar\Lambda}F^ {k}_{ab}\psi[A]. \tag{10}\]
Contracting both sides with \(\epsilon^{dab}\) gives us \[2\delta^{d}_{c}\frac{\delta\psi}{\delta A_{ck}}=-\frac{3}{\ell^{2}_{\rm Pl} \Lambda}\epsilon^{dab}F^{k}_{ab}\psi[A]\Leftrightarrow\frac{\delta\psi}{ \delta A_{ai}}=-\frac{3}{2\ell^{2}_{\rm Pl}\Lambda}\epsilon^{abc}F^{i}_{bc} \psi[A], \tag{11}\] where \(\ell^{2}_{\rm Pl}=8\pi G\hbar\) is the squared Planck length. Recognizing the term multiplying the wave function to be the Chern-Simons functional, we can write down the exact solution to the Wheeler-DeWitt equation as \[\psi_{K}[A]\equiv\mathcal{N}\exp\biggl{(}\frac{3}{2\ell^{2}_{\rm Pl}\Lambda} \int Y_{\rm CS}[A]\biggr{)}, \tag{12}\] where \(\mathcal{N}\) is some normalization constant independent of the gauge field and \[Y_{\text{CS}}[A]=\text{Tr}\bigg{[}A\wedge\text{d}A+\frac{2}{3}A\wedge A\wedge A \bigg{]}=-\frac{1}{2}\bigg{(}A^{i}\,\text{d}A^{i}+\frac{1}{3}\epsilon_{ijk}A^{ i}A^{j}A^{k}\bigg{)} \tag{13}\] is the Chern-Simons functional, with the trace taken in the Lie algebra. The WKB semiclassical limit of the CSK state is de Sitter spacetime [21],2 with Footnote 2: See [22] for criticisms of this view. \[A^{i}_{a}=i\sqrt{\frac{\Lambda}{3}}\,e^{\sqrt{\frac{\Lambda}{3}}\,t}\delta^{i}_ {a}\,,\qquad E^{a}_{i}=e^{2\sqrt{\frac{\Lambda}{3}}\,t}\delta^{a}_{i}\,. \tag{14}\] Now that we have the CSK state solely in terms of the gravitational connection and the cosmological constant, we would like to explore a full nonperturbative state that also includes the fermionic Hamiltonian. ## III Pilot-wave formulation of massive complex scalar field It is helpful to begin with a discussion of pilot-wave theory with field ontology as an example (for further discussion, see [11; 14; 23; 24]). Consider a massive complex scalar field \(\phi(\vec{x},t)\) on flat space-time with the Lagrangian density \(\mathcal{L}=\partial^{\mu}\overline{\phi}\partial_{\mu}\phi-m^{2}\overline{ \phi}\phi\), where \(m\) labels the mass, \(\overline{\phi}\) labels the complex conjugate of \(\phi\), and the space-time metric is \(\eta=(1,-1,-1,-1)\). The conjugate momenta are \(\pi=\delta\mathcal{L}/\delta\partial_{0}\phi=\partial^{0}\overline{\phi}\) and \(\overline{\pi}=\delta\mathcal{L}/\delta\partial_{0}\overline{\phi}=\partial^{ 0}\phi\). The Hamiltonian density is \(\mathcal{H}=\pi\partial_{0}\phi+\overline{\pi}\partial_{0}\overline{\phi}-\mathcal{L}=\overline{\pi} \pi+\vec{\nabla}\overline{\phi}\cdot\vec{\nabla}\phi+m^{2}\overline{\phi}\phi\). To quantize the system, the canonical commutation relations are imposed [25]. Working in the \(\phi\), \(\overline{\phi}\) representation, the conjugate momenta are represented by the operators \(\hat{\pi}\rightarrow-i\hbar\delta/\delta\phi\), \(\hat{\overline{\pi}}\to i\hbar\delta/\delta\overline{\phi}\), and the Schrodinger equation becomes \[\int_{\mathcal{M}}\hat{\mathcal{H}}\Psi =i\hbar\frac{\partial\Psi}{\partial t} \tag{15}\] \[\Rightarrow\int_{\mathcal{M}}\bigg{[}\hbar^{2}\frac{\delta^{2} \Psi}{\delta\phi\delta\overline{\phi}}+(\vec{\nabla}\overline{\phi}\cdot\vec{ \nabla}\phi+m^{2}\overline{\phi}\phi)\Psi\bigg{]} =i\hbar\frac{\partial\Psi}{\partial t} \tag{16}\] where \(\mathcal{M}\) labels the spatial manifold and \(\Psi[\phi,\overline{\phi},t]\) is a functional of \(\phi\) and \(\overline{\phi}\).
Using (16) and its complex conjugate, we can prove the following continuity equation \[\frac{\partial|\Psi|^{2}}{\partial t}+\nabla_{\phi}J+\overline{\nabla}_{\phi} \overline{J}=0 \tag{17}\] where \(\nabla_{\phi}=\int_{\mathcal{M}}\delta/\delta\phi\) and \[J=\frac{\hbar}{2i}\bigg{[}\overline{\Psi}\frac{\delta\Psi}{\delta\overline{ \phi}}-\Psi\frac{\delta\overline{\Psi}}{\delta\overline{\phi}}\bigg{]}=R^{2} \frac{\delta S}{\delta\overline{\phi}} \tag{18}\] Here \(\Psi=Re^{iS/\hbar}\), and \(R\), \(S\) are real time-dependent functionals of \(\phi\), \(\overline{\phi}\). The evolution of the field is given by the guidance equation \[\frac{\delta\phi(\vec{x})}{\delta t}\equiv\frac{J}{|\Psi|^{2}}=\frac{\delta S[ \phi,\overline{\phi},t]}{\delta\overline{\phi(\vec{x})}} \tag{19}\] Equation (19) implies that the evolution of the scalar field is spatially nonlocal (over \(\mathcal{M}\)). This is because \(S[\phi,\overline{\phi},t]\) is a functional of \(\phi(\vec{x})\), \(\overline{\phi(\vec{x})}\), and thus in general depends on the values of \(\phi(\vec{x})\), \(\overline{\phi(\vec{x})}\) at all \(\vec{x}\) in \(\mathcal{M}\). Also note that (19) is a local guidance equation in the configuration space \((\phi,\overline{\phi})\). That is, the evolution of a particular field \(\phi(\vec{x})\) does not depend on other field configurations, as \(S[\phi,\overline{\phi},t]\) in (19) is evaluated at a particular point on the configuration space.
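As a simple illustration of (19) (an example we add here), consider the non-normalizable momentum eigenstate \(\Psi=\exp\big[\tfrac{i}{\hbar}\int_{\mathcal{M}}(\pi_{0}\phi+\overline{\pi}_{0}\overline{\phi})\big]\) with constant \(\pi_{0}\). Then \(R=1\) and \[S=\int_{\mathcal{M}}\big(\pi_{0}\phi+\overline{\pi}_{0}\overline{\phi}\big)\quad\Rightarrow\quad\frac{\delta\phi(\vec{x})}{\delta t}=\frac{\delta S}{\delta\overline{\phi(\vec{x})}}=\overline{\pi}_{0},\] so the field configuration is transported at the constant rate fixed by its conjugate momentum, in agreement with the classical free-field relation \(\partial_{0}\phi=\overline{\pi}\).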
## IV Schrodinger equation of the gravitational-fermionic system We wish to quantize general relativity with a positive cosmological constant and a two-component Weyl spinor. As we will see, the corresponding total Hamiltonian constraint, which was discovered in [6], becomes equivalent to a time-dependent Schrodinger equation, where the first-order spinor Hamiltonian plays exactly the role of a relational clock. This approach has advantages over scalar-field relational clocks, since the latter may introduce negative-norm states due to being second order in time derivatives, as opposed to the Dirac equation, which is first order. The gravitational-spinor action is \[S_{H+D}=\frac{1}{2\kappa}\int d^{4}x\,e(\,e^{\mu}_{I}e^{\nu}_{J}R^{IJ}_{\mu\nu}- \Lambda+i\bar{\Psi}\gamma^{I}e^{\mu}_{I}D_{\mu}\Psi-i\overline{D_{\mu}\Psi} \gamma^{I}e^{\mu}_{I}\Psi) \tag{20}\] where the covariant derivative is \[D_{\mu}\Psi = \partial_{\mu}\Psi-\frac{1}{4}A^{IJ}_{\mu}\gamma_{I}\gamma_{J}\Psi \tag{21}\] \[\overline{D_{\mu}\Psi} = \partial_{\mu}\bar{\Psi}+\frac{1}{4}\bar{\Psi}\gamma_{I}\gamma_{J }A^{IJ}_{\mu} \tag{22}\] Upon performing a \(3+1\) decomposition as discussed in section II, the total Hamiltonian density for the combined fermionic-gravitational system [6] is \(\kappa^{-1}(\tilde{N}\hat{\mathcal{H}}+N^{a}\hat{\mathcal{V}}_{a})\), where \[\hat{\mathcal{H}} = \frac{1}{2\kappa}\epsilon_{ijk}\hat{E}^{bj}\hat{E}^{ai}\bigg{(}F^ {k}_{ab}+\frac{\Lambda}{3}\epsilon_{abc}\hat{E}^{ck}\bigg{)}+(\widehat{ \mathcal{D}}_{a}\xi)_{A}\sigma^{AB}_{i}\hat{E}^{ai}\widehat{\Pi}_{B}+\hat{E} ^{ai}(\widehat{\mathcal{D}}_{a}\xi)_{A}\sigma^{AB}_{i}\widehat{\Pi}_{B} \tag{23}\] \[\hat{\mathcal{V}}_{a} = \frac{i}{2\kappa}F^{k}_{ab}\hat{E}^{b}_{k}+(\widehat{\mathcal{D} }_{a}\xi)_{B}\hat{\Pi}^{B} \tag{24}\] and \(\tilde{N}>0\), \(N^{a}\) are the lapse function and shift vector, respectively. We have chosen the Ashtekar ordering [2, 3, 21, 26] for the purely gravitational part of the constraints. For the interaction terms between gravity and the fermion, we have Weyl-ordered \(\hat{E}_{ai}\) and ordered \(\widehat{\Pi}_{B}\) to operate directly on the quantum state. We remove divergent terms throughout our calculations. The total Hamiltonian constraint is \[\int_{\mathcal{M}}\ \ \kappa^{-1}(\tilde{N}\hat{\mathcal{H}}+N^{a}\hat{ \mathcal{V}}_{a})\Psi[A,\xi]=0 \tag{25}\] where \(\mathcal{M}\) labels the spatial manifold. We use the quantization scheme (using commutators [27, 28]) \[\hat{E}^{ai}\rightarrow\frac{\delta}{\delta A_{ai}},\ \ \ \ \widehat{\Pi}_{A} \rightarrow-i\frac{\delta}{\delta\xi^{A}} \tag{26}\] where we have used natural units. Equation (25) implies
We can then show that \[\frac{\partial(|\Psi|^{2}\Omega)}{\partial t^{\prime}}+\int_{ \mathcal{M}}\Bigg{[}\frac{\delta}{\delta A_{ai}}\bigg{(}\Omega\overline{\Psi} \bigg{\{}\frac{i\tilde{N}\epsilon_{ijk}}{2}\frac{\delta}{\delta A_{bj}}\bigg{(} F_{ab}^{k}+\frac{\Lambda}{3}\epsilon_{abc}\frac{\delta}{\delta A_{ck}}\bigg{)} \Psi+2\tilde{N}(\widehat{\mathcal{D}}_{a}\xi)_{A}\sigma_{i}^{AB}\frac{\delta} {\delta\xi^{B}}\Psi\] \[-N_{b}\frac{F_{ai}^{b}}{2}\Psi\bigg{\}}\bigg{)}+c.c\Bigg{]}=\int _{\mathcal{M}}|\Psi|^{2}\Bigg{[}\frac{\delta\Omega}{\delta A_{ai}}\Bigg{\{} \frac{i\tilde{N}\epsilon_{ijk}}{2\Psi}\frac{\delta}{\delta A_{bj}}\bigg{(}F_{ ab}^{k}+\frac{\Lambda}{3}\epsilon_{abc}\frac{\delta}{\delta A_{ck}}\bigg{)}\Psi\] \[+\frac{2\tilde{N}}{\Psi}(\widehat{\mathcal{D}}_{a}\xi)_{A}\sigma_ {i}^{AB}\frac{\delta}{\delta\xi^{B}}\Psi+N_{b}\frac{F_{ai}^{b}}{2}\bigg{\}}+c.c\Bigg{]} \tag{32}\] where _c.c_ denotes complex conjugate of the term in square bracket, and we have used \(\delta\Psi/\delta\overline{A}_{ai}=\delta\overline{\Psi}/\delta A_{ai}=0\ \forall a,i\) as \(\Psi\) is a holomorphic functional of \(A\). The right-hand side of equation (32) can be written as \[\int_{\mathcal{M}}\Bigg{[}\frac{\delta}{\delta A_{bj}}\bigg{(} \overline{\Psi}\frac{\delta\Omega}{\delta A_{ai}}\frac{i\tilde{N}\epsilon_{ ijk}}{2}\bigg{(}F_{ab}^{k}+\frac{\Lambda}{3}\epsilon_{abc}\frac{\delta}{ \delta A_{ck}}\bigg{)}\Psi\bigg{)}+c.c\Bigg{]}-\Bigg{[}\frac{\delta}{\delta A _{ck}}\bigg{(}\overline{\Psi}\Psi\frac{\delta^{2}\Omega}{\delta A_{ai}\delta A _{bj}}\frac{i\tilde{N}\epsilon_{ijk}}{2}\frac{\Lambda}{3}\epsilon_{abc}\bigg{)}\] \[+c.c\Bigg{]}+\Bigg{[}\frac{\delta}{\delta\xi^{B}}\bigg{(}\frac{ \delta\Omega}{\delta A_{ai}}2\overline{\Psi}\Psi\tilde{N}(\widehat{\mathcal{D }}_{a}\xi)_{A}\sigma_{i}^{AB}\bigg{)}+c.c\Bigg{]}+|\Psi|^{2}\bigg{\{}-\Bigg{[} \frac{\delta^{2}\Omega}{\delta A_{ai}\delta A_{bj}}\frac{i\tilde{N}\epsilon_ {ijk}}{2}F_{ab}^{k}+c.c\Bigg{]}+\] \[\Bigg{[}\frac{\delta^{3}\Omega}{\delta A_{ai}\delta A_{bj}\delta A _{ck}}\frac{i\tilde{N}\epsilon_{ijk}}{2}\frac{\Lambda}{3}\epsilon_{abc}+c.c \Bigg{]}-\Bigg{[}N_{b}\frac{\delta\Omega}{\delta A_{ai}}\frac{F_{ai}^{b}}{2} +c.c\Bigg{]}\bigg{\}} \tag{33}\] We require that \(\Omega\) be such that all the source-like terms vanish. This will be true if \[-\Bigg{[}\ \frac{\delta^{2}\Omega}{\delta A_{ai}\delta A_{bj}}\frac{i \tilde{N}\epsilon_{ijk}}{2}F^{k}_{ab}+c.c\Bigg{]}+\Bigg{[}\frac{\delta^{3}\Omega }{\delta A_{ai}\delta A_{bj}\delta A_{ck}}\frac{i\tilde{N}\epsilon_{ijk}}{2} \frac{\Lambda}{3}\epsilon_{abc}+c.c\Bigg{]}\] \[-\bigg{[}N_{b}\frac{\delta\Omega}{\delta A_{ai}}\frac{F^{b}_{ai}} {2}+c.c\bigg{]}=0 \tag{34}\] Equation (34) supplies the \(\Omega\) needed to define the current density. We observe that \[\Omega[A,\overline{A}]=\frac{1}{\Psi_{K}[A]\overline{\Psi_{K}[A]}} \tag{35}\] solves (34), where \(\Psi_{K}[A]\) is the Kodama state. As the weight factor \(\Omega\) is unique and does not depend on the Hamiltonian or the state [29], we take (35) henceforth. 
Equation (32) can then be written as \[\frac{\partial(|\Psi|^{2}\Omega)}{\partial t^{\prime}}+\nabla^{ai}J_{ai}+ \overline{\nabla}^{ai}\overline{J}_{ai}+\nabla_{B}J^{B}+\overline{\nabla}_{B }\overline{J}^{B}=0 \tag{36}\] where \(\nabla^{ai}\equiv\int_{\mathcal{M}}\ \delta/\delta A_{ai}\), \(\nabla_{B}\equiv\int_{\mathcal{M}}\ \delta/\delta\xi^{B}\) and \[J_{ai}= \frac{|\Psi|^{2}}{\Psi_{K}\overline{\Psi_{K}}}\bigg{\{}\frac{i \tilde{N}\ell_{\rm Pl}^{2}}{2}\epsilon_{ijk}\bigg{(}F^{k}_{ab}\bigg{[}\frac{ \delta\ln\Psi}{\delta A_{bj}}+\frac{\delta\ln\Psi_{K}}{\delta A_{bj}}\bigg{]}+ \frac{\ell_{\rm Pl}^{2}\Lambda}{3}\epsilon_{abc}\bigg{[}\frac{1}{\Psi}\frac{ \delta^{2}\Psi}{\delta A_{ck}\delta A_{bj}}+\frac{\delta\ln\Psi}{\delta A_{ck} }\frac{\delta\ln\Psi_{K}}{\delta A_{bj}}\] \[-\frac{1}{\Psi_{K}}\frac{\delta^{2}\Psi_{K}}{\delta A_{ck}\delta A _{bj}}+2\frac{\delta\ln\Psi_{K}}{\delta A_{ck}}\frac{\delta\ln\Psi_{K}}{ \delta A_{bj}}\bigg{]}\bigg{)}+2\tilde{N}\ell_{\rm Pl}^{2}(\widehat{\mathcal{D }}_{a}\xi)_{A}\sigma_{i}^{AB}\frac{\delta\ln\Psi}{\delta\xi^{B}}-N_{b}\frac{ \ell_{\rm Pl}^{2}}{2\kappa\hbar}F^{b}_{ai}\bigg{\}} \tag{37}\] \[J^{B}= 2\frac{\ell_{\rm Pl}^{2}|\Psi|^{2}}{\Psi_{K}\overline{\Psi_{K}}} \frac{\delta\ln\Psi_{K}}{\delta A_{ai}}\tilde{N}(\widehat{\mathcal{D}}_{a}\xi )_{A}\sigma_{i}^{AB} \tag{38}\] Note that equation (36) is not yet a satisfactory continuity equation, as there are 'temporal flux' terms \(J^{B}\), \(\overline{J}^{B}\) corresponding to \(\xi^{B}\), \(\overline{\xi}^{B}\). We can absorb them into the current density term by redefining the time parameter \(t^{\prime}\to t\) such that \[\frac{\delta\xi^{B}}{\delta t}=N^{b}(\widehat{\mathcal{D}}_{b}\xi)^{B}+2\ell_ {\rm Pl}^{2}\frac{\delta\ln\Psi_{K}}{\delta A_{ai}}\tilde{N}(\widehat{ \mathcal{D}}_{a}\xi)_{A}\sigma_{i}^{AB} \tag{39}\] Equation (36) can then be written as the continuity equation \[\frac{\partial(|\Psi|^{2}\Omega)}{\partial t}+\nabla^{ai}J_{ai}+\overline{ \nabla}^{ai}\overline{J}_{ai}=0 \tag{40}\] where \(J_{ai}\) is given by (37). ### Physical interpretation Let us consider the physical interpretation given the weight factor (35) and the continuity equation (40). Let us first take the question of normalizability of the quantum state. #### iv.2.1 Normalizability It was shown by the authors of [6], that \(\Psi[A,\xi]=\Psi_{K}[A]\Phi[A,\xi]\) is a good ansatz for the gravitational-fermionic WdW equation. The continuity equation (40) can be rewritten as \[\frac{\partial|\Phi|^{2}}{\partial t}+\nabla^{ai}J_{ai}+\overline{\nabla}^{ai }\overline{J}_{ai}=0 \tag{41}\] which makes it evident that the non-normalizable Chern-Simons-Kodama state \(\Psi_{K}\) is factored out of the current density. Therefore, to interpret (41) as a probability conservation equation, the normalizability condition is imposed on \(\Phi[A,\xi]\) - not the full quantum state \(\Psi[A,\xi]\). Using (30), we can show that \(\Phi[A,\xi]\) follows the Schrodinger equation \[\int_{\mathcal{M}}\frac{1}{\Psi_{K}}\frac{\delta}{\delta A_{ai}} \bigg{\{}\bigg{[}\tilde{N}\frac{\epsilon_{ijk}}{2}\frac{\delta}{\delta A_{bj }}\bigg{(}F_{ab}^{k}+\frac{\Lambda}{3}\epsilon_{abc}\frac{\delta}{\delta A_{ ck}}\bigg{)}+iN_{b}\frac{F_{ai}^{b}}{2}\bigg{]}\Psi_{K}\Phi[A,\xi]\bigg{\}}\] \[+\int_{\mathcal{M}}\frac{\delta}{\delta A_{ai}}\bigg{[}-2\tilde{ N}(\widehat{\mathcal{D}}_{a}\xi)_{A}\sigma_{i}^{AB}i\frac{\delta}{\delta\xi^{B}} \Phi[A,\xi]\bigg{]}=i\frac{\partial\Phi[A,\xi]}{\partial t} \tag{42}\] with respect to the time parameter \(t\) in (39). 
We further discuss probabilities in section (V). #### iv.2.2 Guidance equations The conceptual role of the continuity equation derived from the quantum state, in pilot-wave theory, is to define the guidance equation. Using (37) and the standard pilot-wave prescription for the ansatz \(\Psi[A,\xi]=\Psi_{K}[A]\Phi[A,\xi]\), the guidance equation \[\frac{\delta A_{ai}}{\delta t}\equiv\frac{J_{ai}}{|\Phi|^{2}}=\frac {i\tilde{N}\ell_{\rm Pl}^{2}}{2}\frac{\delta\ln\Psi_{K}}{\delta A_{bj}}\epsilon_ {ijk}\bigg{(}2F_{ab}^{k}+\ell_{\rm Pl}^{2}\Lambda\epsilon_{abc}\frac{\delta\ln \Psi_{K}}{\delta A_{ck}}\bigg{)}-N_{b}\frac{\ell_{\rm Pl}^{2}}{2\kappa\hbar}F_{ ai}^{b}\\ +\frac{i\tilde{N}\ell_{\rm Pl}^{2}}{2}\epsilon_{ijk}\bigg{(}2F_{ ab}^{k}\frac{\delta\ln\Phi}{\delta A_{bj}}+\frac{\ell_{\rm Pl}^{2}\Lambda}{3} \epsilon_{abc}\bigg{[}2\frac{\delta\ln\Phi}{\delta A_{bj}}\frac{\delta\ln\Psi _{K}}{\delta A_{ck}}+\frac{1}{\Phi}\frac{\delta^{2}\Phi}{\delta A_{ck}\delta A _{bj}}\bigg{]}\bigg{)}+2\tilde{N}\ell_{\rm Pl}^{2}(\widehat{\mathcal{D}}_{a} \xi)_{A}\sigma_{i}^{AB}\frac{\delta\ln\Phi}{\delta\xi^{B}} \tag{43}\] determines the evolution of the Ashtekar connection with respect to the fermionic time \(t\). We note that the first line of (43) is the classical equation of motion for the connection with \(E^{bj}\) substituted by \(\delta\ln\Psi_{K}/\delta A_{bj}\). This form of \(E^{bj}\) can be shown to give the classical deSitter solution [21]. The first term in the second line of (43) contains the quantum corrections to the deSitter solution, whereas the second term contains the quantum contribution from the fermionic interaction with \(\Pi_{B}\) given by \(\delta\ln\Phi/\delta\xi^{B}\). Also note that equation (43) is nonlocal in the sense that the evolution of the connection at a particular point in physical space generally depends upon the value of the connection at other points in physical space, similar to equation (19). The guidance equation for the fermion is given by equation (39). We note that (39) resembles the classical equation of motion with \(E^{ai}\) substituted by \(\delta\ln\Psi_{K}/\delta A_{ai}\). However, as \(A_{ai}\) is guided by the full quantum state \(\Psi[A,\xi]\) in (43), the evolution of the fermion is implicitly state dependent and shows quantum behaviour. #### ii.2.3 Reality conditions We impose the reality conditions at the level of the guidance equation (43). We first note that, in the Ashtekar formulation of classical general relativity, the following conditions \[A_{ai}+\overline{A}_{ai} =2\Gamma_{ai} \tag{44}\] \[E_{ai} =\overline{E}_{ai} \tag{45}\] have to be imposed to recover the real sector (with real metric), where \(\Gamma_{ai}\) is the 3D spin connection. In the orthodox quantum formulation of canonical quantum gravity, the reality conditions are generalised to the operator conditions \[\hat{A}_{ai}+\hat{A}_{ai}^{\dagger} = 2\hat{\Gamma}_{ai} \tag{46}\] \[\hat{E}_{ai} = \hat{E}_{ai}^{\dagger} \tag{47}\] We pursue here an approach based on pilot-wave theory to generalizing the classical reality conditions (44), (45). We demand that these conditions be met for the configuration-space trajectory determined by the guidance equation (43). This allows us to extract the real sector for an arbitrary solution to the Schrodinger-like equation (30), regardless of normalizability issues. 
It is clear from (43) that the first reality condition (44) will be trivially satisfied for any arbitrary solution \(\Psi\) if we define \[2\frac{\delta\Gamma_{ai}(t)}{\delta t}\equiv\frac{\delta}{\delta t}\big{(}A_{ ai}(t)+\overline{A}_{ai}(t)\big{)} \tag{48}\] at all points of the system trajectory. Let us next consider the second reality condition (45). Using the definition \[\Gamma_{ai}=\frac{1}{2}\epsilon_{ijk}E^{bk}\big{(}E_{a,b}^{j}-E_{b,a}^{j}+E_{j }^{c}E_{a}^{l}E_{c,b}^{l}\big{)}+\frac{1}{4}\epsilon_{ijk}E^{bk}\bigg{(}2E_{a }^{j}\frac{\mathbf{E}_{,b}}{\mathbf{E}}-E_{b}^{j}\frac{\mathbf{E}_{,a}}{ \mathbf{E}}\bigg{)} \tag{49}\] where \(\mathbf{E}\equiv\det(E)\), we can solve for \(E_{ai}(t)\) given \(\Gamma_{ai}(t)\) along the system trajectory from (48). Since the \(\Gamma_{ai}\) is real, (49) admits real solutions and the second reality condition (45) is thereby satisfied. Lastly, we can obtain the extrinsic curvature \[K_{ai}=\frac{1}{2\tilde{N}}\bigg{(}\frac{\partial N_{i}}{\partial x^{a}}+ \frac{\partial N_{a}}{\partial x^{i}}-\frac{\partial g_{ai}}{\partial t}\bigg{)} \tag{50}\] along the system trajectory from the imaginary part of the connection as \[A_{ai}=\Gamma_{ai}-iK_{ai} \tag{51}\] Probabilities, Unitarity and Mini-Superspace ### Quantum-mechanical unitarity Let us first consider whether the quantum-mechanical notion of unitarity is applicable. We note that since the non-normalizable \(\Psi_{K}[A]\) is factored out of the current density in (41), it is possible that \(\Phi[A,\xi]=\Psi[A,\xi]/\Psi_{K}[A]\) can be appropriately normalized. In that case, the continuity equation (41) may be interpreted as a statement of local probability conservation, analogous to the continuity equation in orthodox quantum mechanics. In addition, if the current \(J^{ai}\to 0\) at large \(|A_{ai}|\), then probabilities remain normalized3 with respect to the fermionic time and our system may be said to be quantum-mechanical unitary. We leave it to future work to determine whether such \(\Phi[A,\xi]\) exist. Footnote 3: In general, the normalization of a density \(\rho(\vec{x},t)\) evolving via the continuity equation \(\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot(\rho\vec{v})=0\) is preserved if the current \(\rho\vec{v}\to 0\) as \(|x|\rightarrow\infty\). In the following, we also explore a generalised notion of unitarity that agrees with quantum-mechanical unitarity and, further, is applicable to non-normalizable \(\Phi[A,\xi]\). ### Pilot-wave unitarity The key idea here is that pilot-wave theory posits a probability continuity equation that is logically independent [11; 12; 30; 31; 32; 33] of the continuity equation derived from the quantum state (41), whose role is only to define the guidance equation (43) for a single configuration. We may, therefore, consider an initial normalized density of configurations \(\rho[A,\overline{A},\xi,\overline{\xi},0]\) for a theoretical ensemble4 regardless of the normalizability of \(\Phi[A,\xi]\)[34]. The time evolution of the density is given by Footnote 4: Since pilot-wave theory has a single-world ontology, probabilities here can only refer to a single universe. For example, we can consider agents having incomplete knowledge about the universe. Such agents may assign probabilities to the possible initial configurations of the universe for a theoretical ensemble. 
\[\frac{\partial\rho[A,\overline{A},\xi,\overline{\xi},t]}{ \partial t}+\nabla^{ck}(\rho[A,\overline{A},\xi,\overline{\xi},t]\frac{\delta A _{ck}}{\delta t})+\overline{\nabla}^{ck}(\rho[A,\overline{A},\xi,\overline{ \xi},t]\frac{\delta\overline{A}_{ck}}{\delta t})\] \[+\nabla^{B}(\rho[A,\overline{A},\xi,\overline{\xi},t]\frac{ \delta\xi_{B}}{\delta t})+\overline{\nabla}^{B}(\rho[A,\overline{A},\xi, \overline{\xi},t]\frac{\delta\overline{\xi}_{B}}{\delta t})=0 \tag{52}\] Equations (41) and (52) imply that \[\frac{d}{dt}\frac{\rho[A,\overline{A},\xi,\overline{\xi},t]}{|\Phi[A,\xi]|^{2}}=0 \tag{53}\] where \[\frac{d}{dt}\equiv\frac{\partial}{\partial t}+\int_{\mathcal{M}}\frac{\delta A _{ai}}{\delta t}\frac{\delta}{\delta A_{ai}}+\int_{\mathcal{M}}\frac{\delta \overline{A_{ai}}}{\delta t}\frac{\delta}{\delta\overline{A_{ai}}}+\int_{ \mathcal{M}}\frac{\delta\xi^{B}}{\delta t}\frac{\delta}{\delta\xi^{B}}+\int_{ \mathcal{M}}\frac{\delta\overline{\xi}^{B}}{\delta t}\frac{\delta}{\delta \overline{\xi}^{B}} \tag{54}\] denotes the total time derivative operator. The relation (53) implies that the ratio of \(\rho[A,\overline{A},\xi,\overline{\xi},t]\) to \(|\Phi[A,\xi]|^{2}\) remains constant along the system trajectories on configuration space. A density \(\rho[A,\overline{A},\xi,\overline{\xi},t]\) that is equal to \(|\Phi[A,\xi]|^{2}\) over an evolving compact support of the configuration space has been defined to be in pilot-wave equilibrium [34], which is a generalization of the notion of quantum equilibrium [30; 31; 32; 33]. For example, an initial density (up to normalization factor) \[\rho[A,\overline{A},\xi,\overline{\xi},0]=\begin{cases}|\Phi[A,\xi]|^{2},&(A,\xi)\in\Omega_{0}\\ 0,&(A,\xi)\in\mathcal{C}\setminus\Omega_{0}\end{cases} \tag{55}\] where \(\Omega_{0}\equiv\{(A,\xi)|\rho[A,\overline{A},\xi,\overline{\xi},0]>0\}\) is a compact support on the configuration space \(\mathcal{C}\), will evolve to \[\rho[A,\overline{A},\xi,\overline{\xi},t]=\begin{cases}|\Phi[A,\xi]|^{2},&(A,\xi)\in\Omega_{t}\\ 0,&(A,\xi)\in\mathcal{C}\setminus\Omega_{t}\end{cases} \tag{56}\] where \(\Omega_{t}\equiv\{(A,\xi)|\rho[A,\overline{A},\xi,\overline{\xi},t]>0\}\) is the time evolved support on the configuration space. The behaviour of such densities has been explored for the case of harmonic oscillators in [34]. Let us next consider the notion of unitarity from a pilot-wave perspective. We define \(\Phi[A,\xi]\) to be a unitary state if and only if \[\lim_{|A_{ck}|\rightarrow\infty}\rho[A,\overline{A},\xi,\overline{\xi},t] \frac{\delta A_{ck}}{\delta t}=0\;\;\forall c,k \tag{57}\] for any initially normalized \(\rho[A,\overline{A},\xi,\overline{\xi},0]\) evolving via (52) at any finite \(t>0\), and where \(\delta A_{ck}/\delta t\) is determined from (43). As \(\delta\xi^{B}/\delta t\propto\xi^{B}\) from (39), the condition (57) implies that \(\rho[A,\overline{A},\xi,\overline{\xi},t]\) remains normalized with time. Clearly, the pilot-wave notion of unitarity (57) is applicable regardless of the normalizability of \(\Phi[A,\xi]\). Note that a unitary non-normalizable state is not identical to a bound non-normalizable state [34]. We now explore the behaviour of solutions to the Hamiltonian constraint in the context of this discussion. This is a technically challenging question to investigate in full generality, so we address this here in the mini-superspace (FRW) approximation, which is relevant for quantum cosmology. 
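Before specializing to mini-superspace, we record (for completeness) the short argument behind (53): both \(\rho\) and \(|\Phi|^{2}\) obey the same continuity equation with the same configuration-space velocity field \(v\), so along a system trajectory \[\frac{d}{dt}\bigg{(}\frac{\rho}{|\Phi|^{2}}\bigg{)}=\frac{1}{|\Phi|^{2}}\big{(}\partial_{t}\rho+v\cdot\nabla\rho\big{)}-\frac{\rho}{|\Phi|^{4}}\big{(}\partial_{t}|\Phi|^{2}+v\cdot\nabla|\Phi|^{2}\big{)}=-\frac{\rho}{|\Phi|^{2}}(\nabla\cdot v)+\frac{\rho}{|\Phi|^{2}}(\nabla\cdot v)=0,\] where \(\nabla\) abbreviates the functional gradients over \((A,\overline{A},\xi,\overline{\xi})\) appearing in (54).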
### Mini-superspace Assuming homogenity and isotropy, we take \(A_{ck}(\vec{x})=iA\delta_{ck}\) and \(\xi^{B}(\vec{x})=\xi\). This implies that \[F^{k}_{ab}=-\kappa A^{2}\epsilon^{k}_{ab} \tag{58}\] \[(\widehat{\mathcal{D}}_{a}\xi)_{A}=\kappa iA\tau^{C}_{aA}\xi_{C} \tag{59}\] The Hamiltonian constraint \(\hat{\mathcal{H}}\Psi=0\) simplifies to \[3i\frac{\partial^{2}(A^{2}\Psi)}{\partial A^{2}}+\hbar\Lambda\frac{\partial^ {3}\Psi}{\partial A^{3}}+2A\tau^{C}_{aA}\xi_{C}\sigma^{aAB}\frac{\partial}{ \partial A}\frac{\partial\Psi}{\partial\xi^{B}}+\tau^{C}_{aA}\xi_{C}\sigma^{ aAB}\frac{\partial\Psi}{\partial\xi^{B}}=0 \tag{60}\] As such, equation (60) does not have separable solutions in \(A\), \(\xi\). #### iv.3.1 Approximately separable solutions Let us make the simplifying assumption that the last term in (60) is small, which we will justify later. We then look for separable solutions \(\Psi(A,\xi)=\chi(A)\phi(\xi)\). For such solutions, (60) implies \[\frac{3i\phi}{A\chi^{\prime}}\frac{d^{2}(A^{2}\chi)}{dA^{2}}+\frac{\hbar\Lambda \phi}{A\chi^{\prime}}\frac{d^{3}\chi}{dA^{3}}+2\tau^{C}_{aA}\xi_{C}\sigma^{ aAB}\frac{d\phi}{d\xi^{B}}=0 \tag{61}\] Clearly, the first two terms depend only on \(A\) whereas the third term depends only on \(\xi\). Let us introduce a separation constant \({\cal E}\) (in general complex) such that \[\frac{3i}{A\chi^{\prime}}\frac{d^{2}(A^{2}\chi)}{dA^{2}}+\frac{\hbar \Lambda}{A\chi^{\prime}}\frac{d^{3}\chi}{dA^{3}}={\cal E} \tag{62}\] \[2\tau^{C}_{aA}\xi_{C}\sigma^{aAB}\frac{d\phi}{d\xi^{B}}=-{\cal E}\phi \tag{63}\] The differential equation for \(\phi\) can be written as \[\xi\frac{d\phi}{d\xi}=-i{\cal E}_{0}\phi \tag{64}\] where \({\cal E}_{0}=2{\cal E}/(\sigma^{+}_{aA}\sigma^{aA+}+\sigma^{-}_{aA}\sigma^{aA +}+\sigma^{+}_{aA}\sigma^{aA-}+\sigma^{-}_{aA}\sigma^{aA-})\) and we have used \(\tau\equiv-i\sigma/2\). The general solution to (64) is \(\phi(\xi)=ce^{-i{\cal E}_{0}\ln\xi}\), where \(c\) is an arbitrary constant. We note the resemblance of this solution to the time-dependent part \(e^{-iEt/\hbar}\) of an energy eigenstate corresponding to energy \(E\). The approximate solution to (62) for \(\chi\) is \[\chi(A)=c_{1}A^{-\frac{9+\sqrt{9+18i{\cal E}-{\cal E}^{2}}+{\cal E}i}{6}}+c_{2 }A^{-\frac{9-\sqrt{9+18i{\cal E}-{\cal E}^{2}}+{\cal E}i}{6}} \tag{65}\] where we have neglected the third-derivative term multiplied by \(\hbar\Lambda\). Note that we can select \({\cal E}\) in (65) such that the last term in (60) is indeed small, as assumed. #### iii.2.2 Unitarity and Torsion The current (37) can be rewritten as \[J_{ai}=|\Phi|^{2}\biggl{\{}\frac{i\tilde{N}\ell_{\rm Pl}^{2}}{2}\epsilon_{ijk }\biggl{(}\frac{\ell_{\rm Pl}^{2}\Lambda}{3}\epsilon_{abc}\frac{1}{\Psi}\frac{ \delta^{2}\Psi}{\delta A_{ck}\delta A_{bj}}\biggr{)}+2\tilde{N}\ell_{\rm Pl}^{ 2}(\widehat{\cal D}_{a}\xi)_{A}\sigma^{AB}_{i}\frac{\delta\ln\Psi}{\delta \xi^{B}}-N_{b}\frac{\ell_{\rm Pl}^{2}}{2\kappa\hbar}F^{b}_{ai}\biggr{\}} \tag{66}\] The guidance equation (43) can be shown to reduce to \[i\frac{dA}{dt}=-\frac{\tilde{N}\ell_{\rm Pl}^{2}}{\chi}\frac{d}{dA}\biggl{(} \frac{\ell_{\rm Pl}^{2}\Lambda}{3}\frac{d}{dA}\biggr{)}\chi+2i\hbar\tilde{N}{ \cal E}A \tag{67}\] for separable solution \(\chi(A)\phi(\xi)\) corresponding to \(\mathcal{E}\). 
Suppose that \(\chi(A)=A^{d}\), where \(d\equiv-(9+\sqrt{9+18i\mathcal{E}-\mathcal{E}^{2}}+\mathcal{E}i)/6\), then (67) becomes \[\frac{dA}{dt}=i\tilde{N}\ell_{\rm Pl}^{2}\bigg{(}\frac{\ell_{\rm Pl}^{2}\Lambda}{3}\frac{d(d-1)}{A^{2}}\bigg{)}+2\hbar\tilde{N}\mathcal{E}A \tag{68}\] Clearly, for large \(A\), \(dA/dt\) increases approximately linearly and, using (57), \(\chi(A)\phi(\xi)\) is pilot-wave unitary. Equation (68) also implies that, in general, \(A(t)\) will have both real and imaginary parts. This implies the presence of both normal and parity-violating torsion [35; 36].

#### v.3.3 Evolution of the fermionic field

Lastly, the evolution of the fermionic field (39) becomes \[\frac{d\xi^{B}}{dt}=i\kappa\bigg{[}9\tilde{N}\kappa\frac{A^{3}(\tau_{A}^{iC})\sigma_{i}^{AB}}{\Lambda}+N^{a}A\tau_{a}^{BC}\bigg{]}\xi_{C} \tag{69}\] Equation (69) implies that \(\xi^{+}\) and \(\xi^{-}\) will quickly differ, even if \(\xi^{B}=\xi\) at \(t=0\).

## VI Discussion

We have described an interacting gravitational-fermionic system in the Ashtekar formulation using the language of pilot-wave theory. We summarise here the key results of our work and potential directions for future research. We have obtained a natural time variable for the combined system, without semiclassical approximations, by parameterizing the variation of the fermionic field in a way that depends on the Kodama state. The total Hamiltonian constraint is expressed as a Schrödinger equation with respect to the fermionic time. Our work suggests that the problem of time in quantum gravity and the problem of the preferred global time required to define pilot-wave dynamics are closely linked. Both are solved simultaneously in our approach, obviating the criticism that the preferred global time is necessarily ad hoc in pilot-wave theory. For future work, it will be interesting to apply this approach to the problem of time to scenarios with additional matter fields coupled to gravity. A straightforward application would be to vary each matter field and then sum over all of them to define a partial time derivative of the quantum state, in analogy with summing over the different spinor components in equation (39). We have derived a local continuity equation over the configuration space and discussed unitarity from both quantum-mechanical and pilot-wave perspectives. In the context of the Tunnelling Wavefunction of the Universe, Vilenkin was able to define a conserved current for configurations in mini-superspace, and it would be interesting to explore the relationship between our conserved current and his [37; 38]. In our conserved current density, the non-normalizable Kodama state is found to naturally factor out from the full quantum state. A natural question that arises for future work is whether the remaining part of the quantum state can be appropriately normalized, thereby proving quantum-mechanical unitarity. We have also given a pilot-wave generalization of the notion of unitarity, which reduces to the quantum-mechanical notion for normalizable states but is also applicable to non-normalizable states. We have shown the existence of approximate pilot-wave unitary states in mini-superspace. We leave for future work whether pilot-wave unitary states exist in general. We have explored pilot-wave dynamics for the physically relevant quantities in our system.
We have retrieved a real spin connection, triad and extrinsic curvature along the system trajectory in configuration space by imposing the reality conditions at the level of the guidance equation for the connection. Interestingly, the guidance equation for the connection naturally resolves into the classical equation of motion, giving us de Sitter spacetime as a solution, plus quantum corrections. We have also shown the existence of pilot-wave equilibrium densities [34], which lead to Born-rule-like probabilities. It is interesting that we have used commutators to quantize the fermionic field [27; 28], and this leads to considerable simplicity in the interpretation of the guidance equations. It will be interesting to explore the violation of the spin-statistics theorem in quantum gravity in the future, as this is closely related to the long-standing question of particle versus field ontology for fermions in pilot-wave theory [23; 24; 39]. It is important to extract testable cosmological predictions from our approach. We know that the connection in FRW is the co-moving horizon, \(A\sim Ha\) [35; 36], so that the evolution of the horizon may be obtained from the guidance equation and this may yield predictions in light of the Hubble tension. Such a link would connect non-local dynamics in pilot-wave theory to the evolution of the Hubble parameter, but this is still speculative.

###### Acknowledgements.

We thank Abhay Ashtekar for encouragement and technical comments on aspects of this work. We thank David Spergel for inspiring SA, years ago, to look at the de Broglie-Bohm framework in the context of cosmology. We thank Laurent Freidel, Joao Magueijo, Lee Smolin and Antonino Marciano for critical and useful comments. IS is grateful to Matt Leifer for encouragement and helpful discussions. IS was supported by a fellowship from the Grand Challenges Initiative at Chapman University.
2310.19745
Cosmological constraints from harmonic space analysis of DES Y3 3x2 clustering
The large-scale distribution of matter, as mapped by photometric surveys like the Dark Energy Survey (DES), serves as a powerful probe into cosmology. It is especially sensitive to both the amplitude of matter clustering ($\sigma_8$) and the total matter density ($\Omega_m$). The fiducial analysis of the two-point clustering statistics of these surveys is invariably done in configuration space where complex masking schemes are easier to handle. However, such an analysis inherently mixes different scales together, requiring special care in modeling. In this study, we present an analysis of DES Y3 3x2 clustering data in harmonic space where small and large scales are better separated and can be neatly modeled using perturbative techniques. Using conservative scale cuts together with the Limber approximation and a Gaussian covariance assumption in a first study, we model the clustering data under a linear bias model for galaxies, incorporating comprehensive treatment for astrophysical effects. We subsequently extend this fiducial analysis to explore a third-order biasing prescription. For our fiducial analysis, we get $S_8=0.789\pm0.020$, consistent with the configuration space analysis presented by the DES collaboration, although under our different modeling choices, we find a preference for a lower $\Omega_m$ and a higher $\sigma_8$. The analysis sets the stage for a future search for signatures of primordial non-Gaussianity and blue-tilted isocurvature perturbations from photometric surveys.
Utkarsh Giri, Sai Chaitanya Tadepalli
2023-10-30T17:09:44Z
http://arxiv.org/abs/2310.19745v2
# Cosmological constraints from harmonic space analysis of DES Y3 3x2 clustering

###### Abstract

The large-scale distribution of matter, as mapped by photometric surveys like the Dark Energy Survey (DES), serves as a powerful probe into cosmology. It is especially sensitive to both the amplitude of matter clustering (\(\sigma_{8}\)) and the total matter density (\(\Omega_{m}\)). The fiducial analysis of the two-point clustering statistics of these surveys is invariably done in configuration space where complex masking schemes are easier to handle. However, such an analysis inherently mixes different scales together, requiring special care in modeling. In this study, we present an analysis of DES Y3 3x2 clustering data in harmonic space where small and large scales are better separated and can be neatly modeled using perturbative techniques. Using conservative scale cuts together with the Limber approximation and a Gaussian covariance assumption in a first study, we model the clustering data under a linear bias model for galaxies, incorporating comprehensive treatment for astrophysical effects. We subsequently extend this fiducial analysis to explore a third-order biasing prescription. For our fiducial analysis, we get \(S_{8}=0.789\pm 0.020\), consistent with the configuration space analysis presented by the DES collaboration, although under our different modelling choices, we find a preference for a lower \(\Omega_{m}\) and a higher \(\sigma_{8}\). The analysis sets the stage for a future search for signatures of primordial non-Gaussianity and blue-tilted isocurvature perturbations from photometric surveys.

## I Introduction

Over the last few decades, our knowledge and understanding of the Universe have grown by leaps and bounds, thanks mainly to massive amounts of high-quality data accumulated by Cosmic Microwave Background (CMB) experiments like WMAP [1] and Planck [2] and spectroscopic and photometric galaxy surveys like the SDSS [3] and DES [4] experiments. This has led to the emergence of a very successful and _mostly_ concordant model: the \(\Lambda\)CDM model, in which the universe is dominated by dark energy in the form of a cosmological constant with structures forming in potential wells sourced by cold dark matter [5; 6]. Of late, however, several tensions have started to appear in the \(\Lambda\)CDM model and have grown in significance over time as the precision of experiments and analysis has advanced [7; 6]. Among these, the Hubble tension [8] and the \(S_{8}\) tension [9] are arguably the two most prominent ones. Both the Hubble tension and the \(S_{8}\) tension pertain to a significant discrepancy between derived values of the parameters from low and high redshift observables. The \(S_{8}\) tension, in particular, refers to the tentative evidence for a slightly lower value of \(S_{8}\) from galaxy survey experiments when compared to its value derived from CMB experiments. Although a clear resolution has not yet been found, these recent developments have shown the importance of performing comparative analyses between different datasets and using different analysis approaches. In this study, we present a harmonic space analysis of the 3x2 clustering statistics of DES Y3 data using the pseudo-\(C_{l}\) framework. The pseudo-\(C_{l}\) estimator is a well-tested and state-of-the-art approach for analyzing two-point statistics in harmonic space.
We validate our setup by first analyzing the DES Y3 cosmic shear angular power spectrum and reproducing the results of Doux _et al._ [10]. We subsequently analyze galaxy clustering, galaxy-galaxy lensing and cosmic shear clustering data, the so-called 3x2 clustering statistics, in a combined setup under the 6-parameter \(\Lambda\)CDM model. For our fiducial 3x2 analysis, we use conservative scale cuts on large and small scales, with a small-scale cut of \(k_{\rm max}=0.1\) Mpc\({}^{-1}\) for galaxies for all tomographic bins. In this first study, we work under the Limber approximation and use a Gaussian covariance model for our dataset, but use a comprehensive modeling for astrophysical and other systematic effects. Finally, we extend our fiducial analysis to explore a third-order biasing prescription. Our results are _broadly_ consistent with the DES Y3 configuration space analysis, with an excellent agreement for the \(S_{8}\) parameter. Notably, we find a preference for a relatively lower \(\Omega_{m}\) and a higher \(\sigma_{8}\) compared to DES Y3 results, which possibly arises due to our different modeling choices. We leave a systematic exploration of the modelling differences for a later work and briefly discuss them in §VIII. After describing our dataset and the underlying theory in §II and §III respectively, we present our data processing in §IV and our modeling in §V. The inference framework is described in §VI. Finally, in §VII we present our results before concluding in §VIII.

## II Dataset

Over six years from 2013 to 2019, the Dark Energy Survey mapped approximately 5000 square degrees of the southern sky in the \(grizY\) bands, using the 4m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. Several end-use catalogues have been publicly released by the collaboration from one and three years of data collection. More details of the survey and the associated data products can be found in [11] and references therein. The dataset used in this work comprises a sample of 10.7 million lens galaxies from the MagLim galaxy catalogue and a sample of 100 million source galaxies in the calibrated shape catalogue released by the DES collaboration as part of their Y3 release [12]. The MagLim lens catalogue is a magnitude-limited sample of galaxies obtained by applying a magnitude cut in the \(i\) band given by \(i<4z_{phot}+18\). A further lower magnitude cut of \(i<17.5\) is applied to remove contamination from stars and other bright objects. This selection criterion was derived in [13] with the goal of optimizing the cosmological constraining power, and results in a catalogue which has decent photometric redshift uncertainties while at the same time having a very high number density. The MagLim catalogue was the first magnitude-limited photometric catalogue used for constraining cosmological parameters. Each galaxy in the catalogue comes with an associated weight corresponding to the inverse of the estimated angular selection function. The redshifts of lens galaxies are estimated using the Directional Neighbourhood Fitting (DNF) algorithm [14] and the entire sample is divided into 6 tomographic bins from \(z=0.2\) to \(z=1.05\) with bin edges \(z=(0.20,0.40,0.55,0.70,0.85,0.95,1.05)\). The real-space analysis by [11] found issues with the last two bins of the lens sample and did not include them in their fiducial analysis. Following in their footsteps, we also discard the last two bins of the MagLim lens sample.
Thus our data comprises all four tomographic bins of the source catalogue and only the first four bins of the MagLim lens catalogue. The source galaxy catalogue consists of 100 million samples with their shapes estimated by the self-calibrating Metacalibration algorithm [15; 16] using information from the \(riz\) bands. The Metacalibration algorithm is an approach for producing an unbiased estimate of shear from observed galaxy ellipticities. It involves first applying a small amount of synthetic shear to a de-convolved image of an observed source, before re-convolving it with a point spread function (PSF) model and estimating the response \(R\) which linearly relates the observed shape to the true shear, bypassing the need for simulation-based calibration. The entire source galaxy catalogue is divided into four tomographic redshift bins of nearly equal number density, with redshifts inferred using a self-organizing map (SOM) algorithm. In Figure 1, we show the redshift distribution of the source and lens samples.

## III Theory

In this section, we present a brief overview of the theoretical framework underpinning the 3x2 analysis, which encompasses measurements derived from cosmic shear, galaxy clustering, and galaxy-galaxy lensing. The 3x2 analysis, at its core, constitutes a statistical approach for making cosmological inferences based on the two-point correlation functions involving the observed projected galaxy density field, denoted as \(\delta_{g}\), and the weak lensing shear field, represented as \(\gamma\). These two fields' two-point auto and cross-correlations yield three distinct sets of observables, as delineated below.

### Galaxy density field

The observed galaxy over-density within a given tomographic bin (\(i\)), when projected onto the celestial sphere, can be expressed as a combination of the projected galaxy density contrast, modulation by magnification (\(\mu\)) and distortion from redshift-space measurements: \[\delta^{i}_{g,\rm obs}(\vec{\theta})=\delta^{i}_{g,\rm D}(\vec{\theta})+\delta^{i}_{g,\rm RSD}(\vec{\theta})+\delta^{i}_{g,\mu}(\vec{\theta}) \tag{1}\] where \[\delta^{i}_{g,\rm D}(\vec{\theta})=\int d\chi W^{i}_{g}(\chi)\delta^{i,(\rm 3D)}_{g}\left(\vec{\theta}\chi,\chi\right) \tag{2}\] is the line-of-sight projection of the 3-D galaxy density contrast at a position \(\vec{\theta}\) on the sky, \(\chi\) is the radial comoving distance to the redshift \(z\) and \[W^{i}_{g}(\chi)=n^{i}_{g}(z)\frac{dz}{d\chi} \tag{3}\] is the normalized window function of galaxies, proportional to the normalized number density distribution \(n^{i}_{g}(z)\) of the lens galaxy samples. We construct the 3D galaxy density contrast using a perturbative bias expansion consisting of all field-level operators allowed by Galilean symmetry \[\delta^{i,(\rm 3D)}_{g}(x)=b^{i}_{1}\delta_{m}(x)+\frac{b^{i}_{2}}{2}\delta^{2}_{m}(x)+b^{i}_{{\cal G}_{2}}{\cal G}_{2}(x)+... \tag{4}\] where \(\delta_{m}\) is the matter overdensity field and \({\cal G}_{2}(x)\) is a second-order Galilean operator (see Appendix A). Here we have assumed that the bias values remain the same within a tomographic bin.
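As an illustration of the line-of-sight projection in Eqs. (2)–(3), the following sketch evaluates the galaxy window \(W_{g}(\chi)=n(z)\,dz/d\chi\) on a grid (pure numpy; the flat-\(\Lambda\)CDM background and the Gaussian \(n(z)\) are toy assumptions, not the DES data products):

```python
import numpy as np

# Toy flat-LCDM background (h and Omega_m are illustrative values)
h, Om, c = 0.7, 0.3, 299792.458                 # c in km/s
H = lambda z: 100.0 * h * np.sqrt(Om * (1 + z)**3 + (1 - Om))  # km/s/Mpc

z = np.linspace(1e-4, 2.0, 2000)
chi = np.cumsum(c / H(z)) * (z[1] - z[0])       # comoving distance chi(z) in Mpc

# Toy normalized redshift distribution for one tomographic bin
nz = np.exp(-0.5 * ((z - 0.5) / 0.1)**2)
nz /= np.trapz(nz, z)

# W_g(chi) = n(z) dz/dchi, with dz/dchi = H(z)/c  (Eq. (3))
W_g = nz * H(z) / c
print(np.trapz(W_g, chi))                       # ~1: the window is normalized in chi
```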
The magnification term is given by \[\delta^{i}_{g,\mu}(\vec{\theta})=C^{i}\kappa^{i}_{g}(\vec{\theta}) \tag{5}\] with the magnification bias amplitude \(C^{i}\), and where the tomographic convergence field is given as \[\kappa^{i}_{g}(\vec{\theta})=\int d\chi W^{i}_{\kappa,g}(\chi)\delta_{m}\left(\vec{\theta}\chi,\chi\right) \tag{6}\] with the lens efficiency window \[W^{i}_{\kappa,g}(\chi)=\frac{3\Omega_{m}H_{0}^{2}}{2}\int_{\chi}^{\chi_{H}}d\chi^{\prime}n^{i}_{g}(\chi^{\prime})\frac{\chi}{a(\chi)}\frac{\chi^{\prime}-\chi}{\chi^{\prime}}. \tag{7}\] The RSD contribution \(\delta^{i}_{g,\text{RSD}}(\vec{\theta})\) is typically very small for photometric surveys and henceforth we will not include it in our analysis.

### Cosmic Shear

The lensing potential \(\psi\) at a given position (\(\vec{\theta}\)) in the sky is a projection of the 3D Newtonian potential \(\Phi\) \[\psi(\vec{\theta})=2\int\frac{d\chi}{\chi}\Phi(\chi,\vec{\theta}\chi)q(\chi) \tag{8}\] where \(q\) is called the lensing efficiency \[q(\chi)=\int d\chi^{\prime}n(\chi^{\prime})\frac{\chi^{\prime}-\chi}{\chi^{\prime}} \tag{9}\] The lensing efficiency \(q\) encodes information about the geometry of the Universe. The 3D potential is related to the matter field via the Poisson equation. The second-order derivatives of the lensing potential define the shear \(\gamma\) and convergence \(\kappa\). The shear field \(\gamma=\gamma_{1}+i\gamma_{2}\) is a spin-2 field and is related to the potential via \[\kappa=\frac{1}{4}(\partial\bar{\partial}+\bar{\partial}\partial)\psi(\vec{\theta});\qquad\gamma(\vec{\theta})=\frac{1}{2}\partial\partial\psi(\vec{\theta})\] \[\kappa=(\psi_{11}+\psi_{22})/2\] \[(\gamma_{1},\gamma_{2})=\big{(}(\psi_{11}-\psi_{22})/2,(\psi_{12}+\psi_{21})/2\big{)} \tag{10}\] In this paper, we work in harmonic space, where the shear \(\gamma\) can be equivalently expressed in terms of the spin-weighted spherical harmonics basis \({}_{s}Y_{lm}\) \[\gamma(\theta)=-\sum_{lm}{}_{\pm 2}Y_{lm}(E_{lm}\pm iB_{lm}) \tag{11}\] where \(E\) and \(B\) are the curl-free and divergence-free modes. The gravitationally induced \(E\)-mode power spectrum is related to the convergence power spectrum \(C_{\kappa\kappa}\) as \[C_{EE}(l)=G(l)^{2}C_{\kappa\kappa}(l) \tag{12}\] where \(G(l)\) is an \(l\)-dependent spin prefactor given by \[G(l)=\frac{1}{(l+1/2)^{2}}\sqrt{\frac{(l+2)!}{(l-2)!}} \tag{13}\] for the spin-2 shear field. The prefactor is \(\sim 1\) for \(l\sim\mathcal{O}(10)\). Beyond this gravitational signal, effects like intrinsic alignment contribute additional power to the observed E-mode power spectrum. We use a non-linear alignment (NLA) [17; 18] model to model the intrinsic alignment. To evaluate \(C_{EE}^{ij}\) and \(C_{gg}^{ij}\) (and cross-spectra \(C_{gE}^{ij}\)) we work under the Limber approximation [19; 20], which is sufficiently accurate for \(l>40\). Under this approximation, the auto and cross power spectrum \(C_{AB}^{ij}(l)\) for two tracers \(A\) and \(B\) for tomographic bins \(i\) and \(j\) can be written as [19; 20] \[C_{AB}^{ij}(l)=\int\frac{d\chi}{\chi^{2}}W^{i}_{A}(\chi)W^{j}_{B}(\chi)P_{mm}\bigg{(}k=\frac{l+1/2}{\chi},\ z(\chi)\bigg{)}\] where \(P_{mm}\) is the matter power spectrum and \(W^{i}_{A}\) is a kernel encoding the weight specific to tracer \(A\) for tomographic bin \(i\).
For the \(E\)-mode, the kernel has a slightly more complex form given by \[W^{i}_{E}(\chi)\equiv G_{l}\frac{3}{2}H_{0}^{2}\Omega_{m}\frac{\chi}{a(\chi)}\int dz^{\prime}n^{i}_{s}(z^{\prime})\bigg{[}\frac{\chi(z^{\prime})-\chi}{\chi(z^{\prime})}\bigg{]} \tag{14}\] where \(n^{i}_{s}\) is the source galaxy number density for bin \(i\). Although our discussion is in terms of the auto-spectrum, it is straightforwardly generalized to get expressions for cross spectra as well.

Figure 1: Normalized photometric redshift distribution \(n(z)\) for the Metacalibration source in four tomographic bins and for the MagLim lens sample in six tomographic bins.

## IV Map making & \(C_{l}\) estimation

### Map making

Before generating shear maps, the DES Y3 source catalogue needs to be corrected for possible multiplicative or additive biases as outlined in [10]. For each redshift bin, we compute the weighted mean ellipticity and subtract that from the observed ellipticities of each galaxy. The Metacalibration algorithm, which self-calibrates the shear statistics, artificially shears the galaxies by a fixed amount. The resulting change in ellipticity is used to calibrate a _total_ shear response \(R\) which is used to normalize the measurement [21; 22] \[e_{i}\rightarrow\frac{e_{i}-\langle e_{i}\rangle}{R} \tag{15}\] The de-trended catalogue is then used to generate a weighted map of the tangential shear field \(\gamma=(\gamma_{1},\gamma_{2})\) from the observed ellipticity \(e=(e_{1},e_{2})\) on a healpix [23; 24] grid with \(N_{side}=4096\) for each tomographic bin [25]. \[\hat{\gamma}(\theta_{p})=(\hat{\gamma}_{1},\hat{\gamma}_{2})=\left(\frac{\sum_{i\in p}w_{i}^{s}e_{1,i}}{\sum_{i\in p}w_{i}^{s}},\frac{\sum_{i\in p}w_{i}^{s}e_{2,i}}{\sum_{i\in p}w_{i}^{s}}\right) \tag{16}\] where the sum is over source galaxies \(i\) in the pixel \(p\), \(w_{i}^{s}\) is the weight assigned to that galaxy and \((e_{1,i},e_{2,i})\) is its measured ellipticity. The corresponding anisotropic noise variance map is obtained by [10; 25] \[\sigma^{\gamma}(\theta_{p})=\frac{\sum_{i\in p}(w_{i}^{s})^{2}(e_{1,i}^{2}+e_{2,i}^{2})}{(\sum_{i\in p}w_{i}^{s})^{2}} \tag{17}\] The above map-making operation is performed for each redshift bin separately and results in four shear maps corresponding to the four tomographic bins of the shape catalogue. For the MagLim lens catalogue, the weighted galaxy counts are similarly deposited on a healpix map of \(N_{side}=4096\). We then subtract the mean number count to get the galaxy overdensity map for each tomographic bin [26]. \[\delta_{g}(\theta_{p})=\frac{\sum_{i\in p}w_{i}^{l}}{\langle\sum_{i\in p}w_{i}^{l}\rangle}-1 \tag{18}\] where \(w_{i}^{l}\) is the weight for lens galaxy \(i\) in pixel \(p\).

### \(C_{l}\) estimation

The angular power spectra \(C_{l}\) are defined for fields on the full sky as \[\langle f_{lm}^{a}{f_{l^{\prime}m^{\prime}}^{b}}^{\dagger}\rangle=C_{l}^{ab}\delta_{ll^{\prime}}\delta_{mm^{\prime}} \tag{19}\] where \(f^{a}\) and \(f^{b}\) are scalar fields [27] defined on the full sky and \(C_{l}^{ab}\) is the cross-spectrum between them. Photometric surveys like DES cover the sky only partially and thus sample a masked version of the full-sky cosmological fields. The masking in configuration space results in an effective coupling of modes in harmonic space, making accurate power spectrum estimation and subsequent likelihood analysis very challenging.
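A minimal sketch of the map-making step of Eqs. (16) and (18), using healpy on a synthetic catalogue (the catalogue arrays and \(N_{side}\) here are illustrative, not the DES data):

```python
import numpy as np
import healpy as hp

nside = 512                                   # illustrative; DES Y3 uses 4096
npix = hp.nside2npix(nside)

# Synthetic source catalogue: positions, ellipticities and weights
n_gal = 100_000
rng = np.random.default_rng(0)
ra, dec = rng.uniform(0, 30, n_gal), rng.uniform(-40, -10, n_gal)
e1, e2 = rng.normal(0, 0.26, (2, n_gal))
w = rng.uniform(0.5, 1.5, n_gal)

pix = hp.ang2pix(nside, ra, dec, lonlat=True)
wsum = np.bincount(pix, weights=w, minlength=npix)

# Weighted shear map, Eq. (16) (first component shown)
gamma1 = np.bincount(pix, weights=w * e1, minlength=npix)
gamma1 = np.divide(gamma1, wsum, out=np.zeros(npix), where=wsum > 0)

# Galaxy overdensity map, Eq. (18), from weighted counts in observed pixels
seen = wsum > 0
delta_g = np.zeros(npix)
delta_g[seen] = wsum[seen] / wsum[seen].mean() - 1.0
```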
In this work, we use the pseudo-\(C_{l}\) framework implemented in the pymaster library [28] for estimation and likelihood analysis of the angular power spectrum statistics of shear and galaxy fields. The pseudo-\(C_{l}\) framework is a near-optimal approach for power spectrum estimation for masked photometric survey maps; we briefly describe the approach here and refer the readers to [28] for more details. A field \(f\) in the sky mapped by a survey like DES with some complex masking/weighting \(w\) can be expressed as \[\tilde{f}^{a}(\theta)=w^{a}(\theta)f^{a}(\theta) \tag{20}\] where \(f^{a}\) is the true underlying cosmological field while \(\tilde{f}^{a}\) is the masked version which we observe. In harmonic space, we have \[\tilde{f}^{a}_{l}=\sum_{l^{\prime}l^{\prime\prime}}D_{l^{\prime}l^{\prime\prime}}w^{a}_{l^{\prime}}f^{a}_{l^{\prime\prime}} \tag{21}\] where \(D_{l^{\prime}l^{\prime\prime}}\) is a spin-dependent coupling factor. As a result of this coupling, the cross-spectrum between fields \(f^{a}\) and \(f^{b}\) is given by \[\tilde{C}_{l}^{ab}=\sum_{l^{\prime}}M_{ll^{\prime}}C_{l^{\prime}}^{ab} \tag{22}\] where \(M_{ll^{\prime}}\) is the mode-coupling matrix for a given mask and leads to coupling/correlation between modes \(l\neq l^{\prime}\). The above relation is not easily invertible. The pseudo-\(C_{l}\) algorithm instead first performs a binning operation on the coupled pseudo-\(C_{l}\) and then employs an effective decoupling operation on the binned power spectrum to estimate the true, unbiased binned \(C_{L}\) of the field. \[\hat{C}_{L}^{ab}=\sum_{L^{\prime}}(M^{ab})^{-1}_{LL^{\prime}}\tilde{C}^{ab}_{L^{\prime}} \tag{23}\] where \[M^{ab}_{LL^{\prime}}=\sum_{l\in L}\sum_{l^{\prime}\in L^{\prime}}M^{ab}_{ll^{\prime}} \tag{24}\] where we have assumed that each \(C_{l}^{ab}\) getting summed in a band appears with a constant weighting \(w=1/N\), where \(N\) is the number of multipoles \(l\) contributing to the bandpower. Finally, we note that before comparing this estimate of the bandpower \(\hat{C}_{L}^{ab}\) to the theory power spectrum in the likelihood analysis, one needs to _forward_ model the effect of binning and decoupling on the theory power spectrum. The angular power spectrum estimation methodology described above is applied to the DES Y3 maps of source and lens samples generated following the methodology described in §IV.1. For shear power spectrum estimation, we use the binning strategy from [10] to estimate the power spectrum of the \(E\) mode of the shear field \(\gamma\), \(C^{ij}_{EE}(l)\), in 32 square-root spaced bins from \(l_{min}=8\) to \(l_{max}=2048\). At linear order, the \(B\)-mode power spectrum is expected to be zero and therefore we exclude it from our analysis. The galaxy-galaxy lensing power spectra \(C^{ij}_{gE}\) are estimated for all the \(4\times 4=16\) bin combinations. For galaxy-galaxy clustering, we only estimate the auto-power spectrum \(C^{ii}_{gg}(l)\) since the cross-power spectra are not expected to contain much meaningful signal. We thus generate a total of 30 3x2 angular power spectrum combinations, each in 32 bins from \(l=8\) to \(l=2048\). For the galaxy auto-spectrum, we estimate the noise contribution by first estimating the homogeneous Poisson noise from the observed galaxy number density, then applying the mask-dependent coupling operations using pymaster, and subtracting that from the total auto-spectrum. The lensing noise is similarly computed and removed from the shear auto-spectrum. More details of the procedure can be found in [10; 25; 26].
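To make the pipeline concrete, here is a minimal pseudo-\(C_{l}\) sketch with pymaster on a synthetic spin-0 map (mask, map and binning are toy inputs; the DES analysis uses the survey mask and the square-root spaced bins described above):

```python
import numpy as np
import healpy as hp
import pymaster as nmt

nside = 256
cl_true = 1.0 / (np.arange(3 * nside) + 10.0)     # toy input spectrum
delta = hp.synfast(cl_true, nside)                # synthetic overdensity map
mask = np.zeros(hp.nside2npix(nside))
mask[: mask.size // 3] = 1.0                      # toy partial-sky mask

f0 = nmt.NmtField(mask, [delta])                  # masked spin-0 field
edges = np.unique(np.round(np.linspace(np.sqrt(8), np.sqrt(2 * nside), 16)**2)).astype(int)
b = nmt.NmtBin.from_edges(edges[:-1], edges[1:])  # ~square-root spaced bands

w = nmt.NmtWorkspace()
w.compute_coupling_matrix(f0, f0, b)              # mode-coupling matrix, Eq. (22)
cl_coupled = nmt.compute_coupled_cell(f0, f0)     # coupled pseudo-C_l
cl_hat = w.decouple_cell(cl_coupled)              # unbiased bandpowers, Eq. (23)
print(b.get_effective_ells(), cl_hat[0])
```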
The estimated power spectra of the DES Y3 maps are presented in Figures 2 and 3.

## V Modelling

We closely follow the prescription presented in [11] to model the 3x2 signal, noise and systematics. We work under the \(\Lambda\)CDM model with six cosmological parameters - \(\Omega_{m}\), \(\Omega_{b}\), \(h\), \(n_{s}\), \(\sigma_{8}\) and \(m_{\nu}\), which have their usual meaning. We use the CCL public library [29] with the default CAMB backend [30] and use the 'takahashi' version of HALOFIT [31] to model the non-linear power spectrum. For our fiducial analysis, where we use the linear bias model for galaxies, we have four linear bias parameters, \((b_{1}^{1},b_{1}^{2},b_{1}^{3},b_{1}^{4})\), one for each of the four tomographic redshift bins. This choice of constant bias per bin has been found to be a very good approximation by [32] and is used in all of the DES Y3 analysis papers (however, see [33] for a discussion of systematics associated with this simplified approach). The angular power spectrum for galaxies in bins \(i\) and \(j\) is given by \[C^{ij}_{gg}(l)=b_{1}^{i}b_{1}^{j}C(l) \tag{25}\] where \(C(l)\) is the projected matter power spectrum computed using the Limber expression of §III.2. The bulk radial velocities of galaxies, or the redshift space distortions, produce negligible modification to the underlying power spectrum for DES Y3, hence we do not model them. Gravitational lensing of photons by intervening matter alters the number count of galaxies by altering their sizes and magnitudes around the survey selection cutoff.

Figure 2: _Top panel_. Shear (\(E\)-mode) power spectrum for DES Y3. The \(C_{l}\)'s are binned into 32 square-root spaced bins from \(l_{min}=8\) to \(l_{max}=2048\). The errors come from the diagonal covariance matrix computed analytically using pymaster under the DES-Y3 specifications.

We include a treatment for the resulting magnification bias using fixed bias values provided by DES and given by \(b_{mag}=(0.42,0.30,1.76,1.94)\). The observed ellipticities in the galaxy shapes are sourced not just by the gravitational shear but also by the large-scale tidal field in which the galaxies form and reside. This leads to correlated ellipticities which need to be modelled. To model this intrinsic alignment of galaxies, we use the non-linear alignment (NLA) model [17; 18]. The bias in the NLA model is given by \[b_{\rm IA}(z)=a\bar{C}\frac{\rho_{\rm crit}\Omega_{m}}{D(z)}\bigg{(}\frac{1+z}{1+z_{0}}\bigg{)}^{\eta} \tag{26}\] where \(a\) and \(\eta\) are the parameters of the model, \(z_{0}\) is the mean source catalogue redshift set to 0.62 in the analysis and \(\bar{C}=5\times 10^{-14}\,h^{-2}{\rm M}_{\odot}^{-1}{\rm Mpc}^{3}\) is a normalization constant. A systematic bias to the redshift distribution \(n(z)\) of the MagLim catalogue is modeled using _shift_ and _scaling/stretch_ parameterizations, i.e. there are four shift parameters \(\mu_{i}\) to model a possible shift in the mean of \(n^{i}(z)\) for each bin \(i\) and four scaling parameters \(\sigma_{i}\) which model a possible dilation of the given distribution. The modeling can be mathematically expressed as \[n(z)\rightarrow\frac{1}{\sigma}n\bigg{(}\frac{z-\mu-\langle z\rangle}{\sigma}+\langle z\rangle\bigg{)} \tag{27}\] The dilation has been found to have a significant role in characterizing the lens sample properly. For the source sample, however, a four-parameter modelling of a shift in the mean has been found to be sufficient to capture any dominant systematic.
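A small numpy sketch of the shift/stretch transformation of Eq. (27) (the Gaussian \(n(z)\) below is a toy stand-in for the calibrated redshift distributions):

```python
import numpy as np

def shift_stretch(z, nz, mu, sigma):
    """Apply n(z) -> (1/sigma) n((z - mu - <z>)/sigma + <z>), Eq. (27)."""
    zbar = np.trapz(z * nz, z) / np.trapz(nz, z)    # mean redshift <z>
    z_eval = (z - mu - zbar) / sigma + zbar
    return np.interp(z_eval, z, nz, left=0.0, right=0.0) / sigma

z = np.linspace(0.0, 2.0, 1000)
nz = np.exp(-0.5 * ((z - 0.6) / 0.12)**2)           # toy lens n(z)
nz_sys = shift_stretch(z, nz, mu=0.01, sigma=1.05)  # shifted and dilated
print(np.trapz(nz, z), np.trapz(nz_sys, z))         # normalization preserved
```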
Finally, we model a possible multiplicative bias to the shear power spectrum via \(m_{i}\) for each bin \[C^{ij}(l)=(1+m_{i})(1+m_{j})C^{ij}(l) \tag{28}\]

Figure 3: _Top panel_. Galaxy auto-spectrum \(C_{gg}(l)\) for the four MagLim tomographic bins. The shot noise contribution has been removed. _Bottom panel_. Galaxy-galaxy lensing \(C_{gE}(l)\) power spectra for the MagLim catalogue and Metacalibration source catalogue for all tomographic bin combinations.

This comprises our description of the 28-parameter fiducial model. A more detailed description can be found in [11]. When performing our analysis using a nonlinear galaxy bias model, we will refer to the expression for the galaxy one-loop power spectrum up to cubic order in bias operators given in Appendix A. For a 3x2 analysis using the nonlinear galaxy bias model in Appendix A, we include 8 additional free bias parameters, \(b_{2}^{i}\) and \(b_{\nabla^{2}\delta}^{i}\), for each of the 4 tomographic bins \(i\). We set the remaining two nonlinear bias parameters as \(b_{\mathcal{G}_{2}}^{i}=-2/7(b_{1}^{i}-1)\) and \(b_{\Gamma_{3}}^{i}=23/42(b_{1}^{i}-1)\) using the parametric forms inspired by co-evolution models [34; 35]. **Scale cuts:** In our fiducial 3x2 analysis, for the galaxy overdensity, we use a simple but well-motivated small-scale cut of \(k_{\rm max}=0.1\) Mpc\({}^{-1}\) for each tomographic bin, in line with the real-space cuts used in [36; 32]. For the shear component, we use the cuts generated in [10] for their fiducial analysis. Finally, after binning the data in square-root space, we discard the first three bins from each of the 3x2 statistics. This roughly corresponds to a large-scale cut of \(l_{min}\approx 45\). We discuss the motivation and reasoning behind these choices in §VIII. The size of our data vector for the 3x2 analysis after these cuts is 177. Note however that for the 1x2 shear power spectrum analysis, we do not discard the low-\(l\) bins as we want to keep our analysis as close to [10] as possible. The size of the data vector for the 1x2 analysis is 119. In our analysis based on the nonlinear bias model, we consider a uniform scale cut of \(k_{\rm max}=0.3\) Mpc\({}^{-1}\) for the galaxy clustering and galaxy-galaxy lensing data vectors. We haven't fully explored the scale cuts for the nonlinear bias model and note that this particular choice is motivated by results presented in [37], where the authors obtain unbiased results from a similar nonlinear bias model up to \(k_{\rm max}\lesssim 0.3\) Mpc\({}^{-1}\).

## VI Parameter inference

In order to infer the cosmological parameters using the 3x2 clustering data, we use the Bayesian framework. Under the Bayesian approach, the posterior distribution \(\mathcal{P}(\Theta|\mathcal{D})\) over the model parameters \(\Theta\) of our fiducial model described in §V, for a dataset \(\mathcal{D}\), is given by \[\mathcal{P}(\Theta|\mathcal{D})\propto\mathcal{P}(\mathcal{D}|\Theta)\mathcal{P}(\Theta)=\mathcal{L}(\Theta)\mathcal{P}(\Theta) \tag{29}\] where \(\mathcal{P}(\Theta)\) is the joint prior distribution over the model parameters and \(\mathcal{L}(\Theta)\equiv\mathcal{P}(\mathcal{D}|\Theta)\) is the likelihood. We assume prior independence and define \(\mathcal{P}(\Theta)\) as a product of independent distributions, either uniform or Gaussian.
Our likelihood function is a Gaussian noise model for the band-powers given by \[\mathcal{L}=\exp\bigl{(}-\chi^{2}(\Theta,\mathcal{D})/2\bigr{)} \tag{30}\] with \[\chi^{2}=\sum_{ij}\big{(}\mathcal{D}_{i}-C_{i}^{m}\big{)}Cov^{-1}_{ij}\big{(}\mathcal{D}_{j}-C_{j}^{m}\big{)} \tag{31}\] where \(\mathcal{D}\) is the data vector of 3x2 clustering band-powers estimated following the steps described in the last section and \(C^{m}\) is the band-power estimated as a function of the model parameters. The indices \(i\), \(j\) run over all the elements of the 3x2 band-powers left after scale cuts are applied. For the covariance \(Cov\), we work under the Gaussian approximation and estimate the disconnected contribution to the power spectrum covariance. We use pymaster to estimate the Gaussian covariance under the improved Narrow Kernel Approximation (iNKA) following [25] and [26]. This has been shown to be a decent approximation for analyses in configuration space and also for the shear power spectrum analysis presented in [10]. The assumption of a Gaussian likelihood for band-powers above \(L\sim\mathcal{O}(50)\) is reasonable and has been shown to work well in the past. With our square-root spaced binning strategy, the Gaussian likelihood assumption for \(L<100\) remains an assumption which we will explore in a future work. We sample the posterior using emcee [38]. For our 28-parameter fiducial 3x2 analysis, we use the exact same priors used in [11], with the prior volume on \(A_{s}\) linearly mapped to a volume on \(\sigma_{8}\). The cosmic shear analysis uses a subset of the 28 parameters and the priors for those are also the same and come from [10; 11].

## VII Results

### Cosmic shear analysis

We begin by presenting results for the cosmic shear angular power spectrum analysis, closely following the steps of Doux _et al._ [10]. Our dataset, including the data processing and preparation as well as the scale cuts, is kept as close as possible to the fiducial analysis of Doux _et al._ [10]. Our power spectrum covariance is a simple Gaussian covariance computed using the iNKA approach at a fiducial cosmology using CCL. Our model includes 6 \(\Lambda\)CDM cosmology parameters together with four source distribution shift parameters \(\mu_{i}^{s}\) and four calibration parameters \(m_{i}^{s}\), one for each bin. Our intrinsic alignment model is a two-parameter NLA model. Our results for the main parameters \(\Omega_{m},\sigma_{8}\) and \(S_{8}\equiv\sigma_{8}\sqrt{\Omega_{m}/0.3}\) are: \[\Omega_{m} =0.273^{+0.053}_{-0.072}\] \[\sigma_{8} =0.86^{+0.10}_{-0.10}\] \[S_{8} =0.801^{+0.023}_{-0.023}\] Our results are in excellent agreement with the harmonic space analysis presented in Doux _et al._ [10] and also with the configuration space analyses presented in Amon _et al._ [21], Secco _et al._ [22]. The small \(\sim 0.3\sigma\) difference in \(S_{8}\) compared to Doux _et al._ [10] possibly arises due to slight differences in our modelling choices. In Figure 4 we show the marginalized 1D and 2D posteriors for the relevant parameters. The posterior agrees well with Figure 12 of Doux _et al._ [10]. For the 119-element 1x2 data vector of the angular power spectrum, our best-fit \(\chi^{2}\) is 130.8 with a p-value of \(p=0.217\).
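For concreteness, the sampling setup of §VI can be sketched as follows, with the Gaussian bandpower likelihood of Eqs. (30)–(31) wired into emcee (the two-parameter model, data and priors here are placeholders for the full 28-parameter pipeline):

```python
import numpy as np
import emcee

# Placeholder data vector, model and covariance (stand-ins for the 3x2 bandpowers)
x = np.linspace(0, 1, 20)
model = lambda theta: theta[0] + theta[1] * x
cov = 0.05**2 * np.eye(x.size)
cov_inv = np.linalg.inv(cov)
data = model([1.0, 2.0]) + np.random.default_rng(1).multivariate_normal(np.zeros(x.size), cov)

def log_prob(theta):
    if not (0 < theta[0] < 5 and 0 < theta[1] < 5):  # uniform prior box
        return -np.inf
    r = data - model(theta)                          # Eq. (31): chi^2 = r^T Cov^-1 r
    return -0.5 * r @ cov_inv @ r                    # Eq. (30): ln L = -chi^2 / 2

nwalkers, ndim = 16, 2
p0 = np.array([1.0, 2.0]) + 1e-2 * np.random.default_rng(2).normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))
```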
### 3x2 analysis

In this section, we present the results of our 3x2 clustering analysis, which uses the 6-parameter \(\Lambda\)CDM model together with the linear model for galaxy bias, with an additional 18 nuisance parameters including four lens distribution shift parameters (\(\mu_{i}^{l}\)), four lens distribution dilation parameters (\(\sigma_{i}^{l}\)), four source distribution shift parameters (\(\mu_{i}^{s}\)) and four calibration parameters (\(m_{i}^{s}\)), one for each bin. We use a 2-parameter NLA model for intrinsic alignment. A detailed description of these modelling parameters is provided in §V. Our marginalized constraints for the main parameters \(\Omega_{m},\sigma_{8}\) and \(S_{8}\equiv\sigma_{8}\sqrt{\Omega_{m}/0.3}\) are: \[\Omega_{m} =0.265^{+0.025}_{-0.031}\] \[\sigma_{8} =0.843^{+0.060}_{-0.060}\] \[S_{8} =0.789^{+0.020}_{-0.020}\] In Figure 5, we show the two-dimensional joint posterior distributions of these parameters as well as the marginalized 1D distributions. Our constraints broadly (within \(\sim 2\sigma\)) agree with those presented in [11]. In particular, the agreement between the \(S_{8}\) values from the configuration space analysis of Abbott _et al._ [11] and our harmonic space analysis is very good. For the 177-element 3x2 data vector of the angular power spectrum, our best-fit \(\chi^{2}\) is 174.8 with a p-value of \(p=0.533\). We reiterate that for our 3x2 analysis, we discard the first 3 bins of all our binned power spectra (corresponding to an \(l_{min}\gtrsim 40\)) but include those bins in our 1x2 analysis where we have tried to replicate the results of [10]. We discuss more differences between our analysis and that of [11] in the discussion section. Our 3x2 analysis based on a nonlinear galaxy bias model consists of 30 nuisance parameters and 5 cosmological parameters. In this analysis, we set the energy density of massive neutrinos to zero. Our marginalized constraints are: \[\Omega_{m} =0.306^{+0.026}_{-0.025}\] \[\sigma_{8} =0.781^{+0.035}_{-0.043}\] \[S_{8} =0.786^{+0.014}_{-0.016}.\] Using 302 elements in our data vector for the 3x2 angular power spectra, we obtain a best-fit \(\chi^{2}=346.2\) with a p-value of \(p=0.041\). Our constraints agree with those presented in [11]. Once again, we note a good agreement for the \(S_{8}\) values between the harmonic space analysis in this work and the configuration space analysis of [11].

Figure 4: MCMC constraints from the cosmic shear \(E\)-mode power spectrum analysis. We present 1D and 2D marginalized posterior distributions for \(\Omega_{m}\), \(\sigma_{8}\) and \(S_{8}\). The contours in darker shade denote the 68% confidence interval, with the lighter shade denoting the remaining region of the 95% confidence interval.

Figure 5: MCMC constraints from a linear-bias model based analysis of the 3x2 clustering of DES Y3 data. We present 1D and 2D marginalized posterior distributions for \(\Omega_{m}\), \(\sigma_{8}\) and \(S_{8}\). The contours in darker shade denote the 68% confidence interval, with the lighter shade denoting the remaining region of the 95% confidence interval.

## VIII Discussion

Below we describe the approximations made in our analysis and their limitations. Although we do not ourselves perform a simulation-based validation of our approach, almost all our choices are guided and backed by validation performed by DES and other previous studies, and we refer the reader to them.

* Our biggest assumption and limitation is the use of the Gaussian covariance approximation, which potentially underestimates errors by \(\mathcal{O}(10)\%\).
Although [39] showed that the non-Gaussian part of the covariance has a negligible impact on both the maximum posterior and parameter constraints for a configuration space analysis, one would ideally want to show this explicitly for a harmonic-space analysis. This could be even more important for our nonlinear bias model, where we consider data vectors up to a fiducial scale cut of \(k_{\text{max}}=0.3\) Mpc\({}^{-1}\). We postpone this to a later work, along with a more detailed exploration of the scale cuts for the nonlinear model. * We bin modes of the angular power spectra in square-root spaced bins following [10]. With this binning strategy, the Gaussian likelihood assumption for \(L\lesssim 50\) remains untested; we will explore it in a future work. * When deciding on scale cuts for the 3x2 analysis, we follow a very conservative strategy and not only impose a small-scale cut for galaxies of \(k_{max}=0.1\,\text{Mpc}^{-1}\) to mitigate issues related to bias and baryonic effects, but also a large-scale cut in \(l_{min}\) for both galaxies and cosmic shear. We exclude bins with \(l\lesssim 40\), corresponding to the first 3 bins under our square-root spaced binning strategy, for all the 3x2 clustering statistics. The pymaster based Gaussian covariance estimation is slightly inaccurate at \(l<40\), thus discarding these modes is a good conservative choice. Additionally, the Gaussian likelihood model assumption is a better description of the dataset without the largest-scale modes. This also helps us evade any unknown systematics which typically afflict photometric surveys on the largest scales and produce spurious power. As a result, the Limber approximation is good enough for our use case and so we work under the Limber approximation for all our 3x2 clustering data. * We use the HALOFIT fitting formula to model the linear and nonlinear matter power spectra in our fiducial analysis. HALOFIT is fit on \(N\)-body simulation data and has a scale-dependent accuracy which can be worse than 5% at some scales of interest. Alternatives to HALOFIT include emulators like cosmic emu [40], bacco [41] and anzu [42], but these emulators have limited coverage in cosmological parameter space and hence can't be used to sample broad parameter space volumes. Reference [32] found HALOFIT to be sufficiently accurate for DES Y3 (see also [39]). * We ignore B-modes in cosmic shear. The B-modes are not induced by gravitational lensing at linear order but can be induced by higher-order effects and by astrophysical effects. Moreover, \(E/B\) mode mixing due to miscalibration of the PSF is a systematic effect that can fake the B-mode signal. Several diagnostic and validation studies have found B-modes consistent with zero for DES Y3 [10; 43]. * We use the 2-parameter Non-linear Intrinsic Alignment model to characterize galaxy intrinsic alignments. This is a limiting case of the TATT model used in the DES fiducial analysis but has been shown to be a sufficiently good model for DES data. * The 1-sigma errors derived from the nonlinear bias model are noticeably reduced compared to our fiducial analysis. This reduction may, in part, be linked to the assumption of Gaussian covariance along with the access to a larger number of data points from smaller scales. Furthermore, it's worth emphasizing the importance of conducting a more comprehensive investigation into the scale cuts. We defer this to a future analysis.
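Given the prominence of the Gaussian covariance assumption in the list above, a sketch of how such a covariance can be obtained with pymaster follows, continuing the toy spin-0 example of §IV (the exact call signature follows the pymaster covariance utilities and should be checked against the library documentation; treat it as an assumption):

```python
import numpy as np
import pymaster as nmt

# Continuing the earlier toy example: f0, b, w and cl_true as defined there.
cw = nmt.NmtCovarianceWorkspace()
cw.compute_coupling_coefficients(f0, f0)   # mask-dependent coupling coefficients

# Gaussian (disconnected) covariance of the decoupled bandpowers under iNKA;
# for an auto-spectrum, all four input spectra are the same fiducial C_l.
cov = nmt.gaussian_covariance(cw, 0, 0, 0, 0,
                              [cl_true], [cl_true], [cl_true], [cl_true],
                              wa=w, wb=w)
sigma = np.sqrt(np.diag(cov))              # 1-sigma bandpower errors
```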
Under these modeling assumptions and simplifications, we have analyzed the MagLim lens and Metacalibration source catalogues released by DES Y3, by modeling the two-point clustering of galaxy-galaxy, galaxy-lensing, and cosmic shear in harmonic space. Our results are consistent with the results of the same two-point clustering study performed by DES in configuration space, although we find a preference for a slightly lower \(\Omega_{m}\) and a higher \(\sigma_{8}\) in the harmonic space analysis. While we have tried to keep our analysis as similar to the DES Y3 fiducial analysis as possible, differences remain that can explain this shift. We plan to use the pipeline developed in this work to investigate signatures of primordial non-Gaussianity in photometric datasets using its effect on the large-scale galaxy power spectrum [44] as well as the cross-correlation of galaxy maps with other large-scale tracers [45; 46; 47]. Recently it was shown that non-perturbative approaches involving soft and collapsed limits of higher-order correlations [48; 49; 50] can provide strong constraints on \(f_{\text{NL}}\) using very non-linear scales. It would be interesting to explore the viability of such an approach on photometric datasets. Additionally, we are keen to investigate constraints on the amplitude of CDM blue-tilted isocurvature fluctuations [51]. As shown in [52], the analysis of galaxy clustering for large blue-tilted isocurvature fluctuations requires a careful treatment of the divergences in various one- and higher-order loop terms. This involves a consistent renormalization using relevant counter-terms and a complete operator basis up to cubic order in the bias expansion. It will be interesting to compare the constraints obtained from a 3x2 analysis with a recent forecast presented in [52].

## Acknowledgements

We thank Keith Bechtol and Moritz Muenchmeyer for useful discussions. Support for this research was provided by the University of Wisconsin - Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation. We have extensively used several python libraries including numpy [53], matplotlib [54], CLASS [55], getdist [56] and SciencePlots [57].

## Appendix A Nonlinear bias model

The galaxy clustering observable in harmonic space is determined from the 3D galaxy-galaxy auto-correlation power spectrum \(P_{gg}\). In linear bias theory, the galaxy/halo over-density is modeled at linear order in the underlying matter overdensity as \[\delta_{g}=b_{1}\delta_{m}+\epsilon, \tag{10}\] where \(\epsilon\) is the stochastic part which is not correlated with the large-scale matter density field. The above linear model yields the power spectrum \(P_{gg}\) as \[P_{gg}(k,z)=b_{1}^{2}(z)P_{\text{mm}}(k,z)+P_{\text{shot}}(z). \tag{11}\] In a purely perturbative approach, the matter power spectrum \(P_{\text{mm}}\) at linear bias order is taken to be the linear power spectrum \(P_{\text{lin}}\). However, recent analyses [58; 37] have demonstrated that substituting the linear matter power spectrum \(P_{mm}\) with the complete nonlinear power spectrum results in improved data fitting. This substitution also yields unbiased constraints, often accompanied by a modest reduction in the \(\chi^{2}\) value, particularly from access to slightly smaller scales.
For example, the DES Y3 fiducial analysis [11] uses the nonlinear matter power spectrum from the HALOFIT fitting function [59]. This approach extends the \(k\)-range over which one can probe the observables with manageable theory error margins from a linear bias expansion. The \(k\)-range can be determined by evaluating suitable scale cuts such that one obtains unbiased results on the cosmological parameters. This is often achieved by testing the accuracy of the underlying theoretical model on mock galaxy catalogs. However, it is well known that linear bias theory is incomplete since gravitational effects naturally induce higher-order operators. Hence, we also consider a nonlinear galaxy bias expansion (in Eulerian coordinates) including all operators allowed by Galilean symmetry up to cubic order in the magnitude of the linear matter overdensity \(\delta^{(1)}\) [60]: \[\delta_{g}(x) =\sum_{\mathcal{O}}\left(b_{\mathcal{O}}+\epsilon_{\mathcal{O}}(x)\right)\mathcal{O}(x)+b_{\epsilon}\epsilon(x) \tag{12}\] \[=b_{1}\delta(x)+b_{\epsilon}\epsilon(x)\] \[+\frac{b_{2}}{2}\delta^{2}(x)+b_{\mathcal{G}_{2}}\mathcal{G}_{2}(x)+\epsilon_{\delta}(x)\delta(x)\] \[+b_{\delta\mathcal{G}_{2}}\delta(x)\mathcal{G}_{2}(x)+\frac{b_{3}}{6}\delta^{3}(x)+b_{\mathcal{G}_{3}}\mathcal{G}_{3}(x)+b_{\Gamma_{3}}\Gamma_{3}(x)\] \[+\epsilon_{\delta^{2}}(x)\delta^{2}(x)+\epsilon_{\mathcal{G}_{2}}(x)\mathcal{G}_{2}(x)\] \[+b_{\nabla^{2}\delta}\nabla^{2}\delta(x)\] where all the operators \(\mathcal{O}\) in the above expression are considered to be coarse-grained and the subscript \(\Lambda\) is dropped for brevity. Eq. (12) is a double expansion in density fluctuations and their derivatives, since every insertion of a Laplacian is equivalent to a second-order correction to an operator \(\mathcal{O}\) [61]. Hence, the derivative operator in the last line of Eq. (12) is counted approximately as cubic order in the bias expansion. The operator sets \(\{\delta^{2},\mathcal{G}_{2},\epsilon_{\delta}\delta\}\) and \(\{\mathcal{G}_{2}\delta,\delta^{3},\mathcal{G}_{3},\Gamma_{3},\epsilon_{\delta 2}\delta^{2},\epsilon_{\mathcal{G}_{2}}\mathcal{G}_{2},\nabla^{2}\delta(x)\}\) are second and third order respectively, and we refer the readers to [60] for details. Meanwhile, the \(\epsilon_{\mathcal{O}}\) in Eq. (12) are the stochastic noise contributions to galaxy formation. These are considered to be uncorrelated with the long-wavelength fluctuations at large scales. The one-loop galaxy power spectrum at \(O\left(\left(\delta^{(1)}\right)^{4}\right)\) can be written as \[P_{gg}(k,z) =b_{1}^{2}(z)P_{\text{NL}}(k,z)+P_{gg}^{\text{NLO}}(k,z)+P_{gg,\nabla^{2}\delta}(k,z)\] \[+P_{gg,\epsilon}(k,z) \tag{13}\] where we take the non-linear matter power spectrum \(P_{\text{NL}}\) from the HALOFIT fitting function and evaluate the remaining next-to-leading order (NLO) one-loop contributions (without stochastic terms) from CLASSPT [62].
The NLO one-loop power spectrum contributions are given as \[P_{gg}^{\text{NLO}}(k,z)/D^{4}(z) =b_{1}(z)b_{2}(z)\mathcal{I}_{\delta^{(2)}\delta^{2}}(k)\] \[+2b_{1}(z)b_{\mathcal{G}_{2}}(z)\mathcal{I}_{\delta^{(2)}\mathcal{G}_{2}}(k)\] \[+b_{1}(z)\left(2b_{\mathcal{G}_{2}}(z)+\frac{4}{5}b_{\Gamma_{3}}(z)\right)\mathcal{F}_{\mathcal{G}_{2}}(k)\] \[+b_{2}(z)b_{\mathcal{G}_{2}}(z)\mathcal{I}_{\delta^{2}\mathcal{G}_{2}}(k)+\frac{1}{4}b_{2}^{2}(z)\mathcal{I}_{\delta^{2}\delta^{2}}(k)\] \[+b_{\mathcal{G}_{2}}^{2}(z)\mathcal{I}_{\mathcal{G}_{2}\mathcal{G}_{2}}(k) \tag{14}\] where \(D(z)\equiv D_{+}(z)/D_{+}(0)\) is the normalized growth factor, and the loop contributions \(\mathcal{I}_{\mathcal{O},\mathcal{O}^{\prime}}(k)\) are given in [52]. For the leading derivative term, we write \[P_{gg,\nabla^{2}\delta}(k,z)=-2b_{1}(z)b_{\nabla^{2}\delta}(z)\left(\frac{k}{k_{*}}\right)^{2}P_{\text{NL}}(k,z). \tag{15}\] Our approach, which involves incorporating a nonlinear matter power spectrum for the leading higher-derivative bias terms, aligns with a similar strategy presented in previous works, including [37]. We set the clustering scale \(k_{*}\) for each tomographic bin \(i\) to the following fiducial expression: \[k_{*}(z_{i})=0.4\ D^{-4/3}(z_{i})(\text{h/Mpc}) \tag{20}\] where \(z_{i}\) is the mean redshift of tomographic bin \(i\). This particular choice of redshift dependence for \(k_{*}\) is motivated by a similar expression for \(k_{\text{NL}}\) obtained from the linear dimensionless matter power spectrum. Contributions from other operators, such as \(\delta^{3}\), \(\mathcal{G}_{3}\) and \(\delta\mathcal{G}_{2}\), do not appear as they are eliminated during the renormalization process. In this analysis, we have ignored the stochastic contributions.
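For reference, the linear-bias baseline of Eq. (11) can be evaluated numerically as in the short pyccl sketch below (the cosmology, bias and number density values are illustrative; pyccl's HALOFIT spectrum stands in for the CAMB+HALOFIT pipeline used in the main analysis):

```python
import numpy as np
import pyccl as ccl

# Illustrative flat-LCDM cosmology with HALOFIT nonlinear power
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.7, n_s=0.96,
                      sigma8=0.81, matter_power_spectrum='halofit')

z, b1, nbar = 0.5, 1.5, 1e-3          # redshift, linear bias, galaxy density [Mpc^-3]
a = 1.0 / (1.0 + z)
k = np.logspace(-3, 0, 200)           # wavenumbers in Mpc^-1

P_NL = ccl.nonlin_matter_power(cosmo, k, a)   # HALOFIT P_mm(k, z)
P_gg = b1**2 * P_NL + 1.0 / nbar              # Eq. (11) with Poisson shot noise
```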
2308.03557
Data-driven robust MPC of tiltwing VTOL aircraft
This paper investigates robust tube-based Model Predictive Control (MPC) of a tiltwing Vertical Take-Off and Landing (VTOL) aircraft subject to wind disturbances and model uncertainty. Our approach is based on a Difference of Convex (DC) function decomposition of the dynamics to develop a computationally tractable optimisation with robust tubes for the system trajectories. We consider a case study of a VTOL aircraft subject to wind gusts and whose aerodynamics is defined from data.
Martin Doff-Sotta, Mark Cannon, Marko Bacic
2023-08-07T13:09:39Z
http://arxiv.org/abs/2308.03557v1
# Data-driven robust MPC of tiltwing VTOL aircraft

###### Abstract

This paper investigates robust tube-based Model Predictive Control (MPC) of a tiltwing Vertical Take-Off and Landing (VTOL) aircraft subject to wind disturbances and model uncertainty. Our approach is based on a Difference of Convex (DC) function decomposition of the dynamics to develop a computationally tractable optimisation with robust tubes for the system trajectories. We consider a case study of a VTOL aircraft subject to wind gusts and whose aerodynamics is defined from data.

## 1 Introduction

Urban Air Mobility (UAM) has the potential to transform transportation of people and goods in congested cities [1] while potentially reducing ground traffic. A key enabler of this technology is a class of zero carbon emission eVTOL aircraft concepts [2], many based on tiltrotor, tiltduct or tiltwing vehicle configurations powered by batteries or by making use of a zero carbon fuel like hydrogen [3]. However, the lower energy density of energy storage options relative to liquid carbon-based fuels restricts the operational range, hover time and cruise speeds. Additionally, such concepts require a transition between vertical and horizontal flight, further complicating operational scenarios. Consequently, limited energy and transition envelope constraints impede the throughput of flight operations by limiting the number of flights, timely allocation of landing slots, duration of holding patterns in hover, and separation between vehicles. To maximise throughput, it is clear that both the energy spent in a non-wing-borne flight phase and the time and space needed to transition between thrust-borne and wing-borne flight should be minimised. This paper proposes a robust control methodology for transitions of VTOL aircraft between wing-borne and thrust-borne flight phases in the presence of wind gusts, model uncertainty and state constraints. We explore a Model Predictive Control framework due to its potential to outperform classical control architectures by optimising future control sequences with respect to a specified objective (usually minimum time or minimum energy transition) with explicit constraint handling using a model of the vehicle. This methodology can also provide guarantees of robustness to model uncertainty and external disturbances [4]. Although we develop here MPC policies for a generic class of tiltwing aircraft, the ideas presented in this paper are equally applicable to tilt rotors and tilt ducts. Various approaches have been proposed to address this problem. In [5], a cascaded PID control architecture was proposed for the transition of a prototype tiltrotor VTOL aircraft. The transition is achieved through smooth scheduling functions of the forward velocity or tilt angle, and the simulation results are supported by flight test results. Modelling, control and flight testing for the transition of a tiltwing aircraft are achieved in [6], extending the P-PID structure from the PX4 open-source software with feedback linearisation, gain scheduling and model-based control allocation. A gain-scheduled LQR control architecture is presented in [7] for the transition of a tandem tiltwing aircraft in the presence of moderate wind gusts. Instabilities were observed for wind gusts of intensity larger than \(5\,\mathrm{m/s}\) during transitions, which limits the viability of the approach in the presence of wind in realistic conditions.
Recent advances in robust tube-based MPC allow robust control of nonlinear systems whose dynamics can be represented as a difference of convex functions [8]. The main idea is to successively linearise the dynamics around predicted trajectories and treat linearisation errors as bounded disturbances. Because the linearised functions are convex, so are their linearisation errors, and since these errors are maximised at the boundary of the domain on which they are evaluated, they can therefore be bounded tightly. The trajectories of model states are thus bounded tightly by a sequence of sets (known as a tube [4]) defined by convex inequalities. Although very efficient, the scope of applicability is initially limited to systems with convex dynamics. However, we show that if the dynamics are sufficiently regular, techniques from difference of convex (DC) decomposition of polynomials [9] can be used to represent the VTOL nonlinear dynamics as a difference of convex functions, which allows the powerful approach in [8] to be used. The dynamics need not be derived using first principles modelling; we employ a mixture of data-based and physical models. The proposed approach is the culmination of our work [10, 11] on convex trajectory optimisation of VTOL aircraft. The specific contributions of this research over earlier work are as follows: i) we propose a computationally tractable, optimal, robust control architecture for VTOL aircraft subject to additive disturbance and model uncertainty; ii) we combine DC decomposition with robust tube-based MPC and demonstrate the applicability and generalisability of the procedure in [8]; iii) we show that our technique also applies when parts of the model are defined from data. This paper is organised as follows. We start by developing a mathematical model of a tiltwing VTOL aircraft subject to wind disturbance in Section II.A. In Section III, we formulate the MPC optimisation problem and leverage a DC decomposition of the nonlinear dynamics to construct robust tubes for the state trajectories. Section IV discusses simulation results obtained for a case study based on the Airbus A\({}^{3}\) Vahana. Section V presents conclusions. ## II Modelling ### Assumptions We assume a flat earth model and consider trajectory optimisation in the longitudinal plane alone. Tiltwing aircraft are considered (see Figure 1), with one or more wing surfaces carrying thrust effectors that can be rotated by an actuator through 90 degrees as the aircraft transitions between wing-borne and thrust-borne flight. We further assume classical inner/outer loop flight control laws for stabilisation of aircraft attitude with time-scale separation, so that the closed loop pitch dynamics are much faster than those of the desired flight path and tiltwing actuation. Consequently the pitch angle \(\theta\) (defined here as the angle of the fuselage axis from horizontal earth plane) is assumed to be maintained at all times by the attitude control loop at a constant reference angle \(\theta^{r}=0\). ### Equations of motion Consider a longitudinal point-mass model of a tiltwing VTOL aircraft equipped with propellers (Figure 1) subject to a wind gust disturbance. 
The Equations Of Motion (EOM) with respect to the inertial frame \(O_{XZ}\) are given by

\[\dot{X}=V_{x},\quad\dot{Z}=V_{z}, \tag{1}\]

\[m\dot{V}_{x}=\underbrace{T\cos\left(\alpha+\gamma\right)-D\cos\gamma-L\sin\gamma}_{f_{1}}+W_{x}, \tag{2}\]

\[m\dot{V}_{z}=\underbrace{-T\sin\left(\alpha+\gamma\right)+D\sin\gamma-L\cos\gamma+mg}_{f_{2}}+W_{z}, \tag{3}\]

where \(T\) is the thrust magnitude, \(L\), \(D\) are the lift and drag forces, \(V_{x}\), \(V_{z}\) are the components of the aircraft velocity in the inertial frame with \(V=\sqrt{V_{x}^{2}+V_{z}^{2}}\), \(\alpha\) is the angle of attack, \(\gamma=-\arcsin(V_{z}/V)\) is the flight path angle (defined as the angle of the velocity vector from horizontal), and \((X,Z)\) is the position in the inertial frame. Wind gusts are modelled by additive bounded disturbances \(W_{x}\) and \(W_{z}\), which are assumed to lie at all times within known bounds. The dynamics of the rotating wing are given by

\[J_{w}\dot{\zeta}=M,\quad \dot{i}_{w}=\zeta, \tag{4}\]

where \(J_{w}\) is the rotational inertia of the wing (about the \(y\)-axis), \(M\) is the total torque delivered by the tilting actuators, \(\zeta\) is the tiltwing rate and \(i_{w}\) is the tiltwing angle. The angles \(i_{w}\), \(\theta\), \(\alpha\) and \(\gamma\) are related by (see Figure 1)

\[i_{w}+\theta=\alpha+\gamma. \tag{5}\]

From momentum theory, the propeller generates an induced speed \(v_{i}\) that is implicitly defined by

\[\rho An(V\cos\alpha+v_{i})(k_{w}v_{i})-T=0,\]

where \(\rho\) is the air density, \(A\) the rotor disk area, \(n\) the number of propellers, and \(k_{w}\approx 2\) [10]. The effective (blown) velocity \(V_{e}\) and effective (blown) angle of attack \(\alpha_{e}\) seen by the wing due to the effect of the propeller wake on the wing are given by

\[V_{e}\cos\alpha_{e}=V\cos\alpha+k_{w}v_{i}, \tag{6}\]
\[V_{e}\sin\alpha_{e}=V\sin\alpha, \tag{7}\]
\[V_{e}^{2}=V^{2}+\frac{2T}{\rho An}. \tag{8}\]

The total lift and drag are modelled as the weighted sum of the blown and unblown terms as follows

\[L=\tfrac{1}{2}\lambda\rho SC_{L}(\alpha_{e})V_{e}^{2}+\tfrac{1}{2}(1-\lambda)\rho SC_{L}(\alpha)V^{2}, \tag{9}\]
\[D=\tfrac{1}{2}\lambda\rho SC_{D}(\alpha_{e})V_{e}^{2}+\tfrac{1}{2}(1-\lambda)\rho SC_{D}(\alpha)V^{2}, \tag{10}\]

where \(S\) is the wing area, \(\lambda\) is a weighting term representing the portion of the wing in the wake, and \(C_{L}\) and \(C_{D}\) are the lift and drag coefficients. To illustrate the approach we define \(C_{L}\) and \(C_{D}\) using data derived from the Tangler-Ostowari post-stall model [12] (see Figure 2), which determines lift and drag forces over a wide range of \(\alpha\) and \(\alpha_{e}\) values, as the wing operates at high angles of attack during transitions.

Figure 1: Force and velocity definitions for a VTOL aircraft.
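Since the momentum-theory relation above is quadratic in \(v_{i}\), the induced speed has a closed-form positive root. The following minimal sketch (with illustrative parameter values, not those of the case study) computes \(v_{i}\) and the resulting blown quantities from equations (6)-(8):

```python
import numpy as np

# Solve rho*A*n*(V*cos(alpha) + v_i)*(k_w*v_i) - T = 0 for v_i >= 0.
# The relation is quadratic in v_i: a*v_i^2 + b*v_i - T = 0.
rho, A, n_prop, k_w = 1.225, 2.83, 4, 2.0   # illustrative values only

def induced_speed(V, alpha, T):
    a = rho * A * n_prop * k_w                      # coefficient of v_i^2
    b = rho * A * n_prop * k_w * V * np.cos(alpha)  # coefficient of v_i
    return (-b + np.sqrt(b**2 + 4 * a * T)) / (2 * a)  # positive root

V, alpha, T = 10.0, np.deg2rad(20.0), 4000.0
v_i = induced_speed(V, alpha, T)
V_e = np.sqrt(V**2 + 2 * T / (rho * A * n_prop))                        # eq. (8)
alpha_e = np.arctan2(V * np.sin(alpha), V * np.cos(alpha) + k_w * v_i)  # eqs (6)-(7)
print(f"v_i = {v_i:.2f} m/s, V_e = {V_e:.2f} m/s, alpha_e = {np.degrees(alpha_e):.1f} deg")
```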
### Constraints

For the problem considered in this paper we assume that the gust disturbances \(W_{x},W_{z}\) in the forward and vertical directions are bounded: \(W_{i}\in[\underline{W}_{i},\overline{W}_{i}]\) for \(i=x,z\). Since wing tilting actuators have finite torque capacity, we also assume bounded wing acceleration/deceleration rates given by \(\underline{M}\) and \(\overline{M}\). To ensure sensible trajectories for passenger comfort and g-loads, we further introduce constraint limits on the horizontal (\(\dot{V}_{x}\)) and vertical (\(\dot{V}_{z}\)) accelerations. Finally, constraints on thrust range, tilt angle range and absolute velocities can all be expressed in compact form as input and state constraints [10],

\[\underline{V}_{x}\leq V_{x}\leq\overline{V}_{x},\quad\underline{V}_{z}\leq V_{z}\leq\overline{V}_{z},\quad 0\leq T\leq\overline{T},\quad\underline{i}_{w}\leq i_{w}\leq\overline{i}_{w}, \tag{11}\]
\[V_{x}(t_{0})=V_{x,0},\quad V_{z}(t_{0})=V_{z,0},\quad i_{w}(t_{0})=i_{0}, \tag{12}\]
\[\underline{W}_{x}\leq W_{x}\leq\overline{W}_{x},\quad\underline{W}_{z}\leq W_{z}\leq\overline{W}_{z}, \tag{13}\]
\[\underline{a}\leq\dot{V}_{x}\leq\overline{a},\quad\underline{a}\leq\dot{V}_{z}\leq\overline{a}, \tag{14}\]
\[\underline{M}/J_{w}\leq\ddot{i}_{w}\leq\overline{M}/J_{w}. \tag{15}\]

## III Robust MPC formulation

We now introduce a robust predictive control law for the system presented in Section II. Assuming we are only concerned with velocity control, the states \((X,Z)\) can be computed _a posteriori_ via (1) and thus eliminated from the analysis. Moreover, combining equations (2)-(10), we can further eliminate the flight path angle, angle of attack, and tiltwing rate from the formulation and express the dynamics with only two states \(V_{x},V_{z}\) and two inputs \(i_{w},T\) as follows

\[m\dot{V}_{x}=f_{1}(V_{x},V_{z},i_{w},T)+W_{x}, \tag{16}\]
\[m\dot{V}_{z}=f_{2}(V_{x},V_{z},i_{w},T)+W_{z}, \tag{17}\]

where \(i_{w}\) is now an input constrained by its second order derivative (see equation (15)).

Fig. 2: Lift and drag coefficients as a function of angle of attack.

To set up the MPC problem we use the trajectory constraints given by (11)-(15) together with the terminal set for the velocities [8]

\[\left\|\begin{bmatrix}V_{x}(t_{f})-V_{x}^{r}(t_{f})\\ V_{z}(t_{f})-V_{z}^{r}(t_{f})\end{bmatrix}\right\|_{\hat{Q}}^{2}\leq\hat{\gamma}, \tag{18}\]

where the notation \(\cdot^{r}\) is used to denote a reference to be tracked, \(t_{f}\) is a fixed terminal time, and \(\hat{\gamma}\) and \(\hat{Q}>0\) are respectively a terminal set bound and a penalty matrix that can be computed following the Appendix of [8]. Note that equation (18) enforces the conditions \(|V_{x}(t_{f})-V_{x}^{r}(t_{f})|\leq\delta^{V}\), \(|V_{z}(t_{f})-V_{z}^{r}(t_{f})|\leq\delta^{V}\), \(|i_{w}(t_{f})-i_{w}^{r}(t_{f})|\leq\delta^{i_{w}}\), \(|T(t_{f})-T^{r}(t_{f})|\leq\delta^{T}\), where \(\delta^{V},\delta^{i_{w}},\delta^{T}\) are terminal set bounds.

The control objective is to achieve tracking of a reference trajectory \(V_{x}^{r},V_{z}^{r},i_{w}^{r},T^{r}\) while rejecting the wind disturbances \(W_{x},W_{z}\) acting on the system. To do so, at each time step, we could compute a receding horizon control law that minimises the worst-case quadratic objective defined for \(Q_{x}>0\), \(Q_{u}>0\) by

\[J(u)=\max_{\begin{subarray}{c}W_{x}\in[\underline{W}_{x},\overline{W}_{x}]\\ W_{z}\in[\underline{W}_{z},\overline{W}_{z}]\end{subarray}}\left\{\left\|\begin{bmatrix}V_{x}(t_{f})-V_{x}^{r}(t_{f})\\ V_{z}(t_{f})-V_{z}^{r}(t_{f})\end{bmatrix}\right\|_{\hat{Q}}^{2}+\int_{t_{0}}^{t_{f}}\left\|\begin{bmatrix}V_{x}(t)-V_{x}^{r}(t)\\ V_{z}(t)-V_{z}^{r}(t)\end{bmatrix}\right\|_{Q_{x}}^{2}+\left\|\begin{bmatrix}i_{w}(t)-i_{w}^{r}(t)\\ T(t)-T^{r}(t)\end{bmatrix}\right\|_{Q_{u}}^{2}\mathrm{d}t\right\} \tag{19}\]

subject to (11)-(18), where the state trajectories \(V_{x},V_{z}\) depend on the inputs \(i_{w},T\) and on the disturbances \(W_{x},W_{z}\).
Note that the states are uncertain and that the state trajectories in the objective are computed under the worst-case realisation of the time-varying gust disturbances \(W_{x},W_{z}\). We would then apply the first element of the obtained optimal control sequence, update the current state and input, and repeat the process at each time step. Since the model includes the nonlinear functions \(f_{1}\) and \(f_{2}\) and the state is uncertain, computing the control law would require solving a min-max Nonlinear Program (NLP), which is intractable in practice. In what follows, we introduce a DC decomposition of these nonlinear functions in order to obtain a computationally efficient implementation of the robust MPC problem.

### DC decomposition

Motivated by the fact that convex functions can be bounded tightly by convex and linear inequalities (as in [8]), we seek DC decompositions of \(f_{1}\), \(f_{2}\): \(f_{1}=g_{1}-h_{1}\) and \(f_{2}=g_{2}-h_{2}\), where \(g_{1}\), \(h_{1}\), \(g_{2}\), \(h_{2}\) are convex. A DC decomposition always exists if \(f_{1}\), \(f_{2}\in\mathcal{C}^{2}\) [13] and can be precomputed offline. A similar procedure was first presented in [11] and follows an idea from [9] on DC decomposition of nonconvex polynomials using algebraic techniques. In what follows we detail the procedure for the DC decomposition of a general function \(f\) (and hence its applicability to \(f_{1}\) and \(f_{2}\)):

**1) Fit a polynomial to data.** Assume that the nonlinear model* \(f\) can be approximated arbitrarily closely by a polynomial of degree \(2d\) in Gram form such that \(f\approx y(x)^{\top}Py(x)\), where \(P=P^{\top}\) is the Gram matrix and \(y=[1,x_{1},x_{2},\ldots,x_{n},x_{1}x_{2},\ldots,x_{n}^{d}]^{\top}\) is a vector of monomials of degree up to \(d\) (\(y\) has size \(C_{d+|x|}^{|x|}\)). Generate \(N_{s}\) samples \(F_{s}=f(x_{s}),\ \forall s\in[1,\ldots,N_{s}]\), of the nonlinear model and solve the following least squares problem:

Footnote *: Note that \(f\) does not need to be a mathematical function but can be defined from data. In the present case, \(f\) is partly defined from data through the lift and drag coefficients in Figure 2.

\[\mathcal{LS}:\quad\min_{P}\quad\sum_{s=1}^{N_{s}}\|F_{s}-y(x_{s})^{\top}Py(x_{s})\|_{2}^{2},\quad\text{s.t.}\quad P=P^{\top}.\]

**2) Compute the Hessians of the decomposition.** Let \(g(x)\approx y(x)^{\top}Qy(x)\) and \(h(x)\approx y(x)^{\top}Ry(x)\) be convex polynomials such that their Hessians \(d^{2}g(x)/dx^{2}=y(x)^{\top}H_{g}y(x)\) and \(d^{2}h(x)/dx^{2}=y(x)^{\top}H_{h}y(x)\) are Positive Semi-Definite (PSD). Finding \(Q,R\) such that \(P=Q-R\) and \(h(x)\), \(g(x)\) are convex reduces to solving the following Semi-Definite Program (SDP)

\[\mathcal{SDP}:\max_{Q,\sigma}\quad\sigma\quad\text{s.t.}\quad H_{g}(Q)\geq\sigma I,\quad H_{h}(Q)\geq\sigma I,\]

where, for all \(i,j\),

\[[H_{g}]_{ij}=D_{j,i}^{\top}Q+QD_{i,j}+D_{i}^{\top}QD_{j}+D_{j}^{\top}QD_{i},\]
\[[H_{h}]_{ij}=D_{j,i}^{\top}(Q-P)+(Q-P)D_{i,j}+D_{i}^{\top}(Q-P)D_{j}+D_{j}^{\top}(Q-P)D_{i},\]

and \(I\) is the identity matrix of compatible dimensions, \(D_{i}\) is a matrix of coefficients such that \(dy/dx_{i}=D_{i}y\), and \(D_{i,j}=D_{i}D_{j}\).
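To make the two-step procedure concrete, the following sketch carries it out for a toy stand-in for \(f_{1}\)/\(f_{2}\) with \(2d=2\) (the degree used in Section IV), using CVXPY as in the paper's implementation. For degree 2 the Hessians are constant matrices, so the PSD conditions become plain LMIs on submatrices of \(Q\) and \(Q-P\). The Frobenius-norm bound on \(Q\) is an assumption added here to keep the \(\max\sigma\) objective bounded; it is not part of the formulation above.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
nx, Ns = 2, 4000
X = rng.uniform(-1.0, 1.0, size=(Ns, nx))
F = np.sin(X[:, 0]) * X[:, 1] - 0.3 * X[:, 0] ** 2   # toy stand-in for f1/f2

# Step 1: least-squares fit of the Gram form f ~ y^T P y with y = [1, x1, x2].
Y = np.hstack([np.ones((Ns, 1)), X])                 # monomial vectors y(x_s)
Phi = np.einsum('si,sj->sij', Y, Y).reshape(Ns, -1)  # rows are vec(y y^T)
p, *_ = np.linalg.lstsq(Phi, F, rcond=None)
P = p.reshape(nx + 1, nx + 1)
P = 0.5 * (P + P.T)                                  # symmetrise the Gram matrix

# Step 2: SDP for Q. For degree 2, Hess(y^T Q y) = 2 Q_xx with Q_xx = Q[1:, 1:],
# so the PSD conditions on H_g and H_h reduce to LMIs on Q_xx and (Q - P)_xx.
Q = cp.Variable((nx + 1, nx + 1), symmetric=True)
sigma = cp.Variable()
constraints = [
    Q[1:, 1:] >> sigma * np.eye(nx),
    (Q - P)[1:, 1:] >> sigma * np.eye(nx),
    cp.norm(Q, 'fro') <= 10.0,   # assumption: added to keep max-sigma finite
]
cp.Problem(cp.Maximize(sigma), constraints).solve()
R = Q.value - P                  # f ~ g - h with g = y^T Q y, h = y^T R y, both convex
print(np.linalg.eigvalsh(Q.value[1:, 1:]).min(), np.linalg.eigvalsh(R[1:, 1:]).min())
```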
### Successive convex programming tube MPC for DC systems

The nonlinear dynamics in (16)-(17) can now be expressed in DC form as follows

\[m\dot{V}_{x}=g_{1}(V_{x},V_{z},i_{w},T)-h_{1}(V_{x},V_{z},i_{w},T)+W_{x}, \tag{20}\]
\[m\dot{V}_{z}=g_{2}(V_{x},V_{z},i_{w},T)-h_{2}(V_{x},V_{z},i_{w},T)+W_{z}, \tag{21}\]

and the DC-TMPC algorithm presented in [8] can be applied to the system. In what follows we exploit the convexity properties of the functions \(g_{1}\), \(h_{1}\), \(g_{2}\), \(h_{2}\) in (20)-(21) to approximate the dynamics by a set of convex inequalities with tight bounds on the state trajectories. To do so, we linearise the dynamics successively around feasible guessed trajectories and treat the linearisation error as a bounded disturbance [8]. We use the fact that the linearisation error of a convex (resp. concave) function is also convex (resp. concave) and can thus be bounded tightly. This allows us to construct a robust optimisation using the tube-based MPC framework [4], and to obtain solutions that are robust to the model error introduced by the linearisation (i.e. model uncertainty) and to wind gusts (i.e. exogenous additive disturbances). The DC-TMPC framework is based on the following ingredients:

#### III.B.1 Parameterisation of the control input

We start by assuming the following two-degree-of-freedom parameterisation of the control inputs [4]

\[i_{w}=\mu+K_{i_{w}}(V_{x}-V_{x}^{\circ})+K^{\prime}_{i_{w}}(V_{z}-V_{z}^{\circ}),\]
\[T=\tau+K_{T}(V_{x}-V_{x}^{\circ})+K^{\prime}_{T}(V_{z}-V_{z}^{\circ}),\]

where \(V_{x}^{\circ},V_{z}^{\circ}\) are guess trajectories for the states, \(\mu\), \(\tau\) are feedforward terms (the solution of the MPC optimisation stated in Section III.C), and \(K_{i_{w}},K^{\prime}_{i_{w}}\), \(K_{T},K^{\prime}_{T}\) are feedback gains to be computed, e.g. by solving an LQR problem for the linearised nominal (\(W_{x},W_{z}=0\)) system (20)-(21). Note that \(g_{1},h_{1},g_{2},h_{2}\) defined in (20)-(21) are now functions of \(V_{x},V_{z}\), \(\mu,\tau\).

#### III.B.2 Successive linearisations

We assume the existence† of a set of feasible guess trajectories \(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ}\) for (20)-(21) and successively linearise the dynamics around the guessed trajectories. The Taylor series expansion of the nonlinear dynamics is given, \(\forall i\in\{1,2\}\), by

Footnote †: Feasible initial trajectories can be obtained by simulating the nominal aircraft dynamics with a predetermined control law, such as PID, and checking _a posteriori_ that the other constraints are satisfied. An alternative method is to solve an initial feasibility problem as discussed in [8].

\[g_{i}=\lfloor g_{i}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}+\lceil g_{i}\rceil_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})},\]
\[h_{i}=\lfloor h_{i}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}+\lceil h_{i}\rceil_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})},\]

where the notation \(\lfloor f\rfloor_{x^{\circ}}=f(x^{\circ})+\nabla f^{\top}(x^{\circ})(x-x^{\circ})\) stands for the Jacobian linear approximation of \(f\) around \(x^{\circ}\), and \(\lceil f\rceil_{x^{\circ}}\) for the corresponding linearisation error. After each iteration of the algorithm, the guessed trajectories are updated with the solution of the MPC optimisation and a new pass is initiated by linearising the dynamics around the new estimate.
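The tightness property underpinning this construction can be checked numerically: for a convex function, the Jacobian linearisation error is itself convex, vanishes at the linearisation point, and attains its maximum over an interval at one of the endpoints. A one-dimensional toy check (the choice \(g=\exp\) is arbitrary; any convex function would do):

```python
import numpy as np

g  = lambda x: np.exp(x)          # a convex function
dg = lambda x: np.exp(x)          # its derivative
x0, lo, hi = 0.3, -1.0, 1.0       # linearisation point and tube cross-section

# Linearisation error: convex, zero at x0, nonnegative everywhere.
err = lambda x: g(x) - (g(x0) + dg(x0) * (x - x0))

xs = np.linspace(lo, hi, 2001)
assert np.all(err(xs) >= -1e-12)                          # nonnegative error
assert np.isclose(err(xs).max(), max(err(lo), err(hi)))   # maximum at a vertex
print(err(lo), err(hi))  # the tight bounds used in the tube constraints
```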
#### III.B.3 Parameterisation of the uncertainty sets

We assume that the uncertain state trajectories \(V_{x},V_{z}\) lie within "tubes" whose cross-sections are parameterised by means of elementwise bounds \(V_{x}\in[\underline{V}_{x},\overline{V}_{x}]\), \(V_{z}\in[\underline{V}_{z},\overline{V}_{z}]\), which are optimisation variables; see Figure 3.

#### III.B.4 DC properties of dynamics

By convexity of \(g_{1},g_{2},h_{1},h_{2}\), the associated linearisation errors are necessarily convex and take their maximum on the boundary of the set over which the functions are constrained. Moreover, by definition, their minimum on this set is zero (Jacobian linearisation). It follows that the bounds on the state dynamics satisfy the following convex inequalities

\[m\dot{\overline{V}}_{x}\geq\max_{V_{x}\in\{\underline{V}_{x},\overline{V}_{x}\},V_{z}\in\{\underline{V}_{z},\overline{V}_{z}\}}\big\{g_{1}(V_{x},V_{z},\mu,\tau)-\lfloor h_{1}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}(V_{x},V_{z},\mu,\tau)+\overline{W}_{x}\big\}, \tag{22}\]
\[m\dot{\overline{V}}_{z}\geq\max_{V_{x}\in\{\underline{V}_{x},\overline{V}_{x}\},V_{z}\in\{\underline{V}_{z},\overline{V}_{z}\}}\big\{g_{2}(V_{x},V_{z},\mu,\tau)-\lfloor h_{2}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}(V_{x},V_{z},\mu,\tau)+\overline{W}_{z}\big\}, \tag{23}\]
\[m\dot{\underline{V}}_{x}\leq\min_{V_{x}\in\{\underline{V}_{x},\overline{V}_{x}\},V_{z}\in\{\underline{V}_{z},\overline{V}_{z}\}}\big\{-h_{1}(V_{x},V_{z},\mu,\tau)+\lfloor g_{1}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}(V_{x},V_{z},\mu,\tau)+\underline{W}_{x}\big\}, \tag{24}\]
\[m\dot{\underline{V}}_{z}\leq\min_{V_{x}\in\{\underline{V}_{x},\overline{V}_{x}\},V_{z}\in\{\underline{V}_{z},\overline{V}_{z}\}}\big\{-h_{2}(V_{x},V_{z},\mu,\tau)+\lfloor g_{2}\rfloor_{(V_{x}^{\circ},V_{z}^{\circ},\mu^{\circ},\tau^{\circ})}(V_{x},V_{z},\mu,\tau)+\underline{W}_{z}\big\}, \tag{25}\]

Figure 3: Schematic visualisation of the tube. The uncertain state trajectory (red) lies within a tube (blue) centred around the nominal trajectory (orange), sampled at various time steps (3a). Also shown are the tube evolution for the state \(V_{x}\) only (3b) and a snapshot of the tube cross-section at an arbitrary time (3c).

Conditions (22)-(25) must be satisfied by the tube bounding the uncertain model trajectories. These constraints involve only minimisations of linear functions and maximisations of convex functions. Therefore they reduce to a finite number of constraints involving the tube vertices (i.e. the variables \(\{\underline{V}_{x},\overline{V}_{x}\},\{\underline{V}_{z},\overline{V}_{z}\}\)). Thus each of the constraints (22)-(25) reduces to \(2^{2}=4\) convex inequalities.

### Discrete time DC-TMPC

In order to obtain a finite-dimensional robust MPC optimisation, we discretise the problem with a fixed sampling interval \(\delta\) and evaluate all variables over a finite horizon \(N\). The notation \(\{x_{0},x_{1},\ldots,x_{N-1}\}\) is used for the sequence of current and future values of a variable \(x\) predicted at the \(n\)-th discrete-time step, so that \(x_{k}\) denotes the predicted value of \(x((n+k)\delta)\).
The MPC optimisation at the \(n\)-th discrete-time step is initialised with a feasible predicted trajectory \((V_{x,k}^{\circ},V_{z,k}^{\circ},\mu_{k}^{\circ},\tau_{k}^{\circ})\) and the following optimisation problem \(\mathcal{P}\) (obtained by discretising and gathering equations (15)-(19), (22)-(25)) is solved sequentially

\[\mathcal{P}:\quad\min_{\mu_{k},\,\tau_{k},\,\underline{V}_{x,k},\,\overline{V}_{x,k},\,\underline{V}_{z,k},\,\overline{V}_{z,k}}\;\sum_{k=0}^{N-1}\;\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{\left\|\begin{bmatrix}V_{x}-V_{x,k}^{r}\\ V_{z}-V_{z,k}^{r}\end{bmatrix}\right\|_{Q_{x}}^{2}+\left\|\begin{bmatrix}i_{w,k}-i_{w,k}^{r}\\ T_{k}-T_{k}^{r}\end{bmatrix}\right\|_{Q_{u}}^{2}\right\}\]

subject to, for all \(k\), the discretised tube dynamics obtained from (22)-(25),

\[\overline{V}_{x,k+1}\geq\overline{V}_{x,k}+\frac{\delta}{m}\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\big\{g_{1}-\lfloor h_{1}\rfloor_{(V_{x,k}^{\circ},V_{z,k}^{\circ},\mu_{k}^{\circ},\tau_{k}^{\circ})}+\overline{W}_{x}\big\},\]
\[\overline{V}_{z,k+1}\geq\overline{V}_{z,k}+\frac{\delta}{m}\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\big\{g_{2}-\lfloor h_{2}\rfloor_{(V_{x,k}^{\circ},V_{z,k}^{\circ},\mu_{k}^{\circ},\tau_{k}^{\circ})}+\overline{W}_{z}\big\},\]
\[\underline{V}_{x,k+1}\leq\underline{V}_{x,k}+\frac{\delta}{m}\min_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\big\{-h_{1}+\lfloor g_{1}\rfloor_{(V_{x,k}^{\circ},V_{z,k}^{\circ},\mu_{k}^{\circ},\tau_{k}^{\circ})}+\underline{W}_{x}\big\},\]
\[\underline{V}_{z,k+1}\leq\underline{V}_{z,k}+\frac{\delta}{m}\min_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\big\{-h_{2}+\lfloor g_{2}\rfloor_{(V_{x,k}^{\circ},V_{z,k}^{\circ},\mu_{k}^{\circ},\tau_{k}^{\circ})}+\underline{W}_{z}\big\},\]

together with the input, state and terminal constraints evaluated at the tube cross-section vertices:

\[\underline{i}_{w}\leq\min_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{\mu_{k}+K_{i_{w,k}}(V_{x}-V_{x,k}^{\circ})+K_{i_{w,k}}^{\prime}(V_{z}-V_{z,k}^{\circ})\right\},\]
\[\overline{i}_{w}\geq\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{\mu_{k}+K_{i_{w,k}}(V_{x}-V_{x,k}^{\circ})+K_{i_{w,k}}^{\prime}(V_{z}-V_{z,k}^{\circ})\right\},\]
\[\underline{V}_{x}\leq\underline{V}_{x,k},\quad\overline{V}_{x,k}\leq\overline{V}_{x},\quad\underline{V}_{z}\leq\underline{V}_{z,k},\quad\overline{V}_{z,k}\leq\overline{V}_{z},\]
\[\underline{a}\leq\frac{\underline{V}_{x,k+1}-\overline{V}_{x,k}}{\delta},\quad\underline{a}\leq\frac{\underline{V}_{z,k+1}-\overline{V}_{z,k}}{\delta},\quad\frac{\overline{V}_{x,k+1}-\underline{V}_{x,k}}{\delta}\leq\overline{a},\quad\frac{\overline{V}_{z,k+1}-\underline{V}_{z,k}}{\delta}\leq\overline{a},\]
\[0\leq\min_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{\tau_{k}+K_{T_{k}}(V_{x}-V_{x,k}^{\circ})+K_{T_{k}}^{\prime}(V_{z}-V_{z,k}^{\circ})\right\},\]
\[\overline{T}\geq\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{\tau_{k}+K_{T_{k}}(V_{x}-V_{x,k}^{\circ})+K_{T_{k}}^{\prime}(V_{z}-V_{z,k}^{\circ})\right\},\]
\[\underline{M}\leq\min_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{J_{w}\Delta^{2}\{\mu_{k}+K_{i_{w,k}}(V_{x,k}-V_{x,k}^{\circ})+K_{i_{w,k}}^{\prime}(V_{z,k}-V_{z,k}^{\circ})\}\right\},\]
\[\overline{M}\geq\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,k},\overline{V}_{x,k}\},\\ V_{z}\in\{\underline{V}_{z,k},\overline{V}_{z,k}\}\end{subarray}}\left\{J_{w}\Delta^{2}\{\mu_{k}+K_{i_{w,k}}(V_{x,k}-V_{x,k}^{\circ})+K_{i_{w,k}}^{\prime}(V_{z,k}-V_{z,k}^{\circ})\}\right\},\]
\[\hat{\gamma}\geq\max_{\begin{subarray}{c}V_{x}\in\{\underline{V}_{x,N},\overline{V}_{x,N}\},\\ V_{z}\in\{\underline{V}_{z,N},\overline{V}_{z,N}\}\end{subarray}}\left\|\begin{bmatrix}V_{x}-V_{x,N}^{r}\\ V_{z}-V_{z,N}^{r}\end{bmatrix}\right\|_{\hat{Q}}^{2} \tag{26}\]

where \(V_{x}(0),V_{z}(0)\) at time step \(n=0\) are given in Table 2 depending on the transition scenario considered. We define the second order forward finite difference operator as \(\Delta^{2}f_{k}=\frac{f_{k+2}-2f_{k+1}+f_{k}}{\delta^{2}}\). Note that the possible vertices of the tube cross-section are given by

\[\mathcal{V}=\left\{\begin{bmatrix}\underline{V}_{x,k}\\ \underline{V}_{z,k}\end{bmatrix},\begin{bmatrix}\overline{V}_{x,k}\\ \underline{V}_{z,k}\end{bmatrix},\begin{bmatrix}\underline{V}_{x,k}\\ \overline{V}_{z,k}\end{bmatrix},\begin{bmatrix}\overline{V}_{x,k}\\ \overline{V}_{z,k}\end{bmatrix}\right\},\]

which allows us to express each maximisation/minimisation above as a set of at most 4 inequalities. Moreover, since the feedback gains and the terminal penalty matrix are known _a priori_, this number can be further reduced to 2 for all inequalities but the first four. Once problem \(\mathcal{P}\) is solved, the guessed trajectories are updated with the solution as follows [8]

\[V_{x,0}\gets V_{x}(n\delta),\quad V_{z,0}\gets V_{z}(n\delta), \tag{27}\]
\[i_{w,k}^{\circ}\leftarrow\mu_{k}+K_{i_{w,k}}(V_{x,k}-V_{x,k}^{\circ})+K_{i_{w,k}}^{\prime}(V_{z,k}-V_{z,k}^{\circ}), \tag{28}\]
\[T_{k}^{\circ}\leftarrow\tau_{k}+K_{T_{k}}(V_{x,k}-V_{x,k}^{\circ})+K_{T_{k}}^{\prime}(V_{z,k}-V_{z,k}^{\circ}), \tag{29}\]
\[V_{x,k+1}\gets V_{x,k}+\delta f_{1}(V_{x,k},V_{z,k},i_{w,k}^{\circ},T_{k}^{\circ})/m, \tag{30}\]
\[V_{z,k+1}\gets V_{z,k}+\delta f_{2}(V_{x,k},V_{z,k},i_{w,k}^{\circ},T_{k}^{\circ})/m, \tag{31}\]
\[V^{\circ}_{x,k+1}\gets V_{x,k+1},\quad V^{\circ}_{z,k+1}\gets V_{z,k+1}, \tag{32}\]
\[\mu^{\circ}_{k}\leftarrow\mu_{k},\quad\tau^{\circ}_{k}\leftarrow\tau_{k}, \tag{33}\]

for \(k=0,\ldots,N-1\), and the process of solving \(\mathcal{P}\) and updating the trajectories with (27)-(33) is repeated until \(\|[\tau\quad\mu]^{\top}\|<\epsilon\).
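As a minimal sketch, the update (27)-(33) amounts to re-simulating the closed loop with the returned feedforward terms. The dynamics \(f_{1},f_{2}\) and the scalar, time-invariant gains below are placeholders, not the models or gains used in the paper:

```python
import numpy as np

m, delta, N = 752.2, 0.22, 75              # mass, time step, horizon (Table 1)

def f1(Vx, Vz, iw, T):                     # placeholder longitudinal force model
    return T * np.cos(iw) - 50.0 * Vx
def f2(Vx, Vz, iw, T):                     # placeholder vertical force model
    return -T * np.sin(iw) + m * 9.81 - 50.0 * Vz

def update_guess(Vx_meas, Vz_meas, mu, tau, Vx_g, Vz_g, gains):
    """Eqs (27)-(33): re-simulate the closed loop with the MPC solution
    (mu, tau) to produce the guess trajectory for the next pass."""
    K_iw, K_iw2, K_T, K_T2 = gains
    Vx, Vz = np.empty(N + 1), np.empty(N + 1)
    iw, T = np.empty(N), np.empty(N)
    Vx[0], Vz[0] = Vx_meas, Vz_meas                                           # (27)
    for k in range(N):
        iw[k] = mu[k] + K_iw * (Vx[k] - Vx_g[k]) + K_iw2 * (Vz[k] - Vz_g[k])  # (28)
        T[k]  = tau[k] + K_T * (Vx[k] - Vx_g[k]) + K_T2 * (Vz[k] - Vz_g[k])   # (29)
        Vx[k + 1] = Vx[k] + delta * f1(Vx[k], Vz[k], iw[k], T[k]) / m         # (30)
        Vz[k + 1] = Vz[k] + delta * f2(Vx[k], Vz[k], iw[k], T[k]) / m         # (31)
    return Vx, Vz, iw, T, mu.copy(), tau.copy()                               # (32)-(33)

# Illustrative call with zero guess trajectories and near-hover feedforward.
out = update_guess(0.0, 0.0, np.zeros(N), np.full(N, m * 9.81),
                   np.zeros(N + 1), np.zeros(N + 1), (0.1, 0.0, 10.0, 0.0))
```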
The control law at time \(n\) is then implemented by taking the first element of the control sequence

\[i_{w}(n\delta)=i^{\circ}_{w,0},\quad T(n\delta)=T^{\circ}_{0}.\]

At time \(n+1\), we set \(V_{x,0}=V_{x}((n+1)\delta)\), \(V_{z,0}=V_{z}((n+1)\delta)\), and update, \(\forall k=0,\ldots,N-2\) [8],

\[i^{\circ}_{w,k}\gets i^{\circ}_{w,k+1},\quad T^{\circ}_{k}\gets T^{\circ}_{k+1},\quad\mu^{\circ}_{k}\leftarrow\mu^{\circ}_{k+1},\quad\tau^{\circ}_{k}\leftarrow\tau^{\circ}_{k+1}, \tag{34}\]
\[V^{\circ}_{x,k+1}\gets V^{\circ}_{x,k}+\delta(f_{1}(V^{\circ}_{x,k},V^{\circ}_{z,k},i^{\circ}_{w,k},T^{\circ}_{k})+W_{x})/m, \tag{35}\]
\[V^{\circ}_{z,k+1}\gets V^{\circ}_{z,k}+\delta(f_{2}(V^{\circ}_{x,k},V^{\circ}_{z,k},i^{\circ}_{w,k},T^{\circ}_{k})+W_{z})/m, \tag{36}\]

and finally, as per the dual-mode MPC paradigm [8],

\[i^{\circ}_{w,N-1}\leftarrow i^{r}_{w,N-1}+\hat{K}_{i_{w}}(V^{\circ}_{x,N-1}-V^{r}_{x,N-1})+\hat{K}^{\prime}_{i_{w}}(V^{\circ}_{z,N-1}-V^{r}_{z,N-1}), \tag{37}\]
\[T^{\circ}_{N-1}\leftarrow T^{r}_{N-1}+\hat{K}_{T}(V^{\circ}_{x,N-1}-V^{r}_{x,N-1})+\hat{K}^{\prime}_{T}(V^{\circ}_{z,N-1}-V^{r}_{z,N-1}), \tag{38}\]
\[V^{\circ}_{x,N}\gets V^{\circ}_{x,N-1}+\delta(f_{1}(V^{\circ}_{x,N-1},V^{\circ}_{z,N-1},i^{\circ}_{w,N-1},T^{\circ}_{N-1})+W_{x})/m, \tag{39}\]
\[V^{\circ}_{z,N}\gets V^{\circ}_{z,N-1}+\delta(f_{2}(V^{\circ}_{x,N-1},V^{\circ}_{z,N-1},i^{\circ}_{w,N-1},T^{\circ}_{N-1})+W_{z})/m, \tag{40}\]

where the terminal gains \(\hat{K}_{i_{w}},\hat{K}^{\prime}_{i_{w}},\hat{K}_{T},\hat{K}^{\prime}_{T}\) can be computed following the Appendix in [8].

## IV Results

We consider a case study based on the transition of the Airbus A\({}^{3}\) Vahana (i) from powered to wing-borne flight (forward transition) and (ii) from wing-borne to powered flight (backward transition). In what follows, unless otherwise stated, simulations are conducted in the absence of wind. Parameters and transition boundary conditions are reported in Tables 1 and 2. The terminal times for the forward and backward transitions are respectively set to \(t_{f}=25\,\mathrm{s}\) and \(t_{f}=17\,\mathrm{s}\), and the time step is \(\delta\approx 0.22\,\mathrm{s}\) in both cases, resulting in respectively \(N=110\) and \(N=75\) discretisation points. Optimisation problem \(\mathcal{P}\) is solved using CVXPY [14] with the solver MOSEK [15].

### DC decomposition

The DC decompositions of \(f_{1}\) and \(f_{2}\) are computed according to Section III.A. In each case, the approximating polynomial degree \(2d\) is set to \(2\) and the nonlinear model \(f\) is sampled at \(N_{s}=10^{4}\) evaluation points. \(500\) random test points are then generated to obtain the results presented in Table 3. The least-squares mean relative error measures how well the polynomial model \(y^{\top}Py\) fits the nonlinear model \(f\). The obtained errors are acceptable in the present scenario but could be further reduced by increasing the polynomial degree or using a different approximation model (e.g. radial basis functions, neural networks). This would typically come at the cost of increased computation times for the MPC optimisation problem.
Figure 4 illustrates the quality of the fit for a given tiltwing angle and thrust magnitude (projection is required for visualisation purposes). The residue of \(y^{\top}(Q-R-P)y=0\) illustrates the accuracy of the DC decomposition of the polynomial approximation, and is excellent in both cases. Finally, we verify that there is no convexity violation by computing the Hessians of the functions at each test point and checking for positive semidefiniteness. A typical DC decomposition is shown in Figure 5 (with projection).

\begin{table} \begin{tabular}{l l l l} \hline \hline **Parameter** & **Symbol** & **Value** & **Units** \\ \hline Mass & \(m\) & \(752.2\) & kg \\ \hline Gravity acceleration & \(g\) & \(9.81\) & \(\text{m}\,\text{s}^{-2}\) \\ \hline Wing area & \(S\) & \(8.93\) & \(\text{m}^{2}\) \\ \hline Disk area & \(A\) & \(2.83\) & \(\text{m}^{2}\) \\ \hline Wing inertia & \(J_{w}\) & \(1100\) & \(\text{kg}\,\text{m}^{2}\) \\ \hline Density of air & \(\rho\) & \(1.225\) & \(\text{kg}\,\text{m}^{-3}\) \\ \hline Maximum thrust & \(\overline{T}\) & \(8855\) & N \\ \hline Tiltwing angle range & \([\underline{i}_{w},\overline{i}_{w}]\) & \([-10,100]\) & deg \\ \hline Acceleration range & \([\underline{a},\overline{a}]\) & \([-0.3g,0.3g]\) & \(\text{m}\,\text{s}^{-2}\) \\ \hline Forward velocity range & \([\underline{V}_{x},\overline{V}_{x}]\) & \([0,60]\) & \(\text{m}/\text{s}\) \\ \hline Vertical velocity range & \([\underline{V}_{z},\overline{V}_{z}]\) & \([-30,30]\) & \(\text{m}/\text{s}\) \\ \hline Torque range & \([\underline{M},\overline{M}]\) & \([-50,50]\) & N m \\ \hline Number of propellers & \(n\) & \(4\) & – \\ \hline Time step & \(\delta\) & \(0.22\) & s \\ \hline Degree of polynomial \(f\) & \(2d\) & \(2\) & – \\ \hline \hline \end{tabular} \end{table} Table 1: Model parameters derived from the A\({}^{3}\) Vahana

\begin{table} \begin{tabular}{l l l l} \hline \hline **Parameter** & **Symbol** & **Value** & **Units** \\ \hline \multicolumn{4}{c}{**Forward transition**} \\ Forward velocity & \(\{V_{x,0};V_{x,f}\}\) & \(\{0;40\}\) & \(\text{m}/\text{s}\) \\ \hline Vertical velocity & \(\{V_{z,0};V_{z,f}\}\) & \(\{0;0\}\) & \(\text{m}/\text{s}\) \\ \hline \multicolumn{4}{c}{**Backward transition**} \\ Forward velocity & \(\{V_{x,0};V_{x,f}\}\) & \(\{40;0\}\) & \(\text{m}/\text{s}\) \\ \hline Vertical velocity & \(\{V_{z,0};V_{z,f}\}\) & \(\{0;0\}\) & \(\text{m}/\text{s}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Boundary conditions for transitions

\begin{table} \begin{tabular}{l l l l} \hline \hline **Function** & **LS mean relative error (\%)** & **Residue of \(y^{\top}(Q-R-P)y=0\)** & **Occurrence of non-PSD Hessian** \\ \hline \(f_{1}\) & \(5.8\) & \(5\)e\(-15\) & None \\ \hline \(f_{2}\) & \(7.5\) & \(2\)e\(-12\) & None \\ \hline \hline \end{tabular} \end{table} Table 3: DC decomposition and least-squares (LS) fit results for \(500\) random test points.

Figure 4: Left: least-squares fit of \(f_{1}\) samples (blue dots) by the polynomial model (red curve) for given \(i_{w}\) and \(T\). Right: contour plot of the percent relative fitting error.

Figure 5: DC decomposition \(f=g-h\) for given \(i_{w}\) and \(T\).

### Forward transition

First, we set the penalty matrix in the objective to \(Q_{x}=\text{diag}(1,10^{4})\) to achieve a constant-altitude forward transition. The results are shown in Figure 6.
As the aircraft transitions from powered lift to cruise, the velocity magnitude increases while the thrust and tiltwing angle decrease, illustrating the change in lift generation from propellers to wing. The tiltwing angle drop at the beginning results in an increase in the effective angle of attack. Note that the solution (solid blue) has converged to the desired reference trajectory (dashed green) despite the initial discrepancy with the feasible guess trajectory (dashed orange).

Figure 6: Constant altitude forward transition.

The objective can be changed to achieve faster transitions. For example, if the penalty matrix is set to \(Q_{x}=\text{diag}(100,1)\), the obtained results are presented in Figure 7. The reference forward velocity is achieved faster than previously, but this comes at the expense of an altitude drop. A trade-off between the two objectives (reaching the desired forward or vertical velocity) can be achieved by varying the penalty matrix.

Figure 7: A faster forward transition.

### Backward transition

For completeness, we consider the scenario consisting of a backward transition with an increase in altitude; see Figure 8. This is characterised by a decrease in velocity magnitude and an increase in thrust to support the powered flight mode (hover). An increase in altitude of about 75 m is needed for this manoeuvre, and the wing is stalled.

Figure 8: Backward transition.

### Robustness to wind

To simulate the effect of wind gusts on the aircraft, we consider EASA "Means of Compliance with the Special Condition VTOL", §2215 on flight load conditions [16], and assume that the aircraft is subject to a discrete wind gust with velocity \(U\) following a "one-minus-cosine" law

\[U(x_{g})=\frac{U_{de}}{2}\left(1-\cos\left(\frac{2\pi x_{g}}{25\bar{c}}\right)\right),\]

where \(0\leq x_{g}\leq 25\bar{c}\) is the distance penetrated into the gust, \(\bar{c}\) is the mean geometric chord of the wing, and \(U_{de}\) the design gust velocity. The wind gust parameters are reported in Table 4 and the gust velocity profile with these values is presented in Figure 9. We then consider both crosswind and headwind scenarios for the gust direction.

### Crosswind

It is assumed that the wind gust velocity acts normally to the aircraft flight path (velocity vector), i.e. along \(\vec{L}\) in Figure 1. This has the effect of modifying the velocity and angle of attack seen by the wing, and hence the lift and drag, as follows

\[L=\tfrac{1}{2}\lambda\rho SC_{L}(\alpha_{e}^{\prime})V_{e}^{\prime 2}+\tfrac{1}{2}(1-\lambda)\rho SC_{L}(\alpha^{\prime})V^{\prime 2},\]
\[D=\tfrac{1}{2}\lambda\rho SC_{D}(\alpha_{e}^{\prime})V_{e}^{\prime 2}+\tfrac{1}{2}(1-\lambda)\rho SC_{D}(\alpha^{\prime})V^{\prime 2},\]

where

\[V^{\prime 2}=V^{2}+U(x_{g})^{2},\quad\alpha^{\prime}=\alpha+\arctan\left(\frac{U}{V}\right),\]
\[V_{e}^{\prime 2}=V_{e}^{2}+U(x_{g})^{2},\quad\alpha_{e}^{\prime}=\alpha_{e}+\arctan\left(\frac{U}{V_{e}}\right).\]

The torque created by the imbalance in lift due to the depth difference along the wing is assumed to be negligible, which justifies our assumption that no wind gust disturbance acts on the tiltwing rotational dynamics in equation (4).
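As a quick numerical illustration, the gust profile and its peak value follow directly from the law above and the values in Table 4:

```python
import numpy as np

# "One-minus-cosine" gust profile; U_de and c_bar taken from Table 4.
U_de, c_bar = 9.14, 1.0
x_g = np.linspace(0.0, 25 * c_bar, 200)   # distance penetrated into the gust
U = 0.5 * U_de * (1 - np.cos(2 * np.pi * x_g / (25 * c_bar)))
print(U.max())  # peaks at approximately U_de = 9.14 m/s, halfway through the gust
```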
To evaluate the time-varying wind gust bounds \([\underline{W}_{i}(t),\overline{W}_{i}(t)]\), \(\forall i\in\{x,z\}\), we consider the maximum increments in lift and drag along the guess trajectory as follows

\[\Delta L_{\max}(t)=\tfrac{1}{2}\rho SC_{L}(\alpha^{\circ})U_{de}^{2}+\tfrac{1}{2}\rho Sb_{1}\arctan\left(U_{de}/V^{\circ}\right)(U_{de}^{2}+V^{\circ 2}),\]
\[\Delta D_{\max}(t)=\tfrac{1}{2}\rho SC_{D}(\alpha^{\circ})U_{de}^{2}+\tfrac{1}{2}\rho S\left(a_{2}\big(\arctan\left(U_{de}/V^{\circ}\right)^{2}+2\alpha^{\circ}\arctan\left(U_{de}/V^{\circ}\right)\big)+a_{1}\arctan\left(U_{de}/V^{\circ}\right)\right)(U_{de}^{2}+V^{\circ 2}).\]

In order to evaluate the effect of wind gusts on the aircraft, we conduct multiple simulations by varying the instant at which the aircraft encounters a wind gust during the forward and backward transitions, and we observe the subsequent deviations from the reference:

* **Forward transition with crosswind.** The results are illustrated in Figure 10. For the wind gusts occurring at times \(t=5\) s and \(t=10\) s, the deviations observed are reasonable, with vertical velocity not exceeding \(3\,\mathrm{m/s}\) in magnitude. The deviation is larger when the disturbance occurs at \(t=0\) s, since the vehicle is then in hover mode, but the system eventually recovers and stabilises.
* **Backward transition with crosswind.** The backward transition could not be achieved with crosswind gusts of \(9.14\,\mathrm{m/s}\), so the wind speed was reduced to \(5.14\,\mathrm{m/s}\) to obtain the results in Figure 11. In all cases, the forward and vertical velocities are slightly perturbed and the system eventually recovers.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Parameter** & **Symbol** & **Value** & **Units** \\ \hline Design gust velocity & \(U_{de}\) & \(9.14\) & \(\mathrm{m/s}\) \\ \hline Mean geometric chord & \(\bar{c}\) & \(1\) & \(\mathrm{m}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Wind gust parameters as defined in [16].

Figure 9: Wind gust velocity profile.

Figure 10: Forward transition with crosswind gust occurring at various times.

### Headwind

In the case of headwind, the wind gust velocity acts anti-parallel to the aircraft velocity vector \(\vec{V}\). This modifies the lift and drag as follows (note that headwind does not affect the angles of attack)

\[L=\tfrac{1}{2}\lambda\rho SC_{L}(\alpha_{e})V_{e}^{\prime 2}+\tfrac{1}{2}(1-\lambda)\rho SC_{L}(\alpha)V^{\prime 2},\]
\[D=\tfrac{1}{2}\lambda\rho SC_{D}(\alpha_{e})V_{e}^{\prime 2}+\tfrac{1}{2}(1-\lambda)\rho SC_{D}(\alpha)V^{\prime 2},\]
Note that contrary to the simulations with crosswinds, the aircraft is capable to withstand headwinds at \(9.14m/s\). ### Convergence Convergence of the algorithm is illustrated in Figure 14, showing that the objective value decreases at each time step. Finally, Figure 15 shows the average computation time to solve problem \(\mathcal{P}\) as a function of the number of discretisation points \(N\). The experiment was conducted on a MacBook Pro with a 2.9 GHz dual-core Intel Core i7 processor (mid-2012). For example, for \(N=100\), the average computation time was \(1.9s\). Although this would not allow to compute the solution within the specified time step in real time, it should be noted that CVXPY is not optimised Figure 11: Backward transition with crosswind gust occurring at various times. for performance and that reductions in computation times of about an order of magnitude can be expected with first order solvers such as ADMM [17]. This is in stark contrast to state-of-the-art generic NLP approaches that quote computation times of the order of minutes to solve similar VTOL transition optimisation problems (e.g. see [12]). ## 5 Conclusion We presented a novel computationally tractable robust data-driven tube MPC scheme based on a DC decomposition of the nonlinear dynamics of a tiltwing VTOL aircraft to achieve robust transitions in the presence of wind. The DC structure of the dynamics allowed us to express the MPC optimisation at each time step as a sequence of convex programs generated by successively linearising around guess trajectories and bounding tightly the effect of the necessarily convex linearisation errors. We demonstrated the viability of the scheme by considering a case study inspired from the Airbus Vahana \(A^{3}\) VTOL aircraft using a mixture of data-based and mathematical models. Forward and backward transitions were successfully achieved, as well as transitions subject to wind gusts. Future work has been identified as follows: i) Figure 12: Forward transition with headwind gust occurring at various times. Figure 13: Backward transition with headwind gust. leveraging first order solvers (e.g. ADMM) to accelerate computation times and enable real-time implementations; ii) complete study of the effect of: uncertainty set parameterisation, DC decomposition technique, approximation function, etc. on the performances of the algorithm; iii) adaptation of the method to function approximation via deep neural network to allow a higher degree of generalisability ; iv) extension of the framework to constraints of stochastic nature (e.g. von Karman wind turbulence model could be leveraged to achieve VTOL transitions that are less conservative).
2306.04794
Compressibility and speeds of sound across the superfluid to supersolid phase transition of an elongated dipolar gas
We investigate the excitation spectrum and compressibility of a dipolar Bose-Einstein condensate in an infinite tube potential in the parameter regime where the transition between superfluid and supersolid phases occurs. Our study focuses on the density range in which crystalline order develops continuously across the transition. Above the transition the superfluid shows a single gapless excitation band, phononic at small momenta and with a roton at a finite momentum. Below the transition, two gapless excitation branches (three at the transition point) emerge in the supersolid. We examine the two gapless excitation bands and their associated speeds of sound in the supersolid phase. Our results show that the speeds of sound and the compressibility are discontinuous at the transition, indicating a second-order phase transition. These results provide valuable insights into the identification of supersolid phenomena in dipolar quantum gases and the relationship to supersolidity in spin-orbit coupled gases.
P. B. Blakie, L. Chomaz, D. Baillie, F. Ferlaino
2023-06-07T21:36:29Z
http://arxiv.org/abs/2306.04794v1
Compressibility and speeds of sound across the superfluid to supersolid phase transition of an elongated dipolar gas

###### Abstract

We investigate the excitation spectrum and compressibility of a dipolar Bose-Einstein condensate in an infinite tube potential in the parameter regime where the transition between superfluid and supersolid phases occurs. Our study focuses on the density range in which crystalline order develops continuously across the transition. Above the transition the superfluid shows a single gapless excitation band, phononic at small momenta and with a roton at a finite momentum. Below the transition, two gapless excitation branches (three at the transition point) emerge in the supersolid. We examine the two gapless excitation bands and their associated speeds of sound in the supersolid phase. Our results show that the speeds of sound and the compressibility are discontinuous at the transition, indicating a second-order phase transition. These results provide valuable insights into the identification of supersolid phenomena in dipolar quantum gases and the relationship to supersolidity in spin-orbit coupled gases.

## I Introduction

Experiments with dipolar Bose-Einstein condensates (BECs) have observed the transition to a supersolid ground state [1; 2; 3] and have studied its elementary excitations [4; 5; 6]. The supersolid state breaks translational invariance by developing a spatially modulated (crystalline) structure, while still exhibiting superfluidity. For a \(D\)-dimensional crystal the supersolid state will exhibit \(D+1\) gapless excitation branches. These reflect the number of Nambu-Goldstone modes associated with the spontaneously broken symmetries of the supersolid state [7]. The excitations can be classified by the character of the fluctuations they cause [6; 8; 9]. Although there is hybridization of the properties of the branches, \(D\) of the gapless branches are generally associated with density fluctuations and are termed density or phonon branches. The remaining gapless branch of excitations is associated with phase fluctuations and is referred to as a phase or Bogoliubov branch, related to superfluid aspects of the system (i.e. tunneling of atoms between unit cells).

The majority of recent experimental work with dipolar BECs has used cigar-shaped potentials in which a \(D=1\) supersolid transition can occur. The relevant thermodynamic limit of this system is an infinitely long tube trap, i.e. a system with transverse confinement only and a fixed linear density [10; 11; 12; 13; 14] [see Fig. 1(a)]. It is found that, depending on the density, the supersolid transition in the thermodynamic system can be continuous or discontinuous [12; 13; 14]. The continuous transition emerges for a range of intermediate densities, and occurs when a roton excitation gap of the uniform BEC state vanishes [10; 12; 13] [see Fig. 1(b)]. Experimental and theoretical studies of the finite system also reveal similar behavior (e.g. see [15; 16; 17]). It has also been shown that two gapless excitation branches (a density and a phase branch) emerge in the transition to the 1D supersolid state [4; 5; 6; 10; 13; 18]. We note that in the finite-sized experimental systems the supersolid transition is revealed by a bifurcation in the compressional excitations of the gas [4; 6], which can be interpreted as in-phase and out-of-phase combinations of the gapless excitations of the thermodynamic system.
Figure 1: (a) Schematic figure of the thermodynamic system: an infinite tube-shaped potential confining dipoles polarized along \(y\). (b) Phase diagram showing the superfluid fraction of the ground state as a function of density \(n\) and the s-wave scattering length \(a_{s}\). The uniform BEC (superfluid) and modulated supersolid states are shown separated by a continuous (red line) or discontinuous (black line) transition. The dashed box indicates the parameter regime we consider in this work, focusing on the continuous transition. Vertical colored lines show the parameter regime for the transition data considered in Fig. 4(a). Phase diagram for \({}^{164}\)Dy using \(a_{\rm dd}=130.8\,a_{0}\) and with \(\omega_{\rho}=2\pi\times 150\,\)Hz (see [14]).

We also note studies of supersolid properties in the thermodynamic regime for bosons with dipole-dipole interactions (DDIs) with \(D=2\) [19; 20] or with general soft-core interactions with \(D=1\) [21], \(D=2\) [8; 22; 23; 24; 25], or \(D=3\) [26; 9; 22]. For the \(D=1\) soft-core system the transition is always continuous1. For the \(D>1\) cases the transition to the supersolid state is generally first order2 and the system properties change abruptly at the transition.

Footnote 1: The discontinuous regime for the \(D=1\) tube-confined dipolar BEC emerges from the three-dimensional character of the system.

Footnote 2: There is a critical point for the \(D=2\) DDI case where the transition is continuous at a particular critical density [20].

Spin-orbit coupled BECs also exhibit a phase transition to a supersolid-like stripe phase, which has been observed in experiments [27]. For this realization the coupling to an optical field produces a \(D=1\) (supersolid) stripe phase (irrespective of the system dimension) and two gapless excitation branches are predicted [28]. The transition from the uniform plane-wave phase to the stripe phase is first order [29; 30], in contrast to the \(D=1\) soft-core case and the dipolar case at intermediate densities [Fig. 1(b)]. Features of the spin-orbit system, such as the speeds of sound and compressibility in the thermodynamic regime, have been theoretically studied across the transition [31; 28; 30]. These features remain unexplored for the \(D=1\) dipolar supersolid. We address this here by examining the behaviour of the excitations, speeds of sound, density response, and compressibility of this system. We focus on the parameter regime where the crystalline order develops continuously [see Fig. 1(b)] and find that the compressibility and the speeds of sound change discontinuously across this transition, indicating that the transition is second order.

## II System and ground states

Here we consider a gas of magnetic bosonic atoms in a radially symmetric tube potential \(V=\frac{1}{2}m\omega_{\rho}^{2}(x^{2}+y^{2})\), where \(\omega_{\rho}\) is the angular trap frequency describing the transverse confinement. The theoretical description of this system is provided by an extended Gross-Pitaevskii equation (eGPE) which includes the leading-order effects of quantum fluctuations [32; 33; 34; 35], and has been extensively used to model supersolid experiments with dipolar BECs (e.g. see [1; 2; 3]). The eGPE energy functional for this system is

\[E=\int d\mathbf{x}\,\psi^{*}\left[h_{\rm sp}+\tfrac{1}{2}g_{s}|\psi|^{2}+\tfrac{1}{2}\Phi_{\rm dd}+\tfrac{2}{5}\gamma_{\rm QF}|\psi|^{3}\right]\psi, \tag{1}\]

where \(h_{\rm sp}=-\frac{\hbar^{2}}{2m}\nabla^{2}+V\) is the single-particle Hamiltonian.
The short-ranged interactions are governed by the coupling constant \(g_{s}=4\pi\hbar^{2}a_{s}/m\), where \(a_{s}\) is the \(s\)-wave scattering length. The long-ranged DDIs are described by the potential

\[\Phi_{\rm dd}(\mathbf{x})=\int d\mathbf{x}^{\prime}\,U_{\rm dd}(\mathbf{x}-\mathbf{x}^{\prime})|\psi(\mathbf{x}^{\prime})|^{2}, \tag{2}\]

where the atoms are polarized along \(y\) with \(U_{\rm dd}(\mathbf{r})=\frac{3g_{\rm dd}}{4\pi r^{3}}\left(1-3y^{2}/r^{2}\right)\). Here \(g_{\rm dd}=4\pi\hbar^{2}a_{\rm dd}/m\), with \(a_{\rm dd}=m\mu_{0}\mu_{m}^{2}/12\pi\hbar^{2}\) being the dipolar length, and \(\mu_{m}\) the atomic magnetic moment. The effects of quantum fluctuations are described by the quintic nonlinearity with coefficient \(\gamma_{\rm QF}=\frac{32}{3}g_{s}\sqrt{a_{s}^{3}/\pi}\,\mathcal{Q}_{5}(\epsilon_{\rm dd})\), where \(\mathcal{Q}_{5}(x)=\Re\{\int_{0}^{1}du\,[1+x(3u^{2}-1)]^{5/2}\}\) [36] and \(\epsilon_{\rm dd}\equiv a_{\rm dd}/a_{s}\).

We constrain the stationary states of Eq. (1) to have an average linear density of \(n\), and ground states are found (following Ref. [14]) by minimising the energy per particle. For the case of modulated (crystalline) states, the system chooses a preferred unit cell size \(L\), and the normalization constraint is \(\int_{\rm uc}dz\int d\mathbf{\rho}\,|\psi|^{2}=nL\), where \(\mathbf{\rho}=(x,y)\) represents the transverse coordinates, and \({\rm uc}\) denotes the unit cell \(z\in[-\frac{1}{2}L,\frac{1}{2}L]\). These stationary states are solutions of the eGPE

\[\mu\psi=\left(h_{\rm sp}+g_{s}|\psi|^{2}+\Phi_{\rm dd}+\gamma_{\rm QF}|\psi|^{3}\right)\psi, \tag{3}\]

where \(\mu\) is the chemical potential.

In Fig. 2 we present results illustrating the transition from a uniform to a spatially modulated state as the \(s\)-wave scattering length is reduced. For the parameters considered in these results the ground state is uniform (a superfluid state) for \(a_{s}>a_{\rm rot}=92.32\,a_{0}\). Here \(a_{\rm rot}\) is the value of the scattering length at which a roton excitation goes soft, as we will discuss in Sec. III (also see [10; 11]). For \(a_{s}<a_{\rm rot}\) the ground state is modulated. We can characterise the strength of the modulation using the density contrast

\[\mathcal{C}=\frac{n_{\rm max}-n_{\rm min}}{n_{\rm max}+n_{\rm min}}, \tag{4}\]

where \(n_{\rm max}\) and \(n_{\rm min}\) are the maximum and minimum of the linear density \(n(z)=\int d\mathbf{\rho}\,|\psi|^{2}\), respectively. Results for \(\mathcal{C}\) show that the density modulation develops continuously as \(a_{s}\) decreases below \(a_{\rm rot}\) [see Fig. 2(b)-(d), Fig. 3(a), and Refs. [10; 12; 13; 14]].

## III Excitations

The elementary quasi-particle excitations are described within the framework of Bogoliubov theory. In this theory the excitations give the small deviations of the condensate field with respect to the ground state as

\[\Psi(\mathbf{x},t)=e^{-i\mu t/\hbar}\left[\psi(\mathbf{x})+\sum_{\nu,q_{z}}\left\{c_{\nu,q_{z}}u_{\nu,q_{z}}(\mathbf{x})e^{-i\omega_{\nu,q_{z}}t}-c_{\nu,q_{z}}^{*}v_{\nu,q_{z}}^{*}(\mathbf{x})e^{i\omega^{*}_{\nu,q_{z}}t}\right\}\right]. \tag{5}\]

Here \(\epsilon_{\nu,q_{z}}=\hbar\omega_{\nu,q_{z}}\) are the quasi-particle energies, \(\hbar q_{z}\) is a quasi-momentum in the first Brillouin zone, i.e. \(q_{z}\in[-\pi/L,\pi/L]\), \(\nu\) is the band index, and \(c_{\nu,q_{z}}\) are the expansion amplitudes.
The quasi-particle amplitudes take the Bloch form

\[u_{\nu,q_{z}}(\mathbf{x})=\bar{u}_{\nu,q_{z}}(\mathbf{x})e^{iq_{z}z},\quad v_{\nu,q_{z}}(\mathbf{x})=\bar{v}_{\nu,q_{z}}(\mathbf{x})e^{iq_{z}z}, \tag{6}\]

where \(\{\bar{u}_{\nu,q_{z}}(\mathbf{x}),\bar{v}_{\nu,q_{z}}(\mathbf{x})\}\) are periodic in the unit cell. More details of the Bogoliubov analysis of the eGPE can be found in Ref. [37].

In Fig. 2 we show some examples of the excitation spectra of the system. The case in Fig. 2(a) is for a uniform state at the transition point \(a_{s}=a_{\rm rot}\). Here the \(z\)-momentum \(\hbar k_{z}\) is a good quantum number for the excitations3. A single Nambu-Goldstone branch exists in the uniform state, corresponding to the lowest-energy excitation band, which is gapless as \(k_{z}\to 0\). This reflects the broken gauge symmetry associated with superfluidity. We observe a fully developed roton-like local minimum in the excitation spectrum that has softened to zero energy. Here we define \(a_{\rm rot}\) as the value of \(a_{s}\) where the roton feature of the uniform state has a minimum at zero energy (i.e. a fully softened roton). We note that \(a_{\rm rot}\) varies with \(n\) and confinement. The wavevector of the softened roton is denoted \(k_{\rm rot}\), and the density modulation first develops at the transition point with a unit cell size set by the roton wavelength, i.e. as \(a_{s}\to a_{\rm rot}\) from below, \(L\to 2\pi/k_{\rm rot}\). To aid our later comparison to the excitations of the modulated ground states, in Fig. 2(a1) we map the uniform state excitation results of subplot (a) onto the positive part of the first Brillouin zone (\(q_{z}\in[-\frac{1}{2}k_{\rm rot},\frac{1}{2}k_{\rm rot}]\)), taking \(k_{\rm rot}\) as the reciprocal lattice vector. The color coding of segments of the lowest excitation band used in (a) is selected to help identify the features in the reduced zone scheme. In particular, we note that the soft roton feature manifests in the reduced zone representation as two additional gapless excitation bands (blue and magenta).

Footnote 3: In the uniform case we can take \(q_{z}=k_{z}\), due to translational invariance.

In Figs. 2(b1)-(d1) we show the excitation spectra corresponding to the modulated ground states shown in (b)-(d), respectively. The case in (b1) is close to the transition, with a weak modulation. Here the spectrum is similar to the roton case [cf. (a1)], but the ground state modulation causes some noticeable changes in the lowest three excitation bands: a gap (for \(q_{z}\to 0\)) develops in the \(\nu=3\) (magenta) band as we move away from the transition, while the other two bands remain gapless, and avoided crossings occur where the \(\nu=2\) (red) and \(\nu=3\) (magenta) bands approach each other.

Figure 2: Density profiles, excitation spectra and structure factors of a dipolar Bose gas in an infinite tube. (a) Excitation spectrum for a uniform system at \(a_{s}=a_{\rm rot}\) where the roton softens. Excitation bands with even parity in \(x\) and \(y\) (solid lines), other bands (dotted lines). (a1) The results from (a) reduced to the first Brillouin zone, with the colours of the lowest three even-symmetry bands corresponding to the segments in (a). (a2) The static structure factor \(S(k_{z})\) (black line) is dominantly contributed to by the lowest band of (a) [i.e. \(S_{1}(k_{z})\), red line, for the \(k_{z}\) range considered] (see Sec. V). Inset shows \(S(k_{z})\) over a wider momentum range.
(b)-(d) Density isosurfaces of crystalline ground states for \(a_{s}<a_{\rm rot}\). Red (blue) isosurface at \(4\times 10^{20}\) m\({}^{-3}\) (\(10^{20}\) m\({}^{-3}\)). Unit cell size \(L\) and density contrast \(\mathcal{C}\) are also indicated. The excitation spectra (b1)-(d1) and static structure factors (b2)-(d2) corresponding to the ground states in (b)-(d), using same line types as in (a1) and (a2), respectively. The static structure factor is shown under like) and individual contributions of the lowest three excitations bands (i.e. \(S_{\nu=1,2,3}\)) [see labels in (c2)]. Results for \({}^{164}\)Dy using \(a_{\rm dd}=130.8\,a_{0}\) for a linear density of \(n=2500\,\mu\)m\({}^{-1}\) and with \(\omega_{\rho}=2\pi\times 150\)Hz. other. For the more strongly modulated cases (c1) and (d1), the three lowest excitation bands separate, and can be unambiguously assigned. We denote the lowest gapless band (blue color) as the phase band, and the higher gapless band (red color) as the density band [see Fig. 2(c1)]. The identification of these bands can be made by assessing the dominant effect of the excitations on the phase or density fluctuations of the system as has been done for dilute supersolid states with soft-core interactions and DDIs (e.g. see [6; 8; 9]). In the spin-orbit coupled stripe phase, similarly two gapless bands emerge, but these are identified as spin and density nature, due to their effect on density and spin fluctuations [28]. ## IV Superfluidity, characteristic excitations and speeds of sound In Fig. 3 we consider characteristic properties of the system across the transition. Subplot (a) shows both the density contrast and the superfluid fraction. We can view the contrast as an order parameter for the crystalline order of the system. The finite superfluid fraction in the modulated state confirms that the system is in a supersolid state. Note that here we have taken the superfluid fraction as the average of the upper and lower bounds developed by Leggett and given by the expressions \[f_{s}^{+} =\frac{L}{n}\left[\int_{\rm{uc}}\frac{dz}{\int d\mathbf{\rho}\,| \psi|^{2}}\right]^{-1}, \tag{7}\] \[f_{s}^{-} =\frac{L}{n}\int d\mathbf{\rho}\left[\int_{\rm{uc}}\frac{dz}{|\psi|^ {2}}\right]^{-1}, \tag{8}\] respectively [38]. These two bounds typically differ by less than a percent from each other and are in good agreement the superfluid fraction obtained from the direct calculation of the nonclassical translational inertia of the system (see [14]). We also consider the behavior of two characteristic excitations which reveal the approach to the transition from either side. In the uniform state the roton excitation plays this role and we define \(\epsilon_{\rm{rot}}\) as the energy of the local minimum in the rotonic feature (e.g. see [11]). As \(a_{s}\) approaches \(a_{\rm{rot}}\) from above \(\epsilon_{\rm{rot}}\to 0\). The softening of this mode leads to a dynamic instability causing the formation of spatial modulation [39; 16]. In the modulated ground state a Higgs-like amplitude mode plays a key role in signifying the approach of the transition. The identification of Higgs-like mode is made with the \(q_{z}\to 0\) part of the third excitation band, because these excitations cause amplitude fluctuations of the crystalline order [15]. Here we define \(\epsilon_{\rm{Higgs}}=\epsilon_{3,q_{z}=0}\) [see Fig. 2(c1)], and as \(a_{s}\) approaches \(a_{\rm{rot}}\) from below \(\epsilon_{\rm{Higgs}}\to 0\), and the modulated order disappears. The inset to Fig. 
3(b) reveals that the roton and Higgs energies both soften with an exponent of \(\frac{1}{2}\), i.e. \(\epsilon\sim\sqrt{|a_{s}-a_{\rm{rot}}|}\), on their respective sides of the transition, consistent with the normal mean-field behaviour of the energy gap at a quantum phase transition [40].

Speeds of sound can be associated with the gapless excitation branches. The slope of the lowest band near \(k_{z}=0\) in the uniform phase identifies the usual Bogoliubov sound for a BEC as [see Fig. 2(a)] \[c_{s}=\frac{1}{\hbar}\left(\frac{\partial\epsilon_{1,k_{z}}}{\partial k_{z}}\right)_{k_{z}\to 0}. \tag{9}\] Similarly, in the modulated phase the two lowest bands can be used to define \[c_{p} =\frac{1}{\hbar}\left(\frac{\partial\epsilon_{1,q_{z}}}{\partial q_{z}}\right)_{q_{z}\to 0}, \tag{10}\] \[c_{d} =\frac{1}{\hbar}\left(\frac{\partial\epsilon_{2,q_{z}}}{\partial q_{z}}\right)_{q_{z}\to 0}, \tag{11}\] as the phase and density speeds of sound, respectively [see Fig. 2(c1)]. Results for the speeds of sound are shown in Fig. 3(c). The speeds are seen to change discontinuously across the transition, which is the basis of our identification of the transition as being second order4. For the density considered here the discontinuity is rather small, with \(c_{d}\) and \(c_{s}\) being almost equal at the transition point [see Fig. 4(c) for an example at a lower density where the discontinuity is larger]. The discontinuity in the speeds of sound arises from the avoided crossing between the low-energy bands [e.g. see Fig. 2(b1)]. As we approach the transition from below the avoided crossing shifts towards \(k_{z}\to 0\) and hence affects the speeds of sound at the transition. We also note that the two speeds of sound in the stripe phase of a spin-orbit-coupled BEC have been identified and studied in Refs. [28; 31]. Footnote 4: In Sec. V we see that this behavior appears as a discontinuity in the compressibility, which is a second derivative of the thermodynamic potential.

Figure 3: Continuous transition between the uniform and modulated states at \(n=2.5\times 10^{3}\mu\text{m}^{-1}\) with \(a_{\rm{rot}}=92.32\,a_{0}\). (a) Density contrast and superfluid fraction, (b) the energies of the roton and Higgs modes, and (c) speeds of sound and critical velocity in the uniform state. The inset to (b) shows that the modes soften as \(\sim\sqrt{|a_{s}-a_{\rm{rot}}|}\) in the approach to the transition, with the dashed lines being a linear fit to the data close to the transition. Results of our general 3D calculations are shown with solid lines, except in the inset to (b), where these results are shown as small circles. For comparison the dotted lines show results of the reduced 3D theory, where \(a_{\rm{rot}}=90.56\,a_{0}\). Other parameters as in Fig. 2.

We also show the critical velocity in the uniform state evaluated using the Landau criterion \[v_{\rm crit}=\min_{k_{z}}\left(\frac{\epsilon_{1,k_{z}}}{\hbar k_{z}}\right). \tag{12}\] For \(a_{s}\) close to the transition the critical velocity is approximately given by \(v_{\rm crit}\approx\epsilon_{\rm rot}/\hbar k_{\rm rot}\) [41], and the softening of the roton causes it to go to zero. The critical velocity and the dynamics of flow past an obstacle have been studied in soft-core models of supersolids [22; 25; 42], but we have not made any extension of those ideas to our system. While our main results in this paper are obtained by full numerical calculations, in Fig. 3 we also show the results of the reduced 3D theory [11; 12; 13].
The reduced theory makes a variational description of the transverse degrees of freedom, and reduces the calculation to an effective one-dimensional model. In general the reduced theory produces qualitatively comparable results, although the speeds of sound (particularly \(c_{s}\) and \(c_{d}\)) tend to be significantly overestimated. This overestimation was also discussed in the context of the uniform-state study presented in Ref. [43]. These results suggest some caution is required in using the reduced theory as a quantitative description of the system excitations.

## V Structure factors and compressibility

In addition to the spectrum of the excitations, our interest here extends to the nature of the density fluctuations of the system, and their connection to compressibility. We can make this connection via the dynamical structure factor, which determines the response of the system to a density-coupled probe, where the probe transfers momentum \(\hbar\mathbf{k}\) and energy \(\hbar\omega\) to the system. For the case of momentum transfer along the \(z\)-axis (i.e. the tube axis), the dynamic structure factor of the \(T=0\) system is [44; 45] \[S(k_{z},\omega)=\sum_{\nu}|\delta n_{\nu,k_{z}}|^{2}\delta\left(\hbar\omega-\hbar\omega_{\nu,q_{z}}\right), \tag{13}\] where \[\delta n_{\nu,k_{z}}=\int_{\rm{uc}}\!dz\int\!d\mathbf{\rho}\left[u_{\nu,q_{z}}^{*}(\mathbf{x})\!-\!v_{\nu,q_{z}}^{*}(\mathbf{x})\right]\!e^{ik_{z}z}\psi(\mathbf{x}). \tag{14}\] In this expression, and others where both \(k_{z}\) and \(q_{z}\) appear, the value of the quasimomentum \(q_{z}\) is fixed by \(k_{z}\) reduced to the first Brillouin zone by an integer number of reciprocal lattice vectors \(2\pi/L\) (also see [46]). It is possible to measure the dynamic structure factor in cold-atom experiments using Bragg spectroscopy, which has been used to probe excitation properties of dipolar BECs [47; 48; 18]. Here our main interest lies in the static structure factor, which is given by \[S(k_{z})\equiv\frac{\hbar}{nL}\int d\omega\,S(k_{z},\omega). \tag{15}\] This can also be directly measured using high-resolution _in situ_ imaging of the density fluctuations, e.g. for the dipolar case see Refs. [49; 16].

The static structure factor over a broad momentum range is shown in the insets of Figs. 2(a2)-(d2). Here a divergence occurs at the reciprocal lattice vector, reflecting the periodic structure of the ground state5. We can examine the contribution from each band to the structure factor, i.e. setting \(S(k_{z})=\sum_{\nu}S_{\nu}(k_{z})\), where \(S_{\nu}(k_{z})=\frac{1}{nL}|\delta n_{\nu,k_{z}}|^{2}\) is the contribution from the \(\nu\)-band. Results focusing on \(S(k_{z})\) and \(S_{\nu}(k_{z})\) for low values of \(k_{z}\) and \(\nu=1,2,3\) are shown in (a2)-(d2). The contributions of higher bands (\(\nu>3\)) are negligible. The avoided crossings in the spectrum (b1) are revealed in the behavior of the \(S_{\nu}\) in (b2), where the weight smoothly transfers between the bands at the avoided crossings. For the cases (c1,d1) where the lowest bands are separated, we see that the density band makes the dominant contribution to \(S(k_{z})\) in the low-\(k_{z}\) limit. As \(k_{z}\) increases the phase-band contribution increases, and \(|\delta n_{\nu,k_{z}}|\) diverges for both the phase and density bands as \(k_{z}\to 2\pi/L\). Footnote 5: For the uniform state a finite peak occurs when the system has a roton, representing enhanced density fluctuations [43; 50]. This peak grows as the roton softens, and diverges when the roton energy goes to zero.
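For readers implementing these diagnostics, the extraction of the sound speeds of Eqs. (9)-(11), the Landau critical velocity of Eq. (12), and the band-resolved structure factor of Eqs. (14)-(15) is straightforward once the Bogoliubov band data are in hand. A minimal numerical sketch, assuming band arrays from a Bogoliubov solver such as that of Ref. [37] and fields sampled on a uniform 3D grid over one unit cell with volume element `dV` (array names are ours; normalization conventions must match the solver's):

```python
import numpy as np

hbar = 1.054571817e-34  # J s

def sound_speed(q, eps):
    """Slope of a gapless band as q -> 0, c = (1/hbar) d(eps)/dq [Eqs. (9)-(11)].
    q: quasimomenta (m^-1) with q[0] the smallest nonzero value; eps: energies (J)."""
    return (eps[1] - eps[0]) / (hbar * (q[1] - q[0]))

def landau_critical_velocity(k, eps):
    """v_crit = min_k eps_1(k)/(hbar k) over the lowest band [Eq. (12)]."""
    k, eps = np.asarray(k), np.asarray(eps)
    mask = k > 0
    return np.min(eps[mask] / (hbar * k[mask]))

def S_nu(u, v, psi, z, kz, n, L, dV):
    """Band contribution S_nu(k_z) = |dn|^2/(nL), with dn from Eq. (14).
    u, v, psi: complex fields on one unit cell; z broadcastable to that grid."""
    dn = np.sum(np.conj(u - v) * np.exp(1j * kz * z) * psi) * dV
    return np.abs(dn) ** 2 / (n * L)
```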
The isothermal compressibility for the tube-confined system can be defined as \[\kappa=\frac{1}{n^{2}}\frac{\partial n}{\partial\mu}, \tag{16}\] and relates to the number fluctuations of the system in a large measurement cell (e.g. see [51; 52; 53]). We determine this directly from the ground-state calculations by evaluating how the chemical potential changes with the average linear density. This expression directly relates to the speed of sound in the uniform system as \[n\kappa=\frac{1}{mc_{s}^{2}} \tag{17}\] [43].

In Fig. 4(a) we show results for the compressibility across the transition for systems of various densities. For reference, the parameter range for the data in this subplot is indicated in the phase diagram in Fig. 1(b). The compressibility exhibits a discontinuous jump of \(\Delta\kappa=\kappa^{-}-\kappa^{+}\) at the transition. Here \(\kappa^{\pm}\) denotes the compressibility at the transition approached from below (\(-\)) or above (\(+\)). Results for \(\Delta\kappa\) are presented in Fig. 4(b), where we see that the discontinuity vanishes at \(n\approx 2.85\times 10^{3}\mu\)m\({}^{-1}\). We also show \(a_{\mathrm{rot}}(n)\) in this plot and observe that it is maximised around the same density value. The maximum in \(a_{\mathrm{rot}}\) arises from the competition between two-body interactions and the quantum fluctuation term [11]. The simultaneous occurrence of these extrema suggests that the vanishing of \(\Delta\kappa\) is also related to this competition.

In Fig. 4(c) we consider the speeds of sound for a lower-density case than the results presented in Fig. 3(b). For reference we have plotted the effective speed of sound \[c_{\kappa}=\frac{1}{\sqrt{mn\kappa}}, \tag{18}\] obtained assuming relationship (17) holds. We see that this value of \(c_{\kappa}\) agrees with \(c_{s}\) on the uniform side of the transition. In the modulated state it lies between the two speeds of sound, although at the transition its value coincides with \(c_{d}\). This indicates that at the transition the phase branch does not contribute to the long-wavelength density fluctuations of the system.

While the compressibility is calculated from the ground-state chemical potential via Eq. (16), we can link it to the excitations via the dynamical structure factor using the relation \[\int d\omega\,\frac{S(k_{z},\omega)}{\omega}=\frac{1}{2}\chi(k_{z}), \tag{19}\] where \(\chi\) is the static density response function. From Eq. (13) we see that \(\chi(k_{z})=\sum_{\nu}\chi_{\nu}(k_{z})\), where \[\chi_{\nu}(k_{z})\equiv\frac{2|\delta n_{\nu,k_{z}}|^{2}}{\epsilon_{\nu,q_{z}}}. \tag{20}\] In the long-wavelength limit the compressibility sum rule [41; 54] is \[\lim_{k_{z}\to 0}\chi(k_{z})=n\kappa. \tag{21}\] We verify that this relationship holds in the inset to Fig. 4(c). Similar to our treatment of the static structure factor in Figs. 2(a2)-(d2), we can examine the contribution of each of the gapless bands to (19), i.e. \(\chi_{\nu=1,2}\). The results, shown in the inset to Fig. 4(c), reveal that in the modulated state the density and phase bands both significantly contribute to \(\chi\). The sum of these two saturates the contribution to \(\chi\) for the range of \(k_{z}\) values shown in the inset. In contrast, our earlier results showed that for low \(k_{z}\) the magnitude of \(\delta n_{\nu,k_{z}}\) for the density band was significantly larger than that of the phase band6 [e.g. see Figs. 2(c2)-(d2)].

Figure 4: (a) Compressibility across the transition for various linear densities.
(b) The magnitude of the compressibility jump at the transition point (black line, with markers, left axis) compared to \(a_{\rm rot}\) (blue dashed line, right axis) as a function of density. (c) For \(n=1.5\times 10^{3}\mu\)m\({}^{-1}\) we show the speeds of sound (lines and markers), compared against \(c_{\kappa}\) (blue solid line). Inset: the contribution of the excitation branches to the compressibility sum rule for \(a_{s}=87.9\,a_{0}\). Lowest (phase) excitation band (blue line), second (density) band (red line) and total of all bands (black line). Dotted horizontal line shows \(n\kappa\) for reference. Other parameters as in Fig. 2.

This behavior is different from the spin-orbit stripe phase case, where the density-band excitations (propagating parallel to the crystal) exhaust the compressibility sum rule [31]. The role of the phase band in increasing the compressibility relative to the density band was discussed in [9] for a 3D soft-core supersolid, although no direct comparison of the elementary excitations to the compressibility was made.

## VI Conclusions

In this paper we have studied the ground states and excitations of a dipolar BEC as it transitions to a supersolid state in an infinite tube potential. We have focused on the regime where the \(D=1\) crystalline order appears continuously, as characterized by the density contrast order parameter or the superfluid fraction. The compressibility and the speeds of sound obtained from the gapless energy bands are generally discontinuous across the transition, consistent with the transition being second order.

It is interesting to consider the prospects for measuring our predictions in experiments. Recent experiments with optical lattices have measured the speed of sound and determined the superfluid fraction using Bragg spectroscopy [55] and collective mode excitation [56]. These techniques could be applied to the dipolar system, noting that Bragg spectroscopy has been used to measure the anisotropy of sound in a dipolar BEC [47], and to probe the free-particle excitations of a dipolar supersolid [18]. Also, various forms of collective mode spectroscopy have already been demonstrated in this system [4, 5, 6, 16]. The compressibility relates to the number fluctuations in a large measurement cell [51, 52, 53] and has been measured experimentally in ultracold-atom experiments using _in-situ_ density and density-fluctuation measurements [57, 58, 59, 60] (also see [61]). Related measurements have been performed in dipolar BECs and supersolids to determine the static structure factor [49, 16] and could be extended to probe compressibility.

###### Acknowledgements.
PBB acknowledges use of New Zealand eScience Infrastructure (NeSI) high performance computing facilities. PBB and DB acknowledge support from the Marsden Fund of the Royal Society of New Zealand. LC acknowledges support from the European Research Council (ERC) under the European Union's Horizon Europe research and innovation program under grant number 101040688 (project 2DDip), and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through project-ID 273811115 (SFB1225 ISOQUANT) and under Germany's Excellence Strategy EXC2181/1-390900948 (the Heidelberg Excellence Cluster STRUCTURES). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council.
2303.01032
ESceme: Vision-and-Language Navigation with Episodic Scene Memory
Vision-and-language navigation (VLN) simulates a visual agent that follows natural-language navigation instructions in real-world scenes. Existing approaches have made enormous progress in navigation in new environments, such as beam search, pre-exploration, and dynamic or hierarchical history encoding. To balance generalization and efficiency, we resort to memorizing visited scenarios apart from the ongoing route while navigating. In this work, we introduce a mechanism of Episodic Scene memory (ESceme) for VLN that wakes an agent's memories of past visits when it enters the current scene. The episodic scene memory allows the agent to envision a bigger picture of the next prediction. This way, the agent learns to utilize dynamically updated information instead of merely adapting to the current observations. We provide a simple yet effective implementation of ESceme by enhancing the accessible views at each location and progressively completing the memory while navigating. We verify the superiority of ESceme on short-horizon (R2R), long-horizon (R4R), and vision-and-dialog (CVDN) VLN tasks. Our ESceme also wins first place on the CVDN leaderboard. Code is available: \url{https://github.com/qizhust/esceme}.
Qi Zheng, Daqing Liu, Chaoyue Wang, Jing Zhang, Dadong Wang, Dacheng Tao
2023-03-02T07:42:07Z
http://arxiv.org/abs/2303.01032v3
# ESceme: Vision-and-Language Navigation with Episodic Scene Memory

###### Abstract

Vision-and-language navigation (VLN) simulates a visual agent that follows natural-language navigation instructions in real-world scenes. Existing approaches have made enormous progress in navigation in new environments, such as beam search, pre-exploration, and dynamic or hierarchical history encoding. To balance generalization and efficiency, we resort to memorizing visited scenarios apart from the ongoing route while navigating. In this work, we introduce a mechanism of Episodic Scene memory (ESceme) for VLN that wakes an agent's memories of past visits when it enters the current scene. The episodic scene memory allows the agent to envision a bigger picture of the next prediction. This way, the agent learns to utilize dynamically updated information instead of merely adapting to static observations. We provide a simple yet effective implementation of ESceme by enhancing the accessible views at each location and progressively completing the memory while navigating. We verify the superiority of ESceme on short-horizon (R2R), long-horizon (R4R), and vision-and-dialog (CVDN) VLN tasks. Our ESceme also wins first place on the CVDN leaderboard. Code is available: [https://github.com/qizhust/esceme](https://github.com/qizhust/esceme).

## 1 Introduction

With breakthroughs in computer vision and natural language understanding, the embodiment hypothesis that an intelligent agent is born from its interaction with environments [35] is now attracting more and more attention to embodied AI tasks such as vision-and-language navigation (VLN). VLN was first defined in [3] towards the goal of a robot carrying out general verbal instructions, where an agent is required to follow natural-language instructions based on what it sees and adapt to previously unseen environments. VLN has developed various settings, such as fine-grained and short-horizon navigation (e.g. R2R [3] and RxR [21]), long-horizon navigation (e.g. R4R [18]), vision-and-dialogue navigation (e.g. CVDN [37]), and navigation with high-level instructions (e.g. REVERIE [32]).

Compared with non-embodied VL tasks such as visual question answering [4] and visual captioning [9, 44], VLN agents suffer from changing observations in a sequence of decision-making and from domain shift in unseen scenarios. A vanilla Seq2Seq pipeline [3] that implicitly encodes path history with LSTMs [15] shows moderate navigating ability. Since then, VLN performance has been considerably improved by pre-training [14, 17, 7, 33], data augmentations [13, 36, 22], and algorithms that explicitly track past decisions along the trajectory [7, 38, 8]. These methods learn enhanced representations by training VLN agents in each episode but ignore the dynamics of navigating over the whole data.

Figure 1: The blue trajectory shows an agent carrying out instruction 1. The next time, the agent enters this scene to conduct the second instruction along the red path. ESceme allows it to recall the visited nodes (i.e., the blue ones) from where it is standing (A) and choose the neighboring node B\({}_{1}\) that will see “the white bookshelf” in one more step at C. Finally, it navigates along the red dashed route and reaches the target.

Different strategies, including modified beam search [13] and pre-exploration [40, 36, 28, 46], are devised to specifically increase adaptation to unseen environments at the cost of efficiency.
Specifically, beam search significantly extends route length and involves many more interactions with the environment; pre-exploration takes extra steps to gather information and train the agent with auxiliary objectives before it can conduct given instructions. Such strategies incur burdensome time and computational expenses in practical usage.

In this work, we propose a navigation mechanism with Episodic Scene memory (ESceme) to balance generalization and efficiency by exploiting the dynamics of navigating all the episodes. ESceme requires no extra annotations or heavy computation and is agent-agnostic. We encode observation, instruction, and path history separately and update the scene memory during navigation via candidate enhancing. By preserving the memory among episodes, ESceme envisions the agent seeing a bigger picture in each decision. This way, the agent learns to utilize dynamically updated information instead of merely adapting to static observations. Then during inference, it predicts actions with the progressively completed memory. A demonstration is shown in Figure 1. When carrying out an instruction at Location A, the agent must select one of the adjacent nodes B\({}_{1}\)-B\({}_{5}\) to navigate to. It recalls the episodic scene memory, i.e. the blue route of a completed trajectory, and chooses Node B\({}_{1}\) that will see _"the white bookshelf"_ in one more step at C.

We verify the superiority of ESceme in short-horizon navigation with fine-grained instructions (R2R), long-horizon navigation (R4R), and vision-and-dialog navigation (CVDN). We find that ESceme notably benefits navigation with longer routes (R4R and CVDN), promoting both successful reaching and path fidelity. Our method achieves the highest Goal Progress in the CVDN challenge. Besides a fair comparison with existing approaches under a single run, we test the performance with an approximately complete memory, where the agent fully updates its scene memory in the first round of navigation over all the episodes. We denote it as ESceme*, which serves as the upper bound of ESceme. We observe a further improvement in ESceme*, which indicates that a more complete memory magnifies the advantage of ESceme. We hope this work can inspire further explorations in modeling episodic scene memory for VLN.

Since ESceme does not introduce any extra time or steps before following the instruction in inference, it is fair to compare it with its counterparts in the single-run setting. Very different from pre-exploration, which optimizes the parameters of an agent before solving the task, ESceme only renews its episodic memory while conducting instructions and requires no back-propagation operations. Moreover, ESceme neither involves beam search nor changes the local action space in sequential decision-making. These properties make ESceme both efficient and effective in real-world use. Our contributions are summarized as follows:
* We devise the first navigation mechanism with episodic scene memory (ESceme) for VLN to balance generalization and efficiency.
* We provide a simple yet effective implementation of ESceme via candidate enhancing, tested with two navigation architectures and two inferring strategies.
* We verify the superiority of ESceme in short-horizon (R2R), long-horizon (R4R), and vision-and-dialog (CVDN) navigation, and achieve a new state-of-the-art.

## 2 Related work

### Vision-and-language navigation

Since Anderson _et al_.
[3] defined the VLN task and provided an LSTM-based sequence-to-sequence baseline (Seq2Seq), numerous approaches have been developed. A branch of methods improves navigation via data augmentation, such as SF [13], EnvDrop [36], and EnvEdit [22]. As for agent training, Wang _et al_. [41] model the environment to provide planned-ahead information during navigation. RCM [40] provides an intrinsic reward for reinforcement learning via an instruction-trajectory matching critic. Wang _et al_. [42] jointly train an agent on VLN and vision-dialog navigation (MT-RCM+EnvAg). To fully use available semantic information in the environment, AuxRN [46] devises four self-supervised auxiliary reasoning tasks. TD-STP [45] introduces an extra target location estimation during finetuning to achieve reliable path planning. Many methods explore more effective feature representations and architectures, such as PTA [10], OAAM [31], NvEM [1], RelGraph [16], MTVM [25], and SEvol [6].

Inspired by the breakthrough of large-scale pre-trained BERT [20] in natural language processing tasks, PRESS [23] replaces RNNs with pre-trained BERT to encode instructions and achieves a non-trivial improvement in unseen environments. PREVALENT [14] pre-trains BERT from scratch using image-text-action triplets and further boosts the performance. RecBERT [17] integrates a recurrent unit into a BERT model to be time-aware. Chen _et al_. [7] propose the first VLN network that allows a sequence of historical memory and can be optimized end-to-end (HAMT). HOP [33] designs trajectory order modeling and group order modeling tasks to model temporal order information in pre-training. CSAP [43] proposes trajectory-conditioned masked fragment modeling and contrastive semantic-alignment modeling tasks for pre-training. ADAPT [24] explicitly learns action-level modality alignment with action prompts. There are also some works specially designed for vision-and-dialog navigation, such as VISITRON [34], SCoA [47], and CMN [48].

### Exploration strategies in VLN

As the navigation graph is pre-defined in discrete VLN, diverse strategies beyond the regularly used single run are adopted. For example, Fried _et al_. [13] modify the standard beam search to select the final navigation route, which notably increases navigation success at the cost of unbearable trajectory lengths. More efficient pre-exploration methods have been studied. For instance, a progress monitor is trained to discard unfinished trajectories during inference [26]. Ma _et al_. [27] learn a regret module to decide when to backtrack. Ke _et al_. [19] compare partial paths while considering global information and backtrack only when necessary. AcPercep [39] learns an exploration policy to gather visual information for navigation. Although these methods improve searching efficiency, they heavily depend on manually designed or heuristic rules. Deng _et al_. [11] define a global action space for the first time and build a graphical representation of the environment for elegant exploration/backtracking. Wang _et al_. [38] extend EnvDrop [36] with an external structured scene memory (SSM) to promote exploration in the global action space. Pre-exploration, which allows an agent to pre-explore unseen environments before navigating, is first introduced in [40] as a setting different from single-run and beam search. The obtained information functions in diverse ways. RCM [40] uses the exploration experience in self-supervised imitation learning.
EnvDrop [36] exploits the environment information for data augmentation via back-translation. VLN-BERT [28] provides the agent with a global view for optimal route selection. AuxRN [46] fine-tunes the agent in unseen environments with auxiliary tasks.

## 3 Method

### Problem formulation

Given an instruction \(X_{i}\), e.g. _"Turn around and walk to the right of the room..."_, an agent starts from the initial location of route \(R_{i}\). It observes a panoramic view of the environment \(Y_{i}\). The panoramic view consists of \(K{=}36\) single viewpoints, each of which is accompanied by an orientation \((\theta,\phi)\) indicating heading and elevation and a binary navigable signal. The agent selects a viewpoint from the navigable ones and moves to the next location with new observations. This process repeats until the agent takes the STOP action. In a regular VLN task, there is a set of training samples \(\mathcal{D}=\{(Y_{1},X_{1},R_{1}),...,(Y_{N_{1}},X_{N_{1}},R_{N_{1}})\}\), where \((X_{i},R_{i})\) is the instruction-route pair in an environment \(Y_{i}\). The set \(\{Y_{1},...,Y_{N_{1}}\}\) composes the seen environments during training. An agent is expected to learn navigation with \(\mathcal{D}\) and carry out instructions in unseen scenarios given by \(\mathcal{D}^{u}=\{(Y_{1}^{u},X_{1}),...,(Y_{N_{2}}^{u},X_{N_{2}})\}\). The set \(\{Y_{1}^{u},...,Y_{N_{2}}^{u}\}\) composes the unseen environments for test.

Figure 2: An overview of the **E**pisodic **Sc**ene **me**mory mechanism for VLN. On the left is partial episodic memory for the current scene, which gets updated in navigation 1) following the previous instruction, i.e. the blue route, and 2) following the current instruction from Step 1 to \(t-1\), i.e. the solid red trajectory. The cyan nodes are those viewed but not visited. The shadow box shows the memory of node \(\textbf{B}_{1}\), which has six adjacent neighbors, i.e. A, \(\textbf{B}_{2}\), \(\textbf{B}_{5}\), C, D, and E. The integration of these nodes constitutes the memory of \(\textbf{B}_{1}\). At Step \(t\), the agent stands at Node A and is expected to choose one node from \(\textbf{B}_{1}\) to \(\textbf{B}_{5}\). Given observations from the K views, each view retrieves its memory in ESceme, producing \(\{\textbf{m}_{1},...,\textbf{m}_{K}\}\). The memory representation then fuses with the original encoded observations, which yields \(\{\textbf{o}_{1},...,\textbf{o}_{K},\textbf{o}_{s}\}\), where \(\textbf{o}_{s}\) is the representation for STOP. The enhanced observations, instruction text, and history from Step 1 to \(t-1\) compose the input to a navigation network to predict the action \(a_{t}=i\in\{1,...,K,s\}\). Generally, a navigation network uses the encoded features of the original K views as the input to the cross-modal encoder, i.e. the output ①. Our ESceme exploits the enhanced observations from ②.

For a sequence prediction problem, history is an important source of information apart from observations and instructions. The right part of Figure 2 shows a decision step
Together with history representations \(\{\mathbf{h}_{1},...,\mathbf{h}_{t-1}\}\) from the history encoder and text representations \(\{\mathbf{x}_{cls},\mathbf{x}_{1},...,\mathbf{x}_{L}\}\) from the instruction encoder, the features of the observations 1 are input into a cross-modal encoder for multi-modal fusion. A predictor block takes in the cross-modal representations \(\{\mathbf{o}^{\prime}_{1},...,\mathbf{o}^{\prime}_{K},\mathbf{o}^{\prime}_{s}\}\), \(\{\mathbf{h}^{\prime}_{1},...,\mathbf{h}^{\prime}_{t-1}\}\), and \(\{\mathbf{x}^{\prime}_{cls},\mathbf{x}^{\prime}_{1},...,\mathbf{x}^{\prime}_{L}\}\) to predict action \(a_{t}\). Due to the difference between \(\{Y_{1},...,Y_{N_{1}}\}\) and \(\{Y^{u}_{1},...,Y^{u}_{N_{2}}\}\), an agent trained in the above way suffers from decreased decision ability. The mistake accumulates along the path, which incurs a heavy drop in successful navigation in new environments. Since strategies such as pre-exploration and beam search that exploit extra clues in a new scene are too expensive for a deployed robot, we propose a mechanism of episodic scene memory to balance accuracy and efficiency. Figure 2 provides an overview of the proposed ESceme mechanism. By retrieving episodic memory for the \(K\) views at Step \(t\), ESceme replaces the vanilla encoded observations with enhanced representations for cross-modal encoding and action prediction, i.e. 1\(\rightarrow\)2. In the following sections, we detail how to build the episodic scene memory and promote observations with the memory in navigation. ### Episodic scene memory construction We initialize the episodic memory of Scene \(Y\) with an empty graph \(\mathcal{G}^{(0)}_{Y}=(\mathcal{V}^{(0)}_{Y}{=}\emptyset,~{}\mathcal{E}^{(0)}_ {Y}{=}\emptyset)\) if an agent never sees the scene. Namely, for the first instruction in Scene \(Y\), an agent starts navigation with an empty episodic memory. As shown in Figure 2(a), the start location has four neighbors and is added to \(\mathcal{G}_{Y}\) at the end of \(t{=}1\) by \(\mathcal{V}^{(1)}_{Y}=\{V_{1}\}\). Node feature \(\mathbf{m}_{V_{1}}\) is an integration of its neighbors, \[\mathbf{m}_{V_{1}}=\text{pooling}(\mathbf{f}_{V_{1,i}}), \tag{1}\] where \(i{\in}\{1,2,3,4\}\) in Figure 2(a), \(\mathbf{f}_{V_{1,i}}\in\mathbb{R}^{d}\) is \(d\)-dim plain representations of the \(i\)-th neighbor view from the observation encoder, and \(\mathbf{m}_{V_{1}}\in\mathbb{R}^{d}\). The pooling function can be either _max_ or _mean_ pooling along the number of features. It is worth noting that obtaining \(\mathbf{f}_{V_{1,i}}\) does not involve extra computation since these features have been calculated in offline feature extraction. The agent selects its right neighbor to navigate, and at the end of \(t{=}2\), the visited node is added to \(\mathcal{G}_{Y}\) by \(\mathcal{V}^{(2)}_{Y}{=}\{V_{1},V_{2}\},~{}\mathcal{E}^{(2)}_{Y}{=}\{e_{12}\}\), with node feature \(m_{V_{2}}\) calculated similarly as Eq. (1). We set all edges \(e_{jk}{=}1\). While following the first instruction, the agent updates its episodic scene memory \(\mathcal{G}_{Y}\) accordingly, i.e. the green nodes and edges in Figures 2(b) and 2(c). At the end of \(t=5\), \(\mathcal{V}^{(5)}_{Y}=\{V_{1},V_{2},...,V_{5}\}\), \(\mathcal{E}^{(5)}_{Y}=\{e_{12},e_{23},e_{34},e_{45}\}\). When the agent is directed to the second instruction in Scene \(Y\), its memory in previous visits is preserved in \(\mathcal{G}_{Y}\) and is updated at the end of each time step accordingly as Figures 2(d) and 2(e). 
Figure 3: Episodic memory construction of a scene during navigation. ESceme at the beginning of each time step is presented in the figures; it comprises the green nodes and edges and is empty at the beginning of \(t=1\). The blue nodes indicate the current location while following the first instruction at each time step, and the red ones correspond to the second instruction. The small cyan nodes mark the remaining navigable viewpoints of the current location. Nodes with a green boundary are the viewpoints chosen at each time step. ESceme at the end of that time step is updated with the green-boundary node and the dashed lines connecting it to the existing nodes. Please refer to Figure 1 for a complete global graph of the scene, which is unavailable to the agent either in navigation or in ESceme construction.

In Figure 3(f), since the agent's location A has been added to ESceme while conducting the first instruction, there is no update to \(\mathcal{G}_{Y}\). The agent stores episodic memory for each scene separately in similar ways. Therefore, we omit the subscript \(Y\) for simplicity.

### ESceme navigation by candidate enhancing

Besides information from the instruction, current observation, and route history, an agent can refer to its episodic scene memory in decision-making at each step. Intuitively, the memory can be injected into the cross-modal encoder via a separate branch. We denote this solution as Graph Encoding (GE) and list experimental results in the ablation studies and the framework in the supplementary material. We empirically find that GE does not promote navigation in novel scenarios and infer that this branch does not align well with the remaining ones to provide complementary information.

Since the node representation in ESceme integrates information within the neighborhood, it is expected to envision the agent with a bigger picture of the current location. Therefore, we devise a candidate-enhancing (CE) mechanism to improve navigation. A flowchart of CE is shown in Figure 2. Faced with \(K\) candidate views at Step \(t\), the agent retrieves their representations \(\mathbf{m}_{k},\ k\in\{1,...,K\}\) from episodic memory \(\mathcal{G}^{(t-1)}\), \[\mathbf{m}_{k}=\left\{\begin{array}{ll}\mathbf{m}_{V_{j}}&\text{if the $k$-th view is $V_{j}\in\mathcal{V}^{(t-1)}$}\\ \mathbf{0}&\text{otherwise.}\end{array}\right. \tag{2}\] Then the Fusion block integrates the ESceme representations with the plain features \(\{\mathbf{f}_{1},...,\mathbf{f}_{K}\}\) to produce enhanced candidate viewpoints, \[\mathbf{o}_{k}=\text{MLP}([\mathbf{m}_{k};\mathbf{f}_{k}]), \tag{3}\] where \([:,:]\) denotes concatenation along the feature dimension. The MLP function is a two-layer non-linear projection from \(\mathbb{R}^{2d}\) to \(\mathbb{R}^{d}\). Following [7, 45], a type embedding that distinguishes visual and linguistic signals, a navigable embedding that indicates the navigability of each candidate view, and an orientation encoding are added to \(\mathbf{o}_{k}\). A zero vector \(\mathbf{o}_{s}\in\mathbb{R}^{d}\) is appended as the feature for the STOP action. Finally, together with encoded history features, the enhanced candidate representations \(\{\mathbf{o}_{1},...,\mathbf{o}_{K},\mathbf{o}_{s}\}\) are input to the cross-modal encoder to merge linguistic information from the encoded text features.
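To make the fusion of Eq. (3) concrete, a PyTorch sketch of the candidate-enhancing block follows; this is our simplified reading with \(d=768\), and the exact module layout in the released code may differ:

```python
import torch
import torch.nn as nn

class CandidateEnhancer(nn.Module):
    """Eq. (3): o_k = MLP([m_k; f_k]), a two-layer projection R^{2d} -> R^d."""

    def __init__(self, d=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, f, m):
        # f, m: (batch, K, d) plain candidate features and retrieved memory;
        # the zero STOP feature o_s is appended outside this module.
        return self.mlp(torch.cat([m, f], dim=-1))
```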
The agent predicts the distribution of action \(a_{t}\) via a two-layer non-linear Predictor block, \[P(a_{t}{=}k{\in}\{1,...,K,s\})=\frac{e^{\mathbf{o}_{k}^{\prime}\odot\mathbf{x}_{cls}^{\prime}}}{\sum_{j\in\{1,...,K,s\}}e^{\mathbf{o}_{j}^{\prime}\odot\mathbf{x}_{cls}^{\prime}}}, \tag{4}\] where \(\odot\) is element-wise multiplication. Following [36, 7], we train the framework end-to-end by a mixture of Imitation Learning and Reinforcement Learning (A2C [30]) losses, \[\mathcal{L}=-\alpha\sum_{t=1}^{T^{*}}\log P(a_{t}{=}a_{t}^{*})-\sum_{t=1}^{T}\log P(\tilde{a}_{t})(r_{t}{-}v_{t}), \tag{5}\] where \(T^{*}\) and \(T\) are the lengths of the annotated route and the predicted path, respectively. \(\tilde{a}_{t}\) is the sampled action, \(r_{t}\) is the discounted reward, and \(v_{t}\) is the state value given by a two-layer (MLP) critic network.

## 4 Experiments

### Experimental setup

**Datasets and metrics.** We conduct experiments on the following three VLN tasks for evaluation. (1) Short-horizon navigation with fine-grained instructions. R2R [3] is built on Matterport3D [5] and has 7,189 direct-to-goal trajectories with an average length of 10 m. Each path is associated with three instructions of 29 words on average. The train, val seen, val unseen, and test unseen splits include 61, 56, 11, and 18 houses, respectively. (2) Long-horizon navigation with fine-grained instructions. R4R [18] is generated by joining existing trajectories in R2R with others that start close by where they end. Compared to R2R, it has longer paths and instructions and reduced shortest-path bias. The train, val seen, and val unseen splits have 233,613, 1,035, and 45,162 samples, respectively. (3) Vision-dialog navigation. CVDN [37] requires an agent to navigate given a target object and a dialog history. It has 7k trajectories and 2,050 navigation dialogs, where the paths and language contexts are also longer than those in R2R. The train, val seen, val unseen, and test splits contain 4,742, 382, 907, and 1,384 instances, respectively.

Following standard criteria [7, 3, 2], we evaluate the R2R dataset with Trajectory Length (TL), Navigation Error (NE), Success Rate (SR), and Success weighted by Path Length (SPL). TL is the average length of an agent's navigation route in meters, NE is the mean shortest-path distance between the agent's stop location and the target, and SR measures the ratio of navigations that stop less than three meters from the goal. SPL normalizes SR by the ratio between the ground-truth path length and the navigated one, which balances accuracy and efficiency and is the key metric for the R2R dataset. We adopt three additional metrics, Coverage weighted by Length Score (CLS), normalized Dynamic Time Warping (nDTW), and Success weighted by nDTW (SDTW), to assess path fidelity on the R4R dataset. As for vision-dialog navigation on CVDN, the primary evaluation metric is Goal Progress (GP) in meters.

**Implementation details.** We adopt the encoders from HAMT [7] in comparison by default, where the text, history, and cross-modal encoders have nine, two, and four transformer layers, respectively. Features of single views are extracted offline using the finetuned ViT-B/16 released by [7]. For a fair comparison, we set the feature dimension \(d{=}768\), the ratio of imitation learning loss \(\alpha{=}0.2\), and train the ESceme framework for 100K iterations on each dataset with a batch size of 8 and a learning rate of 1e-5. All the experiments run on a single NVIDIA V100 GPU.
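For reference, the two headline R2R metrics can be computed from per-episode statistics as follows (a short sketch we wrote from the standard definitions in [2]):

```python
import numpy as np

def sr_spl(ne, path_len, shortest_len, success_radius=3.0):
    """Success Rate and Success weighted by Path Length.

    ne           : final navigation error to the goal, in meters
    path_len     : length of the executed trajectory (TL), in meters
    shortest_len : shortest-path length from start to goal, in meters
    """
    ne, path_len, shortest_len = map(np.asarray, (ne, path_len, shortest_len))
    success = (ne < success_radius).astype(float)
    spl = success * shortest_len / np.maximum(path_len, shortest_len)
    return success.mean(), spl.mean()
```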
We adopt max pooling and single-run inference by default in comparison with other methods, and provide the results of mean pooling and of inferring twice in the ablation studies and the supplementary material, with qualitative examples and failure cases included.

### Comparison to state-of-the-art

**Results on R2R dataset.** Table 1 compares the proposed ESceme with existing methods on the R2R dataset. We can see that the pretraining-finetuning paradigm (e.g. RecBERT [17], HAMT [7], ADAPT [24], CSAP [43], TD-STP [45]) largely improves the performance of VLN in unseen environments. ESceme achieves the highest SPL on all three splits. It surpasses the baseline model HAMT [7] by about 5% SPL on the validation and test unseen environments and even outperforms TDSTP [45], which involves auxiliary training tasks. Besides, ESceme brings a relative decrease of 6.4% and 4.1% in NE on the validation and test unseen splits, respectively. The results demonstrate the efficacy of episodic scene memory in generalization to unseen scenarios with short instructions.

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Validation Seen} & \multicolumn{4}{c|}{Validation Unseen} & \multicolumn{4}{c}{Test Unseen} \\ & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) \\ \hline Seq2Seq [3] & 11.33 & 6.01 & 39 & - & 8.39 & 7.81 & 22 & - & 8.13 & 7.85 & 20 & 18 \\ SF [13] & - & 3.36 & 66 & - & - & 6.62 & 35 & - & 14.82 & 6.62 & 35 & 28 \\ AcPercep [39] & 19.7 & 3.20 & 70 & 52 & 20.6 & 4.36 & 58 & 40 & 21.6 & 4.33 & 60 & 41 \\ PRESS [23] & 10.57 & 4.39 & 58 & 55 & 10.36 & 5.28 & 49 & 45 & 10.77 & 5.49 & 49 & 45 \\ SSM [38] & 14.7 & 3.10 & 71 & 62 & 20.7 & 4.32 & 62 & 45 & 20.4 & 4.57 & 61 & 46 \\ EnvDrop [36] & 11.00 & 3.99 & 62 & 59 & 10.70 & 5.22 & 52 & 48 & 11.66 & 5.23 & 51 & 47 \\ OAAM [31] & 10.20 & - & 65 & 62 & 9.95 & - & 54 & 50 & 10.40 & - & 53 & 50 \\ AuxRN [46] & - & 3.33 & 70 & 67 & - & 5.28 & 55 & 50 & - & 5.15 & 55 & 51 \\ PREVALENT [14] & 10.32 & 3.67 & 69 & 65 & 10.19 & 4.71 & 58 & 53 & 10.51 & 5.30 & 54 & 51 \\ RelGraph [16] & 10.13 & 3.47 & 67 & 65 & 9.99 & 4.73 & 57 & 53 & 10.29 & 4.75 & 55 & 52 \\ NvEM [1] & 11.09 & 3.44 & 69 & 65 & 11.83 & 4.27 & 60 & 55 & 12.98 & 4.37 & 58 & 54 \\ NvEM+SEvol [6] & 11.97 & 3.56 & 67 & 63 & 12.26 & 3.99 & 62 & 57 & 13.40 & 4.13 & 62 & 57 \\ CSAP [43] & 11.29 & 2.80 & 74 & 70 & 12.59 & 3.72 & 65 & 59 & 13.30 & 4.06 & 62 & 57 \\ RecBERT [17] & 11.13 & 2.90 & 72 & 68 & 12.01 & 3.93 & 63 & 57 & 12.35 & 4.09 & 63 & 57 \\ ADAPT [24] & 11.39 & 2.70 & 74 & 69 & 12.33 & 3.66 & 66 & 59 & 13.16 & 4.11 & 63 & 57 \\ HOP [33] & 11.26 & 2.72 & 75 & 70 & 12.27 & 3.80 & 64 & 57 & 12.68 & 3.83 & 64 & 59 \\ TDSTP [45] & - & **2.34** & **77** & **73** & - & **3.22** & **70** & 63 & - & **3.73** & **67** & 61 \\ HAMT [7] & 11.15 & 2.51 & 76 & 72 & 11.46 & 3.62 & 66 & 61 & 12.27 & 3.93 & 65 & 60 \\ \hline ESceme (Ours) & 10.65 & 2.57 & 76 & **73** & 10.80 & 3.39 & 68 & **64** & 11.89 & 3.77 & 66 & **63** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art methods on the R2R dataset. ESceme (Ours) is built with the HAMT [7] architecture by default.
\begin{table} \begin{tabular}{l c c c} \hline \hline Method & Val Seen & Val Unseen & Test Unseen \\ \hline Seq2Seq [3] & 5.92 & 2.10 & 2.35 \\ PREVALENT [14] & - & 3.15 & 2.44 \\ CMN [48] & 7.05 & 2.97 & 2.95 \\ VISITRON [34] & 5.11 & 3.25 & 3.11 \\ HOP [33] & - & 4.41 & 3.24 \\ SCoA [47] & 7.11 & 2.85 & 3.31 \\ MT-RCM+EnvAg [42] & 5.07 & 4.65 & 3.91 \\ HAMT [7] & 6.91 & 5.13 & 5.58 \\ \hline ESceme (Ours) & **8.34** & **5.42** & **5.99** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of Goal Progress (GP) in meters on the CVDN dataset.

**Results on R4R dataset.** We evaluate the proposed ESceme on the R4R dataset to examine whether the gain in generalization holds in long-horizon navigation tasks. The results are listed in Table 2. Our ESceme outperforms the existing state-of-the-art by a large margin, i.e. a relative improvement of 6.4% in SPL, 7.0% in CLS, 7.3% in nDTW, and 9.1% in SDTW. This indicates that ESceme improves not only navigation success but also path fidelity. Although good at carrying out short instructions, TDSTP [45] suffers a heavy drop in long-horizon navigation regarding path fidelity compared with its baseline model HAMT [7]. It reveals that goal-related auxiliary tasks such as target prediction benefit reaching the target location but undermine the ability to follow instructions. Equipped with ESceme, an agent is better able to travel the expected route in long-horizon navigation. Besides, a consistent advantage of pretraining-based methods can be observed on this dataset.

**Results on CVDN dataset.** Table 3 compares ESceme with state-of-the-art methods on the vision-and-dialog navigation task. CVDN provides longer instructions and trajectories than R2R and more complicated instructions than R4R. The proposed ESceme achieves the best goal progress in both seen and unseen scenarios and wins first place on the leaderboard. HAMT [7] shows an obvious advantage over other pretraining-based methods such as PREVALENT [14], and even surpasses those counterparts specially designed for vision-and-dialog navigation, e.g. CMN [48], VISITRON [34], and SCoA [47]. Our ESceme brings a relative improvement of 20.7%, 5.7%, and 7.3% over the baseline HAMT [7] in the val seen, val unseen, and test unseen environments, respectively.

### Ablation studies & analysis

**Different ESceme constructions.** We evaluate the superiority of Candidate Enhancing over Graph Encoding and the effect of different pooling functions in Table 4. First, Graph Encoding with mean pooling slightly increases navigation success in seen environments with almost no promotion in unseen scenarios. We assume that Graph Encoding adjusts the representation of observations in cross-modal encoding and does not align well with the remaining branches to provide complementary information, resulting in a limited effect. Candidate Enhancing with mean pooling brings a relative improvement of 2.3% in SPL for unseen navigation and behaves similarly in seen environments. Integrated with max pooling, Candidate Enhancing further boosts the performance in unseen environments, which produces a 3.8% relative increase compared to the HAMT [7] baseline. The results demonstrate the efficacy of the proposed Candidate Enhancing, which improves observation representations via direct injection and fusion, and of max pooling, which preserves more distinguishable features of each view.
**Different navigation architectures & inferring strategies.** The proposed ESceme is devised to be model-agnostic and should be compatible with any navigation network that has an observation input. To validate this property, we build ESceme upon TDSTP [45], which achieves the highest SR on the R2R dataset, and list the results in Table 5. ESceme improves navigation in both seen and unseen environments, by 4.9% and 1.4% in SPL, respectively. As introduced in Section 3.3, the agent starts with an empty episodic scene memory during inference, and the memory keeps updating. If we let the agent renew its memory thoroughly by going through all the episodes and then evaluate its navigation performance, it will have a much more complete episodic memory. We present the results of ESceme* in Table 5.

\begin{table} \begin{tabular}{l c c c|c c c c|c c c c} \hline \hline & & & & \multicolumn{4}{c|}{Validation Seen} & \multicolumn{4}{c}{Validation Unseen} \\ & GE & CE & pooling & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) \\ \hline HAMT [7] & - & - & - & 11.02\(\pm\)0.10 & 2.52\(\pm\)0.10 & 75.0\(\pm\)0.9 & 71.7\(\pm\)0.7 & 11.72\(\pm\)0.34 & 3.63\(\pm\)0.05 & 65.7\(\pm\)0.7 & 60.9\(\pm\)0.7 \\ ESceme & ✓ & ✗ & mean & 11.20\(\pm\)0.18 & 2.56\(\pm\)0.11 & 75.7\(\pm\)0.9 & 72.3\(\pm\)0.6 & 11.64\(\pm\)0.05 & 3.60\(\pm\)0.06 & 65.9\(\pm\)0.5 & 60.9\(\pm\)0.6 \\ ESceme & ✗ & ✓ & mean & 11.13\(\pm\)0.16 & 2.59\(\pm\)0.09 & 75.1\(\pm\)0.7 & 71.9\(\pm\)0.6 & 11.49\(\pm\)0.27 & 3.50\(\pm\)0.03 & 67.1\(\pm\)0.5 & 62.3\(\pm\)0.5 \\ ESceme & ✗ & ✓ & max & 10.81\(\pm\)0.12 & 2.60\(\pm\)0.12 & 75.6\(\pm\)0.4 & 72.6\(\pm\)0.4 & 11.18\(\pm\)0.23 & 3.44\(\pm\)0.03 & 67.4\(\pm\)0.5 & 63.2\(\pm\)0.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies of ESceme construction on the R2R dataset. We compare the effect of graph encoding (GE) and candidate enhancing (CE), and different pooling functions.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Validation Seen} & \multicolumn{3}{c|}{Validation Unseen} & Params & GPU & Time \\ & TL & NE\(\downarrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SPL\(\uparrow\) & (MB) & (GB) & (ms) \\ \hline HAMT [7] & 11.02\(\pm\)0.10 & 2.52\(\pm\)0.10 & 71.7\(\pm\)0.7 & 11.72\(\pm\)0.34 & 3.63\(\pm\)0.05 & 60.9\(\pm\)0.7 & 651.5 & 8.5 & 29.4 \\ + ESceme & 10.81\(\pm\)0.12 & 2.60\(\pm\)0.12 & 72.6\(\pm\)0.4 & 11.18\(\pm\)0.23 & 3.44\(\pm\)0.03 & 63.2\(\pm\)0.5 & +6.8 & +0.1 & +1.4 \\ + ESceme* & 10.77\(\pm\)0.13 & 2.58\(\pm\)0.12 & 72.8\(\pm\)0.4 & 10.89\(\pm\)0.14 & 3.35\(\pm\)0.05 & 64.0\(\pm\)0.4 & +6.8 & +0.1 & +32.2 \\ \hline TDSTP [45] & 13.09\(\pm\)0.37 & 2.42\(\pm\)0.08 & 70.9\(\pm\)0.7 & 14.28\(\pm\)0.44 & 3.22\(\pm\)0.09 & 62.1\(\pm\)0.6 & 657.2 & 10.5 & 46.9 \\ + ESceme & 11.80\(\pm\)0.26 & 2.34\(\pm\)0.10 & 74.4\(\pm\)0.6 & 13.86\(\pm\)0.21 & 3.31\(\pm\)0.07 & 63.0\(\pm\)0.8 & +6.8 & +0.1 & +1.8 \\ + ESceme* & 11.83\(\pm\)0.23 & 2.33\(\pm\)0.08 & 74.8\(\pm\)0.7 & 13.38\(\pm\)0.28 & 3.21\(\pm\)0.05 & 64.3\(\pm\)0.7 & +6.8 & +0.1 & +50.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation studies of navigation architectures and inferring strategies on the R2R dataset. ESceme* denotes navigating with a nearly completed scene memory by first going through all the episodes. For a scene in R2R covering 92 visited nodes on average, the maximum episodic memory cost in CPU is about 1.5MB.
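Procedurally, the difference between the two inferring strategies is just an optional warm-up pass over the episodes; a sketch with hypothetical helper names (`run_episode` is assumed to perform one single-run rollout that updates the per-scene memory, with no parameter updates in either pass):

```python
def evaluate(agent, episodes, warmup=False):
    if warmup:
        # ESceme*: a first pass only to populate the per-scene memories;
        # the predictions of this pass are discarded.
        for ep in episodes:
            agent.run_episode(ep, update_memory=True)
    # Scoring pass: plain ESceme keeps updating memory episode by episode,
    # i.e. the default setting is simply evaluate(agent, episodes).
    return [agent.run_episode(ep, update_memory=True) for ep in episodes]
```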
We can see that the nearly completed memory further boosts the performance in unseen environments by 1.3% and 2.1% in SPL for ESceme upon HAMT [7] and TDSTP [45], respectively. More results of ESceme* are in the supplementary material, with smaller improvements observed for longer-horizon navigation. The results demonstrate that an agent learns to assist navigation with partial and persistently updated episodic memory.

**Computational efficiency.** We present model size, GPU usage, and time cost during inference on the R2R dataset in Table 5. Whether built upon HAMT [7] or TDSTP [45], the proposed ESceme adds only about 1.0% extra parameters and GPU memory occupation. In a single-run setting, ESceme slightly increases the computational time, by 4.8%, when built on top of HAMT. Compared with HAMT, the TDSTP baseline costs 59.5% more time and 23.5% more GPU memory. Accordingly, our ESceme only raises its time cost by 3.8% with almost no extra GPU consumption. With a better-completed memory, ESceme* further boosts navigation performance in new environments at the expense of double the time. We can see that ESceme achieves a good trade-off between efficiency and efficacy in a single run.

**Order of executing instructions.** Since ESceme learns with dynamically updated episodic memory while conducting instructions, the order of execution has little impact on overall performance. Table 6 lists navigating performance with shuffled episodes on the val unseen split of all the datasets, which indicates the stability of ESceme.

\begin{table} \begin{tabular}{c c|c c|c} \hline (R2R) NE\(\downarrow\) & SPL\(\uparrow\) & (R4R) SPL\(\uparrow\) & CLS\(\uparrow\) & (CVDN) GP\(\uparrow\) \\ \hline 3.39\(\pm\)0.03 & 63.7\(\pm\)0.3 & 43.2\(\pm\)0.07 & 62.7\(\pm\)0.1 & 5.57\(\pm\)0.11 \\ \hline \end{tabular} \end{table} Table 6: \(\bar{x}\pm\sigma\) scores of shuffled episodes with five random seeds on the val unseen split of the datasets.

**Success variation during inference.** Figure 4 compares SPL and CLS curves of different methods to visualize the variation of navigation quality over the course of inference. On the short-horizon navigation dataset R2R, HAMT [7] oscillates around 62 and drops in the last 1/5 of the progress. The decrease could result from tougher samples at the end. TDSTP [45] presents a more stable oscillation around 62, owing to a global action space and an auxiliary goal-related task. Starting from a moderate navigation ability, an agent with ESceme benefits greatly from memory updates and maintains a high success rate with completed memory. On the long-horizon VLN dataset R4R, TDSTP [45] shares a similar oscillation around 41 with HAMT [7] in SPL. TDSTP preserves a relatively more stable success rate at the cost of much lower CLS, which reveals that the goal-related auxiliary task undermines the ability of instruction following. Our ESceme shows a sharp increase within the first 4/5 of navigation and remains stable thereafter. We attribute the excellent improvement on R4R to two reasons: 1) long-horizon navigation involves more action steps, so a slight increase in navigation ability results in a big difference in final performance; 2) the sample density of a scene from R4R is much higher than that from the R2R dataset.

## 5 Conclusion

In this paper, we devise the first VLN mechanism with episodic scene memory (ESceme) and propose a simple yet effective implementation via candidate enhancing. We show that an agent with ESceme improves navigation ability in short-horizon, long-horizon, and vision-and-dialog navigation.
Figure 4: Navigation quality w.r.t. inferring progress. The x-axis indicates the ratio of samples tested, and the y-axis is the smoothed average of SPL or CLS. We use the default order for all the methods. Navigation with ESceme improves over time. Our method outperforms the existing state-of-the-art and wins first place on the CVDN leaderboard, bringing only a marginal increase in memory, parameters, and inference time. We hope this work can inspire further exploration of episodic memory in VLN and related fields, e.g., building the memory in continuous environments and with more advanced techniques such as neural SLAM. ## Appendix A Overview In this document, we first elaborate on the solution of assisting navigation by graph encoding (GE) that appeared in Section 3.3 (Appendix B). Then we illustrate the difference between ESceme and ESceme*, and list their navigation performance on all three datasets (Appendix C). Next, we provide the pseudo-code implementation for the proposed ESceme in Appendix D, followed by qualitative comparisons with other methods and failure cases (Appendix E). ## Appendix B ESceme navigation by graph encoding Figure 5 demonstrates ESceme-assisted navigation by adding a graph encoding (GE) branch to the cross-modal encoder. At the current location where the agent stands, a local window is masked to avoid repetition with the path history from time 1 to \(t-1\). Thus, the searched episodic memory graph includes six nodes and three edges, i.e., \(\mathcal{G}^{(t-1)}=\{\mathcal{V}^{(t-1)},\mathcal{E}^{(t-1)}\}\). We adopt 3-WL GNNs [29, 12], which can distinguish two non-isomorphic graphs, to encode the memory graph, where the input \(G\in\mathbb{R}^{n\times n\times(1+d)}\) is given by \[G_{ijk}=\left\{\begin{array}{ll}e_{ij}&\text{if }k=1\\ m_{i}&\text{if }j=i\text{ for }k>1\\ 0&\text{otherwise,}\end{array}\right. \tag{6}\] where \(n\) is the number of nodes in \(\mathcal{V}^{(t-1)}\). \(m_{i}\in\mathbb{R}^{d}\) is the representation of the node \(V_{i}\), with detailed calculations presented in Section 3.2. \(e_{ij}=1\) if \(V_{i}\) and \(V_{j}\) are connected, else \(e_{ij}=0\). The graph is encoded by \[G^{\prime}=[(W_{1}G)\odot(W_{2}G);(W_{3}G)], \tag{7}\] where \(W_{1\sim 3}\in\mathbb{R}^{(1+d)\times(d/2)}\) are two-layer MLPs. \(\odot\) denotes element-wise multiplication and \([\cdot;\cdot]\) is concatenation along the feature dimension. The final encoded feature passed to the cross-modal encoder is \(\sum_{i=1}^{n}\sum_{j=1}^{n}G^{\prime}_{ij}\in\mathbb{R}^{d}\) (an illustrative code sketch of this encoding is given at the end of this document). From the results in Section 4.3, we can infer that implicitly encoding episodic memory in a single branch by GE does not align well with the remaining branches to provide complementary information, resulting in a limited effect. ## Appendix C ESceme vs. ESceme* We thoroughly compare navigating with progressively completed and nearly complete episodic memory on three datasets in Tables 7 and 8. ESceme conducts instructions in a single-run setting, where the agent dynamically updates its memory during inference. ESceme* first goes through all the episodes to build a nearly complete memory at the beginning of the evaluation. ESceme* improves navigation in new environments by 1.6% (SPL) on the test unseen split of the R2R dataset. As for vision-dialog navigation on CVDN, the improvements on val unseen and test unseen are 5.5% and 3.0%, respectively. On the long-horizon navigation dataset R4R, the relative increase is about 0.5%.
Overall, ESceme* further promotes generalization to novel scenarios, indicating that ESceme benefits from the nearly complete scene memory. On the other hand, the small gap between ESceme and ESceme* shows that the agent has learned to utilize progressively completed memory in navigation. ## Appendix D Pseudo-code implementation We provide the pseudo-code of ESceme construction and candidate enhancing in Algorithm 1. ESceme is easy to implement and can be integrated with any navigation network that encodes the observation. ## Appendix E Qualitative examples and failure cases We present the navigation process to provide a more intuitive comparison with HAMT [7] and TDSTP [45]. Figures 6 to 8 show three navigation examples on the R2R dataset, and Figures 9 and 10 illustrate two examples on the R4R dataset. All the examples are tested in unseen environments. For short-horizon navigation, our ESceme outperforms its counterparts regarding stopping precision. For long-horizon navigation, our ESceme shows an improved ability to follow instructions that require a forward-and-back trip, and it arrives at the target location. We attribute these advantages to the episodic memory of the scenes. Figures 11 and 12 showcase situations where ESceme failed to follow the instructions. In the first example, the instruction is "Leave sitting room and head towards the kitchen, turn right at living room and enter. Walk through living room to dining room and enter. Turn left and head to front door. Exit the house and stop on porch." After correctly predicting the first three actions, ESceme failed to enter the _dining room_ and got lost. In the second example, the instruction is "Go down the stairs. Go into the room straight ahead on the slight left. Wait there." ESceme succeeded in going downstairs but failed to determine the _slight left_ direction and entered the wrong room. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{Validation Seen} & \multicolumn{4}{c|}{Validation Unseen} & \multicolumn{4}{c}{Test Unseen} \\ Method & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & TL & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) \\ \hline ESceme & 10.65 & 2.57 & 76 & 73 & 10.80 & 3.39 & 68 & 64 & 11.89 & 3.77 & 66 & 63 \\ ESceme* & 10.62 & 2.57 & 76 & 73 & 10.65 & 3.36 & 68 & 64 & 11.77 & 3.69 & 68 & 64 \\ \hline \hline \end{tabular} \end{table} Table 7: Results of different inferring strategies on the R2R dataset. \begin{table} \begin{tabular}{l|c c c c c c|c c c} \hline \hline & \multicolumn{6}{c|}{R4R Val Unseen} & \multicolumn{3}{c}{CVDN} \\ Method & NE\(\downarrow\) & SR\(\uparrow\) & SPL\(\uparrow\) & CLS\(\uparrow\) & nDTW\(\uparrow\) & SDTW\(\uparrow\) & Val Seen & Val Unseen & Test Unseen \\ \hline ESceme & 5.84 & 45.6 & 43.2 & 62.7 & 55.7 & 34.7 & 8.34 & 5.42 & 5.99 \\ ESceme* & 5.83 & 45.8 & 43.4 & 62.9 & 55.8 & 34.8 & 8.39 & 5.72 & 6.17 \\ \hline \hline \end{tabular} \end{table} Table 8: Results of different inferring strategies on the R4R and CVDN datasets. Figure 5: An overview of ESceme-assisted navigation by graph encoding. First, episodic memory is built in the same way as that for candidate enhancing (cf. Section 3.2). Then, the agent searches the episodic memory for the current viewpoint and obtains the memory graph by masking a local window. The encoded memory constitutes a separate branch feeding the cross-modal encoder. Figure 6: Panoramic views and top-down overviews of navigation.
Mistakes during navigation are marked with red boxes for panoramas and red arrows for top-down trajectories. The star indicates the target location. Our ESceme strictly follows the instruction "walk down to the end of hall" and waits at the door of the bedroom. Instructions: Walk along the narrow rug past the statue on the table, and go up two steps. Wait on the third step. Figure 7: Panoramic views and top-down overviews of navigation. Mistakes during navigation are marked with red boxes for panoramas and red arrows for top-down trajectories. The star indicates the target location. Our ESceme strictly follows the instruction "go up two steps" and waits on the third step. Instructions: Stand with the wooden door behind you. Walk straight, past the oven and through the door to the next room. Walk through the room past the sink and microwaves. Stop after passing through the last doorway of the room with the microwaves and into the hall. Figure 8: Panoramic views and top-down overviews of navigation. Mistakes during navigation are marked with red boxes for panoramas and red arrows for top-down trajectories. The star indicates the target location. Our ESceme stops at the right place. Instructions: Turn and walk out of the bathroom into the hallway. Walk through the door into the room with shelves and a sink. Continue through the room into the kitchen area and walk past the stove and sink. Turn left and walk across the kitchen hallway. When you get to a more open area, turn slight right and walk past the bedroom, then stop in the door of the second bedroom. Figure 9: The top-down trajectory of navigation. Mistakes during navigation are marked with red. The star indicates the target location. Our ESceme moves forward to the right place and then back, and arrives at the second bedroom. Figure 10: The top-down trajectory of navigation. Mistakes during navigation are marked with red. The star indicates the target location. Our ESceme moves forward to the right place and then back, and stops inside the double doors. Figure 11: Failure case in the R2R val unseen split. The instruction is "Leave sitting room and head towards the kitchen, turn right at living room and enter. Walk through living room to dining room and enter. Turn left and head to front door. Exit the house and stop on porch." After correctly predicting the first three actions, ESceme failed to enter the _dining room_ and got lost.
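To make the graph-encoding branch of Appendix B more concrete, the following is a minimal NumPy sketch of Eqs. (6)-(7). It is an illustration, not the authors' implementation: the function names are ours, the MLP weights are untrained random placeholders (in ESceme they would be trained end-to-end with the navigation network), and a toy three-node memory graph stands in for a real episodic memory.

```python
import numpy as np

def make_two_layer_mlp(rng, d_in, d_out):
    """Return a two-layer MLP with random placeholder weights (untrained)."""
    W_a = rng.normal(scale=0.02, size=(d_in, d_in))
    W_b = rng.normal(scale=0.02, size=(d_in, d_out))
    return lambda x: np.maximum(x @ W_a, 0.0) @ W_b  # ReLU between the layers

def encode_memory_graph(adj, node_feats, mlps):
    """Graph encoding following Eqs. (6)-(7).

    adj        : (n, n) binary adjacency matrix, entries e_ij
    node_feats : (n, d) node representations m_i
    mlps       : the three two-layer MLPs W_1..W_3, each mapping (1+d) -> d/2
    Returns the d-dimensional feature fed to the cross-modal encoder.
    """
    n, d = node_feats.shape
    # Eq. (6): G in R^{n x n x (1+d)}; the first channel stores edges,
    # the remaining d channels store node features on the diagonal.
    G = np.zeros((n, n, 1 + d))
    G[:, :, 0] = adj
    G[np.arange(n), np.arange(n), 1:] = node_feats
    # Eq. (7): G' = [(W_1 G) * (W_2 G); (W_3 G)] with elementwise product and
    # concatenation, applied independently to every (i, j) entry of G.
    flat = G.reshape(n * n, 1 + d)
    G_prime = np.concatenate([mlps[0](flat) * mlps[1](flat), mlps[2](flat)], axis=-1)
    # Final feature: sum_i sum_j G'_ij in R^d (d/2 + d/2 = d).
    return G_prime.reshape(n, n, d).sum(axis=(0, 1))

rng = np.random.default_rng(0)
d = 8
mlps = [make_two_layer_mlp(rng, 1 + d, d // 2) for _ in range(3)]
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy 3-node path graph
feats = rng.normal(size=(3, d))
print(encode_memory_graph(adj, feats, mlps).shape)  # -> (8,)
```

Summing over all \((i,j)\) entries makes the encoding invariant to node ordering, matching the role of the GE branch as a single pooled feature for the cross-modal encoder.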
2308.13744
Fluctuation relation in continuous-time random walks driven by an external field
We study a fluctuation relation representing a nonequilibrium equality indicating that the ratio between the distribution of trajectories and that obtained by exchanging the initial and final positions is characterized by free energy differences for the duration of the trajectories. We examine the fluctuation relation for noninteracting charge carriers driven by an external electric field by using a continuous-time lattice random walk model with a general waiting-time distribution of transitions. The fluctuation relation is obtained regardless of the lattice structure factor or the form of the waiting-time distribution. However, the fluctuation relation is satisfied only after taking the continuum limit in the presence of a reflecting boundary. Moreover, in free space without boundary conditions, exchanging the initial and final positions is equivalent to exchanging the field (or drift) directions. However, we show that exchanging the field (or drift) directions is not relevant for studying the fluctuation relation under the reflecting boundary condition.
Kazuhiko Seki
2023-08-26T03:08:32Z
http://arxiv.org/abs/2308.13744v3
# Fluctuation relation in continuous-time random walks driven by an external field ###### Abstract We study a fluctuation relation representing a nonequilibrium equality indicating that the ratio between the distribution of trajectories and that obtained by exchanging the initial and final positions is characterized by free energy differences for the duration of the trajectories. We examine the fluctuation relation for noninteracting charge carriers driven by an external electric field by using a continuous-time lattice random walk model with a general waiting-time distribution of transitions. The fluctuation relation is obtained regardless of the lattice structure factor or the form of the waiting-time distribution. However, the fluctuation relation is satisfied only after taking the continuum limit in the presence of a reflecting boundary. Moreover, in free space without boundary conditions, exchanging the initial and final positions is equivalent to exchanging the field (or drift) directions. However, we show that exchanging the field (or drift) directions is not relevant for studying the fluctuation relation under the reflecting boundary condition. ## 1 Introduction Over the past several decades, fluctuation theorems that enhance our understanding of the relation between nonequilibrium thermodynamics and statistics in the fluctuation of trajectories have been formulated. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] The proposed fluctuation theorems require neither the adiabatic operation commonly imposed for equilibrium thermodynamic relations nor a limited strength of the applied field, as imposed for the linear response theorem. The early fluctuation theorem was formulated in terms of entropy production: the probability of observing trajectories opposing positive entropy production decreases exponentially. [3, 4, 5] Later, the Crooks fluctuation theorem was formulated in terms of the free energy difference: the probability of observing trajectories opposing a positive free energy difference decreases exponentially. [9, 10, 11] The Jarzynski equality relating the free energy difference and the work done [12] enables the Crooks fluctuation theorem to be interpreted as indicating that the ratio between the distribution of trajectories and that obtained by exchanging the initial and final states is characterized by the work done for the duration of the trajectories. [9, 10, 11] Fluctuation theorems have been proposed through studies of the phase-space Jacobian volume element of trajectories using the Liouville equation for deterministic systems, numerical calculations for many-particle systems, and the Langevin equation for stochastic systems, where particles move continuously in space. [3, 4, 5, 6, 7, 8, 9, 10, 11, 12] Fluctuation theorems have also been studied using master equations for discrete systems by assuming Markovian time evolution. [12, 18] In some cases, the path integral formulation has been used to propose fluctuation theorems by assuming Markovian time evolution. [19, 20] The fluctuation theorem has also been studied using the generalized Langevin equation or Fokker-Planck equation, where non-Markovian stochastic dynamics is assumed. [21, 22, 23, 24, 25, 26, 27] The path integral formulation has also been developed to study the fluctuation theorem using the generalized Langevin equation by evaluating the Jacobian arising from the non-Markovian response.
[21] In continuous-time random walks, two conditions are required for microscopic reversibility and the fluctuation theorem: one is the independence of the transition direction and the waiting time in the transition rates (separability of the waiting-time distribution), and the other is the detailed balance condition. [28, 29, 30] We consider cases in which both conditions are satisfied and focus on the effect of a reflecting boundary on the fluctuation theorem. Previously, a fluctuation relation was presented that takes into account the lattice structure and a memory kernel in continuous-time lattice random walks. [31, 32] These results were further extended to study the effects of branched states and channels on the fluctuation theorem. [33] A thermodynamic interpretation of a fluctuation theorem derived from continuous-time random walks was also given. [30] However, the coupled effects of the lattice structure factor of a random walk, a memory kernel, and the boundary conditions have not yet been fully examined. Here, we use continuous-time lattice random walk models with a general waiting-time distribution to study the random walk of noninteracting charge carriers driven by an external electric field. We show a fluctuation relation by expressing the transition probabilities for the continuous-time random walk models in a form suitable for studying the effect of exchanging the initial and final positions. The explicit expressions for the transition probabilities enable us to clarify the difference between exchanging the initial and final positions and exchanging the field (or drift) directions. We show that the ratio between the usual transition probability and the probability obtained by exchanging the initial and final positions is related to the free energy difference between the initial and final positions, irrespective of the lattice structure factor and the form of the waiting-time distribution. The results suggest that the Crooks fluctuation theorem holds for continuous-time lattice random walks in free space without boundaries, as demonstrated by various methods. [30, 31, 32, 33] However, in the presence of a reflecting boundary, the Crooks fluctuation theorem does not hold for continuous-time random walk models if a continuum limit is not taken. The results suggest that the separability of the waiting-time distribution and the detailed balance condition are sufficient conditions for the Crooks fluctuation theorem to hold in a continuous-time random walk, as long as there is no reflecting boundary. Introducing a reflecting boundary has a further consequence. In free space without boundary conditions, exchanging the initial and final positions is equivalent to exchanging the field (or drift) directions. However, we show that exchanging the field (or drift) directions is not relevant for studying the fluctuation relation under a reflecting boundary condition. ## 2 Drift diffusion in free space We consider the fluctuation relation in a one-dimensional random walk along the \(x\)-axis under energetic disorder and an external electric field with strength \(F\). Here, we study nonequilibrium states in which charge carriers flow without interaction among themselves under the external electric field and without any boundary. When energetic disorder is present, the waiting-time distribution might have a power-law tail; we study the effect of a heavy-tailed waiting-time distribution of transitions on the fluctuation theorem.
We also study the effect of a reflecting boundary condition and the effect of the structure factor of a random walk on the fluctuation theorem. We take the direction of the field as the \(x\)-axis and denote by \(F\) the electric field strength acting on the charge carriers; the charge on each carrier is denoted by \(q\). First, we focus on the field dependence alone, without energetic disorder, in one-dimensional transitions [Fig. 1(a)]. For convenience, we introduce \(\gamma_{\rm r}\) for the case in which an applied field is absent; \(\gamma_{\rm r}\) indicates the total transition rate from a trap to both neighboring sites. Under an applied field, the transition rate in the field direction and that in the opposite direction are denoted by \(\gamma_{\rm rp}(F)\) and \(\gamma_{\rm rm}(F)\), respectively. By considering the Arrhenius law, these two transition rates can be expressed as [34] \[\gamma_{\rm rp}(F) = (\gamma_{\rm r}/2)\exp\left[qFb/(2k_{\rm B}T)\right] \tag{1}\] \[\gamma_{\rm rm}(F) = (\gamma_{\rm r}/2)\exp\left[-qFb/(2k_{\rm B}T)\right], \tag{2}\] where \(q\), \(b\), \(k_{\rm B}\) and \(T\) are the elementary charge, the lattice constant, the Boltzmann constant and the temperature, respectively. \(\gamma_{\rm rp}(F)\) and \(\gamma_{\rm rm}(F)\) satisfy the local detailed balance condition, _i.e._, \(\gamma_{\rm rp}(F)/\gamma_{\rm rm}(F)=\exp\left[qFb/(k_{\rm B}T)\right]\). The factor of two in \((\gamma_{\rm r}/2)\) is required because \(\gamma_{\rm r}\) represents the total transition rate from a trap to both neighboring sites. Under the influence of an external electric field, the expression for the total transition frequency (\(\gamma_{\rm r}\)) changes to \[\gamma_{\rm rt}(F)=\gamma_{\rm rp}(F)+\gamma_{\rm rm}(F)=\gamma_{\rm r}\cosh\left[qFb/(2k_{\rm B}T)\right], \tag{3}\] which reduces to \(\gamma_{\rm r}\) when \(F=0\). Next, we consider carriers undergoing transitions from a state with random trapping energy \(E\). Without an external field [Fig. 1(b)], the release rate is proportional to \(\exp\left[-E/(k_{\rm B}T)\right]\), where the distribution function of the trap energy is denoted by \(g(E)\). Under an external field, the detrapping rate in the field direction can be expressed as \(\gamma_{\rm p}(E,F)=\gamma_{\rm rp}(F)\exp\left[-E/(k_{\rm B}T)\right]\). In addition, the detrapping rate in the opposite direction can be expressed as \(\gamma_{\rm m}(E,F)=\gamma_{\rm rm}(F)\exp\left[-E/(k_{\rm B}T)\right]\). The total detrapping rate can be defined by \[\gamma_{\rm t}(E,F)=\left[\gamma_{\rm rp}(F)+\gamma_{\rm rm}(F)\right]\exp\left[-E/(k_{\rm B}T)\right]. \tag{4}\] In the continuous-time random walk (CTRW) model, the energetic disorder is taken into account by the waiting-time distribution, as shown in Fig. 1(c) for \(F=0\). The waiting-time distribution in the continuous-time random walk can be formulated by assuming annealed energy disorder, where the trap energy distribution [\(g(E)\)] is used in every transition step. (If each site has a fixed random trap energy, the situation is called quenched energy disorder, where the particle may return to the previously occupied site and the trap energy need not be renewed.) Under both an external field and energetic disorder [Fig. 1(d)], the total waiting-time distribution can be defined as \[\psi_{\rm t}(t,F)=\int_{0}^{\infty}dE\,g(E)\gamma_{\rm t}(E,F)\exp\left(-\gamma_{\rm t}(E,F)t\right).
\tag{5}\] The waiting-time distribution along the field direction is expressed by \(\psi_{\rm p}(t,F)=\Gamma_{\rm p}(F)\psi_{\rm t}(t,F)\), where \(\Gamma_{\rm p}(F)\) indicates the fraction of transitions from \(x\) to \(x+b\) against the sum of the transitions to both neighboring sites: \[\Gamma_{\rm p}(F)=\frac{\gamma_{\rm rp}(F)}{\gamma_{\rm rt}(F)}=\frac{\exp \left[qFb/(2k_{\rm B}T)\right]}{2\cosh\left[qFb/(2k_{\rm B}T)\right]}. \tag{6}\] Figure 1: (a) and (b) Schematics of the potential energy for transitions under an external electric field (\(F\)) applied in the direction of increasing \(x\); \(b\) indicates the lattice constant. (c) and (d) Continuous-time random walk model. (a) Energy landscape without energetic disorder. The activation energy is lowered by \(qFb/2\) in the transition-rate (\(\gamma_{\rm rp}(F)\)) along the direction of the external field, and raised by \(qFb/2\) in the transition-rate (\(\gamma_{\rm rm}(F)\)) along the opposite direction. The local detailed balance condition is satisfied, _i.e._, \(\gamma_{\rm rp}(F)/\gamma_{\rm rm}(F)=\exp\left[qFb/(k_{\rm B}T)\right]\). (b) Energy landscape under energetic disorder when the applied external field is absent. (c) The effect of energetic disorder can be taken into account by the total waiting-time distribution denoted by \(\psi_{\rm t}(t,F=0)\) when \(F=0\). (d) The superposition of the potential energy landscape of (a) and that of (b) can be taken into account using \(\psi_{\rm t}(t,F)\): \(\Gamma_{\rm p}(F)=\gamma_{\rm rp}(F)/\gamma_{\rm rt}(F)\) and \(\Gamma_{\rm m}(F)=\gamma_{\rm rm}(F)/\gamma_{\rm rt}(F)\), where \(\gamma_{\rm rt}(F)=\gamma_{\rm rp}(F)+\gamma_{\rm rm}(F)\). Similarly, the waiting-time distribution along the opposite direction is expressed by \(\psi_{\rm m}(t,F)=\Gamma_{\rm m}(F)\psi_{\rm t}(t,F)\), where \(\Gamma_{\rm m}(F)\) indicates \[\Gamma_{\rm m}(F)=\frac{\gamma_{\rm rm}(F)}{\gamma_{\rm rt}(F)}=\frac{\exp{[-qFb/ (2k_{\rm B}T)]}}{2\cosh{[qFb/(2k_{\rm B}T)]}}. \tag{7}\] Using Eq. (6) and (7), we find the local detailed balance, \[\frac{\Gamma_{\rm p}(F)}{\Gamma_{\rm m}(F)}=\exp{[qFb/(k_{\rm B}T)]}\,. \tag{8}\] For the biased CTRW model in free space without any boundary, the Green's function with the initial position denoted by \(x_{\rm i}\) should be translationally invariant and \(G_{0}(x,x_{\rm i},t)\) in the Laplace domain can be expressed as [35, 36] \[\hat{G}_{0}(x,x_{\rm i},s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{- \pi}^{\pi}dk\,\frac{\exp[-ik(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\lambda(k)}, \tag{9}\] where the Laplace transform of time-dependent function \(f(t)\) is denoted by \(\hat{f}(s)\), \(\hat{\psi}_{\rm t}\) is the Laplace transform of \(\psi_{\rm t}(t,F)\), and \(\lambda(k)\) is the structure factor for a biased random walk in one-dimension, \[\lambda(k)=\Gamma_{\rm p}(F)\exp{(ikb)}+\Gamma_{\rm m}(F)\exp{(-ikb)}\,. \tag{10}\] \(G_{0}(x,x_{\rm i},t)\) can be expressed using the lattice Green's function with modification by the waiting-time distribution to express the probability of arriving at \(x\) before \(t\) starting from \(x_{\rm i}\). 
The lattice Green's function modified by the waiting-time distribution is given by [36] \[\hat{G}_{L0}(x,x_{\rm i},s) = \frac{1}{2\pi}\int_{-\pi}^{\pi}dk\,\frac{\exp[-ik(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\lambda(k)} \tag{11}\] \[= \frac{1}{2\pi}\int_{-\pi}^{\pi}dk\,\exp[-ik(x-x_{\rm i})]\sum_{j=0}^{\infty}\left[\hat{\psi}_{\rm t}(s)\lambda(k)\right]^{j}.\] \([\hat{\psi}_{\rm t}(s)]^{j}\) becomes a time convolution after the inverse Laplace transformation; the inverse Laplace transform of Eq. (9) can be interpreted as the sum of the transition probabilities corresponding to \(j\) transitions between \(x_{\rm i}\) and \(x\) occurring up to time \(t\). In Eq. (9), the factor \((1-\hat{\psi}_{\rm t})/s\) is the Laplace transform of \(\varphi(t,F)=\int_{t}^{\infty}dt_{1}\psi_{\rm t}(t_{1},F)\), which indicates the probability that a carrier has not yet transitioned to a new site by time \(t\). The usual Green's function given by Eq. (9) is obtained from \(G_{0}(x,x_{\rm i},t)=\int_{0}^{t}dt_{1}\varphi(t-t_{1})G_{L0}(x,x_{\rm i},t_{1})\); the lattice Green's function is convolved with the probability of the carrier arriving at \(x\) and remaining there until time \(t\). For an unbiased random walk in one dimension, we obtain \(\lambda(k)=(1/2)\sum_{x=\pm b}\exp(ikx)\approx 1-(kb)^{2}/2\), where we assumed \(kb\ll 1\). Under bias, we can express the structure factor \(\lambda(k)\) by assuming \(kb\ll 1\), \[\lambda(k)\approx 1+i\mu k-\frac{\sigma^{2}}{2}k^{2}, \tag{12}\] where \(\mu\) is related to the drift and \(\sigma^{2}\) is related to the dispersion; they are given by \[\mu = b\left[\Gamma_{\rm p}(F)-\Gamma_{\rm m}(F)\right]=b\tanh[qFb/(2k_{\rm B}T)]\approx qFb^{2}/(2k_{\rm B}T), \tag{13}\] \[\sigma^{2} = b^{2}. \tag{14}\] Under the assumption \(kb\ll 1\), \(\hat{G}_{0}(x,x_{\rm i},s)\) can be approximated as [34] \[\hat{G}_{0}(x,x_{\rm i},s)\approx\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\infty}^{\infty}dk\,\frac{\exp[-ik(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\left[1+i\mu k-(\sigma^{2}/2)k^{2}\right]}. \tag{15}\] We now transform \(\hat{G}_{0}(x,x_{\rm i},s)\) into a form suitable for studying the fluctuation relation. By introducing \(k_{1}=k-i\mu/\sigma^{2}\), we obtain \[-i\mu k+(\sigma^{2}/2)k^{2}=\mu^{2}/(2\sigma^{2})+(\sigma^{2}/2)k_{1}^{2}, \tag{16}\] and Eq. (15) can be rewritten as \[\hat{G}_{0}(x,x_{\rm i},s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\infty}^{\infty}dk_{1}\,\frac{\exp[-i(k_{1}+i\mu/\sigma^{2})(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\left[1-\mu^{2}/(2\sigma^{2})-(\sigma^{2}/2)k_{1}^{2}\right]}. \tag{17}\] Therefore, we obtain [34] \[G_{0}(x,x_{\rm i},t)=\exp\left[(\mu/\sigma^{2})(x-x_{\rm i})\right]g_{0}(|x-x_{\rm i}|,t), \tag{18}\] where \(g_{0}(|x-x_{\rm i}|,t)\) is the inverse Laplace transform of \(\hat{g}_{0}(|x-x_{\rm i}|,s)\), [34] \[\hat{g}_{0}(|x-x_{\rm i}|,s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\infty}^{\infty}dk\,\frac{\exp[-ik(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\left[1-\mu^{2}/(2\sigma^{2})-(\sigma^{2}/2)k^{2}\right]}. \tag{19}\] \(\hat{g}_{0}(x-x_{\rm i},s)\) is an even function of \(x-x_{\rm i}\). Using the Green's function, we derive the fluctuation relation: [34] \[\frac{G_{0}(x,x_{\rm i},t)}{G_{0}(x_{\rm i},x,t)} = \exp\left(2\tanh[qFb/(2k_{\rm B}T)](x-x_{\rm i})/b\right) \tag{20}\] \[\approx \exp\left(\frac{qF(x-x_{\rm i})}{k_{\rm B}T}\right), \tag{21}\] where the initial condition is given by \(x_{\rm i}\) at \(t=0\). If we use Eq.
(13) without the approximation given by \(qFb/(2k_{\rm B}T)<1\), then \(qF(x-x_{\rm i})/(k_{\rm B}T)\) on the right-hand side of Eq. (21) is replaced with \(2\tanh[qFb/(2k_{\rm B}T)](x-x_{\rm i})/b\) [Eq. (20)]. As we will show later by Eq. (45), Eq. (21) rigorously holds if we retain the discrete nature of transitions even for \(qFb/(2k_{\rm B}T)>1\). The results indicate that the limit of \(qFb/(2k_{\rm B}T)<1\) is necessary if we approximate the structure factor using Eq. (12). That is, the continuum limit of \(b\to 0\) should not be taken to study the nonlinear field dependence. The relation given by Eq. (21) represents the nonequilibrium equality characterized by free energy differences. [37]\(qF(x-x_{\rm i})/(k_{\rm B}T)\) in Eq. (21) can be interpreted as the free energy difference between \(x\) and \(x_{\rm i}\), induced by the external electric field divided by \(k_{\rm B}T\). Although the normalization constant for a steady-state distribution can be defined only formally in infinite systems, we study the ratio between the steady-state distribution at \(x\) and that at \(x_{\rm i}\) using the common normalization constant formally introduced; the normalization constant cancels out in taking the ratio. If the ratio between the stationary distribution at \(x\) and that at \(x_{\rm i}\) is replaced by the ratio of the equilibrium distribution at \(x\) to that at \(x_{\rm i}\) in a closed bounded system, then Eq. (21) represents the detailed balance condition. [38] However, \(G_{0}(x,x_{\rm i},t)\) depends on the boundary conditions. Here, we consider the open systems without boundary conditions and particles are driven by an external field. By considering that the free energy change on the right-hand side of Eq. (21) can be rewritten as the ratio between the stationary distribution at \(x\) and that at \(x_{\rm i}\), we can regard Eq. (21) as the extended detailed balance between the positions \(x\) and \(x_{\rm i}\). We note that \(G_{0}(x,x_{\rm i},t)\) [Eq. (18)] is a function of \(x-x_{\rm i}\), which allows us to express \({\cal G}_{0}(x-x_{\rm i},t)=G_{0}(x,x_{\rm i},t)\). Equation (21) can be rewritten as \[\frac{{\cal G}_{0}(y,t)}{{\cal G}_{0}(-y,t)}=\exp\left(\frac{qFy}{k_{\rm B}T} \right), \tag{22}\] where we defined \(y=x-x_{\rm i}\). Equation (22) has been regarded as a form of the fluctuation theorem. [25, 31, 32, 33] It has been pointed out that Eq. (22) is independent of the form of the waiting time distribution owing to the subordination principle; [25] the subordination relation holds for the present continuous time random walk models. [39, 40, 41, 42, 43, 25] Equation (22) allows the ratio of the transition probability for arriving at \(y\) from the initial location to that for arriving at \(-y\) from the initial location to be interpreted as being equal to the free energy difference for \(y\). Given that the field introduces drift in the random walk and changes the transition probability to \(y\) and the transition probability to \(-y\) from the same initial location, this result is fundamentally the same as exchanging the field direction, as shown below. By knowing that \(g_{0}(|x-x_{\rm i}|,t)\) is an even function of \(\mu\) and defining \[G_{0}(x,x_{\rm i},F,t)=\exp\left[(\mu/\sigma^{2})(x-x_{\rm i})\right]g_{0}(|x -x_{\rm i}|,t), \tag{23}\] where \(\mu\) is an odd function of \(F\) as shown in Eq. (13), we obtain \[\frac{G_{0}(x,x_{\rm i},F,t)}{G_{0}(x,x_{\rm i},-F,t)}=\exp\left(\frac{qF(x-x_ {\rm i})}{k_{\rm B}T}\right). 
\tag{24}\] Equation (24) indicates that the ratio of the trajectory distributions obtained by changing the drift (field) direction is equal to the ratio of the stationary distributions expressed in terms of the free energy difference resulting from the applied external electric field. In free space without a boundary, exchange of the carrier locations is equivalent to changing the field (or drift) directions. Therefore, we cannot distinguish whether \(\exp\left[qF(x-x_{\rm i})/(k_{\rm B}T)\right]\) is related to the distribution obtained by exchanging the initial and final positions [Eq. (21)] or to the distribution obtained by exchanging drift directions [Eq. (24)]. In Sec. 3, we show that the relation given by Eq. (22) and the relation given by Eq. (24) break down when a reflecting boundary is present, whereas Eq. (21) holds irrespective of the reflecting boundary condition in a continuum limit. Equation (21) indicates that the ratio of the trajectory distributions obtained by exchanging the initial and final positions is equal to the ratio of the stationary distributions expressed in terms of the free energy difference resulting from the applied external electric field. [9, 10, 11] Equation (21) is independent of the form of the waiting-time distribution. The waiting-time distribution for transitions to neighboring lattice sites can be arbitrary because the form of the density of states denoted by \(g(E)\) in Eq. (5) has not been assumed. The fluctuation relation given by Eq. (21) is independent of the memory kernel in the drift-diffusion equation in free space. Irrespective of the memory kernel, we proved the Crooks fluctuation theorem, where the ratio between the distributions of trajectories is characterized by the free energy difference between the initial and final positions induced by the applied electric field. [9, 10, 11] Before closing this section, we note that a power-law waiting-time distribution can be obtained when the density of states is given by [44] \[g(E)=\exp\left(-E/E_{0}\right)/E_{0}. \tag{25}\] Using Eq. (4), we obtain the waiting-time distribution function given by Eq. (5) using a dispersive parameter (\(\alpha\equiv k_{\rm B}T/E_{0}\)) as [45, 46, 47, 48, 34] \[\psi_{\rm t}(t,F) = \int_{0}^{\infty}dE\,g(E)\gamma_{\rm t}(E,F)\exp\left(-\gamma_{\rm t}(E,F)t\right) \tag{26}\] \[= \frac{\alpha\gamma\left(\alpha+1,\gamma_{\rm rt}(F)t\right)}{\gamma_{\rm rt}(F)^{\alpha}t^{\alpha+1}}\sim\frac{\alpha\Gamma\left(\alpha+1\right)}{\gamma_{\rm rt}(F)^{\alpha}t^{\alpha+1}}, \tag{27}\] where \(\gamma(z,p)\equiv\int_{0}^{p}e^{-t}t^{z-1}dt\) for \({\rm Re}\,z>0\), and \(\gamma(z,p)\) and \(\Gamma(z)\) are the incomplete Gamma function and the Gamma function, respectively. [49] \(\alpha<1\) indicates dispersive transport. ## 3 Influence of reflecting boundary condition Here, we study the influence of a reflecting boundary placed at \(x=0\) for the case of \(x_{\rm i}>0\) and \(x>0\). In this section, we consider a reflecting boundary condition in a continuum limit, where the lattice spacing denoted by \(b\) goes to zero. In Sec. 5, we show the influence of both a finite \(b\) and the reflecting boundary on the fluctuation relation by taking the structure factor into account. The solution in the limit of \(b\to 0\) has already been derived as [34] \[G_{\rm r}(x,x_{\rm i},t)=\exp\left(\frac{\mu}{\sigma^{2}}(x-x_{\rm i})\right)g_{\rm r}(x,x_{\rm i},t). \tag{28}\] By substituting Eq.
(28) into the reflecting boundary condition given by \[\mu G_{\rm r}(0,x_{\rm i},t)-\frac{\sigma^{2}}{2}\frac{\partial}{\partial x}G_{\rm r}(x,x_{\rm i},t)\bigg{|}_{x=0}=0, \tag{29}\] we obtain \[\mu g_{\rm r}(0,x_{\rm i},t)-\sigma^{2}\frac{\partial}{\partial x}g_{\rm r}(x,x_{\rm i},t)\bigg{|}_{x=0}=0. \tag{30}\] We express \(g_{\rm r}(x,x_{\rm i},t)\) as \[g_{\rm r}(x,x_{\rm i},t)=g_{0}(|x-x_{\rm i}|,t)+g_{0}(x+x_{\rm i},t)+\int_{0}^{\infty}dz\,g_{0}(x+x_{\rm i}+z,t)\zeta(z), \tag{31}\] and determine \(\zeta(z)\) from the boundary condition. We can prove that the first two terms on the right-hand side of Eq. (31) cancel each other in calculating the second term on the left-hand side of Eq. (30) using \[\left.\frac{d}{dx}g_{0}(x+x_{\rm i},t)\right|_{x=0}=-\left.\frac{d}{dx}g_{0}(x-x_{\rm i},t)\right|_{x=0}. \tag{32}\] Eq. (32) follows from \[\left.\frac{d}{dx}\hat{g}_{0}(x\pm x_{\rm i},s)\right|_{x=0}=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\infty}^{\infty}dk\,\frac{(-ik)\exp(\mp ikx_{\rm i})}{1-\hat{\psi}_{\rm t}\left[1-\mu^{2}/(2\sigma^{2})-(\sigma^{2}/2)k^{2}\right]} \tag{33}\] \[=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\infty}^{\infty}dk\,\frac{(\mp ik)\exp(-ikx_{\rm i})}{1-\hat{\psi}_{\rm t}\left[1-\mu^{2}/(2\sigma^{2})-(\sigma^{2}/2)k^{2}\right]}. \tag{34}\] The remaining first derivative can be calculated using integration by parts as \[\frac{d}{dx}\int_{0}^{\infty}dz\,g_{0}(x+x_{\rm i}+z,t)\zeta(z)=\int_{0}^{\infty}dz\,\zeta(z)\frac{d}{dz}g_{0}(x+x_{\rm i}+z,t) \tag{35}\] \[=-g_{0}(x+x_{\rm i},t)\zeta(0)-\int_{0}^{\infty}dz\,g_{0}(x+x_{\rm i}+z,t)\frac{d}{dz}\zeta(z). \tag{36}\] When Eq. (36) is substituted into Eq. (30), we find \[\frac{d}{dz}\zeta(z)=-\frac{\mu}{\sigma^{2}}\zeta(z), \tag{37}\] where we have \(\zeta(0)=-2\mu/\sigma^{2}\). By solving Eq. (37), \(\zeta(z)=-(2\mu/\sigma^{2})\exp\left[-(\mu/\sigma^{2})z\right]\) is obtained. Equation (31) can be rewritten as [34] \[g_{\rm r}(x,x_{\rm i},t)=g_{0}(|x-x_{\rm i}|,t)+g_{0}(x+x_{\rm i},t)-\frac{2\mu}{\sigma^{2}}\int_{0}^{\infty}dz\,g_{0}(x+x_{\rm i}+z,t)\exp\left(-\frac{\mu}{\sigma^{2}}z\right). \tag{38}\] We note that \(g_{\rm r}(x,x_{\rm i},t)=g_{\rm r}(x_{\rm i},x,t)\) and find that \[\frac{G_{\rm r}(x,x_{\rm i},t)}{G_{\rm r}(x_{\rm i},x,t)}=\exp\left[(2\mu/\sigma^{2})(x-x_{\rm i})\right]=\exp\left(\frac{qF(x-x_{\rm i})}{k_{\rm B}T}\right), \tag{39}\] even under the reflecting boundary condition at \(x=0\) and when a memory kernel is present in the drift-diffusion equation. In Eq. (28), \(g_{\rm r}(x,x_{\rm i},t)\) given by Eq. (38) is not a function of \(x-x_{\rm i}\) alone, so Eq. (22) does not hold. Similarly, in Eq. (28), \(\hat{g}_{\rm r}(x,x_{\rm i},s)\) expressed by Eq. (38) is not an even function of \(\mu\). Therefore, if \(G_{\rm r}(x,x_{\rm i},t)\) is used instead of \(G_{0}(x,x_{\rm i},t)\), then Eq. (24) does not hold. ## 4 Influence of structure factor Here, we consider the Green's function given by Eq. (9) with the structure factor given by Eq. (10), without assuming \(kb\ll 1\). We transform Eq. (9) into a form suitable for studying the fluctuation relation. By introducing \(k_{2}=k-iC\) in Eq. (10), we obtain \[\lambda(k_{2}+iC)=\Gamma_{\rm p}(F)\exp\left(ik_{2}b-Cb\right)+\Gamma_{\rm m}(F)\exp\left(-ik_{2}b+Cb\right). \tag{40}\] We determine \(C\) to satisfy \(\Gamma_{\rm p}(F)\exp\left(-Cb\right)=\Gamma_{\rm m}(F)\exp\left(Cb\right)\) so that the denominator of Eq. (9) becomes an even real function of \(k\).
When \(\int_{-\pi}^{\pi}dk\,\exp[-ik(x-x_{\rm i})]\) is applied to an even real function of \(k\), the result should be an even real function of \(x-x_{\rm i}\), which is a desirable form to study the effect of exchanging \(x\) and \(x_{\rm i}\). Using Eq. (40) and Eq. (8), we find that \[C=\frac{1}{2b}\ln\frac{\Gamma_{\rm p}(F)}{\Gamma_{\rm m}(F)}=qF/(2k_{\rm B}T). \tag{41}\] Using Eq. (41), we can rewrite \(\lambda(k)\) as \[\lambda(k_{2}+iC)=\frac{\cos(k_{2}b)}{\cosh\left[qFb/(2k_{\rm B}T)\right]}. \tag{42}\] The Green's function can then be expressed as \[G_{0}(x,x_{\rm i},t)=\exp\left[C(x-x_{\rm i})\right]g_{0}(|x-x_{\rm i}|,t), \tag{43}\] where \(g_{0}(|x-x_{\rm i}|,t)\) can be expressed in the Laplace domain as \[\hat{g}_{0}(|x-x_{\rm i}|,s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int _{-\pi}^{\pi}dk\,\frac{\exp[-ik(x-x_{\rm i})]}{1-\hat{\psi}_{\rm t}\cos(kb)/ \cosh\left[qFb/(2k_{\rm B}T)\right]}. \tag{44}\] \(\hat{g}_{0}(x-x_{\rm i},s)\) is an even function of \(x-x_{\rm i}\). Equations (43) and (44) are consistent with the previous results. [31] Using the Green's function, we derive the fluctuation relation: \[\frac{G_{0}(x,x_{\rm i},t)}{G_{0}(x_{\rm i},x,t)}=\exp\left[2C(x-x_{\rm i}) \right]=\exp\left[(x-x_{\rm i})qF/(k_{\rm B}T)\right]. \tag{45}\] Therefore, the assumption \(kb\ll 1\) is not required to derive the fluctuation relation given by Eq. (21). Equation (43) is a function of \(x-x_{\rm i}\) alone. Therefore, Eq. (22) holds as shown previously. [31, 32, 33] Similarly, in Eq. (43), \(g_{0}(|x-x_{\rm i}|,t)\) expressed by Eq. (44) is an even function of \(F\). Therefore, Eq. (24) holds. ## 5 Influence of reflecting boundary condition and structure factor Here, we study the influence of a reflecting boundary placed at \(x=0\) for the case of \(x_{\rm i}>0\) and \(x>0\) in addition to the structure factor, without assuming \(kb\ll 1\). We assume that the solution under the reflecting boundary condition can be given in the form of Eq. (43) and introduce \(g_{\rm r}(x,x_{\rm i},t)\) to satisfy, \[G_{0}(x,x_{\rm i},t) = \exp\left[C(x-x_{\rm i})\right]g_{\rm r}(x,x_{\rm i},t) \tag{46}\] \[= \left(\frac{\Gamma_{\rm p}(F)}{\Gamma_{\rm m}(F)}\right)^{(x-x_{ \rm i})/(2b)}g_{\rm r}(x,x_{\rm i},t), \tag{47}\] where we have \(C=qF/(2k_{\rm B}T)\) from Eq. (41). By substituting Eq. (47) into the reflecting boundary condition given by, [38] \[-\Gamma_{\rm p}(F)G_{\rm r}(0,x_{\rm i},t)+\Gamma_{\rm m}(F)G_{\rm r}(b,x_{\rm i },t)=0, \tag{48}\] we obtain, \[-\Gamma_{\rm p}^{1/2}(F)g_{\rm r}(0,x_{\rm i},t)+\Gamma_{\rm m}^{1/2}(F)g_{\rm r }(b,x_{\rm i},t)=0. \tag{49}\] Equation (48) indicates the vanishing of the probability current between the site \(0\) and site \(b\). [38] In the limit of \(b\to 0\), Eq. (48) can be approximated by, \[-\Gamma_{\rm p}(F)G_{\rm r}(0,x_{\rm i},t)+\Gamma_{\rm m}(F)\left[G_{\rm r}(0,x_{ \rm i},t)+\left.b\frac{d}{dx}G_{\rm r}(x,x_{\rm i},t)\right|_{x=0}\right]=0. \tag{50}\] By using \(\Gamma_{\rm p}(F)-\Gamma_{\rm m}(F)\to qFb/(2k_{\rm B}T)=\mu/b\), and \(\Gamma_{\rm m}(F)\to 1/2\), Eq. (50) reduces to the reflecting boundary condition in the continuum limit given by Eq. (29), where \(\sigma^{2}=b^{2}\) is used. We express \(g_{\rm r}(x,x_{\rm i},t)\) as \[g_{\rm r}(x,x_{\rm i},t)=g_{0}(|x-x_{\rm i}|,t)+g_{0}(x+x_{\rm i},t)+\sum_{j=0} ^{\infty}g_{0}(x+x_{\rm i}+bj,t)\zeta(j), \tag{51}\] and determine \(\zeta(j)\) from the boundary condition, where the Laplace transform of \(g_{0}(|x-x_{\rm i}|,t)\) is given by Eq. (44). 
We first note \[\hat{g}_{0}(|b\pm x_{\rm i}|,s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\pi}^{\pi}dk\,\frac{\exp(\mp ikb-ikx_{\rm i})}{1-\hat{\psi}_{\rm t}\cos(kb)/\cosh\left[qFb/(2k_{\rm B}T)\right]}. \tag{52}\] We consider the second term on the left-hand side of Eq. (49) using the first two terms on the right-hand side of Eq. (51). Using Eq. (52), \(2g_{b}(x_{\rm i},t)=g_{0}(|b-x_{\rm i}|,t)+g_{0}(b+x_{\rm i},t)\) in the second term on the left-hand side of Eq. (49) can be expressed in the Laplace domain as \[\hat{g}_{b}(x_{\rm i},s)=\frac{1-\hat{\psi}_{\rm t}}{s}\frac{1}{2\pi}\int_{-\pi}^{\pi}dk\,\frac{\cos(kb)\exp(-ikx_{\rm i})}{1-\hat{\psi}_{\rm t}\cos(kb)/\cosh\left[qFb/(2k_{\rm B}T)\right]}. \tag{53}\] Using the third term on the right-hand side of Eq. (51), we find \[\sum_{j=0}^{\infty}g_{0}(b+x_{\rm i}+bj,t)\zeta(j)=\sum_{\ell=1}^{\infty}g_{0}(x_{\rm i}+b\ell,t)\zeta(\ell-1)\] \[=\sum_{\ell=0}^{\infty}g_{0}(x_{\rm i}+b\ell,t)\zeta(\ell-1)-g_{0}(x_{\rm i},t)\zeta(-1). \tag{54}\] When Eq. (54) is substituted into Eq. (49), we note that the following equation should hold, \[\frac{\zeta(j)}{\zeta(j-1)}=\left(\frac{\Gamma_{\rm m}(F)}{\Gamma_{\rm p}(F)}\right)^{1/2}, \tag{55}\] and the rest of the terms should satisfy \[-2\Gamma_{\rm p}^{1/2}(F)g_{0}(x_{\rm i},t)+\Gamma_{\rm m}^{1/2}(F)\left[2g_{b}(x_{\rm i},t)-g_{0}(x_{\rm i},t)\zeta(-1)\right]=0, \tag{56}\] where \(g_{b}(x_{\rm i},t)\) is given by Eq. (53). Equation (56) yields \[\zeta(-1)=2\left[\frac{g_{b}(x_{\rm i},t)}{g_{0}(x_{\rm i},t)}-\left(\frac{\Gamma_{\rm p}(F)}{\Gamma_{\rm m}(F)}\right)^{1/2}\right], \tag{57}\] and Eq. (55) yields \[\zeta(j)=\left(\frac{\Gamma_{\rm m}(F)}{\Gamma_{\rm p}(F)}\right)^{(j+1)/2}\zeta(-1). \tag{58}\] Using Eq. (8), we obtain \[g_{\rm r}(x,x_{\rm i},t)=g_{0}(|x-x_{\rm i}|,t)+g_{0}(x+x_{\rm i},t)-A_{b}(x_{\rm i})\sum_{j=0}^{\infty}g_{0}(x+x_{\rm i}+bj,t)\exp\left[-qFbj/(2k_{\rm B}T)\right], \tag{59}\] where \(A_{b}(x_{\rm i})=-\exp\left[-qFb/(2k_{\rm B}T)\right]\zeta(-1)\) can be expressed as \[A_{b}(x_{\rm i})=2\exp\left[-qFb/(2k_{\rm B}T)\right]\left[\exp\left(\frac{qFb}{2k_{\rm B}T}\right)-\frac{g_{b}(x_{\rm i},t)}{g_{0}(x_{\rm i},t)}\right]. \tag{60}\] Because of \(\cos(kb)\) in \(\hat{g}_{b}(x_{\rm i},s)\) [Eq. (53)], we have \(g_{\rm r}(x,x_{\rm i},t)\neq g_{\rm r}(x_{\rm i},x,t)\) and find from Eq. (46) that \[\frac{G_{\rm r}(x,x_{\rm i},t)}{G_{\rm r}(x_{\rm i},x,t)}\neq\exp\left(\frac{qF(x-x_{\rm i})}{k_{\rm B}T}\right), \tag{61}\] if the limit of \(b\to 0\) is not taken; the fluctuation relation given by Eq. (21) does not hold when a reflecting boundary condition is imposed and the structure factor of the lattice is taken into account. One of the merits of studying the random walk without taking a continuum limit is that we are able to study nonlinear field-driven transport characterized by \(qFb/(2k_{\rm B}T)>1\). As shown in Sec. 4, the fluctuation relation holds for open boundaries without a reflecting boundary for \(qFb/(2k_{\rm B}T)>1\). Here, we show a subtle point in studying nonlinear field-driven transport using a continuous-time random walk model when a reflecting boundary condition is imposed. The solution given by Eqs. (59) and (60) reduces to Eq. (38) in a continuum limit.
In the limit of \(b\to 0\), we have \(g_{b}(x_{\rm i},t)\to g_{0}(x_{\rm i},t)\), and \(A_{b}(x_{\rm i})\to qFb/(k_{\rm B}T)\); \(A_{b}(x_{\rm i})\) becomes independent of \(x_{\rm i}\) and the relation given by \(g_{\rm r}(x,x_{\rm i},t)=g_{\rm r}(x_{\rm i},x,t)\) is recovered. Using \(\sum_{j=0}^{\infty}bf(bj)\rightarrow\int_{0}^{\infty}dxf(x)\) for a general function of \(f(x)\), and \(2\mu/\sigma^{2}\to qF/(k_{\rm B}T)\) [Eqs. (13) and (14)], we note that Eq. (38) is obtained by taking the limit of \(b\to 0\) for Eqs. (59) and (60). \(g_{\rm r}(x,x_{\rm i},t)\neq g_{\rm r}(x_{\rm i},x,t)\) follows from \(\cos(kb)\) in \(\hat{g}_{b}(x_{\rm i},s)\); by taking the limit of \(b\to 0\), the fluctuation relation given by Eq. (21) is recovered. ## 6 Conclusion Stochastic thermodynamics was originally formulated using Langevin equations, where the equation of motion was used. [50, 51] Thermodynamic work can be straightforwardly expressed using the equation of motion in a Langevin equation. The ensemble-averaged quantities calculated from a Langevin equation can be equivalently obtained using the Fokker-Planck equation. Therefore, the Fokker-Planck equation is also amenable to stochastic thermodynamics. The lattice random walks considered here are not directly related to a Langevin equation but are related to Fokker-Planck equations as the lattice spacing approaches zero. By considering arbitrariness in defining the work done in stochastic thermodynamics, [16, 17] which can be introduced only through stationary distributions for lattice random walks, we focus on a fundamental fluctuation relation under nonequilibrium situations expressed using transition probability during an arbitrary time duration. Using a continuous-time lattice random walk model under a general waiting-time distribution of transitions, we consider noninteracting charge carriers driven by an external electric field. Various non-equilibrium conditions associated with different boundary conditions can be analytically studied by continuous time random walk models. A fluctuation relation [Eq. (21)] is derived by expressing the Green's function for a lattice random walk in a suitable form to study the effect of exchanging the initial and final positions for free random walks without boundary conditions. In the presence of a reflecting boundary condition, Eq. (21) is derived in a continuum limit, where the lattice spacing denoted by \(b\) goes to zero. When a reflecting boundary condition is imposed without taking the limit of \(b\to 0\), Eq. (21) does not hold. The result indicates a subtle point in imposing a reflecting boundary condition in continuous time random walk models when particles are driven by a field in a nonlinear response regime given by \(qFb/(2k_{\rm B}T)>1\). Equation (21) can be regarded as the Crooks fluctuation theorem in that the ratio between the distribution of trajectories obtained by changing the initial and final positions is characterized by the free energy difference between the initial and final positions induced by the applied field. [9, 10, 11] In free space, the Crooks fluctuation theorem holds irrespective of the lattice structure factor and the form of the waiting-time distribution of transitions. In free space, we can also prove Eqs. (22) and (24), which are given by the ratio for the trajectory distribution by changing the drift (field) directions without changing the initial position. However, Eqs. (22) and (24) break down when a reflecting boundary is present, whereas Eq. 
(21) holds irrespective of the reflecting boundary condition in a continuum limit. Therefore, the Crooks fluctuation theorem should be interpreted in terms of exchanging the initial and final positions rather than exchanging the drift (field) directions. The left-hand side of Eq. (21), expressed using the exchange of the initial and final positions, also appears in the detailed balance relation in a closed equilibrium system; [38] the detailed balance originates from microscopic reversibility. [38, 52] Here, the detailed balance in equilibrium is extended to the nonequilibrium situation driven by an external field under various boundary conditions, both by taking into account a lattice structure factor and in the limit of \(b\to 0\). The derivation of the fluctuation relation relies on the reciprocity (causality) relation of the lattice Green's function. Imposing boundary conditions associated with various nonequilibrium situations can be performed analytically using the lattice Green's function. As long as the lattice Green's function can be introduced, the fluctuation relation might be proved for other lattices. However, the derivation is not applicable if the lattice Green's function is not defined. If particles are driven by an external field on a lattice where each site has a fixed random trap energy, the situation is called quenched energy disorder. For quenched energy disorder, the average with respect to the density of states should, in principle, be used in evaluating the physical quantity, which is the ratio between the normal transition probability and the transition probability obtained by exchanging the initial and final positions. [53, 54] The waiting-time distribution calculated using the density of states can be regarded as pre-averaging of the energetic disorder. Studying the fluctuation relation for a lattice random walk under an external electric field with quenched energy disorder, without resorting to pre-averaging, remains an open problem.
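As a complement to the analytical results, the free-space fluctuation relation of Eqs. (21) and (45) can be spot-checked numerically. The sketch below is ours, not part of the paper: it simulates a biased continuous-time random walk on a one-dimensional lattice (lattice constant \(b=1\)) with exponential waiting times, i.e., the disorder-free case; since the relation is independent of the form of the waiting-time distribution, this verifies only one representative case, and the function and variable names are illustrative.

```python
import numpy as np

def final_positions(x0, t_max, rate, bias, n_walkers, rng):
    """Biased CTRW on a 1D lattice in free space (b = 1).

    Each walker waits an exponential time with total rate `rate`, then
    hops +1 with probability Gamma_p and -1 with probability Gamma_m,
    following Eqs. (6)-(7) with bias = qFb/(k_B T).
    Returns the positions at time t_max.
    """
    gamma_p = np.exp(bias / 2.0) / (2.0 * np.cosh(bias / 2.0))  # Eq. (6)
    pos = np.full(n_walkers, x0, dtype=np.int64)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        t_next = t[idx] + rng.exponential(1.0 / rate, size=idx.size)
        hops = t_next <= t_max          # events after t_max are discarded
        t[idx] = t_next
        move = np.where(rng.random(hops.sum()) < gamma_p, 1, -1)
        pos[idx[hops]] += move
        active[idx[~hops]] = False
    return pos

rng = np.random.default_rng(1)
bias = 0.8                       # qFb/(k_B T), beyond the linear-response regime
x_i, x, t_max = 0, 3, 5.0
n = 500_000
G_fwd = np.mean(final_positions(x_i, t_max, 1.0, bias, n, rng) == x)
G_bwd = np.mean(final_positions(x, t_max, 1.0, bias, n, rng) == x_i)
print(G_fwd / G_bwd)             # ~ 11.0 up to sampling error
print(np.exp(bias * (x - x_i)))  # exp(qF(x - x_i)/(k_B T)) = 11.02...
```

With this bias the transitions are clearly discrete (\(qFb/(2k_{\rm B}T)=0.4\)), yet the estimated ratio of the forward and position-exchanged transition probabilities matches \(\exp[qF(x-x_{\rm i})/(k_{\rm B}T)]\) to within sampling error, consistent with Eq. (45) holding without the \(kb\ll 1\) approximation.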
2305.13960
From Model-Based to Data-Driven Simulation: Challenges and Trends in Autonomous Driving
Simulation is an integral part in the process of developing autonomous vehicles and advantageous for training, validation, and verification of driving functions. Even though simulations come with a series of benefits compared to real-world experiments, various challenges still prevent virtual testing from entirely replacing physical test-drives. Our work provides an overview of these challenges with regard to different aspects and types of simulation and subsumes current trends to overcome them. We cover aspects around perception-, behavior- and content-realism as well as general hurdles in the domain of simulation. Among others, we observe a trend of data-driven, generative approaches and high-fidelity data synthesis to increasingly replace model-based simulation.
Ferdinand Mütsch, Helen Gremmelmaier, Nicolas Becker, Daniel Bogdoll, Marc René Zofka, J. Marius Zöllner
2023-05-23T11:39:23Z
http://arxiv.org/abs/2305.13960v3
# From Model-Based to Data-Driven Simulation: Challenges and Trends in Autonomous Driving ###### Abstract Simulation is an integral part in the process of developing autonomous vehicles and advantageous for training, validation, and verification of driving functions. Even though simulations come with a series of benefits compared to real-world experiments, various challenges still prevent virtual testing from entirely replacing physical test-drives. Our work provides an overview of these challenges with regard to different aspects and types of simulation and subsumes current trends to overcome them. We cover aspects around perception-, behavior- and content-realism as well as general hurdles in the domain of simulation. Among others, we observe a trend of data-driven, generative approaches and high-fidelity data synthesis to increasingly replace model-based simulation. ## 1 Introduction Simulations are an integral and indispensable part of the development process of autonomous vehicles (AV) and especially helpful for validation & verification (V&V). They enable researchers to massively scale up training and testing of their software stacks beyond the limits of real-world experiments. Moreover, they make it possible to better cover the "long tail of events": particularly challenging corner case situations, or anomalies, occur only very rarely in reality, but are exceptionally powerful for improving the safety and robustness of AVs. Simulation allows provoking them much more frequently. However, despite all their benefits, simulations still face a series of major challenges, especially related to a still substantial gap in realism compared to the real world. An increasing interest in driving simulation has recently brought forth detailed surveys about requirements, frameworks and their application in V&V of AD. A technical report by Fadaie [15] and the studies by Kaur et al. [26] and Kang et al. [25] aim at requirements analysis and comparison of existing simulation environments. Alghodhaifi and Lakshmanan [1] put simulation methods and environments in the context of different assessment methods and focus on methodology. While these works provide rich overviews of the state of the art, none of them specifically presents the latest trends and their relevance for different challenges related to content-, behavior- or perception-realism. During our literature review, we were missing clear indications about future developments in the field, especially in view of recent years' advancements towards highly data-driven methods. Also, to our knowledge, no up-to-date classification scheme exists that could facilitate comparisons between simulation approaches. Our present work aims to fill this research gap. In Sec. 2 we first establish a hierarchy of simulation approaches and touch upon their progression over time. Sec. 3 then surveys current challenges that simulations are confronted with, especially with respect to three major aspects of realism. Novel, trending concepts to overcome these challenges are showcased and give an indication about the direction future research may be moving towards. We focus on simulation for testing and refinement of AD software stacks in particular. Figure 1: Examples of our proposed simulation levels. Top-left to bottom-right: AR-enhanced (level 1), SUMO (level 2), CARLA (level 3), Block-NeRF (level 4) [59]. ## 2 Levels of Simulation Approaches To better understand the context of different challenges and trends, we propose the classification scheme in Tab. 1.
It delineates AD simulation approaches according to the comprehensiveness of the covered aspects, their level of realism, and other criteria. The presented hierarchy is partly inspired by [16]. Compared to previous taxonomies ([1, 15, 25, 26]), the categories presented in our scheme are informed less by _what_ is simulated than by _how_ things are simulated, that is, with a focus on the methodology used by a simulation. We compare the categories with respect to the following aspects and additionally list typical applications and concrete examples of representatives of each category, where possible. * **Closed-loop / Reactive**: This criterion describes whether these types of environments allow for closed-loop simulations, i.e., simulations in which the simulated system (or parts of it) can dynamically react to changed conditions in its environment and vice versa. * **End-to-end (E2E) Development & Testing**: This aspect is about whether development and testing of entire AD software stacks are supported by the simulation. Besides reactivity, the simulation environment must provide exteroceptive and interoceptive sensor models. Whether a given simulation environment is suitable for testing a particular AD stack, however, depends on the stack's specific requirements. * **Visual Fidelity**: This criterion describes the perceived degree of realism of observations produced by simulated sensors. It serves as a qualitative measure of how small the appearance gap is with the respective simulation type, i.e., it relates to _what things look like_. * **Content & Behavior Diversity**: This criterion is a qualitative measure of how much diversity a simulation allows for. This includes diversity in static and dynamic objects (e.g., different vehicle or building shapes and textures), in the environment (e.g., different lighting conditions or weather), as well as in the behavior of dynamic agents within the simulation. * **Object Representation**: This relates to the way in which static and dynamic objects are represented in the simulation. We distinguish between explicit and implicit representation. For explicitly represented objects, a precise description or model (e.g., a CAD model) is given, involving parameters that can be interpreted and tweaked purposefully. An implicit representation does not feature a clear-cut object description, but instead involves either sensor observations or a (latent) neural representation. The latter is usually not open to human interpretation. * **Scalability**: This criterion describes the degree to which new simulations can be designed and created at a large scale. It relates to how much manual effort is involved in the creation of new, diverse simulations. * **Controllability**: This criterion describes the granularity at which parameters of a given simulation can be tweaked, that is, the level of control a user has over a simulation. While model-driven simulations allow variables such as a car's target speed to be tweaked explicitly, the parameters of data-driven models are oftentimes not even human-understandable. We distinguish between the following levels or categories of simulation approaches. **Level 0: Log Replay.** Log replay is the most basic form of virtually reproducing real-world scenarios and involves simply replaying recorded driving data (e.g., streams of video data or point clouds, but possibly also CAN messages) without any way to change them, e.g., see [30].
Thus, it is not actually considered a type of simulation, but is nevertheless widely used for testing and thus worth mentioning in this taxonomy. Visual fidelity is high and only constrained by potential sensor imperfections. Diversity, on the other hand, is comparatively low, especially with respect to behavior: recorded data will comprise corner-case events (like accident scenarios or animals crossing a road) only very rarely. Scalability is very low, because new data must be collected each time in order to produce new variants of a scenario. Log replay allows developing and validating perception and prediction algorithms. Due to its lack of reactivity, however, it does not allow exploring hypothetical scenarios or testing different conditions.

**Level 1: AR-enhanced Log Replay.** AR-enhanced log replay builds upon plain log replay, but additionally uses augmented-reality techniques to dynamically inject artificial objects into the world. This adds a certain degree of variability to otherwise static recordings and thus enables better diversity. Recorded streams of exteroceptive sensor data, such as camera images or point clouds, are modified retroactively to make synthetic 3D-modeled objects appear in the virtual scenes. This improves scalability, because a single recording can be used for multiple different scenarios. However, the underlying data stream is still non-reactive, i.e., not influenced by actions taken in the virtual world, and thus not suitable for closed-loop simulation, e.g., see [70, 71].

**Level 2: Abstract Dynamics Simulation.** Shortcomings of levels 0 and 1 include the necessity of pre-recorded real-world logs and the fact that the simulated environments are not reactive to actions taken by the ego vehicle. Abstract dynamics simulations overcome these. These kinds of simulations can be run in abstract, simplified, entirely virtual environments and are closed-loop in that the environment can react to input signals dynamically. They usually involve a rather abstract, attributed object model that only covers particular aspects, such as vehicle dynamics or behavior. Precise 3D geometry or texture information is usually not included, limiting content diversity to only a small set of parameters. As a consequence, however, these simulations allow for efficient execution even beyond real time and are easier to scale through sampling from the parameter space. Changing the fundamental structure (traffic participants, map layout, etc.) still involves manual work, however. Abstract dynamics simulations are well suited for simulating dynamic agents. Typical applications in AD include motion prediction, planning, and control. Level 2 simulations do not produce any type of sensor data (no camera, lidar, etc.) and thus are, by themselves, not suitable for perception tasks.

**Level 3: Model-driven 3D Simulation.** In addition to the functionality provided by level 2 simulations, model-driven simulators involve explicit physics and object models, allowing for more advanced and comprehensive simulations, such as those needed for developing and testing perception algorithms. They usually involve a 3D rendering engine and are conceptually similar to modern video games. Typical characteristics include the capability to provide multi-modal output (camera, lidar, radar, etc.), accurate physics simulation, accurate ground-truth labels, and a fairly immersive, first-person-view experience for users.
Their visual fidelity can vary widely between different implementations. These 3D-engine-based simulators find application in end-to-end testing of driving stacks and are, along with level 2 simulations, most widely adopted today [26, 65]. However, since 3D worlds, contents, and scenarios are mostly created manually by artists and domain experts, variability and scalability are still limited.

**Level 4: Data-driven 3D Simulation.** In order to overcome some of the model-driven simulations' limitations, a lot of research is put into data-driven approaches today. Various AD companies and research groups have demonstrated impressive results in this domain recently [3, 16, 27, 56]. Data-driven simulators eliminate the need for explicit (3D-, behavior-, sensor-, etc.) models and instead involve (generative) neural networks that learn to reconstruct or synthesize virtual worlds and behaviors from real-world data with little to no human supervision. They aim to produce photo-realistic outputs, are characterized by a small domain and appearance gap, and can be scaled easily, since they do not require a human in the loop. A drawback, however, is the oftentimes limited control over the neural networks' outputs. While research is being done to allow for more human supervision ([27, 68]), parameters can usually only be tweaked implicitly through the input vectors. Moreover, most current approaches to data-driven simulation are capable of novel-viewpoint synthesis, but usually do not support synthesizing entirely new, unseen worlds.

Table 1: Our proposed classification scheme for AD simulation approaches, comparing levels 0–5 with respect to closed-loop / reactive capability, end-to-end development & testing support, visual fidelity, diversity (content & behavior), object representation, scalability, controllability, key use cases, and examples. Level 0 (log replay), for instance, is neither closed-loop nor suitable for end-to-end testing, and combines high visual fidelity with low diversity, implicit object representation, very low scalability, and no controllability, with perception as its key use case; level 1 (AR-enhanced log replay) is only partly reactive, with mixed fidelity, medium diversity, mixed object representation, and low scalability and controllability.

**Level 5: Mixed Neural Simulation.** Taking data-driven 3D simulations even one step further, mixed neural simulations are of particular interest for future research [16].
They address the previously mentioned limitations of level 4 and can be seen as a compound of multiple levels. Mixed neural simulations combine the data-driven nature of level 4 with the option to inject custom virtual objects from level 1 and the fine-grained controllability of levels 2 and 3, and additionally add support for fully synthetic environments. Generative capabilities are taken beyond only synthesizing novel viewpoints from given data and towards wholly artificially generated worlds and behavior. Moreover, neurally constructed, photo-realistic worlds can be integrated with arbitrary virtual content in a pick-and-place fashion for even higher diversity and flexibility. A key goal with this type of simulation is also to have precise control over the environment. However, to our knowledge, no simulators of this type are available today.
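To make the scheme easier to work with programmatically, it can be encoded as a small data structure. Below is a minimal sketch in Python; the attribute values are paraphrased from the level descriptions above, and the type and field names are our own illustrative choices rather than part of the paper or any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationLevel:
    """One row of the proposed classification scheme (values paraphrased)."""
    level: int
    name: str
    closed_loop: bool          # can the environment react dynamically?
    end_to_end: bool           # supports full AD-stack development & testing
    visual_fidelity: str
    scalability: str
    controllability: str

LEVELS = (
    SimulationLevel(0, "Log replay",             False, False, "high",   "very low", "none"),
    SimulationLevel(1, "AR-enhanced log replay", False, False, "mixed",  "low",      "low"),
    SimulationLevel(2, "Abstract dynamics",      True,  False, "none",   "medium",   "high"),
    SimulationLevel(3, "Model-driven 3D",        True,  True,  "varies", "medium",   "high"),
    SimulationLevel(4, "Data-driven 3D",         True,  True,  "high",   "high",     "low"),
    SimulationLevel(5, "Mixed neural",           True,  True,  "high",   "high",     "high"),
)

# Example query: which levels support closed-loop, end-to-end testing?
print([l.name for l in LEVELS if l.closed_loop and l.end_to_end])
```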
**Conclusion.** We presented categories of simulation approaches for AD with a particular focus on simulation of an ego vehicle's external environment. While the boundaries between these levels can sometimes be blurry, the levels nevertheless roughly build upon one another, allowing for increasingly complex and comprehensive, yet scalable, simulations. Higher levels typically make it possible to address challenges from different simulation aspects simultaneously, i.e., they enable high content-, behavior-, and perception realism (see Sec. 3) jointly within the same environment. The presented scheme helps to systematically classify modern simulation approaches in view of the field's latest advancements and is the first of its type. In the following, it serves as a guideline when investigating recent challenges and trends.

## 3 Challenges and Trends

Picking up on the hierarchy of simulation levels presented in Sec. 2, we showcase in the following the current challenges and corresponding trends (see Fig. 3) on the path to reaching the upper levels. We consider realism the overarching goal and biggest challenge on that path and define and decompose the term as follows. Realism, in our understanding, describes the level of detail and accuracy of a depiction of reality. In the context of AD simulation, we distinguish between the following three aspects of realism. The ultimate goal is to jointly maximize them, that is, to minimize the gap between the real world and simulation with respect to each of them. Firstly, _content realism_ is about accurately representing real-world objects (static and dynamic) and environments and their diversity. _Behavior realism_ covers dynamic aspects of real-world traffic scenes, such as non-static actors' motion characteristics. We exclude dynamics of the environment, such as changing weather effects, here. Lastly, _perception realism_ is about replicating the appearance of the real world from the perspective of different sensors, e.g., to produce photo-realistic-looking RGB images or accurate lidar point clouds. Besides realism, a number of horizontal challenges, related to validity and transferability, data acquisition and processing, and standardization, are presented in addition.

### Content Realism

The first set of challenges focuses on realistic representations of a traffic scene's _contents_. This includes its structure and topology itself, the road layout and infrastructure, and all static and dynamic objects surrounding the ego vehicle.

**Road Network.** An inherent part of driving simulations is the road network, usually represented as high-fidelity 3D maps. This includes the definition of road boundaries, driving lanes, traffic lights, stop signs, and others. Traditionally, these maps were derived from the real world using 3D mapping techniques or created by hand in CAD. Neither approach is scalable to the requirements of automated virtual testing in AD. Therefore, procedural content generation (PCG) has emerged as a technique for synthesizing virtual environments [17]. For example, PGDrive [31] is a 3D simulator that procedurally generates randomized road networks based on a set of fundamental building blocks. More recently, generative deep-learning (DL) models, such as generative adversarial networks (GANs), were found to be applicable in this context as well [23]. Lastly, Gisslén et al. proposed to employ adversarial reinforcement learning (RL) for PCG [22].
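As a toy illustration of the block-based procedural idea described for PGDrive (this sketch is our own and does not use PGDrive's actual API), a generator might chain randomly sampled, parameterized road blocks:

```python
import random

# Illustrative block types and parameter ranges; not taken from PGDrive.
BLOCKS = ("straight", "curve", "ramp", "roundabout", "intersection")

def generate_network(n_blocks, seed=None):
    """Chain randomly sampled road building blocks into a toy network."""
    rng = random.Random(seed)
    network = []
    for _ in range(n_blocks):
        block = rng.choice(BLOCKS)
        params = {"lanes": rng.randint(1, 4),
                  "length_m": round(rng.uniform(50.0, 300.0), 1)}
        if block == "curve":
            params["radius_m"] = round(rng.uniform(20.0, 200.0), 1)
        network.append((block, params))
    return network

print(generate_network(5, seed=42))
```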
Each of these approaches eventually contributes to the scalable generation of diverse road maps and hence, in a broader sense, to higher diversity and realism of a traffic scene's content.

**Scenes & Environment.** In addition to the road network itself, many approaches attempt to construct entire scenes in 3D or pixel space, including static objects and the environment, such as buildings and vegetation. Traditionally, these were hand-crafted by artists, which is a tedious process that oftentimes lacks variety. Many model-based 3D simulations additionally suffer from a severe lack of photo-realism [3], but recent advances in computer graphics, such as with Unreal Engine [20], promise to drastically alleviate this discrepancy.

Figure 2: Our proposed hierarchy of simulation levels.

SurfelGAN [62] is an approach to generate new scenes from data. Its authors propose the use of texture-mapped surfels (discs in 3D space) for reconstructing camera-recorded scenes in combination with a CycleGAN model that additionally accounts for generation artifacts. SurfelGAN allows for both novel-viewpoint (NVP) synthesis and novel scene configuration, that is, perturbing objects in a scene to create variations. Another popular recent trend is the use of Neural Radiance Fields (NeRFs) [37, 40] for NVP synthesis based on 2D imagery. Given a sparse set of input images of an object or a scene from different angles, NeRFs learn to project from a 5D input vector (3D position and 2D viewing angle) to a pixel's precise color and depth information. This effectively yields a high-detail, 360\({}^{\circ}\) neural representation of the scene, including accurate lighting. Block-NeRF [54] and BungeeNeRF [61] extend the concept to large-scale scenes, such as entire housing blocks, through carefully combining multiple smaller NeRFs. MIT ViSTA 2.0 [3] constitutes an entire open-source simulator that is solely data-driven. Given control inputs, it is capable of synthesizing lidar, RGB- & event-camera observations for novel viewpoints that are consistent with the virtual vehicle's kinematic model. This is achieved by complementing 2D RGB pixels with estimated depth information and subsequent projection into 3D space. As a result, ViSTA enables developing end-to-end driving models with reinforcement- or guided-policy-learning in simulation. Waabi World [56] seems to move in a similar direction; however, barely any details about its inner workings are publicly available. DriveGAN [27] addresses the typical problem of limited controllability over generated content. By decomposing the latent space into dedicated feature vectors for theme and content, the end-to-end differentiable, neural simulator allows not only controlling the background and weather of a scene, but also swapping out objects at precise grid locations. While this level of control is still far from what model-based simulators provide, DriveGAN indicates an important direction for the future development of data-driven simulation.

The previous approaches can be classified as data-driven (level 4), besides which there exist model-based approaches (level 3) as well. While the former usually generate sensor output directly, model-based approaches, in contrast, only yield more abstract structure and parameters and rely on a 3D engine for subsequent rendering. In this segment, Meta-Sim2 [11] proposed a way to generate realistically structured road scenes by utilizing an abstract graph representation of the scene.
Using RL, their model is trained in an unsupervised fashion to sample rules from a probabilistic context-free grammar to iteratively build up a scene graph, while incorporating prior expert knowledge. Sim2SG [46] takes a similar direction. SceneGen [53] is another approach towards traffic scene generation, based on ConvLSTM networks. These approaches yield diverse constellations of traffic agents in a scene and may also be used as an initialization for behavior models. Thus, they contribute to both content and behavior realism.

**3D Objects.** Besides the road network and general structure, another crucial part of simulated scenes are the contained objects, in particular their precise geometry, texture, etc. These objects include cars, trucks, bicycles, and pedestrians, but also buildings, trees, traffic signs, and other road infrastructure. While a scene's fundamental structure defines what objects are located where, their precise properties are of importance, too. These objects must be both diverse and of authentic appearance. Most game-engine-based simulators only feature a limited set of mostly hand-crafted assets, which does not reflect the diversity of the real world and thus adds to a wider domain gap. Instead of having artists tediously design these objects, it is preferable to either extract them from data automatically or even have them created synthetically. Various promising approaches have been developed, either for 3D-aware novel-view synthesis, such as with NeRFs, or for actual 3D object generation. The latter are represented either in implicit (or neural) form, as point clouds or, more recently, as concrete 3D meshes. Among the most promising recent advances in this regard is the GET3D model presented by Gao et al. [21]. They have developed an end-to-end trainable generative model that is capable of sampling entire 3D mesh models and corresponding textures from latent space, trained solely on 2D imagery. While previous approaches either failed to capture geometric details, were limited in mesh topology, or only produced implicit neural renderings, the outputs of GET3D are of high fidelity and can be used in 3D renderers directly. This helps to vastly increase object variety and build rich asset collections in an entirely automated fashion. Further approaches are showcased by DreamFusion [45] and Magic3D [32]: the latest advances in diffusion models and NeRFs are leveraged to build text-to-3D generation methods capable of synthesizing entire scenes from natural-language input. In the context of autonomous driving, these can be used for both object and whole-road-scene generation.

**Summary.** Self- or unsupervised deep generative models, especially NeRFs, find intensive application for both reconstruction and synthesis of simulation content and advance at a promising pace. Moreover, fully data-driven, synthesizing simulation environments rise in popularity over 3D-engine-based, model-based simulation systems. With regard to content realism, we see a trend from model-based simulation methods, involving manually created road networks, scenes, and 3D objects, towards deep-learning-facilitated, more automated approaches. Although recent works aim to address this, a key challenge remains the limited degree of controllability of highly data-driven approaches.

### Behavior Realism

In contrast to content realism, behavior realism is concerned with the dynamics of traffic scenarios.
This includes viewing traffic flow from a macroscopic perspective, as well as considering individual agents' behaviors and precise driver attitudes or trajectories on a microscopic level. We focus on traffic participants' trajectories, but disregard behavior such as facial expressions or gesturing in the following. One of the biggest challenges is a lack of variety in traditional, parameterized models. These fail to capture the full complexity of real-world traffic and oftentimes miss out on the rare, unusual, long-tail events. Following the definitions of Ding et al. [12], we distinguish between _data-driven_, _adversarial_, and _knowledge-guided_ methods.

**Data-Driven Methods.** These methods involve learning traffic dynamics or agent behavior from examples to generate novel, but realistic, new data. On the macroscopic side, Savva et al. [49] presented R-CTM, a scalable RNN-based approach for heterogeneous traffic-flow simulation. TrafficSim [52], on the other hand, is another approach for multi-agent motion prediction to overcome the limitations of traditional models. At least equally interesting in the context of AD, however, are microscopic motion models. To account for the temporal dimension of traffic scenarios, the vast majority of data-driven methods utilize sequence models, e.g., some sort of LSTM [33, 10, 9, 53]. Explicitly incorporating the interactions and interdependence between nearby traffic agents turned out to be a notable success factor. Corresponding techniques include convolutional social pooling [9] or the use of spatio-temporal attention [33]. AgentFormer [64] is another recent attention-based approach. It utilizes the concept of transformer models to jointly incorporate both the temporal and social dimensions for trajectory prediction and has achieved significant improvement over the state of the art. Given their success in other domains, we expect transformers to become much more important in the context of AD as well. Traditional sequence models like LSTMs or GRUs might be replaced to some extent in the future.

**Adversarial Methods.** Adversarial methods aim to particularly challenge the ego vehicle's planning. With adversarial reinforcement-learning approaches, a generator network learns adversarial policies with the goal of interfering with the AV and trying to make it fail. These approaches do not necessarily require naturalistic data and are very flexible. A particular challenge with adversarial methods is to produce trajectories that are difficult to cope with, yet not impossible. A particular challenge with adversarial RL is to reward the agents properly in order to prevent them from just trying to collide with the AV, as this would result in unrealistic scenarios. For this reason, Wachi et al. [57] divide the reward into an adversarial and a personal one. The adversarial reward is granted if the AV collides with any object in the virtual environment. The personal reward is granted to the agent if it stays close to or reaches its personal destination, which is a certain location in the environment. Other approaches use naturalistic data and modify the agents' trajectories to create critical scenarios. This can be achieved by formulating an optimization problem to make a scenario more critical, whereas criticality can be quantified through the use of different metrics [60, 51], such as time-to-collision or drivable area.
Klischat and Althoff [28] use evolutionary algorithms and try to minimize the drivable area of the AV by forcing the generators to adopt adversarial behavior. With NVIDIA STRIVE [48], the model first learns a latent-space representation of a traffic model before optimizing the agents' trajectories to challenge the AV. AdvSim [58] updates recorded lidar point clouds to match perturbed trajectories of surrounding vehicles to obtain scenarios that are safety-critical for a full autonomy stack. Like the data-driven approaches presented earlier, adversarial approaches contribute to more diverse, yet realistic, behavior. They are comparatively novel and differ in being specifically purposed to provoke especially demanding situations. Oftentimes, they also require little to no example data for doing so.

**Knowledge-Guided Methods.** In a broad sense, we consider knowledge-guided methods, also referred to as Informed ML [55], to be ML methods that incorporate prior expert knowledge to accelerate learning and yield more realistic results more quickly. A conditional diffusion model for controllable, realistic traffic was developed by Zhong et al. [68]. Here, a diffusion model for trajectory generation is combined with Signal Temporal Logic (STL). STL can be used to define rules such as obeying regulations or avoiding collisions and thus make the generated trajectories fulfill certain criteria. While these methods are not conceptually new, they had not found substantial application in AD so far.

**Summary.** In behavior realism, there is a trend towards increased use of ML. Data-driven methods mainly utilize sequence models and generate trajectories depending on surrounding agents and the environment. Those methods are well suited for realistic standard behavior such as occurs in the training data. Adversarial methods, on the other hand, try to challenge the AV and have the explicit goal of producing critical scenarios. This is particularly important for V&V of the AV. Knowledge-guided methods incorporate expert knowledge to gain better control over generated trajectories.

### Perception Realism

Perception realism, in our understanding, is about accurately modelling the specific characteristics and noise distributions of different types of (perception-related) sensors. Most prominent for AV development today are camera, lidar, and radar sensors, the simulation of each of which faces individual challenges.

**Camera.** Cameras are semantically the richest type of perception sensor and common among many applications. However, their accurate simulation still remains challenging. Different camera types and models suffer from different sorts of noise, such as under- and overexposure or distortion due to lens geometry or shutter speed. To minimize the real-to-sim gap, models used for simulation must account for these sensor-specific characteristics and/or generalize from them. Traditionally, statistical noise models were used; however, these cannot fully account for real-world complexity. In recent research, DL methods are used to infer noise models from data. Chang et al. [8] presented a generative approach to image denoising which supports multiple cameras simultaneously. The authors emphasize their future ambitions to extend the model to support single- or few-shot learning in order to reduce the data requirements for adapting to new cameras. Similarly, FFDNet [66] was developed as a particularly fast denoising model, based on CNNs and with support for different noise levels.
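For contrast with the learned noise models just discussed, the traditional statistical baseline can be as simple as a signal-dependent Poisson (shot) plus Gaussian (read) noise model. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np

def add_sensor_noise(img, gain=0.01, read_sigma=0.002, seed=None):
    """Toy statistical camera-noise model for an image normalized to [0, 1]:
    Poisson shot noise (signal-dependent) plus Gaussian read noise."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(img / gain) * gain   # mean preserved, variance grows with signal
    read = rng.normal(0.0, read_sigma, size=img.shape)
    return np.clip(shot + read, 0.0, 1.0)

clean = np.random.default_rng(0).random((4, 4))
noisy = add_sensor_noise(clean, seed=1)
```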
**Lidar.** Traditionally, simulation of point clouds is mostly based on depth maps and raycasting [63]. While constructing lidar points geometrically is fairly well understood, the challenge lies in accurately simulating intensity, attenuation, and raydrop. Raytracing techniques make it possible to account for intensity, but they are computationally expensive and necessitate explicit, high-detail information about the objects' geometry and texture. Recent approaches to make sensor simulation more effective are based on data-driven methods, which apply ML to learn to reproduce sensor data from recordings. Marcus et al. [36] translate point prediction into an image-to-image translation problem to generate 3D lidar points from 2D images. A different approach is pursued with LiDARsim [35], where the authors first reconstruct a 3D scene in simulation, then use traditional raycasting to get point clouds, and eventually apply a deep neural network to insert more realistic patterns in the form of deviations and noise. Another example in this realm is RINet [39], which uses supervised learning to obtain a model for raydrop and intensity simulation from RGB images.

**Radar.** Radars are seldom used directly for scene understanding, but rather to support camera or lidar perception [50]. While featuring superior performance under adverse weather conditions and for the purpose of measuring relative speed, radar sensors are especially difficult to model. That is due to physical effects and phenomena such as multi-path reflections, interference, ghost objects, or attenuation [24]. Moreover, radar data are more sparse and stochastic compared to lidar. While the stochasticity aspect can be addressed by including detection probability as an additional random component in the simulation [41], the impracticability of completely reconstructing the radar wave remains. A recent survey presents different schemata for classifying radar models by fidelity [34]. One can distinguish between idealized (aka black-box), physics-based (aka white-box), and hybrid (aka grey-box) models, whereas the latter two are of most relevance in AD. Simulating radar data, especially including realistic sensor noise, is challenging. However, even though this is an ongoing research topic (cf. [43, 44, 38]), we could not observe any clear trends in this regard.

**Summary.** In sensor simulation, a generally observable trend is to leverage real-world data in learned models to boost the capabilities of classical, statistics- or physics-based models. Few-shot learning appears to emerge as a promising way to cover the great variety of different sensor models and improve perception realism.

### Horizontal Challenges

The previous sections focused on challenges and trends around realism, as such is the primary goal of every simulation. However, there are a number of cross-cutting, horizontal challenges as well that span all types and levels of simulation.

Figure 3: Schematic overview of challenges & trends in AD simulation.

**Standardization.** As more AD simulation tools emerge on the market, the need for a standardized format for scenario description is growing in order to facilitate transfer and exchange between platforms. A current approach to tackle this is ASAM's1 family of OpenX standards, including OpenSCENARIO [6] for describing the dynamics of traffic situations, OpenDRIVE for maps, and OpenCRG [5] for road surfaces, complemented by OpenMaterial [19], developed by BMW, for the description of material properties.
In addition, the Open Simulation Interface (OSI) [4] exists to support data exchange between different simulation models, while Eclipse Cloe [14] contributes a middleware layer to abstract from specific simulation environments and thus speed up development. Having big industry players collaborate on open standards is a welcome trend for AV development in general. However, despite these standardization efforts, other simulation data formats, such as GeoScenario [47] by the University of Waterloo, TUM CommonRoad scenarios [2], and Traffic Charts and Scenic [18] by UC Berkeley, are still being developed in parallel.

Footnote 1: Association for Standardization of Automation and Measuring Systems

**Data & Compute Power.** Constructing high-fidelity simulations requires vast amounts of data to ensure that the simulated environment is representative of the world it is supposed to model. In AD, these data may originate from real-world test drives, from a fleet of sensor-equipped consumer cars, or from road-side measuring infrastructure. They are generally hard to gather at scale for organizations other than big industry players. We hope to see a growing trend towards open-data initiatives in the future. Moreover, compute power becomes an increasingly important factor, while its limited availability is oftentimes a bottleneck to scalability. The development of specialized, high-efficiency machine-learning hardware may help in this regard.

**Validity & Transferability.** Another general challenge with simulations is to quantify their real-world validity. On the one hand, one commonly seeks to minimize the real-to-sim gap. Closing the sim-to-real gap, however, is equally important. For a model developed and tested virtually, it is crucial to ensure its applicability to the real world. The most important question is how to measure whether a simulation is _good enough_ to be used as a replacement for real-world testing and training. This also relates to AI safety and is a research field of tremendous popularity today.

## 4 Conclusion

In this work, we first proposed a novel classification scheme to systematically compare AD simulation approaches that takes the latest advancements on the topics of data-driven and neural simulators into account. We then outlined current challenges in that domain and presented recent research trends aiming to address them. We specifically focused on content-, behavior-, and perception realism and also touched upon a few cross-cutting challenges. With regard to content realism, self-supervised generative models, such as NeRFs, make it possible to synthesize virtual 3D worlds with little to no hand-crafted data required. Behavior realism benefits from recent advances in deep learning, such as elaborate sequence models or transformers, but, more recently, also from adversarial models trained with RL. Deep-learning models have gained ground for perception realism as well and are oftentimes used to augment classical models with data-informed insights. Few-shot learning emerges as a promising way to cover a greater variety of sensor characteristics. A general challenge in AD simulation is a lack of standardization, especially in terms of data formats. However, industry seems to be slowly converging towards common standards on that end. Many questions around the validity & transferability of simulations as well as data and compute requirements still remain subject to ongoing research.
Across all aspects of a simulation, data-driven methods gain popularity over model-based approaches and promise to overcome many of their limitations. Given these trends, simulation environments are rapidly evolving towards levels 4 and 5, that is, comprehensive data-driven and mixed neural simulations. We expect future research to continue in this direction at a fast pace, enabled through the application of (generative) deep learning across all areas of AD simulation. Heading in this direction, we identified a number of important future research questions, including:

1. How can data-driven methods and their parameters be made better understandable and configurable in order to leverage expert knowledge?
2. How can the quality, validity, and transferability of simulations, and of data-driven approaches in particular, be quantified?
3. How can we extract or generate training data from the real world or in simulation in a large-scale manner and assess its relevance for different ODDs?
4. How can AV development benefit from incorporating subtle, high-detail phenomena, such as gestures and facial expressions, in simulations?

With respect to limitations and future work, it would be of interest to conduct a larger and more systematic follow-up literature review that goes into greater detail and also covers aspects that were intentionally disregarded in this work. Additionally, as many simulators are not available in an open-source fashion, gaining knowledge about these is important for more detailed comparisons in the future.

## 5 Acknowledgment

The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Climate Action within the "VVMethoden" (19A19002A) and "SoftDCar" (19S21002) projects. The authors would like to thank both consortia for the successful cooperation.
2302.09747
Fermi-LAT GeV excess and muon $g-2$ in a modular $A_4$ symmetry
The recent measurement of the muon anomalous magnetic dipole moment (muon $g-2$) suggests that there might exist new physics that dominantly interacts with muons. The observed gamma-ray excess from Fermi-LAT indicates that dark matter annihilates into specific charged fermions. We propose a model that simultaneously explains the Fermi-LAT GeV gamma-ray excess and a sizable muon $g-2$ with a modular $A_4$ symmetry. Due to the nature of this symmetry, our DM interacts only with pairs of muons, and we explain the sizable muon $g-2$ without suffering from constraints from any lepton flavor violations. We numerically show the allowed parameter space for each of the Fermi-LAT measurement, the relic density of DM, and the muon $g-2$, randomly scanning our input parameters.
Jongkuk Kim, Hiroshi Okada
2023-02-20T03:55:21Z
http://arxiv.org/abs/2302.09747v1
# Fermi-LAT GeV excess and muon \(g-2\) in a modular \(A_{4}\) symmetry

###### Abstract

The recent measurement of the muon anomalous magnetic dipole moment (muon \(g-2\)) suggests that there might exist new physics that dominantly interacts with muons. The observed gamma-ray excess from Fermi-LAT indicates that dark matter annihilates into specific charged fermions. We propose a model that simultaneously explains the Fermi-LAT GeV gamma-ray excess and a sizable muon \(g-2\) with a modular \(A_{4}\) symmetry. Due to the nature of this symmetry, our DM interacts only with pairs of muons, and we explain the sizable muon \(g-2\) without suffering from constraints from any lepton flavor violations. We numerically show the allowed parameter space for each of the Fermi-LAT measurement, the relic density of DM, and the muon \(g-2\), randomly scanning our input parameters.

## I Introduction

Modular non-Abelian discrete flavor symmetries are widely applied to various aspects of flavor (new) physics, such as the quark and lepton Yukawa sectors, to predict or reproduce experimental values such as masses, mixings, and phases, as well as the baryon asymmetry of the Universe via leptogenesis, the muon/electron anomalous magnetic dipole moment (muon/electron \(g-2\)), electric dipole moments (EDMs), flavor-changing processes like \(\mu\to e\gamma\) and \(b\to s\ell\bar{\ell}\), and dark matter (DM). All of these (except the quark and charged-lepton sectors) are expected to be signatures of physics beyond the standard model (BSM). Historically, this type of symmetry was initially proposed by F. Feruglio, who investigated the lepton sector via a modular \(A_{4}\) group. Subsequently, a large number of possibilities have arisen; for example, numerous subsequent references apply the modular \(A_{4}\) symmetry to the lepton and quark sectors.
Also, the leptonic final state was understood within a \(U(1)_{L_{\mu}-L_{\tau}}\) model and investigated [80]. In this paper, we explain the Fermi-LAT GeV excess, in which DM must annihilate into a pair of muons that produce the observed gamma rays. The required muon-specific interaction of the DM can be achieved by a modular \(A_{4}\) symmetry, and we stabilize our DM by assigning a nonzero modular weight. As a bonus of our model, we also explain the sizable muon \(g-2\) without worrying about LFVs such as \(\mu\to e\gamma\), since the DM has no interactions except with muon pairs.

This paper is organized as follows. In Sec. II, we show our model setup, starting from the relevant superpotential and soft-breaking terms, and formulate the heavy charged-lepton mass matrix. In Sec. III, we first discuss how our DM candidate interacts with the muon pair and evaluate the cross section requested by the Fermi-LAT experiment as well as the relic density of DM. Then, we present the formula for the muon \(g-2\). In Sec. IV, we present the region allowed by the Fermi-LAT measurement, the relic density of DM, and the muon \(g-2\), randomly scanning our input parameters. In Sec. V, we give our conclusions and discussion.

## II Model setup

In this section, we explain our model construction, introducing the SM leptonic superfields and new ones and assigning their charges under the symmetries \(SU(2)_{L}\otimes U(1)_{Y}\otimes A_{4}\), where \(-k\) is the modular weight and a "hat" over a field denotes a superfield.
Here, we add a set of vector-like matter superfields \((\widehat{E},\widehat{\overline{E}},\widehat{L}^{\prime},\widehat{\overline{L}^{\prime}})\), of which we use only the fermionic parts in our model. \(E,\overline{E}\) are singly charged fermions, while \(L^{\prime}\equiv[N^{\prime},E^{\prime}]^{T}\) and \(\overline{L^{\prime}}\equiv[\overline{N^{\prime}},\overline{E^{\prime}}]^{T}\) are isospin doublets. We use only the bosonic part of \(\widehat{\chi}\), which is identified as the DM candidate; therefore \(\chi\) has a vanishing VEV. We write \(\chi=(\chi_{R}+i\chi_{I})/\sqrt{2}\), an \(A_{4}\) trivial singlet with modular weight \(-3\). The DM \(\chi\) also plays a role in inducing the muon \(g-2\) together with \(E\) and \(L^{\prime}\).

\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|c|c|c|}
\hline\hline
 & \multicolumn{6}{c||}{SM leptonic superfields} & \multicolumn{5}{c|}{New superfields} \\
\hline
 & \(\widehat{L}_{e}\) & \(\widehat{L}_{\mu}\) & \(\widehat{L}_{\tau}\) & \(\widehat{\overline{e}}\) & \(\widehat{\overline{\mu}}\) & \(\widehat{\overline{\tau}}\) & \(\widehat{L}^{\prime}\) & \(\widehat{\overline{L}^{\prime}}\) & \(\widehat{E}\) & \(\widehat{\overline{E}}\) & \(\widehat{\chi}\) \\
\hline
\(SU(2)_{L}\) & \(\mathbf{2}\) & \(\mathbf{2}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\
\hline
\(U(1)_{Y}\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(1\) & \(1\) & \(1\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(-1\) & \(1\) & \(0\) \\
\hline
\(A_{4}\) & \(1\) & \(1\) & \(1^{\prime\prime}\) & \(1\) & \(1^{\prime\prime}\) & \(1^{\prime}\) & \(1\) & \(1\) & \(1^{\prime}\) & \(1^{\prime\prime}\) & \(1\) \\
\hline
\(-k\) & \(0\) & \(-2\) & \(0\) & \(0\) & \(-2\) & \(0\) & \(-3\) & \(-1\) & \(-3\) & \(-1\) & \(-3\) \\
\hline
\end{tabular}
\end{table}
Table 1: Field contents of the matter superfields and their charge assignments under \(SU(2)_{L}\otimes U(1)_{Y}\otimes A_{4}\), where \(-k\) is the modular weight.

The superfield contents and their charge assignments are shown in Table 1. The superfields \(\widehat{H}_{u}\) and \(\widehat{H}_{d}\) are introduced as in the MSSM. We denote their bosonic parts by \(H_{u}=[h_{u}^{+},(v_{u}+h_{u}+iz_{u})/\sqrt{2}]^{T}\) and \(H_{d}=[(v_{d}+h_{d}+iz_{d})/\sqrt{2},h_{d}^{-}]^{T}\), with totally neutral charges under \(A_{4}\) and zero modular weight \((-k)\), so that this sector is exactly the same as in the minimal supersymmetric theory. The SM VEV is defined by \(v_{H}\equiv\sqrt{v_{u}^{2}+v_{d}^{2}}\equiv 246\) GeV.
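The modular-weight assignments in Table 1 make the selection rules mechanical: an operator can appear in the superpotential only if its total weight can be compensated by a modular form of non-negative even weight. A minimal bookkeeping sketch (our own illustration, anticipating the superpotential of Eq. (II.1) below):

```python
# Modular weights (-k) from Table 1; H_u and H_d carry weight 0.
weights = {"L_e": 0, "L_mu": -2, "L_tau": 0, "ebar": 0, "mubar": -2,
           "taubar": 0, "Lp": -3, "Lpbar": -1, "E": -3, "Ebar": -1,
           "chi": -3, "H_u": 0, "H_d": 0}

def allowed(*fields):
    """True if the total weight can be offset by a modular form of
    non-negative even weight (weight 0 corresponds to a constant)."""
    k_form = -sum(weights[f] for f in fields)
    return k_form >= 0 and k_form % 2 == 0

print(allowed("mubar", "H_d", "L_mu"))   # True:  y_mu term (needs a weight-4 form)
print(allowed("Lpbar", "L_mu", "chi"))   # True:  g_E term (needs a weight-6 form)
print(allowed("ebar", "H_d", "Lp"))      # False: SM-new mixing, odd total weight
```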
Under these symmetries, the valid superpotential is found as follows:

\[\mathcal{W}_{Y}=y_{e}\widehat{\overline{e}}\widehat{H}_{d}\widehat{L}_{e}+y_{\mu}\widehat{\overline{\mu}}\widehat{H}_{d}\widehat{L}_{\mu}+y_{\tau}\widehat{\overline{\tau}}\widehat{H}_{d}\widehat{L}_{\tau}+g_{E}\widehat{\overline{L}^{\prime}}\widehat{L}_{\mu}\widehat{\chi}+y_{E}\widehat{\overline{\mu}}\widehat{E}\widehat{\chi}+h_{E}\widehat{\overline{E}}\widehat{H}_{d}\widehat{L}^{\prime}+M_{E}\widehat{\overline{E}}\widehat{E}+M_{L^{\prime}}\widehat{\overline{L}^{\prime}}\widehat{L}^{\prime}+\mu_{H}\widehat{H}_{u}\widehat{H}_{d}+\mu_{\chi}\widehat{\chi}\widehat{\chi},\]  (II.1)

where all couplings except \(y_{e},~{}y_{\tau},~{}\mu_{H}\) implicitly include modular Yukawa couplings that are determined only by the modulus value. The charged-lepton sector does not mix among the generations; therefore, the charged-lepton fields are mass eigenstates. Due to the even (odd) modular weights assigned to the SM (new) fields, any mixing terms between SM and new fields, such as \(\widehat{\overline{\ell}}\widehat{H}_{d}\widehat{L}^{\prime}\), \(\widehat{\overline{\ell}}\widehat{E}\), and \(\widehat{\overline{L}^{\prime}}\widehat{L}_{i}\) with \(\ell=e,\mu,\tau\), are forbidden, because such terms would require modular couplings of odd weight, which do not exist. However, we implicitly impose R-parity in order to forbid superpotential terms such as \(\widehat{\overline{e}}\widehat{L}_{\mu}\widehat{L}_{\mu}\), \(\widehat{L}_{e}\widehat{H}_{u}\), and \(\widehat{\overline{e}}\widehat{H}_{d}\widehat{H}_{d}\). The bosonic mass of \(\chi\) is obtained from the following soft SUSY-breaking terms:

\[-\mathcal{L}_{\text{soft}}\sim\mu_{B\chi}^{2}\chi^{2}+m_{\chi}^{2}|\chi|^{2}+\text{h.c.},\]  (II.2)

where \(m_{\chi}^{2}\) includes the F-term contribution. Thus, the masses of the components of \(\chi\) are given by \(m_{\chi_{R}}^{2}=m_{\chi}^{2}+\mu_{B\chi}^{2}\) and \(m_{\chi_{I}}^{2}=m_{\chi}^{2}-\mu_{B\chi}^{2}\), and we select \(\chi\equiv\chi_{I}\) as the DM, taking \(0<\mu_{B\chi}^{2}\).

### Heavy charged-lepton mass matrix

After electroweak spontaneous symmetry breaking, the heavy charged-lepton mass matrix in the basis \([E,E^{\prime}]_{L}^{T}\) is found to be

\[\mathcal{M}_{E}=\begin{pmatrix}M_{E}&m_{E}\\ 0&M_{L^{\prime}}\end{pmatrix},\]  (II.3)

where \(m_{E}\equiv h_{E}v_{d}/\sqrt{2}\), and all the mass components can be taken real without loss of generality after rephasing the fields. Then, \(\mathcal{M}_{E}\) is diagonalized by a bi-unitary transformation as \(\text{diag}[m_{1},m_{2}]=V_{E_{R}}^{\dagger}\mathcal{M}_{E}V_{E_{L}}\), where we define

\[\begin{pmatrix}E^{\pm}\\ E^{\prime\pm}\end{pmatrix}\equiv\begin{pmatrix}c_{E}&-s_{E}\\ s_{E}&c_{E}\end{pmatrix}\begin{pmatrix}\psi_{E_{1}}^{\pm}\\ \psi_{E_{2}}^{\pm}\end{pmatrix},\]  (II.4)

where \(s_{E}(c_{E})\) is short-hand notation for \(\sin\theta_{E}(\cos\theta_{E})\), and we suppress the electric charge hereafter. Since these fermions carry nonzero electric charge and have specific decay modes, there exist lower bounds on their masses. If such a fermion decays into a muon and missing energy, the bound is about 90 GeV [81]. In our case, \(\psi^{\pm}_{E_{1}}\) decays (in the limit \(s_{E}=0\)) into a muon and missing energy (the DM) via \(y_{E}\), and \(\psi^{\pm}_{E_{2}}\) decays (in the limit \(s_{E}=0\)) into a muon, a muon neutrino, and the missing energy via the kinetic term of \(L^{\prime}\) and \(g_{E}\). We adopt this relaxed lower mass bound of 90 GeV in our numerical analysis.
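Numerically, the bi-unitary diagonalization of Eq. (II.3) is just a singular-value decomposition. A minimal sketch with purely illustrative benchmark masses (not values from our scan):

```python
import numpy as np

M_E, M_Lp, m_E = 300.0, 500.0, 40.0    # GeV; illustrative only
M = np.array([[M_E, m_E],
              [0.0, M_Lp]])

# diag(m1, m2) = V_ER^dagger M V_EL is an SVD of M.
U, s, Vh = np.linalg.svd(M)
V_ER, V_EL = U, Vh.conj().T
m1, m2 = s                             # note: numpy orders singular values descending

# Mixing angle theta_E of Eq. (II.4), read off V_EL up to ordering/phase conventions.
s_E = abs(V_EL[0, 1])
print(m1, m2, s_E)
```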
## III Dark matter and muon anomalous magnetic dipole moment

### Dark Matter and the Fermi-LAT GeV excess

We consider the DM candidate \(\chi_{I}\). The dominant contribution to the relic density of DM comes from the following relevant Lagrangian:

\[-\mathcal{L} = \frac{i}{\sqrt{2}}g_{E}\bar{E}^{\prime}\ell_{\mu}\chi_{I}+\frac{i}{\sqrt{2}}y_{E}\bar{\mu}E\chi_{I}+\text{c.c.} = \frac{i}{\sqrt{2}}g_{E}(s_{E}\bar{\psi}_{E_{1}}+c_{E}\bar{\psi}_{E_{2}})\ell_{\mu}\chi_{I}+\frac{i}{\sqrt{2}}y_{E}\bar{\mu}(c_{E}\psi_{E_{1}}-s_{E}\psi_{E_{2}})\chi_{I}+\text{c.c.},\]  (III.1)

where we take \(g_{E},~{}y_{E}\) to be real without loss of generality. The cross section is then expanded in the relative velocity \(v_{\text{rel}}\) as \(\sigma v_{\text{rel}}\approx a_{\text{eff}}+b_{\text{eff}}v_{\text{rel}}^{2}+\mathcal{O}(v_{\text{rel}}^{4})\), where \(a_{\text{eff}}\) is the s-wave and \(b_{\text{eff}}\) the p-wave contribution. 3 The two coefficients are given by

Footnote 3: Even though we show this expansion up to the p-wave, we have computed the relic density of DM up to the d-wave, which is proportional to \(v_{\text{rel}}^{4}\).

\[a_{\text{eff}} \approx \frac{(y_{E}g_{E}s_{E}c_{E})^{2}}{2\pi}\left[\frac{m_{1}^{2}}{(m_{\chi}^{2}+m_{1}^{2})^{2}}+\frac{m_{2}^{2}}{(m_{\chi}^{2}+m_{2}^{2})^{2}}\right],\]  (III.2)

\[b_{\text{eff}} \approx -\frac{(y_{E}g_{E}s_{E}c_{E}m_{\chi})^{2}}{6\pi(m_{\chi}^{2}+m_{1}^{2})^{4}(m_{\chi}^{2}+m_{2}^{2})^{4}}\left[m_{2}^{2}(m_{1}^{8}+4m_{1}^{6}m_{\chi}^{2}+m_{\chi}^{8})(3m_{2}^{2}+m_{\chi}^{2})\right.\]
\[\left.+3m_{1}^{4}(m_{2}^{8}+4m_{2}^{6}m_{\chi}^{2}+12m_{2}^{4}m_{\chi}^{4}+6m_{2}^{2}m_{\chi}^{6}+m_{\chi}^{8})\right.\]
\[\left.+m_{1}^{2}m_{\chi}^{2}(m_{2}^{8}+4m_{2}^{6}m_{\chi}^{2}+18m_{2}^{4}m_{\chi}^{4}+8m_{2}^{2}m_{\chi}^{6}+m_{\chi}^{8})\right],\]  (III.3)

where \(m_{\chi}\) denotes the DM mass. Since the Fermi-LAT cross section \((\sigma v)_{\text{FL}}\) is measured in the present Universe, \(v_{\text{rel}}\) is almost zero, which leads to \(a_{\text{eff}}\approx(\sigma v)_{\text{FL}}\). One then simply solves this equation in terms of one of our input parameters \(s_{E},~{}y_{E},~{}g_{E}\). Here, we solve it in terms of \(g_{E}\), and the result is straightforwardly given by

\[g_{E}=\pm\sqrt{\frac{2(\sigma v)_{\text{FL}}}{(y_{E}s_{E}c_{E})^{2}\left[\frac{m_{1}^{2}}{(m_{\chi}^{2}+m_{1}^{2})^{2}}+\frac{m_{2}^{2}}{(m_{\chi}^{2}+m_{2}^{2})^{2}}\right]}}.\]  (III.4)

Thus, \(g_{E}\) is no longer an independent parameter hereafter, and we have to impose a perturbativity limit, which we take as \(|g_{E}|\leq 1\) in a conservative manner.

To explain the gamma-ray GeV excess from the galactic center, DM annihilation into various SM particles has been suggested. One of the solutions to the Fermi-LAT gamma-ray excess is a DM annihilation cross section into muons of \((3.9^{+0.5}_{-0.6})\times 10^{-26}\,\text{cm}^{3}/\text{sec}\) with a DM mass of \(58^{+11}_{-9}\) GeV at the \(1\sigma\) interval [82]. The required DM annihilation cross section is very close to the canonical value for a thermal freeze-out DM relic. The relic density of DM is written in terms of the expansion coefficients in \(v_{\text{rel}}\) and is approximately given by [83]

\[\Omega_{\text{DM}}h^{2}\approx\frac{1.07\times 10^{9}x_{F}^{2}}{\sqrt{g_{*}}M_{\text{PL}}(a_{\text{eff}}x_{F}+3b_{\text{eff}})},\]  (III.5)

where \(M_{\text{PL}}=1.22\times 10^{19}\) GeV and \(x_{F}\approx 24\).
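For orientation, Eqs. (III.2), (III.4), and (III.5) can be combined in a few lines of code. A minimal sketch, assuming masses in GeV, cross sections converted from cm³/s using 1 GeV⁻² ≈ 1.17×10⁻¹⁷ cm³/s, and an assumed effective number of degrees of freedom \(g_{*}=100\) (not specified in the text):

```python
import numpy as np

GEV2_TO_CM3S = 1.17e-17   # 1 GeV^-2 in cm^3/s (velocity in units of c)

def a_eff(yE, gE, sE, m_chi, m1, m2):
    """s-wave coefficient of Eq. (III.2), in GeV^-2."""
    cE = np.sqrt(1.0 - sE**2)
    bracket = m1**2/(m_chi**2 + m1**2)**2 + m2**2/(m_chi**2 + m2**2)**2
    return (yE*gE*sE*cE)**2 / (2.0*np.pi) * bracket

def gE_from_fermi(sv_FL_cm3s, yE, sE, m_chi, m1, m2):
    """Solve a_eff = (sigma v)_FL for g_E, Eq. (III.4)."""
    sv = sv_FL_cm3s / GEV2_TO_CM3S
    cE = np.sqrt(1.0 - sE**2)
    bracket = m1**2/(m_chi**2 + m1**2)**2 + m2**2/(m_chi**2 + m2**2)**2
    return np.sqrt(2.0*sv / ((yE*sE*cE)**2 * bracket))

def omega_h2(aeff, beff, xF=24.0, g_star=100.0, M_PL=1.22e19):
    """Relic density of Eq. (III.5); aeff and beff in GeV^-2."""
    return 1.07e9 * xF**2 / (np.sqrt(g_star) * M_PL * (aeff*xF + 3.0*beff))

# Example: Fermi-LAT best-fit cross section at an illustrative parameter point.
gE = gE_from_fermi(3.9e-26, yE=0.5, sE=0.3, m_chi=58.0, m1=200.0, m2=400.0)
print(gE, omega_h2(a_eff(0.5, gE, 0.3, 58.0, 200.0, 400.0), 0.0))
```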
In our numerical analysis, we apply the relaxed observed relic density \(0.11\leq\Omega_{\text{DM}}h^{2}\leq 0.13\), corresponding to about a \(5\sigma\) interval, instead of the exact value \(\Omega_{\text{DM}}h^{2}=0.1197\pm 0.0022\) at the \(1\sigma\) interval [84].

### Muon anomalous magnetic dipole moment: \(\Delta a_{\mu}\)

The muon anomalous magnetic dipole moment (\(\Delta a_{\mu}\) or muon \(g-2\)) was first reported by Brookhaven National Laboratory (BNL) [68]. They found that the muon \(g-2\) data show a discrepancy at the \(3.3\sigma\) level from the SM prediction. The recent experimental result for the muon \(g-2\) suggests the following value at \(4.2\sigma\) [67]:

\[\Delta a_{\mu}=a_{\mu}^{\text{EXP}}-a_{\mu}^{\text{SM}}=(25.1\pm 5.9)\times 10^{-10}.\]  (III.6)

To get a sizable muon \(g-2\) at the one-loop level, we need a chirality-flip diagram, and this contribution is obtained from the same terms as in Eq. (III.1). Our muon \(g-2\) at the one-loop level is found as follows [55]:

\[\Delta a_{\mu}=\frac{m_{\mu}}{(4\pi)^{2}}y_{E}g_{E}s_{E}c_{E}\left[F_{I}(m_{\chi},m_{1})-F_{I}(m_{\chi},m_{2})\right],\]  (III.7)
\[F_{I}(m_{a},m_{b})\simeq-\frac{m_{b}\left(3m_{a}^{4}-4m_{a}^{2}m_{b}^{2}+m_{b}^{4}+4m_{a}^{4}\ln\left[\frac{m_{b}}{m_{a}}\right]\right)}{2(m_{a}^{2}-m_{b}^{2})^{3}}.\]  (III.8)

## IV Numerical analysis

In our numerical analysis, we randomly scan the free parameters in the ranges

\[\{y_{E},s_{E}\}\in[0.01,1],\quad\{m_{1},m_{2}\}\in[90,1000]\text{ GeV},\]  (IV.1)

while the remaining parameter \(g_{E}\) is determined by Eq. (III.4). Imposing the measured observables \(m_{\chi},~{}(\sigma v)_{\rm FL},~{}\Omega_{\rm DM}h^{2},~{}\Delta a_{\mu}\), we plot the two figures below, where we take up to the \(2\sigma\) interval for \((\sigma v)_{\rm FL}\) and \(\Delta a_{\mu}\), the \(1\sigma\) interval for \(m_{\chi}\), and \(5\sigma\) for \(\Omega_{\rm DM}h^{2}\), as discussed in the previous section. In the left panel of Fig. 1, we show the allowed region in terms of \((\sigma v)_{\rm FL}\times 10^{26}\,\text{cm}^{3}/\text{sec}\) and \(\Delta a_{\mu}\times 10^{10}\). The blue points represent the \(1\sigma\) interval of \(\Delta a_{\mu}\), while the green ones represent \(2\sigma\). The black vertical line is the best-fit value of \(\Delta a_{\mu}\). The figure suggests that we have no allowed region within \(1\sigma\) of \((\sigma v)_{\rm FL}\), but we do within \(2\sigma\). In the right panel of Fig. 1, we show the allowed region in terms of \(\Omega_{\rm DM}h^{2}\) and \(\Delta a_{\mu}\times 10^{10}\). This figure implies that most of the region lies below the best-fit value of \(\Omega_{\rm DM}h^{2}\), but a few points reach this value.

## V Conclusions and discussions

We have shown a successful explanation of the Fermi-LAT GeV excess in which the DM is muon-specific, applying a modular \(A_{4}\) symmetry. Thanks to the nature of this symmetry, our DM interacts with pairs of muons only. Moreover, we have explained the sizable muon anomalous magnetic dipole moment without suffering from lepton flavor violations. We have numerically demonstrated the allowed parameter space for each of the measurements above, randomly scanning our input parameters.

Figure 1: (Left) Allowed region in terms of \((\sigma v)_{\rm FL}\times 10^{26}\,\text{cm}^{3}/\text{sec}\) and \(\Delta a_{\mu}\times 10^{10}\). (Right) Preferred region for \((\Omega_{\rm DM}h^{2},\Delta a_{\mu})\). All of the points satisfy the Fermi-LAT GeV excess within \(1\sigma\). The red horizontal line indicates the central value of the DM relic abundance observed by Planck [84].
In both the left and right panels, the blue points represent the \(1\sigma\) interval of \(\Delta a_{\mu}\), while the green ones represent \(2\sigma\). The black vertical line is the best-fit value of \(\Delta a_{\mu}\).

###### Acknowledgements.

The work of J.K. is supported in part by the Korea Institute for Advanced Study (KIAS) Individual Grant No. PG074202. The work of H.O. was supported by the Junior Research Group (JRG) Program at the Asia-Pacific Center for Theoretical Physics (APCTP) through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government, and was supported by the Korean Local Governments, Gyeongsangbuk-do Province and Pohang City.
2310.06665
Finite difference method in prolate spheroidal coordinates for freely suspended spheroidal particles in linear flows of viscous and viscoelastic fluids
A finite difference scheme is used to develop a numerical method to solve the flow of an unbounded viscoelastic fluid with zero to moderate inertia around a prolate spheroidal particle. The equations are written in prolate spheroidal coordinates, and the shape of the particle is exactly resolved as one of the coordinate surfaces representing the inner boundary of the computational domain. As the prolate spheroidal grid is naturally clustered near the particle surface, good resolution is obtained in the regions where the gradients of relevant flow variables are most significant. This coordinate system also allows large domain sizes with a reasonable number of mesh points to simulate unbounded fluid around a particle. Changing the aspect ratio of the inner computational boundary enables simulations of different particle shapes ranging from a sphere to a slender fiber. Numerical studies of the latter particle shape allow testing of slender body theories. The mass and momentum equations are solved with a Schur complement approach allowing us to solve the zero inertia case necessary to isolate the viscoelastic effects. The singularities associated with the coordinate system are overcome using L'Hopital's rule. A straightforward imposition of conditions representing a time-varying combination of linear flows on the outer boundary allows us to study various flows with the same computational domain geometry. For the special but important case of zero fluid and particle inertia we obtain a novel formulation that satisfies the force- and torque-free constraint in an iteration-free manner. The numerical method is demonstrated for various flows of Newtonian and viscoelastic fluids around spheres and spheroids (including those with large aspect ratio). Good agreement is demonstrated with existing theoretical and numerical results.
Arjun Sharma, Donald L. Koch
2023-10-10T14:40:13Z
http://arxiv.org/abs/2310.06665v1
Finite difference method in prolate spheroidal coordinates for freely suspended spheroidal particles in linear flows of viscous and viscoelastic fluids ###### Abstract A finite difference scheme is used to develop a numerical method to solve the flow of an unbounded viscoelastic fluid with zero to moderate inertia around a prolate spheroidal particle. The equations are written in prolate spheroidal coordinates, and the shape of the particle is exactly resolved as one of the coordinate surfaces representing the inner boundary of the computational domain. As the prolate spheroidal grid is naturally clustered near the particle surface, good resolution is obtained in the regions where the gradients of relevant flow variables are most significant. This coordinate system also allows large domain sizes with a reasonable number of mesh points to simulate unbounded fluid around a particle. Changing the aspect ratio of the inner computational boundary enables simulations of different particle shapes ranging from a sphere to a slender fiber. Numerical studies of the latter particle shape allow testing of slender body theories. The mass and momentum equations are solved with a Schur complement approach allowing us to solve the zero inertia case necessary to isolate the viscoelastic effects. The singularities associated with the coordinate system are overcome using L'Hopital's rule. A straightforward imposition of conditions representing a time-varying combination of linear flows on the outer boundary allows us to study various flows with the same computational domain geometry. For the special but important case of zero fluid and particle inertia we obtain a novel formulation that satisfies the force- and torque-free constraint in an iteration-free manner. The numerical method is demonstrated for various flows of Newtonian and viscoelastic fluids around spheres and spheroids (including those with large aspect ratio). Good agreement is demonstrated with existing theoretical and numerical results. Finite difference; prolate spheroidal coordinates; viscoelastic fluids; moderate inertial effects; large aspect ratio fibers; spheres ## 1 Introduction The flow of viscoelastic or polymeric fluids around solid particles of various shapes is important in many industrial processes. A particle shape is chosen to achieve the manufactured product's specific purpose or property. For example, fibers allow the desired anisotropy in the roll-to-roll manufacturing of high aspect ratio, low resistance films for flexible and transparent electronics [1, 2]. In hydraulic fracturing [3], spheres may be used as proppants to keep the pores of fractured rocks from closing. In extrusion molding and fiber spinning [4, 5, 6, 7], spheres or fibers may be added to the fluid to impart strength to the finished product. A spheroid is a convenient shape for spanning a range of aspect ratios; such particles can be synthesized by dispersing polystyrene spheres in a solution of polyvinyl alcohol, followed by drying the solution into thin sheets. The particles obtained after heating, stretching, and cooling these sheets [8] may be used in experiments [9] or industrial applications (such as the ones discussed above) involving the flow of viscoelastic fluids around particles. Viscoelasticity arises in these fluids from the underlying polymer molecules. Viscoelastic fluids exhibit several properties such as shear thinning and a first normal stress difference (rod-climbing) due to the response of the polymers to the imposed flow field.
Therefore, numerical computations using specific polymer constitutive models are very useful to isolate the origins of novel flow physics arising from the interaction of the polymers with the suspended particles. Furthermore, because several parameters define the properties of a viscoelastic fluid, such as the polymer mobility, its maximum extensibility, and its relaxation time [10], computations complement laboratory experiments by exploring a wider range of these parameters. Studies of moderate inertial effects, where the flow is not turbulent but exhibits mechanisms absent in Stokes flow, such as a significant Saffman lift force [11] on a particle, are relevant to several engineering applications and natural phenomena that involve particulate flows. These include air and water pollution, pneumatic and slurry transport, fluidized bed combustion, mineral separation, hemodynamics, and sedimentation in rivers [12, 13]. The numerical challenges in studying the flow of particle suspensions in the applications mentioned above are two-fold: fluid-particle interaction and particle-particle interaction. To incorporate both these effects, numerical computations involving more than one particle resort to the immersed boundary method (IBM) [14, 15]. These are useful in studying dense particle suspensions. However, the no-slip condition on the particle surface is not directly imposed in IBM. Instead, the particle region is modeled using fictitious forces required to enforce the necessary no-slip condition. The diffuse particle-solid interface does not fully resolve the large polymer stress gradients near the surface. In several of the scenarios above, the particle concentration is dilute enough that particle-particle interactions are rare. Thus, dilute particle suspensions can be modeled as an ensemble of several isolated particles in an unbounded fluid. The fluid-particle interaction is often analytically tractable in Stokes flow (i.e., the flow of an inertia-less Newtonian fluid). However, it is complex for a Newtonian fluid with moderate inertia or a viscoelastic fluid because the particle-induced disturbance affects the velocity field in a way that alters the interplay of viscous and inertial or viscous and elastic forces. Therefore, valuable and more accurate physical insight is obtained by studying the flow of a fluid around an isolated particle where the no-slip condition on the particle surface is imposed exactly. Such numerical studies can also qualitatively complement IBM simulations (where the particle-particle interaction is incorporated) for dense suspensions, such as in [16]. Furthermore, accurate force and torque coefficients for spheroids of different aspect ratios obtained from single-particle simulations can be useful in Lagrangian models. In this paper, we describe a numerical method based on finite-difference approximations to model the flow of a viscoelastic fluid around a prolate spheroidal particle. The equations are solved in a particle-fixed reference frame which rotates and translates with the spheroid. Computational fluid mechanics of viscoelastic fluids, which began in the 1970s, is now a well-established research field [17, 18]. Viscoelasticity of linear polymers is modeled through continuum equations governing the second moment of the polymer end-to-end vector averaged over polymer configurations [10]. Interesting and novel physical phenomena arise in the industrially relevant parameter regimes when the polymer stretch is large, leading to significant polymer stress and stress gradients.
It is in these parameter regimes that unique numerical challenges also arise, which have required ingenious solutions in the past, such as the log-conformation formulation by Fattal and Kupferman (2004) [19] and the algebraic numerical treatment of the polymer stretch by Richter et al. (2010) [20] to ensure finite polymer length and hence finite polymer stress. Accurately resolving the polymeric flow around particles with large aspect ratios provides yet another numerical challenge, as it requires high spatial resolution to accurately capture the large polymer stress gradients near the thin particle surface. Therefore, this paper uses a prolate spheroidal coordinate system to spatially discretize the governing equations. This exactly models the particle surface as one of the coordinate surfaces and is well suited to studying the flow around a prolate spheroid, just as spherical [21] or cylindrical [22, 23, 24] coordinate systems are beneficial in studying the flow around a sphere and in a cylindrical pipe, respectively. Furthermore, even a uniform grid in our chosen coordinate system is naturally more clustered (than a Cartesian grid) near the particle surface in the Euclidean sense, allowing enhanced spatial resolution in the regions that require it the most. The flow of viscoelastic fluids around particles in the aforementioned industrial applications undergoes a series of local linear flows with time in a Lagrangian reference frame. For example, in fiber spinning, the material is first sheared within the spinneret and then pulled by the drawing mechanism, leading to a strong uniaxial extensional flow before solidifying to form a fiber. The computational domain consists of the prolate spheroidal surface of the solid particle as the inner boundary and a nearly spherical outer surface where the imposed flow boundary conditions are applied. On the outer boundary of our computational domain, we can apply any choice of imposed stationary, time-varying, or alternating linear flow fields that mimic industrial scenarios. This allows us greater flexibility in the choice of imposed flow conditions compared to previous numerical techniques where the computational domain is problem-specific, such as using parallel, oppositely moving walls to obtain simple shear flow [25, 26] or a cylindrical outer surface for a uniform far-field flow [27]. In our method, boundary conditions can be changed with time within a simulation. Theoretical studies and numerical simulations are complementary. Theoretical studies are, however, often done either for spherical [28, 29] or for slender particles [30, 31]. For the latter, a matched asymptotic expansion in particle aspect ratio, also known as slender body theory [32, 33, 34], is used to obtain useful physical insight by considering the particle aspect ratio to be very large. Furthermore, the flow around a particle is solved in an unbounded fluid in such theoretical developments. The choice of a prolate spheroidal coordinate system allows us to simulate the flow around a large aspect ratio particle in a very large computational domain with good spatial resolution in the regions near the particle surface, where the velocity and polymer stress gradients are expected to be large, while maintaining a reasonable number of mesh points. Therefore, our numerical method is a suitable testing ground for slender body theories.
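To make the geometric advantage concrete, the sketch below uses the standard prolate spheroidal map (a convention we assume here, with focal length \(a\)): \(x=a\sinh\xi\sin\eta\cos\phi\), \(y=a\sinh\xi\sin\eta\sin\phi\), \(z=a\cosh\xi\cos\eta\), so that surfaces of constant \(\xi\) are confocal prolate spheroids with aspect ratio \(\coth\xi\). The grid extents are illustrative only.

```python
import numpy as np

def prolate_to_cartesian(xi, eta, phi, a=1.0):
    """Standard prolate spheroidal map (foci at z = +/- a); surfaces of
    constant xi are confocal prolate spheroids."""
    x = a*np.sinh(xi)*np.sin(eta)*np.cos(phi)
    y = a*np.sinh(xi)*np.sin(eta)*np.sin(phi)
    z = a*np.cosh(xi)*np.cos(eta)
    return x, y, z

kappa = 10.0                                # particle aspect ratio
xi_p = 0.5*np.log((kappa + 1)/(kappa - 1))  # coth(xi_p) = kappa: particle surface
xi_out = xi_p + 8.0                         # illustrative outer boundary

print("equatorial surface point:", prolate_to_cartesian(xi_p, np.pi/2, 0.0))

# A uniform grid in xi clusters near the particle surface in physical space:
xi = np.linspace(xi_p, xi_out, 9)
r_eq = np.sinh(xi)                          # equatorial radius of each xi-surface
print("equatorial radii of grid surfaces:", np.round(r_eq, 3))
print("near-surface spacing:", round(r_eq[1] - r_eq[0], 3),
      " far-field spacing:", round(r_eq[-1] - r_eq[-2], 1))
```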
As we demonstrate later, we can simulate the flow around a sphere by allowing the particle aspect ratio to be 1+\(\epsilon\), where \(\epsilon\) is a very small positive number. We use a finite difference method to discretize the spatial gradients. Polymer viscosity is large in polymer melts and concentrated polymer solutions. An increase in the solvent viscosity in dilute polymer solutions leads to a large polymer relaxation time, leading to interesting mechanisms that are numerically challenging to resolve. Therefore, viscoelastic fluids are generally highly viscous, and numerical studies with negligible fluid and particle inertia are appropriate to study the effects of viscoelasticity [25, 35, 26]. Furthermore, the solutions of uniform and linear flows of an unbounded inertia-less Newtonian fluid around spheres and spheroids [36] are analytically known. Thus, numerical studies around such particles in the absence of inertia are more relevant in the presence of viscoelasticity. Some previous investigations of the flow of viscoelastic fluids, such as [25] and [37], intending to ignore inertia, have used small but finite values of Reynolds number, \(Re\) (ratio of inertial to viscous forces). In such numerical solvers, the momentum conservation and incompressibility (mass conservation) equations are solved via a splitting method where momentum equations are advanced in time. The incompressibility is imposed via a pressure Poisson equation. Therefore, additional boundary conditions are required for the pressure field, and Neumann boundary conditions are usually used at the solid surface [38, 39]. The splitting method is more appropriate for large \(Re\) flows where the splitting errors due to the introduction of artificial boundary conditions are not dominant. However, when \(Re\) is small, splitting errors increase with the dominance of the viscous forces [40]. To avoid this issue, we solve the coupled system of momentum and incompressibility equation iteratively using a Schur complement method with GMRES [41]. In addition, unlike the splitting method, we can solve a flow with \(Re=0\) where the momentum equation is quasi-steady, and the method used does not require us to use time marching. For a finite \(Re\), we incorporate the inertial terms in the momentum equation within the Schur complement method similar to [40]. Therefore, in addition to solving the flow of inertia-less viscoelastic fluid, our method allows us to access the effects of small to moderate inertia (with or without viscoelasticity). Accurate simulations for small (but nonzero) inertia made possible by the large computational domain enable us to test perturbation theories for the limit \(Re\ll 1\). Such simulations of a Newtonian fluid with finite inertia for larger aspect ratio prolate spheroids can allow us to test the slender body theories, such as in [42], describing the effect of inertia on fibers. In the next section, we present governing equations for the mass and the momentum conservation in the fluid along with the polymer constitutive equations in their original and log-conformal [19] formulation. We also present the equations governing particle dynamics and treatment of the boundary conditions. Section 3 deals with the temporal discretization of the governing equations. 
This entails the description of the Schur complement method [41] for solving mass and momentum equations, the technique of [20] to treat finite extensibility of a polymer, the quaternion formulation to account for the particle orientation, and separate methodologies for zero and finite particle inertia to account for the particle motion. In section 4 we illustrate the spatial discretization of the equations using the finite difference method. In this section, we also present the method to treat the coordinate system dependent axis singularities that arise when the governing equations are expressed in the prolate spheroidal coordinate system. This is motivated by a similar method [23, 24] developed for the cylindrical coordinate system. In section 5 we demonstrate the robustness and versatility of our numerical solver through examples of a variety of flows past a particle in (a) Stokes flow (inertia-less flows of a Newtonian fluid), (b) flow of a Newtonian fluid with finite inertia, and (c) inertialess flows of viscoelastic fluids with different constitutive models. Comparison with previous numerical studies or analytical results is provided for each case. Our numerical solver is parallelized using the domain decomposition method implemented in the Message Passing Interface (MPI). All the examples presented are run on more than one processing unit. Finally, we present the conclusions in section 6. ## 2 Governing equations We consider the flow of an incompressible viscoelastic fluid around a prolate spheroidal particle in a reference frame rotating and translating with the particle (figure 1). The equations governing the conservation of mass (incompressibility) and momentum in this rotating reference frame are \[\nabla\cdot\mathbf{u}=0, \tag{1}\] \[\rho_{f}(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{ADV}(\mathbf{u},\mathbf{u}_{p},\mathbf{\omega}_{p};\mathbf{r}))=\nabla\cdot\mathbf{\sigma}, \tag{2}\] where \[\mathbf{ADV}(\mathbf{u},\mathbf{u}_{p},\mathbf{\omega}_{p};\mathbf{r})=\frac{d\mathbf{u}_{p}}{dt}+\mathbf{u}\cdot\nabla\mathbf{u}+2\mathbf{\omega}_{p}\times\mathbf{u}+\mathbf{\omega}_{p}\times\mathbf{\omega}_{p}\times\mathbf{r}+\frac{d\mathbf{\omega}_{p}}{dt}\times\mathbf{r}. \tag{3}\] \(\mathbf{\omega}_{p}\) is the angular velocity of the particle, \(\mathbf{r}\) and \(\mathbf{u}\) are the position and the fluid velocity vector relative to the centroid of the particle, and \(\mathbf{\sigma}\) is the stress tensor field in the fluid. Figure 1: Computational domain for the flow around a prolate spheroid: particle surface, \(\mathbf{r}_{p}\) is the inner boundary, and the exterior boundary is \(\mathbf{r}_{\infty}\) indicated with a dashed black curve. \(\mathbf{r}_{\infty}\) is at a large distance from the particle’s center and represents a surface in the far-field where the velocity boundary conditions are applied. A velocity field, \(\mathbf{r}\cdot\mathbf{\Gamma}+\mathbf{u}_{0}\) with time varying \(\mathbf{\Gamma}\) and \(\mathbf{u}_{0}\) indicated by blue arrows and curves can be imposed at \(\mathbf{r}_{\infty}\). The gray shaded region is the interior fluid region over which we solve the governing equations. The stress in a viscoelastic fluid is the sum of the Newtonian
solvent stress, \(\mathbf{\tau}\) (with viscosity \(\mu\)), and the polymer stress, \(\mathbf{\Pi}\), \[\mathbf{\sigma}=\mathbf{\tau}+\mathbf{\Pi}=-p\mathbf{\delta}+2\mu\mathbf{e}+\mathbf{\Pi}, \tag{4}\] where \(p\) is the reduced pressure (the difference between the hydrodynamic and the hydrostatic pressure), \(\mathbf{e}=(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})/2\) is the strain rate tensor, and \(\rho_{f}\) is the fluid density. A rotating frame fixed with the particle avoids the need to introduce a mesh velocity but leads to the first, third, fourth, and fifth terms on the RHS of equation (3). These arise due to the non-inertial (rotating) reference frame and represent the linear acceleration, Coriolis force, centrifugal force, and angular acceleration, respectively. As shown in figure 1, the computational domain for these equations is bounded by \(\mathbf{r}_{p}\) (representing the particle surface) on the inside and \(\mathbf{r}_{\infty}\) (representing the far-field) on the outside. The boundary conditions on the velocity field are the imposed flow conditions in the far-field and no-slip and no-penetration on the particle surface. In the frame of reference rotating and translating with the particle, these are, \[\mathbf{u}=\mathbf{0},\ \text{on particle surface,}\ \ \ \ \ \mathbf{u}=\mathbf{u}_{\infty}(\mathbf{r})=\mathbf{r}\cdot\mathbf{\Gamma}+\mathbf{u}_{0}-\mathbf{u}_{p}-\mathbf{\omega}_{p}\times\mathbf{r},\ \text{as}\ |\mathbf{r}|\to\infty\approx\mathbf{r}_{\infty}, \tag{5}\] where \(\mathbf{\Gamma}\) and \(\mathbf{u}_{0}\) are the imposed velocity gradient and uniform velocity field. The angular and translational velocities of the particle, \(\mathbf{\omega}_{p}\) and \(\mathbf{u}_{p}\), may either be imposed or obtained from the relevant equations governing the motion of the particle due to the moments and forces acting on it. In the latter case, Newton's equations govern the particle motion, \[\rho_{p}V_{p}\frac{d\mathbf{u}_{p}}{dt}=\mathbf{f}_{\text{fluid}}+\mathbf{f}_{\text{ext.}}, \tag{6}\] \[\mathbf{I}_{p}\cdot\frac{d\mathbf{\omega}_{p}}{dt}=\mathbf{q}_{\text{fluid}}+\mathbf{q}_{\text{ext.}}, \tag{7}\] where \(V_{p}\), \(\rho_{p}\) and \(\mathbf{I}_{p}\) are the volume, density and moment of inertia tensor of the particle. \(\mathbf{f}_{\text{ext.}}\) and \(\mathbf{q}_{\text{ext.}}\) are the externally imposed force and torque, which can be prescribed. In case gravity is present, \(\mathbf{f}_{\text{ext.}}\) includes the buoyancy force acting on the particle due to the difference in particle and fluid densities, \((\rho_{p}-\rho_{f})V_{p}\mathbf{g}\). Here, \(\mathbf{g}\) is the gravity vector in the particle-fixed frame. Although \(\mathbf{g}\) is fixed in the inertial reference frame, the orientations of \(\mathbf{f}_{\text{ext.}}\), \(\mathbf{q}_{\text{ext.}}\) and \(\mathbf{g}\) in the particle reference frame vary with time for a rotating particle. For a prolate spheroid with minor radius \(r_{\text{minor}}\) and aspect ratio \(\kappa\), \[V_{p}=\frac{4}{3}\pi r_{\text{minor}}^{3}\kappa,\ \ \ \ \ \mathbf{I}_{p}=\frac{1}{5}\rho_{p}V_{p}r_{\text{minor}}^{2}\begin{bmatrix}1+\kappa^{2}&0&0\\ 0&1+\kappa^{2}&0\\ 0&0&2\end{bmatrix}.
\tag{8}\] \(\mathbf{f}_{\text{fluid}}=\mathbf{f}(\mathbf{\sigma})\) and \(\mathbf{q}_{\text{fluid}}=\mathbf{q}(\mathbf{\sigma})\) are the fluid-stress-dependent hydrodynamic force and torque acting on the particle, defined as, \[\mathbf{f}(\mathbf{\sigma})=\int_{\mathbf{r}_{p}}dS\ \ \mathbf{\sigma}\cdot\mathbf{n},\ \ \ \ \ \mathbf{q}(\mathbf{\sigma})=\int_{\mathbf{r}_{p}}dS\ \ \ \mathbf{r}\times(\mathbf{\sigma}\cdot\mathbf{n}), \tag{9}\] where \(\mathbf{n}\) is the surface normal. In a viscoelastic fluid, \(\mathbf{f}_{\text{fluid}}\) and \(\mathbf{q}_{\text{fluid}}\) are thus sums of contributions from the Newtonian solvent and polymeric stress, \[\mathbf{f}_{\text{fluid}}=\mathbf{f}(\mathbf{\tau})+\mathbf{f}(\mathbf{\Pi}),\ \ \ \mathbf{q}_{\text{fluid}}=\mathbf{q}(\mathbf{\tau})+\mathbf{q}(\mathbf{\Pi}). \tag{10}\] As considered in section 3.5.1 later, for finite particle inertia (\(\rho_{p}\neq 0\)), equations (6) and (7) are numerically integrated in time after discretizing the time derivative on the left-hand side of these equations. However, in the limit of zero particle inertia (\(\rho_{p}=0\)), the appropriate conditions are zero net force and torque on the particle. In that case, as discussed in section 3.5.2, either a secant iteration method (for a massless particle in a fluid with finite density, \(\rho_{f}\)) or a novel decomposition of the inertia-less non-Newtonian momentum equation combined with a resistivity formulation is used to ensure that the various components on the right-hand side of these equations balance each other. Finite difference discretization of the governing equations in the Cartesian coordinate system is exactly satisfied by a uniform flow, but this is not the case in curvilinear coordinates because the spatial gradients of the relevant variables such as velocity, pressure, and polymer stress involve coordinate-system-dependent metric derivatives [43]. Free-stream preservation of the imposed linear flow field far away from the particle is particularly important for the simulations of our interest since we use a large computational domain. Various previously proposed techniques to treat this issue are discussed in [43, 44] and references therein. However, by simply simulating the deviation of the velocity field from its far-field value, the discretization errors associated with the violation of free-stream preservation are removed. In other words, we do not need to numerically simulate the analytical value of the far-field velocity, \(\mathbf{u}_{\infty}\), and pressure, \(p_{\infty}\). Since the fluid is incompressible, we require \(\text{trace}(\mathbf{\Gamma})=0\) for the imposed velocity gradient tensor \(\mathbf{\Gamma}\), so that \(\nabla\cdot\mathbf{u}_{\infty}=0\). Since the far-field flow (relative to the particle motion), \(\mathbf{u}_{\infty}(\mathbf{r})\), is linear in position, the polymer stress generated by the linear imposed velocity field is a spatially constant tensor, \(\mathbf{\Pi}_{\infty}\). Therefore, the far-field momentum equation is \[\frac{\partial\mathbf{u}_{\infty}}{\partial t}+\mathbf{A}\mathbf{D}\mathbf{V}(\mathbf{u}_{\infty},\mathbf{u}_{p},\boldsymbol{\omega}_{p};\mathbf{r})=-\frac{1}{\rho_{f}}\nabla p_{\infty}. \tag{11}\] Hence, the governing mass and momentum equations for the deviation of the velocity field and the pressure field from the far-field flow, \[\widetilde{\mathbf{u}}=\mathbf{u}-\mathbf{u}_{\infty},\quad\quad\widetilde{p}=p-p_{\infty}.
\tag{12}\] are, \[\nabla\cdot\widetilde{\mathbf{u}}=0, \tag{13}\] \[\rho_{f}\big{(}\frac{\partial\widetilde{\mathbf{u}}}{\partial t}+\widehat{ \mathbf{A}\mathbf{D}\mathbf{V}}(\widetilde{\mathbf{u}},\mathbf{u}_{\infty}, \boldsymbol{\omega}_{p};\mathbf{r})\big{)}=-\nabla\widetilde{p}+\nabla^{2} \widetilde{\mathbf{u}}+\nabla\cdot(\mathbf{\Pi}-\mathbf{\Pi}_{\infty}), \tag{14}\] where \[\widehat{\mathbf{A}\mathbf{D}\mathbf{V}}(\widetilde{\mathbf{u}},\mathbf{u}_{ \infty},\boldsymbol{\omega}_{p};\mathbf{r})=\widetilde{\mathbf{u}}\cdot( \nabla\widetilde{\mathbf{u}}+\nabla\mathbf{u}_{\infty})+\mathbf{u}_{\infty} \cdot\nabla\widetilde{\mathbf{u}}+2\boldsymbol{\omega}_{p}\times\widetilde{ \mathbf{u}}, \tag{15}\] and the boundary conditions are \[\widetilde{\mathbf{u}}=-\mathbf{u}_{\infty}(\mathbf{r})=-\mathbf{r}\cdot \mathbf{\Gamma}-\mathbf{u}_{0}+\mathbf{u}_{p}+\boldsymbol{\omega}_{p}\times \mathbf{r},\text{ on particle surface },\quad\quad\widetilde{\mathbf{u}}=0,\text{ as }|\mathbf{r}|\to \infty\approx\mathbf{r}_{\infty}. \tag{16}\] Simulating \(\widetilde{\mathbf{u}}\) and \(\widetilde{p}\) instead of \(\mathbf{u}\) and \(p\) allows free-stream preservation trivially. Furthermore, the momentum equation for \(\widetilde{\mathbf{u}}\) does not include centrifugal and angular acceleration terms (compare (3) with (15)) and hence we do not need to numerically evaluate \(d\boldsymbol{\omega}_{p}/dt\) at different times during a simulation involving particle rotation. To model the polymer stress, \(\mathbf{\Pi}\), we consider various dumbbell models that consider polymer molecules as a spring attached to Brownian beads [10]. These dumbbell-based models define a constitutive equation for the polymer configuration, \(\mathbf{\Lambda}=\langle\mathbf{q}\mathbf{q}\rangle_{\text{polymer configuration}}\), where \(\mathbf{q}\) is the end-to-end vector of the dumbbell and the angle brackets represent the average over polymer configurations. \(\mathbf{\Lambda}\) is non-dimensionalized with the square of the radius of gyration of the polymer, and the dumbbell models have the following form of the constitutive equation, \[\frac{\partial\mathbf{\Lambda}}{\partial t}+\mathbf{u}\cdot\nabla(\mathbf{ \Lambda}-\mathbf{\Lambda}_{\infty})=\nabla\mathbf{u}^{\text{T}}\cdot \mathbf{\Lambda}+\mathbf{\Lambda}\cdot\nabla\mathbf{u}-\frac{1}{\lambda} \mathbf{R}(\mathbf{\Lambda}), \tag{17}\] where the polymer convection \(\mathbf{u}\cdot\nabla(\mathbf{\Lambda}-\mathbf{\Lambda}_{\infty})=\mathbf{u} \cdot\nabla\mathbf{\Lambda}\) is balanced by its stretching (\(\nabla\mathbf{u}^{\text{T}}\cdot\mathbf{\Lambda}+\mathbf{\Lambda}\cdot \nabla\mathbf{u}\)) and relaxation (\(\mathbf{R}(\mathbf{\Lambda})/\lambda\)) for a polymer solution with relaxation time \(\lambda\). Since any valid constitutive equation is materially frame-invariant or objective [45], as long as the components of the \(\mathbf{\Lambda}\) tensor are expressed in the appropriate reference frame, the constitutive equations retain their form in different reference frames. In other words, unlike the momentum equation governing the fluid velocity (a frame variant quantity), particle rotation does not introduce additional terms in the constitutive equation governing \(\mathbf{\Lambda}\) (a frame-invariant quantity) in a non-inertial or rotating reference frame. We subtract the spatially constant far-field polymer configuration, \(\mathbf{\Lambda}_{\infty}\), before taking the gradient in the convective term to ensure free-stream preservation. 
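As a concrete illustration of the boundary data in Eqs. (5) and (16), the far-field velocity relative to the particle can be evaluated pointwise. The sketch below uses illustrative values for the imposed shear and the particle velocities; it is not part of the solver itself.

```python
import numpy as np

def u_infinity(r, Gamma, u0, u_p, omega_p):
    """Imposed far-field velocity relative to the particle, Eq. (5):
    u_inf = r.Gamma + u0 - u_p - omega_p x r (particle-fixed frame)."""
    return r @ Gamma + u0 - u_p - np.cross(omega_p, r)

# Illustrative simple shear u_y = x, i.e. Gamma_{12} = 1 (trace-free as required)
Gamma = np.array([[0., 1., 0.],
                  [0., 0., 0.],
                  [0., 0., 0.]])
u0 = np.zeros(3)
u_p = np.zeros(3)
omega_p = np.array([0., 0., 0.5])  # e.g. rotating with half the imposed vorticity

r = np.array([1.0, 2.0, 0.0])
print("u_inf(r) =", u_infinity(r, Gamma, u0, u_p, omega_p))
# The disturbance field of Eq. (16) then satisfies u_tilde = -u_inf on the
# particle surface and u_tilde = 0 on the outer boundary.
```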
The polymeric stress, \(\mathbf{\Pi}\), is directly proportional to the polymer configuration, \(\mathbf{\Lambda}\), and can be written as \[\mathbf{\Pi}=\frac{c\mu}{\lambda}\mathbf{F}(\mathbf{\Lambda}), \tag{18}\] where \(c\) is the polymer concentration, defined as the ratio of the zero-shear-rate polymer viscosity to the solvent viscosity (\(\mu\)). The exact forms of the relaxation tensor, \(\mathbf{R}(\mathbf{\Lambda})\), and the polymer force-configuration relation, \(\mathbf{F}(\mathbf{\Lambda})\), depend on the particular model being used to represent the spring force. If the spring is considered to be Hookean, one obtains the Oldroyd-B model; if the spring force is nonlinear and has finite extensibility, one obtains a FENE (finitely extensible nonlinear elastic) model. A closure approximation is needed to obtain continuum-level constitutive equations from the FENE model, and different choices lead to the FENE-P and FENE-CR models. The latter does not exhibit shear-thinning. The Giesekus model is also nonlinear but with a quadratic nonlinearity. In FENE models, the maximum length of the polymers is \(L\) (non-dimensionalized with the radius of gyration). The Giesekus model describes concentrated polymer solutions and melts by including the anisotropic effect of nearby dumbbells through the mobility parameter \(\alpha\). See [10] for a review of various constitutive models. \(\mathbf{F}(\mathbf{\Lambda})\) and \(\mathbf{R}(\mathbf{\Lambda})\) for the different constitutive models we consider are, \[\begin{array}{llll}\text{Model}:&\text{Oldroyd-B}&\text{FENE-P}&\text{FENE-CR}&\text{Giesekus}\\ \mathbf{R}(\mathbf{\Lambda}):&\mathbf{\Lambda}-\boldsymbol{\delta}&f\mathbf{\Lambda}-b\boldsymbol{\delta}&f\mathbf{\Lambda}-f\boldsymbol{\delta}&(\mathbf{\Lambda}-\boldsymbol{\delta})-\alpha(\mathbf{\Lambda}-\boldsymbol{\delta})^{2}\\ \mathbf{F}(\mathbf{\Lambda}):&\mathbf{\Lambda}-\boldsymbol{\delta}&f\mathbf{\Lambda}-b\boldsymbol{\delta}&f\mathbf{\Lambda}-f\boldsymbol{\delta}&\mathbf{\Lambda}-\boldsymbol{\delta},\end{array} \tag{19}\] where, \[f=1/(1-\text{tr}(\mathbf{\Lambda})/L^{2}),\text{ and }b=1/(1-\text{tr}(\boldsymbol{\delta})/L^{2}). \tag{20}\] The polymer constitutive equation (17) (or equation (25) discussed later) is hyperbolic. Therefore, boundary conditions are required at the locations where the streamlines of the velocity field, \(\mathbf{u}\), enter the computational domain. At these locations, the boundary condition is a time-dependent, spatially constant polymer configuration tensor, \(\mathbf{\Lambda}=\mathbf{\Lambda}_{\infty}\), driven by the imposed velocity field and governed by, \[\frac{\partial\mathbf{\Lambda}_{\infty}}{\partial t}=\nabla\mathbf{u}_{\infty}^{\text{T}}\cdot\mathbf{\Lambda}_{\infty}+\mathbf{\Lambda}_{\infty}\cdot\nabla\mathbf{u}_{\infty}-\frac{1}{\lambda}\mathbf{R}(\mathbf{\Lambda}_{\infty}). \tag{21}\] Using equation (18), \(\mathbf{\Pi}_{\infty}=(c\mu/\lambda)\mathbf{F}(\mathbf{\Lambda}_{\infty})\), where \(\mathbf{F}(\mathbf{\Lambda}_{\infty})\) is evaluated from equation (19). ### Log Conformal Form of Constitutive equations An important property of \(\mathbf{\Lambda}\) that is preserved by an appropriate constitutive equation, such as the ones introduced above, is its positive definiteness [19]. However, numerical discretization of constitutive equations of the form of equations (17) and (18) with a model from (19) may lead to violation of positive definiteness in the numerical solution of \(\mathbf{\Lambda}\) at high \(De\).
This manifests as a numerical instability and is also termed the high Weissenberg number problem (HWNP); the Weissenberg number is defined as the product of the characteristic flow gradient magnitude and the polymer relaxation time, and for steady linear flows the Deborah and Weissenberg numbers are equivalent. Fattal and Kupferman [19] remedied the HWNP by introducing an equivalent constitutive equation for the matrix logarithm of the conformation tensor, \(\mathbf{\Lambda}\), \[\mathbf{\Psi}=\text{log}(\mathbf{\Lambda}). \tag{22}\] Solving the governing equation for \(\mathbf{\Psi}\) instead of \(\mathbf{\Lambda}\) provides a more stable numerical solution, as found in numerous numerical studies such as [25, 27, 46, 47, 48] after the seminal work of [19]. Fattal and Kupferman [19] derived an equation for \(\mathbf{\Psi}\) based on the eigen-decomposition of the velocity gradient. We use the alternative derivation, provided by Hulsen and the authors of [19] in [49], based on the evolution of the principal axes of \(\mathbf{\Lambda}\) (and hence also of \(\mathbf{\Psi}\), since \(\mathbf{\Lambda}\) and \(\mathbf{\Psi}\) have the same eigenvectors). We find this form simpler in treating cases such as biaxial extensional flow, for which two eigenvalues are identical. We use the Jacobi algorithm provided in [50] to obtain the eigen-decomposition of \(\mathbf{\Lambda}\) and \(\mathbf{\Psi}\), \[\mathbf{\Lambda}=\mathbf{V}\cdot\mathbf{D}_{\mathbf{\Lambda}}\cdot\mathbf{V}^{T},\quad\quad\mathbf{\Psi}=\mathbf{V}\cdot\mathbf{D}_{\mathbf{\Psi}}\cdot\mathbf{V}^{T}, \tag{23}\] where \(\mathbf{V}\) is a \(3\times 3\) matrix with the eigenvectors \(\mathbf{v}_{i},i\in[1,3]\) as its columns, and \(\mathbf{D}_{\mathbf{\Lambda}}\) and \[\mathbf{D}_{\mathbf{\Psi}}=\text{log}(\mathbf{D}_{\mathbf{\Lambda}}), \tag{24}\] are diagonal matrices with eigenvalues \(\lambda_{i}\) and \(\psi_{i}=\text{log}(\lambda_{i}),i\in[1,3]\) as their entries. The governing equation for the matrix logarithm, \(\mathbf{\Psi}\), is, \[\frac{\partial\mathbf{\Psi}}{\partial t}+\mathbf{u}\cdot\nabla(\mathbf{\Psi}-\mathbf{\Psi}_{\infty})=\mathbf{S}\mathbf{R}(\mathbf{\Psi},\mathbf{u}), \tag{25}\] where, \[\mathbf{S}\mathbf{R}(\mathbf{\Psi},\mathbf{u})=2\Sigma_{i=1}^{3}L_{ii}\mathbf{v}_{i}\mathbf{v}_{i}+\Sigma_{i=1}^{3}\Sigma_{j=1,j\neq i}^{3}\frac{\psi_{i}-\psi_{j}}{\lambda_{i}-\lambda_{j}}(\lambda_{j}L_{ij}+\lambda_{i}L_{ji})\mathbf{v}_{i}\mathbf{v}_{j}-\text{exp}(-\mathbf{\Psi})\cdot\mathbf{R}(\text{exp}(\mathbf{\Psi})), \tag{26}\] \(\mathbf{L}=\nabla\mathbf{u}\) is the velocity gradient tensor, \(\text{exp}(\mathbf{\Psi})=\mathbf{V}\cdot\mathbf{D}_{\mathbf{\Lambda}}\cdot\mathbf{V}^{T}\) and \(\text{exp}(-\mathbf{\Psi})=\mathbf{V}\cdot\mathbf{D}_{\mathbf{\Lambda}}^{-1}\cdot\mathbf{V}^{T}\). The three terms of \(\mathbf{S}\mathbf{R}(\mathbf{\Psi},\mathbf{u})\) in equation (26) represent the stretching of the eigenvectors by \(L_{ii}\), the rotation of the eigenvectors by vorticity, and their relaxation, respectively [49]. When two eigenvalues are identical, in the second term of equation (26) [49], \[\lim_{\lambda_{i}\to\lambda_{j}}\frac{\psi_{i}-\psi_{j}}{\lambda_{i}-\lambda_{j}}(\lambda_{j}L_{ij}+\lambda_{i}L_{ji})\to L_{ij}+L_{ji}. \tag{27}\] The governing equation for the undisturbed matrix logarithm, \(\mathbf{\Psi}_{\infty}\), is obtained by setting \(\mathbf{L}=\nabla\mathbf{u}_{\infty}\) and \(\mathbf{\Psi}=\mathbf{\Psi}_{\infty}\) in equations (25) and (26).
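For concreteness, a minimal sketch of evaluating \(\mathbf{SR}(\mathbf{\Psi},\mathbf{u})\) of Eqs. (26) and (27) at a single grid point is given below for the Oldroyd-B relaxation, \(\mathbf{R}(\mathbf{\Lambda})=\mathbf{\Lambda}-\boldsymbol{\delta}\). The placement of the relaxation time \(\lambda\) (following Eq. (17)) and the degeneracy tolerance are our assumptions.

```python
import numpy as np

def SR_oldroyd_b(Psi, L, lam):
    """RHS of the log-conformation equation (26) for Oldroyd-B.
    Psi = log(Lambda), L = grad(u), lam = relaxation time."""
    w, V = np.linalg.eigh(Psi)        # eigenvalues psi_i, eigenvectors v_i
    lam_i = np.exp(w)                 # eigenvalues of Lambda, Eq. (24)
    Lt = V.T @ L @ V                  # velocity gradient in the eigenbasis
    S = np.zeros((3, 3))
    for i in range(3):
        S[i, i] += 2.0*Lt[i, i]       # stretching of eigenvectors
        for j in range(3):
            if j == i:
                continue
            dl = lam_i[i] - lam_i[j]
            if abs(dl) > 1e-12:       # rotation term of Eq. (26)
                S[i, j] += (w[i] - w[j])/dl*(lam_i[j]*Lt[i, j] + lam_i[i]*Lt[j, i])
            else:                     # degenerate limit, Eq. (27)
                S[i, j] += Lt[i, j] + Lt[j, i]
    # relaxation: -exp(-Psi).R(exp(Psi)) = Lambda^{-1} - I for Oldroyd-B,
    # scaled here by 1/lam as in Eq. (17)
    relax = -(np.eye(3) - V @ np.diag(1.0/lam_i) @ V.T)
    return V @ S @ V.T + relax/lam

# sanity check: for Lambda = I (Psi = 0) and simple shear, SR reduces to L + L^T
L = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
print(SR_oldroyd_b(np.zeros((3, 3)), L, lam=1.0))
```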
## 3 Temporal discretization and coupling between equations and boundary conditions The constitutive equation (25) for the matrix logarithm, \(\mathbf{\Psi}\), is driven by the velocity and velocity gradients. The polymer stress hence generated (see equations (18), (19) and (22)) acts as a body force in the momentum equation (2) and influences the velocity and pressure fields. The constraint of a divergence-free velocity field, equation (1), ensures mass conservation. In this section, we describe the temporal discretization and the methods that treat the coupling between mass (equation (1)) and momentum (equation (2)) conservation and the polymer constitutive equation (25). We also describe the separate numerical treatments of Newton's equations (equations (6) and (7)) governing the particle dynamics for the case when particle inertia is negligible and when it is finite. The former case is of particular interest when fluid inertia is also neglected. We adopt a methodology similar to [40] to treat the time discretization of the momentum and mass conservation equations. The mass conservation equation at time step \(n+1\) is \[\nabla\cdot\widetilde{\mathbf{u}}^{n+1}=0. \tag{28}\] Using a backward Euler temporal discretization scheme, at time step \(n+1\), the momentum equation is written as, \[\begin{split}&\rho_{f}(\frac{\widetilde{\mathbf{u}}^{n+1}-\widetilde{\mathbf{u}}^{n}}{\Delta t}+\frac{3}{2}\widehat{\mathbf{ADV}}(\widetilde{\mathbf{u}}^{n},\mathbf{u}_{\infty}^{n},\mathbf{\omega}_{p}^{n};\mathbf{r})-\frac{1}{2}\widehat{\mathbf{ADV}}(\widetilde{\mathbf{u}}^{n-1},\mathbf{u}_{\infty}^{n-1},\mathbf{\omega}_{p}^{n-1};\mathbf{r}))=\\ &-\nabla\widetilde{p}^{n+1}+\nabla^{2}\widetilde{\mathbf{u}}^{n+1}+\nabla\cdot(\mathbf{\Pi}^{n+1}-\mathbf{\Pi}_{\infty}^{n+1}),\end{split} \tag{29}\] where the non-linear terms, i.e., \(\widehat{\mathbf{ADV}}(\widetilde{\mathbf{u}},\mathbf{u}_{\infty},\mathbf{\omega}_{p};\mathbf{r})\) from equation (15), are treated explicitly using a second-order Adams-Bashforth scheme after the first time step (in the first time step a first-order explicit Euler scheme is used). Explicit treatment of these non-linear terms, which depend upon fluid inertia, is a valid strategy because we aim to study the effect of zero to moderate fluid inertia on flows of viscoelastic fluids. After the first time step, we use a second-order implicit Crank-Nicolson scheme in which the polymer constitutive equation is temporally discretized as follows, \[\frac{\mathbf{\Psi}^{n+1}-\mathbf{\Psi}^{n}}{\Delta t}+\frac{\mathbf{u}^{n+1}\cdot\nabla(\mathbf{\Psi}^{n+1}-\mathbf{\Psi}_{\infty}^{n+1})}{2}+\frac{\mathbf{u}^{n}\cdot\nabla(\mathbf{\Psi}^{n}-\mathbf{\Psi}_{\infty}^{n})}{2}=\frac{\mathbf{SR}(\mathbf{\Psi}^{n+1},\mathbf{u}^{n+1})}{2}+\frac{\mathbf{SR}(\mathbf{\Psi}^{n},\mathbf{u}^{n})}{2}. \tag{30}\] We consider both a fixed time step and a variable time step in our studies. \(\mathbf{SR}(\mathbf{\Psi}^{n+1},\mathbf{u}^{n+1})\) depends on the unknown values at the current time step. We use a weighted Jacobi method to iteratively solve equation (30) for \(\mathbf{\Psi}^{n+1}\). In the first time step, we solve the polymer constitutive equation with a first-order implicit Euler method.
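The implicit half of Eq. (30) can be illustrated with a damped fixed-point iteration at a single point; this is a caricature of the actual solver, which applies weighted Jacobi to the full discretized system including the implicit advection term. The damping factor, tolerances, and the toy relaxation problem below are illustrative assumptions.

```python
import numpy as np

def crank_nicolson_psi_update(Psi_n, rhs_n, rhs_new, dt, omega=0.7,
                              tol=1e-10, max_iter=200):
    """Solve Eq. (30) for Psi^{n+1} by a damped fixed-point iteration.
    rhs_n is the frozen explicit half (evaluated at step n); rhs_new(Psi)
    evaluates the implicit half at the current iterate."""
    Psi = Psi_n.copy()
    for _ in range(max_iter):
        Psi_star = Psi_n + dt*(0.5*rhs_new(Psi) + 0.5*rhs_n)
        if np.max(np.abs(Psi_star - Psi)) < tol:
            return Psi_star
        Psi = (1.0 - omega)*Psi + omega*Psi_star  # damped (Jacobi-style) update
    return Psi

# toy usage: pure relaxation of Psi towards 0 at rate 1/lam (no advection)
lam = 0.5
decay = lambda Psi: -Psi/lam
Psi0 = np.diag([0.4, 0.1, -0.2])
print(crank_nicolson_psi_update(Psi0, decay(Psi0), decay, dt=0.1))
```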
To obtain \(\mathbf{\Pi}^{n+1}\) from \(\mathbf{\Psi}^{n+1}\), we use the eigen-decomposition of \(\mathbf{\Psi}^{n+1}\) and then use equations (23) and (24) to obtain \(\mathbf{\Lambda}^{n+1}\), along with the relevant relation between \(\mathbf{\Lambda}^{n+1}\) and \(\mathbf{\Pi}^{n+1}\) from equations (18) and (19). ### Decoupling mass-momentum system from polymer constitutive equations The polymer stress, \(\mathbf{\Pi}^{n+1}\), is considered constant while solving the mass-momentum system of equations (28) and (29). Similarly, the velocity field, \(\widetilde{\mathbf{u}}^{n+1}\), is considered constant when solving the constitutive equation (30). This allows us to numerically decouple the mass and momentum equations from the polymer constitutive equations. To properly account for the correct time location of \(\mathbf{\Pi}^{n+1}\), \(\mathbf{\Psi}^{n+1}\), \(\widetilde{p}^{n+1}\) and \(\widetilde{\mathbf{u}}^{n+1}\), similar to [20], we consider \(K\) inner iterations. At the \(k^{th}\) iteration, the polymer constitutive equations are first solved using velocity information from the previous inner iteration, \(k-1\), to update the values of \(\mathbf{\Pi}^{n+1}\) and \(\mathbf{\Psi}^{n+1}\) (at the first inner iteration, \(k=1\), the velocity information is taken from the previous time step). Then the mass-momentum system of equations is solved using the latest polymer stress to update the values of \(\widetilde{\mathbf{u}}^{n+1}\) and \(\widetilde{p}^{n+1}\). The inner iterations are terminated when a user-defined tolerance or \(K\) is reached. We obtain accurate results even with \(K=1\). ### Decoupled Schur complement approach to solve mass-momentum system The coupled system of discrete mass and momentum equations (28) and (29) is written in operator form, \[\begin{bmatrix}\mathcal{L}&\mathcal{G}\\ \mathcal{D}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{u}}^{n+1}\\ \widetilde{p}^{n+1}\end{bmatrix}=\begin{bmatrix}\mathbf{m}\\ 0\end{bmatrix}, \tag{31}\] or, using a Schur complement reduction approach similar to Furuichi et al. (2011) [41], as \[\begin{bmatrix}\mathcal{L}&\mathcal{G}\\ \mathbf{0}&\mathcal{S}\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{u}}^{n+1}\\ \widetilde{p}^{n+1}\end{bmatrix}=\begin{bmatrix}\mathbf{m}\\ h\end{bmatrix}, \tag{32}\] to decouple the pressure from the velocity. Here, \(\mathcal{S}=-\mathcal{D}\mathcal{L}^{-1}\mathcal{G}\) is the Schur complement of the matrix in equation (31) and \(h=-\mathcal{D}\mathcal{L}^{-1}\mathbf{m}\). In these equations, \(\mathcal{L}=(\rho_{f}/\Delta t)\boldsymbol{\delta}-\nabla^{2}\), \(\mathcal{G}=\nabla\), and \(\mathcal{D}=\nabla\cdot\) represent different spatial operators. Here, \(\nabla^{2}\) is a vector Laplacian operator. In the Cartesian basis, the unit vectors are spatially constant. However, due to the spatial dependence of the unit vectors in a curvilinear basis, cross-terms exist that involve multiple vector components in a particular component of the Laplacian of that vector. This guides our choice of basis for vectors and tensors in section 4.2. The discretization of these spatial operators will be described in more detail in section 4.
\[\mathbf{m}=\rho_{f}(\frac{\widetilde{\mathbf{u}}^{n}}{\Delta t}-\frac{3}{2}\widehat{\mathbf{ADV}}(\widetilde{\mathbf{u}}^{n},\mathbf{u}_{\infty}^{n},\mathbf{\omega}_{p}^{n};\mathbf{r})+\frac{1}{2}\widehat{\mathbf{ADV}}(\widetilde{\mathbf{u}}^{n-1},\mathbf{u}_{\infty}^{n-1},\mathbf{\omega}_{p}^{n-1};\mathbf{r}))+\nabla\cdot(\mathbf{\Pi}^{n+1}-\mathbf{\Pi}_{\infty}^{n+1}) \tag{33}\] represents the sum of the terms in the momentum equation that are considered constant within a time step (or within the \(k^{th}\) inner iteration described in section 3.1), i.e., the explicit terms and the divergence of the polymer stress. On the boundaries of the computational domain, i.e., the particle surface and the outer boundary, the matrix-vector system of equation (31) is appropriately changed to represent the velocity boundary conditions of equation (16). As highlighted in section 1, in the context of finite difference spatial discretization, splitting methods (which advance the momentum equation in time and solve a pressure-Poisson equation to obtain an appropriate pressure for ensuring incompressibility) are not suitable for cases in which the fluid inertia is small. This is related to the artificial boundary conditions for pressure (since a second-order partial differential equation now governs it) required in the splitting methods [38, 39]. The splitting errors may be ignored compared with the more important momentum advection terms when fluid inertia is larger than the viscous forces, making such methods suitable for studies of turbulent flows. The dominance of viscous terms at lower fluid inertia in the studies of our interest prevents us from using such methods. Fluid inertia is quantified in the above equations by the fluid density, \(\rho_{f}\). Furthermore, due to the time advancement of the momentum equation in splitting methods, studies with \(\rho_{f}=0\) cannot be considered. Studies with \(\rho_{f}=0\) are useful for completely isolating the effect of viscoelasticity from fluid inertia. This Schur complement reduction method was originally developed to study the behavior of fluids with zero inertia and large viscosity variations, such as in the long-time-scale dynamics of the Earth's convecting mantle [41]. We have found it useful in studies of zero to moderate inertia viscoelastic fluids. Similar to [41], we begin by solving the decoupled pressure equation \[\mathcal{S}\widetilde{p}^{n+1}=h, \tag{34}\] using a Krylov subspace method that constructs a subspace \(K(\mathcal{S},r_{0})=\text{Span}(r_{0},\mathcal{S}r_{0},\mathcal{S}^{2}r_{0},\cdots,\mathcal{S}^{N-1}r_{0})\), for \(\mathcal{S}\in\mathcal{R}^{N\times N}\), with \[\widetilde{\mathbf{u}}_{0}=\mathcal{L}^{-1}(\mathcal{G}\widetilde{p}_{0}-\mathbf{m}),\hskip 14.226378ptr_{0}=h-\mathcal{S}\widetilde{p}_{0}=\mathcal{D}\widetilde{\mathbf{u}}_{0}, \tag{35}\] for an initial guess \(\widetilde{p}_{0}\) for \(\widetilde{p}^{n+1}\). The generalized minimum residual (GMRES) method [51] is our choice of Krylov subspace method. The matrix-vector product \(y_{i}=\mathcal{S}x_{i}\) consists of three separate operations defined as (similar to [41]), \[\mathbf{a}^{*}=\mathbf{\mathcal{G}}x_{i},\hskip 14.226378pt\widetilde{\mathbf{u}}_{i}^{*}=\mathcal{L}^{-1}\mathbf{a}^{*},\hskip 14.226378pty_{i}=\mathcal{D}\widetilde{\mathbf{u}}_{i}^{*}. \tag{36}\] The first and third steps are simple matrix-vector products of a known matrix and vector that can be computed straightforwardly in an efficient manner.
The second step requires the solution of a matrix equation \(\mathcal{L}\widetilde{\mathbf{u}}^{*}=\mathbf{a}^{*}\), and the operator \(\mathcal{L}\) involves the sum of an identity and a Laplacian operator. Therefore, we use the aggregation-based algebraic multigrid (AGMG) method of [52, 53] to solve this elliptic equation efficiently. We use the implementation provided in [54]. Once the GMRES method terminates upon reaching a sufficient user-defined tolerance (see [51, 55] for details), say in \(M\) iterations, the solution for \(\widetilde{p}^{n+1}\) is formed as, \[\widetilde{p}^{n+1}=\widetilde{p}_{0}+\Sigma_{i=1}^{M}y_{i}/||y_{i}||_{2}l_{i}, \tag{37}\] where, for each GMRES iteration \(i\), \(y_{i}\) is the vector defined by equation (36) and \(l_{i}\) is a scalar defined within the GMRES procedure (see [51, 55] for details). If we keep track of the different \(\widetilde{\mathbf{u}}_{i}^{*}\) in the GMRES iterations for the solution of the pressure equation (34), we can construct, by a simple and fast operation, \[\delta\widetilde{\mathbf{u}}^{n+1}=\Sigma_{i=1}^{M}\widetilde{\mathbf{u}}_{i}^{*}l_{i}. \tag{38}\] The velocity field at step \(n+1\) is thus obtained by \[\widetilde{\mathbf{u}}^{n+1}=\widetilde{\mathbf{u}}_{0}-\delta\widetilde{\mathbf{u}}^{n+1}, \tag{39}\] where \(\widetilde{\mathbf{u}}_{0}\) is already calculated and defined in equation (35). Our methodology to solve the coupled system of mass and momentum equations closely follows that of [41] for solving the decoupled pressure equation (34). However, careful observation of the GMRES method allows us to obtain the solution for the velocity, \(\widetilde{\mathbf{u}}^{n+1}\), as a by-product of the same calculation. This avoids the need to solve a velocity equation \(\widetilde{\mathbf{u}}^{n+1}=\mathcal{L}^{-1}(\mathbf{m}-\mathcal{G}\widetilde{p}^{n+1})\) and saves CPU time. In [41], the authors note a similar point but still solve the velocity equation after obtaining the pressure solution; this has so far proven unnecessary for our studies. ### Ensuring stretch limited by maximum polymer extensibility in FENE models In dumbbell-based models for the polymer configuration, as discussed in section 2, the polymer configuration is \(\mathbf{\Lambda}=\langle\mathbf{q}\mathbf{q}\rangle_{\text{polymer configuration}}\), where \(\mathbf{q}\) is the end-to-end vector (non-dimensionalized with the polymer's radius of gyration) of the dumbbell and the angle brackets represent the average over polymer configurations. Therefore, the polymer stretch is \(\sqrt{\text{tr}(\mathbf{\Lambda})}\). In FENE models such as the FENE-P and FENE-CR (equation (19)) models, the polymers have a maximum extensibility \(L\). Following the technique introduced in [20], we numerically impose this constraint by separately evolving the variable \(\gamma=1/f=1-\text{tr}(\mathbf{\Lambda})/L^{2}\) (equation (20)) used in the FENE models. An evolution equation for \(\gamma\) is obtained by taking the trace of the polymer constitutive equation (17) with one of the FENE models from equation (19), \[\frac{\partial\gamma}{\partial t}+\mathbf{u}\cdot\nabla\gamma+\frac{2}{L^{2}}\mathbf{\Lambda}:\nabla\mathbf{u}+\frac{1}{De}(\frac{\gamma-1}{\gamma}+\beta)=0,\quad\beta=\begin{cases}\frac{3}{L^{2}-3},&\text{FENE-P}\\ \frac{3}{L^{2}\gamma},&\text{FENE-CR}\end{cases}.
\tag{40}\] Similar to [20], we temporally discretize equation (40) using a Crank-Nicolson scheme for the relaxation terms (\(1/De((\gamma-1)/\gamma+\beta)\)) and treating the advection (\(\mathbf{u}\cdot\nabla\gamma\)) and stretching (\(2/L^{2}\mathbf{\Lambda}:\nabla\mathbf{u}\)) terms explicitly. This leads to the following quadratic equation for \(\gamma^{n+1}\), \[(\gamma^{n+1})^{2}+\gamma^{n+1}\Delta t\Big(-\frac{\gamma^{n}}{\Delta t}+\mathbf{u}^{n}\cdot\nabla\gamma^{n}+\frac{2\mathbf{\Lambda}^{n}:\nabla\mathbf{u}^{n}}{L^{2}}+\frac{\gamma^{n}-0.5}{De\gamma^{n}}+\frac{1}{DeL^{2}}\alpha_{1}^{n}\Big)-\frac{\Delta t}{2De}\alpha_{2}=0, \tag{41}\] where \(\alpha_{1}^{n}=3/(L^{2}-3)\) for FENE-P and \(1.5/\gamma^{n}\) for FENE-CR, and \(\alpha_{2}=1\) for FENE-P and \(1-3/L^{2}\) for FENE-CR. This equation has two real roots with opposite signs [20]. The negative root is unphysical since it implies \(\text{tr}(\mathbf{\Lambda}^{n+1})>L^{2}\). Choosing the positive root ensures that the polymer stretch is bounded above by the maximum extensibility, \(L\). At each time step, \(n+1\), we calculate \(f^{n+1}=1/\gamma^{n+1}\) from this treatment and use this value of \(f^{n+1}\) in the relaxation term of the discretized polymer constitutive equation (30). ### Evolution of particle orientation and velocity boundary conditions As mentioned in section 2, we solve the governing equations in a particle-fixed reference frame. In our simulations, the inertial (or laboratory) frame is defined either by the initial particle orientation (e.g., a particle rotating about its axis in a quiescent fluid) or by the geometry of the imposed flow (e.g., a reference frame fixed with the imposed simple shear flow, uniaxial extensional flow or uniform flow field). The particle orientation is defined using quaternions, \(\mathbf{q}=\begin{bmatrix}q_{1}&q_{2}&q_{3}&q_{4}\end{bmatrix}^{T}\) (see chapter 8 of [56]), which are related to the Euler angles, \(\theta,\phi,\psi\), between the particle-fixed and inertial reference frames, \[\begin{split} q_{1}=\sin(\theta/2)\cos((\phi-\psi)/2),&q_{2}=\sin(\theta/2)\sin((\phi-\psi)/2),\\ q_{3}=\cos(\theta/2)\sin((\phi+\psi)/2),&q_{4}=\cos(\theta/2)\cos((\phi+\psi)/2).\end{split} \tag{42}\] The sequence of three rotations that define the Euler angles is described in [56]: \(\theta=\sin^{-1}(-X_{3})\), \(\phi=\sin^{-1}(Y_{3}/\sqrt{1-X_{3}^{2}})\), \(\psi=\sin^{-1}(X_{2}/\sqrt{1-X_{3}^{2}})\). Here, \(X_{2}\) and \(X_{3}\) are the projections of the \(X\) (or 1) axis of the particle-fixed reference frame on the 2 and 3 axes, respectively, of the inertial reference frame. \(Y_{2}\) and \(Y_{3}\) are the same projections of the \(Y\) (or 2) axis of the particle-fixed reference frame. The quaternion formulation has previously been used to study fluid flows around prolate spheroids in [26, 57]. The evolution equation for the quaternions is related to the particle's angular velocity, \(\mathbf{\omega}_{p}=\begin{bmatrix}\omega_{1}&\omega_{2}&\omega_{3}\end{bmatrix}^{T}\), by \[\frac{d}{dt}\begin{bmatrix}q_{1}\\ q_{2}\\ q_{3}\\ q_{4}\end{bmatrix}=\frac{1}{2}\begin{bmatrix}q_{4}&-q_{3}&q_{2}&q_{1}\\ q_{3}&q_{4}&-q_{1}&q_{2}\\ -q_{2}&q_{1}&q_{4}&q_{3}\\ -q_{1}&-q_{2}&-q_{3}&q_{4}\end{bmatrix}\begin{bmatrix}\omega_{1}\\ \omega_{2}\\ \omega_{3}\\ 0\end{bmatrix}, \tag{43}\] where the factor \(1/2\) follows from the half-angle definitions in equation (42). We consider second- and third-order accurate Adams-Bashforth schemes to discretize equation (43) temporally.
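A minimal sketch of this quaternion update might read as follows; the renormalization of \(\mathbf{q}\) after each step (to counter drift from \(|\mathbf{q}|=1\)) is a standard safeguard we assume rather than a stated part of the scheme.

```python
import numpy as np

def q_dot(q, omega):
    """Quaternion rate of Eq. (43): dq/dt = (1/2) M(q) [omega, 0]^T."""
    q1, q2, q3, q4 = q
    M = np.array([[ q4, -q3,  q2,  q1],
                  [ q3,  q4, -q1,  q2],
                  [-q2,  q1,  q4,  q3],
                  [-q1, -q2, -q3,  q4]])
    return 0.5*M @ np.array([*omega, 0.0])

def ab2_step(q, omega_n, omega_nm1, dt):
    """Second-order Adams-Bashforth step for Eq. (43), then renormalize."""
    q_new = q + dt*(1.5*q_dot(q, omega_n) - 0.5*q_dot(q, omega_nm1))
    return q_new/np.linalg.norm(q_new)

# spin at a constant rate about the particle axis (3-axis of the body frame)
q = np.array([0.0, 0.0, 0.0, 1.0])  # body frame initially aligned with lab frame
omega = np.array([0.0, 0.0, 1.0])
dt, nsteps = 1e-3, 1000             # integrate to t = 1
for _ in range(nsteps):
    q = ab2_step(q, omega, omega, dt)
print(q)                            # ~ [0, 0, sin(0.5), cos(0.5)]
```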
The transformation matrix from the inertial frame to the particle-fixed frame is, \[A(t)=2\begin{bmatrix}q_{1}^{2}+q_{4}^{2}-1/2&q_{1}q_{2}+q_{3}q_{4}&q_{1}q_{3}-q_{2}q_{4}\\ q_{1}q_{2}-q_{3}q_{4}&q_{2}^{2}+q_{4}^{2}-1/2&q_{2}q_{3}+q_{1}q_{4}\\ q_{1}q_{3}+q_{2}q_{4}&q_{2}q_{3}-q_{1}q_{4}&q_{3}^{2}+q_{4}^{2}-1/2\end{bmatrix}. \tag{44}\] \(\mathbf{A}(t)\) is an orthogonal matrix. A vector in the particle-fixed frame, \(\mathbf{b}_{\text{particle}}\), is transformed into the inertial reference frame via \(\mathbf{b}_{\text{inertial}}=\mathbf{A}(t)^{T}\cdot\mathbf{b}_{\text{particle}}\). The major axis or the center-line of the spheroidal particle is along the \(x_{3}\) axis of the particle-fixed frame. At a particular time, the particle orientation vector in the inertial reference frame, i.e., the particle center-line, is \(\mathbf{p}=\begin{bmatrix}q_{1}q_{3}+q_{2}q_{4}&q_{2}q_{3}-q_{1}q_{4}&q_{3}^{2}+q_{4}^{2}-1/2\end{bmatrix}\). At the end of each time step, after updating the quaternions, we update the velocity boundary conditions. Consider a general case where the imposed fluid motion consists of a uniform velocity, \(\mathbf{u}_{\text{far}}^{\text{inertial}}\), and a velocity gradient, \(\mathbf{\Gamma}_{\text{far}}^{\text{inertial}}\), in the laboratory frame. The particle's translational and angular velocities (numerical methods to evaluate these are discussed in section 3.5) in the frame aligned with the particle are \(\mathbf{u}_{p}\) and \(\mathbf{\omega}_{p}\), respectively. In the frame of reference aligned with the particle coordinates, the appropriately rotated imposed fluid motion at the outer boundary is, \[\mathbf{u}_{\infty}(\mathbf{r},t)=\mathbf{A}(t)\cdot\mathbf{u}_{\text{far}}^{\text{inertial}}+\mathbf{r}\cdot\mathbf{A}(t)\cdot\mathbf{\Gamma}_{\text{far}}^{\text{inertial}}\cdot\mathbf{A}(t)^{T}-\mathbf{u}_{p}-\mathbf{\omega}_{p}\times\mathbf{r},\quad\quad\mathbf{r}=\mathbf{r}_{\infty}. \tag{45}\] ### Particle velocities As mentioned in section 2, the particle's angular and translational velocities, \(\mathbf{\omega}_{p}\) and \(\mathbf{u}_{p}\), may be prescribed quantities in a study of interest, in which case the procedure mentioned above has all the required inputs to solve the equations. We are also interested in studying scenarios where the particle is free to move under the imposed and fluid-generated forces and torques. The case of zero particle inertia must be dealt with differently from that of finite particle inertia, and we consider these two cases next. #### 3.5.1 Finite particle inertia At each time step the particle motion must satisfy Newton's equations (6) and (7).
These equations are discretized using a second-order implicit Crank-Nicolson scheme, \[\mathbf{u}_{\mathbf{p}}^{n+1}= \mathbf{u}_{\mathbf{p}}^{n}+\frac{\Delta t}{2\rho_{p}V_{p}}(\mathbf{f}_{\text{net}}^{n+1}+\mathbf{f}_{\text{net}}^{n}) \tag{46}\] \[\mathbf{\omega}_{p}^{n+1}= \mathbf{\omega}_{p}^{n}+\frac{5\Delta t}{2\rho_{p}V_{p}r_{\text{minor}}^{2}}\begin{bmatrix}\frac{1}{1+\kappa^{2}}&0&0\\ 0&\frac{1}{1+\kappa^{2}}&0\\ 0&0&0.5\end{bmatrix}\cdot(\mathbf{q}_{\text{net}}^{n+1}+\mathbf{q}_{\text{net}}^{n}), \tag{47}\] where \(V_{p}=4\pi\kappa r_{\text{minor}}^{3}/3\), \[\mathbf{f}_{\text{net}}^{n+1}= \mathbf{f}_{\text{fluid}}^{n+1}+\mathbf{f}_{\text{ext}}^{n+1},\quad\mathbf{f}_{\text{fluid}}^{n+1}=\mathbf{f}(\mathbf{\tau})^{n+1}+\mathbf{f}(\mathbf{\Pi})^{n+1} \tag{48}\] \[\mathbf{q}_{\text{net}}^{n+1}= \mathbf{q}_{\text{fluid}}^{n+1}+\mathbf{q}_{\text{ext}}^{n+1},\quad\mathbf{q}_{\text{fluid}}^{n+1}=\mathbf{q}(\mathbf{\tau})^{n+1}+\mathbf{q}(\mathbf{\Pi})^{n+1}.\] are the net force and torque acting on the particle at time step \(n+1\). \(\mathbf{f}_{\text{fluid}}^{n+1}\) and \(\mathbf{q}_{\text{fluid}}^{n+1}\) are evaluated from equations (9) and (10) once the fluid stress on the particle surface at time step \(n+1\), \(\mathbf{\sigma}^{n+1}=\mathbf{\tau}^{n+1}+\mathbf{\Pi}^{n+1}\), is available. \(\mathbf{f}_{\text{ext}}^{n+1}\) and \(\mathbf{q}_{\text{ext}}^{n+1}\) are the externally imposed (possibly time-varying) force and torque in the frame aligned with the particle at time step \(n+1\). We use a first-order explicit Euler scheme in the first time step. #### 3.5.2 Zero particle inertia As mentioned earlier in sections 1 and 3.2, we are also interested in the scenario of zero particle inertia, i.e., zero \(\rho_{p}\). In our studies, such a case arises when we want to completely remove the inertial effects and study the influence of viscoelasticity, but it could also be used to investigate massless particles in a fluid with finite inertia. In both of these cases, Newton's equations governing the particle motion at each time step reduce to, \[\begin{split}\mathbf{f}_{\text{net}}^{n+1}=&\mathbf{f}_{\text{fluid}}^{n+1}+\mathbf{f}_{\text{ext}}^{n+1}=0,\quad\mathbf{f}_{\text{fluid}}^{n+1}=\mathbf{f}(\mathbf{\tau})^{n+1}+\mathbf{f}(\mathbf{\Pi})^{n+1},\\ \mathbf{q}_{\text{net}}^{n+1}=&\mathbf{q}_{\text{fluid}}^{n+1}+\mathbf{q}_{\text{ext}}^{n+1}=0,\quad\mathbf{q}_{\text{fluid}}^{n+1}=\mathbf{q}(\mathbf{\tau})^{n+1}+\mathbf{q}(\mathbf{\Pi})^{n+1},\end{split} \tag{49}\] and the time marching used for finite \(\rho_{p}\) in equations (46) and (47) cannot be employed. The governing equations (49) are viewed as force and torque constraints that the velocity, pressure and polymer stress fields must satisfy at each time step to yield appropriate \(\mathbf{u}_{\text{p}}^{n+1}\) and \(\mathbf{\omega}_{p}^{n+1}\). Padhy et al. (2013) [58] used a secant method to iteratively impose the torque-free constraint on a sphere rotating in a cross-shear flow. We first consider this method for imposing the force- and torque-free constraints in the case of finite fluid inertia. Subsequently, we show that a novel decomposition of the inertia-less fluid's momentum equation can be used to impose these constraints in a non-iterative and hence computationally efficient manner. These two techniques are discussed next.
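A minimal sketch of the Crank-Nicolson particle update of Eqs. (46) and (47) is given below. Here the step-\((n+1)\) net force and torque are taken as given inputs (in the solver they come from the fluid stress at the new step), and the numerical values are illustrative only.

```python
import numpy as np

def particle_velocity_update(u_p, omega_p, f_net_new, f_net_old,
                             q_net_new, q_net_old, dt, rho_p, r_minor, kappa):
    """Crank-Nicolson update of Eqs. (46)-(47) for a prolate spheroid."""
    V_p = 4.0/3.0*np.pi*r_minor**3*kappa
    u_new = u_p + dt/(2.0*rho_p*V_p)*(f_net_new + f_net_old)
    # inverse moment-of-inertia tensor of Eq. (8), diagonal in the body frame
    I_inv = 5.0/(rho_p*V_p*r_minor**2)*np.diag(
        [1.0/(1.0 + kappa**2), 1.0/(1.0 + kappa**2), 0.5])
    w_new = omega_p + dt/2.0*I_inv @ (q_net_new + q_net_old)
    return u_new, w_new

# illustrative values only: a constant net force along x, zero net torque
u_p, w_p = np.zeros(3), np.zeros(3)
f = np.array([1.0, 0.0, 0.0])
u_p, w_p = particle_velocity_update(u_p, w_p, f, f, np.zeros(3), np.zeros(3),
                                    dt=1e-2, rho_p=1.0, r_minor=1.0, kappa=4.0)
print(u_p, w_p)
```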
#### 3.5.2 Zero particle inertia

As mentioned earlier in sections 1 and 3.2, we are also interested in the scenario of zero particle inertia, or zero \(\rho_{p}\). In our studies, such a case arises when we want to completely remove the inertial effects and study the influence of viscoelasticity, but it could also be used to investigate massless particles in a fluid with finite inertia. In both of these cases, Newton's equations governing the particle motion at each time step are, \[\begin{split}\mathbf{f}_{\text{net}}^{n+1}=&\mathbf{f}_{\text{fluid}}^{n+1}+\mathbf{f}_{\text{ext}}^{n+1}=0,\quad\mathbf{f}_{\text{fluid}}^{n+1}=\mathbf{f}(\mathbf{\tau})^{n+1}+\mathbf{f}(\mathbf{\Pi})^{n+1},\\ \mathbf{q}_{\text{net}}^{n+1}=&\mathbf{q}_{\text{fluid}}^{n+1}+\mathbf{q}_{\text{ext}}^{n+1}=0,\quad\mathbf{q}_{\text{fluid}}^{n+1}=\mathbf{q}(\mathbf{\tau})^{n+1}+\mathbf{q}(\mathbf{\Pi})^{n+1},\end{split} \tag{49}\] and the time marching used for finite \(\rho_{p}\) in equations (46) and (47) cannot be employed. The governing equations (49) are viewed as force and torque constraints that the velocity, pressure and polymer stress fields must satisfy at each time step to yield an appropriate \(\mathbf{u}_{\text{p}}^{n+1}\) and \(\mathbf{\omega}_{p}^{n+1}\). Padhy et al. (2013) [58] used a secant method to iteratively impose the torque-free constraint on a sphere rotating in a cross shear flow. We first consider this method for imposing force- and torque-free constraints for a finite fluid inertia case. Subsequently, we show that a novel decomposition of the inertia-less fluid's momentum equation can be used to impose these constraints in a non-iterative and hence computationally efficient manner. These two techniques are discussed next.

**Secant iteration method for finite fluid inertia and zero particle inertia:** At each time step, or within each inner \(k\) iteration (section 3.1), the polymer stress, \(\mathbf{\Pi}\), and hence the polymer torque and force, \(\mathbf{q}(\mathbf{\Pi})^{n+1}\) and \(\mathbf{f}(\mathbf{\Pi})^{n+1}\) from equations (9) and (10), are fixed. \(\mathbf{u}_{\text{p}}^{n+1}\) and \(\mathbf{\omega}_{p}^{n+1}\) are iterated along with \(\widetilde{\mathbf{u}}^{n+1}\) and \(\widetilde{p}^{n+1}\) to generate the appropriate Newtonian solvent torque and force, \(\mathbf{q}(\mathbf{\tau})^{n+1}\) and \(\mathbf{f}(\mathbf{\tau})^{n+1}\), that ensure the torque and force balance in equation (49). In practice, once the polymer constitutive equation (30) is solved, we first obtain the polymeric torque and force, \(\mathbf{q}(\mathbf{\Pi})^{n+1}\) and \(\mathbf{f}(\mathbf{\Pi})^{n+1}\), from equation (9). The mass-momentum system described by equation (32) depends on the particle's angular velocity \(\mathbf{\omega}_{p}\) (see equations (15), (16), and (33)). In the secant method, iterations proceed by solving the mass-momentum system with the Schur complement method described in section 3.2 and obtaining the Newtonian torque and force, \(\mathbf{q}(\mathbf{\tau})^{n+1}\) and \(\mathbf{f}(\mathbf{\tau})^{n+1}\), from equation (9). After each secant iteration, \(s\), the particle's angular and translational velocities \(\mathbf{\omega}_{p}\) and \(\mathbf{u}_{p}\) are updated component-wise, \[\omega_{p,i}^{s+1}=\omega_{p,i}^{s}-q_{net,i}^{s}\frac{\omega_{p,i}^{s}-\omega_{p,i}^{s-1}}{q_{net,i}^{s}-q_{net,i}^{s-1}},\quad\quad u_{p,i}^{s+1}=u_{p,i}^{s}-f_{net,i}^{s}\frac{u_{p,i}^{s}-u_{p,i}^{s-1}}{f_{net,i}^{s}-f_{net,i}^{s-1}},\quad i\in[1,3]. \tag{50}\] \(\mathbf{\omega}_{p}^{n+1}\) and \(\mathbf{u}_{p}^{n+1}\) are used in the velocity boundary condition (equation (45)) for the next secant iteration. The secant iterations are stopped once the magnitudes of all components of the net torque and force on the particle, \(\mathbf{q}_{\text{net}}^{n+1}\) and \(\mathbf{f}_{\text{net}}^{n+1}\), are below prescribed tolerances. The numerical solution at this point consists of the velocity, pressure, and polymer stress fields and the particle's required angular and translational velocities that satisfy the force and torque constraints of equation (49).
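The component-wise secant update of equation (50) is simple to implement. A sketch is given below; the guard against a vanishing denominator is a hypothetical safeguard of ours, not discussed in the text:

```python
import numpy as np

def secant_update(v_curr, v_prev, g_curr, g_prev, eps=1e-30):
    """Component-wise secant step of equation (50): iterate the velocity v
    (u_p or omega_p) to drive each component of the residual g (f_net or
    q_net) to zero."""
    denom = g_curr - g_prev
    denom = np.where(np.abs(denom) < eps, eps, denom)  # assumed guard
    return v_curr - g_curr * (v_curr - v_prev) / denom
```

In use, this update is applied to the pairs \((\mathbf{\omega}_{p},\mathbf{q}_{\text{net}})\) and \((\mathbf{u}_{p},\mathbf{f}_{\text{net}})\) until all components of the net torque and force fall below the prescribed tolerances.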
**Novel resistivity formulation for zero fluid and particle inertia in non-Newtonian fluids:** The system of mass and momentum equations and the torque- and force-free constraints is quasi-steady when both fluid and particle inertia are neglected. All the variables from here until the end of this section are taken at time step \(n+1\), and we suppress the superscript for clarity. Upon neglecting fluid inertia, the momentum equation (2) becomes \[\nabla\cdot\mathbf{\sigma}=0, \tag{51}\] subject to the boundary conditions mentioned in equation (5) and the stress tensor, \(\mathbf{\sigma}\), in equation (4). In this case, the momentum and mass (1) conservation equations are linear in the velocity and pressure. Therefore, using \[\mathbf{u}=\mathbf{u}^{\text{M}}+\mathbf{u}^{\text{P}},\quad\quad p=p^{\text{M}}+p^{\text{P}}, \tag{52}\] the system of mass and momentum equations along with the associated boundary conditions is linearly decomposed into two parts. \(\mathbf{u}^{\text{M}}\) and \(p^{\text{M}}\) represent the motion-induced velocity and pressure fields and \(\mathbf{u}^{\text{P}}\) and \(p^{\text{P}}\) the polymer-induced fields. The equations governing the motion-induced part are, \[\nabla\cdot\mathbf{u}^{\text{M}}=0,\quad\quad\nabla\cdot\mathbf{\sigma}^{\text{M}}=0, \tag{53}\] with, \[\mathbf{\sigma}^{\text{M}}=\mathbf{\tau}^{\text{M}}=-p^{\text{M}}\mathbf{\delta}+\mu(\nabla\mathbf{u}^{\text{M}}+(\nabla\mathbf{u}^{\text{M}})^{T}) \tag{54}\] and the boundary conditions, \[\mathbf{u}^{\rm M}=\mathbf{0},\ \mbox{on particle surface},\quad\quad\mathbf{u}^{\rm M}=\mathbf{u}_{\infty}(\mathbf{r})=\mathbf{r}\cdot\mathbf{\Gamma}+\mathbf{u}_{0}-\mathbf{u}_{p}-\boldsymbol{\omega}_{p}\times\mathbf{r},\ \mbox{as}\ |\mathbf{r}|\to\infty\approx\mathbf{r}_{\infty}. \tag{55}\] The polymer-induced part is governed by, \[\nabla\cdot\mathbf{u}^{\rm P}=0,\quad\quad\nabla\cdot\boldsymbol{\sigma}^{\rm P}=0, \tag{56}\] with, \[\boldsymbol{\sigma}^{\rm P}=\boldsymbol{\tau}^{\rm P}+\boldsymbol{\Pi}=-p^{\rm P}\boldsymbol{\delta}+\mu(\nabla\mathbf{u}^{\rm P}+(\nabla\mathbf{u}^{\rm P})^{T}+\boldsymbol{\Pi}) \tag{57}\] and the boundary conditions, \[\mathbf{u}^{\rm P}=\mathbf{0},\ \mbox{on particle surface},\quad\quad\mathbf{u}^{\rm P}=\mathbf{0},\ \mbox{as}\ |\mathbf{r}|\to\infty\approx\mathbf{r}_{\infty}. \tag{58}\] The hydrodynamic force and torque acting on the particle are also decomposed into motion- and polymer-induced parts, \[\mathbf{f}_{\rm fluid}=\mathbf{f}_{\rm fluid}^{\rm M}+\mathbf{f}_{\rm fluid}^{\rm P},\ \ \mathbf{q}_{\rm fluid}=\mathbf{q}_{\rm fluid}^{\rm M}+\mathbf{q}_{\rm fluid}^{\rm P}, \tag{59}\] with, \[\mathbf{f}_{\rm fluid}^{\rm M}=\mathbf{f}(\boldsymbol{\sigma}^{\rm M}),\ \mathbf{f}_{\rm fluid}^{\rm P}=\mathbf{f}(\boldsymbol{\sigma}^{\rm P}),\ \mathbf{q}_{\rm fluid}^{\rm M}=\mathbf{q}(\boldsymbol{\sigma}^{\rm M}),\ \mbox{and},\ \ \mathbf{q}_{\rm fluid}^{\rm P}=\mathbf{q}(\boldsymbol{\sigma}^{\rm P}), \tag{60}\] where \(\mathbf{f}(\boldsymbol{\sigma})\) and \(\mathbf{q}(\boldsymbol{\sigma})\) are defined in equation (9). This way of decomposing the momentum equation attributes the entire effect of particle motion (via the boundary conditions in equation (55)) to the motion-induced part of the governing equations. This part is unaffected by the polymer stress. The polymer-induced pressure and velocity fields are forced by the polymer stress, but are not explicitly affected by the particle's motion. A key observation allowing us to circumvent an iterative procedure in calculating the particle's motion is that the motion-induced equations represent a Stokes flow (inertia-less Newtonian flow). Hence, the motion-induced force and torque are simply, \[\mathbf{f}_{\rm fluid}^{M}=\boldsymbol{F}_{p}\cdot\mathbf{u}_{p}+\mathbf{f}_{\infty},\ \mathbf{q}_{\rm fluid}^{M}=\boldsymbol{Q}_{p}\cdot\boldsymbol{\omega}_{p}+\mathbf{q}_{\infty}. \tag{61}\] The tensors \(\boldsymbol{F}_{p}\) and \(\boldsymbol{Q}_{p}\) depend only upon the particle shape, and the vectors \(\mathbf{f}_{\infty}\) and \(\mathbf{q}_{\infty}\) depend upon both the particle shape and the imposed flow. These are either evaluated analytically for simple particle shapes and imposed flows or calculated by considering only the motion-induced mass and momentum equations with appropriate boundary conditions. For example, the \((2,1)\) component of \(\boldsymbol{F}_{p}\) is simply the second component of \(\mathbf{f}(\boldsymbol{\sigma}^{\rm M})\) for the boundary conditions \(\mathbf{u}^{\rm M}=\mathbf{0}\) on the particle surface and \(\mathbf{u}^{\rm M}=[1\ 0\ 0]^{T}\) as \(|\mathbf{r}|\to\infty\approx\mathbf{r}_{\infty}\).
\(\mathbf{f}_{\infty}\) and \(\mathbf{q}_{\infty}\) are simply the hydrodynamic force and torque on a fixed particle in the imposed flow (\(\mathbf{r}\cdot\mathbf{\Gamma}+\mathbf{u}_{0}\)) of an inertia-less Newtonian fluid. From the force- and torque-free constraints, we obtain \[\mathbf{u}_{p}=-\boldsymbol{F}_{p}^{-1}\cdot(\mathbf{f}_{\infty}+\mathbf{f}_{\rm fluid}^{\rm P}+\mathbf{f}_{\rm ext}),\ \mbox{and},\ \boldsymbol{\omega}_{p}=-\boldsymbol{Q}_{p}^{-1}\cdot(\mathbf{q}_{\infty}+\mathbf{q}_{\rm fluid}^{\rm P}+\mathbf{q}_{\rm ext}) \tag{62}\] in a non-iterative way. This method of obtaining a particle's motion has long been used in the micro-hydrodynamics of Newtonian fluids and is called a resistivity formulation [59]. However, with the novel decomposition of the inertia-less momentum equation we have shown that a similar principle can be applied in a computationally useful manner to non-Newtonian problems. Unlike the polymer-induced pressure and velocity fields, the determination of the motion-induced velocity and pressure fields does not require application of the Schur complement approach described in section 3.2 at each time step of the simulation. Once the particle's translational and angular velocities are available, these fields are either taken from the analytical solution of Stokes flow around the particle or from a linear superposition of pre-calculated numerical solutions of a few fundamental incompressible flows around the particle. The boundary conditions at a given time instant can be represented by a superposition of the boundary conditions in these fundamental flows. In the most general case, when the velocity at the outer boundary in equation (55) has non-zero components for all the components of the effective (incompressible) imposed velocity gradient (\(\mathbf{\Gamma}+\boldsymbol{\epsilon}\cdot\boldsymbol{\omega}_{p}\)) and the effective imposed velocity (\(\mathbf{u}_{0}-\mathbf{u}_{p}\)), 11 fundamental Stokes flows have to be pre-calculated. While we are primarily considering linear velocity boundary conditions, this method can be extended to other types of boundary conditions, for which the Stokes flows required to be pre-calculated will be different. Using this resistivity formulation, our simulations become significantly faster relative to the secant iteration method. The two most computationally intensive components of our method are the weighted Jacobi method to iteratively solve equation (30) for \(\mathbf{\Psi}^{n+1}\) and the algebraic multigrid method to invert \(\mathbf{\mathcal{L}}\). The domain decomposition method implemented in Message Passing Interface (MPI) is used to parallelize the code in the \(\xi_{1}\) and \(\xi_{2}\) directions (the \(\xi_{3}\) direction could also be parallelized in the future). The weighted Jacobi method is implemented completely in-house and shows good strong and weak scaling for hundreds of processing units. The parallel scaling of the algebraic multigrid method is dependent on the external vendor's capability. We use the academic version of AGMG [54], for which good scaling was only obtained up to about 20 processing units. The examples considered in section 5 are completed within a reasonable time of up to four days using 20 processing units. However, for more stringent cases, AGMG can be replaced with other open source algebraic multigrid solvers available in libraries such as BoomerAMG from HYPRE [60], which scales well up to a large number of processing units but requires more parameter tuning.
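Putting the pieces together, the per-time-step cost of the resistivity formulation reduces to two small linear solves. A minimal sketch of equation (62), assuming the resistance tensors and the imposed-flow force and torque have been pre-calculated as described above:

```python
import numpy as np

def rigid_body_motion(F_p, Q_p, f_inf, q_inf, f_poly, q_poly, f_ext, q_ext):
    """Force- and torque-free particle motion, equation (62).  F_p and Q_p
    are the 3x3 resistance tensors of the motion-induced Stokes problem;
    f_inf and q_inf are the force and torque on a fixed particle in the
    imposed flow; f_poly/q_poly are the polymer-induced contributions."""
    u_p = -np.linalg.solve(F_p, f_inf + f_poly + f_ext)
    omega_p = -np.linalg.solve(Q_p, q_inf + q_poly + q_ext)
    return u_p, omega_p
```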
In section 3, we have described the algorithmic structure of our numerical method in a coordinate-system-free manner. In the next section, we delve into the specific choice of the prolate spheroidal coordinate system and the finite difference schemes used to discretize the spatial gradients in the governing equations. This will allow us to discretize the computational space and the various operators mentioned above.

## 4 Spatial discretization

One of the benefits of using body-fitted grids and expressing equations in the particle reference frame is that the discrete space or mesh remains fixed as the particle rotates, and the discretized spatial operators defined here need to be calculated only once before the equations are evolved in time.

### Prolate spheroidal coordinate system for spatial discretization of the computational domain

We use a prolate spheroidal coordinate system to spatially discretize the particle surface and the fluid region around it. The transformations between the prolate spheroidal (\(\mathbf{\xi}\)) and Cartesian (\(\mathbf{x}\)) coordinates, defined for the particle-fixed reference frame, are \[x_{1}=f\sinh(\xi_{1})\sin(\xi_{2})\sin(\xi_{3}),\quad x_{2}=f\sinh(\xi_{1})\sin(\xi_{2})\cos(\xi_{3}),\quad x_{3}=f\cosh(\xi_{1})\cos(\xi_{2}), \tag{63}\] where \(\xi_{2}\in[0,\pi]\) and \(\xi_{3}\in[0,2\pi]\). \(f\) is the focal length of the prolate spheroidal particle. Here, \(\xi_{1}\), \(\xi_{2}\), and \(\xi_{3}\) are similar to the radial, polar and azimuthal directions respectively in a spherical coordinate system. The surface of the particle with an aspect ratio, \(\kappa\), is exactly modeled as one of the coordinate surfaces, i.e. \(\xi_{1}=\xi_{1}^{\text{surface}}\). We consider a prolate spheroid with minor radius, \(r_{\text{minor}}\). The surface coordinate, \(\xi_{1}^{\text{surface}}\), and focal length, \(f\), are, \[\xi_{1}^{\text{surface}}=\frac{1}{2}\log\Big{(}\frac{\kappa+1}{\kappa-1}\Big{)},\quad f=\frac{r_{\text{minor}}}{\sinh(\xi_{1}^{\text{surface}})}. \tag{64}\] The outer surface of the computational domain, \(\mathbf{r}_{\infty}\), is also a prolate spheroidal surface that has a constant \(\xi_{1}=\xi_{1}^{\infty}\), \[\xi_{1}^{\infty}=\sinh^{-1}(\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}/f), \tag{65}\] where \(\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}\) (the 2-norm or Euclidean norm of \(\mathbf{r}_{\infty}^{\text{minor}}\)) is the minor radius of the outer surface. It is a user-prescribed parameter. The Euclidean distance of any point on a prolate spheroid, \(\|\mathbf{r}_{\infty}\|_{2}\), is related to its minor axis through \(\|\mathbf{r}_{\infty}\|_{2}^{2}=\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}^{2}+f^{2}\cos^{2}(\xi_{2})\). For a surface placed far from the particle, such as the outer boundary, \(\|\mathbf{r}_{\infty}\|\approx\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}\); in other words, the outer surface is nearly spherical. This is shown schematically in figure 1 and for an actual discretized example in the left panel of figure 2. The computational domain is defined within the limits: \(\xi_{1}\in[\xi_{1}^{\text{surface}},\xi_{1}^{\infty}]\), \(\xi_{2}\in[0,\pi]\), \(\xi_{3}\in[0,2\pi]\).
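The coordinate map and the surface parameters of equations (63)-(65) translate directly into code; the following is a small NumPy sketch of ours, for illustration only:

```python
import numpy as np

def spheroidal_to_cartesian(xi1, xi2, xi3, f):
    """Coordinate map of equation (63) in the particle-fixed frame."""
    x1 = f * np.sinh(xi1) * np.sin(xi2) * np.sin(xi3)
    x2 = f * np.sinh(xi1) * np.sin(xi2) * np.cos(xi3)
    x3 = f * np.cosh(xi1) * np.cos(xi2)
    return x1, x2, x3

def surface_parameters(kappa, r_minor):
    """Particle surface coordinate and focal length, equation (64)."""
    xi1_surf = 0.5 * np.log((kappa + 1.0) / (kappa - 1.0))
    f = r_minor / np.sinh(xi1_surf)
    return xi1_surf, f

def outer_surface(r_inf_minor, f):
    """Outer-boundary coordinate, equation (65)."""
    return np.arcsinh(r_inf_minor / f)
```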
In order to prevent pressure aliasing that leads to the checkerboard effect causing spurious pressure oscillations, we use a staggered grid arrangement [61]. The three velocity components are stored at the same location, which we term the velocity grid. Pressure and the components of the polymer configuration tensor \(\mathbf{\Lambda}\) and the latter's matrix logarithm \(\mathbf{\Psi}\) are stored at a location that is staggered relative to the velocity grid. These staggered locations form the pressure grid. The velocity and pressure grids are \[\text{Velocity Grid: }\xi_{1}^{\text{dist.,vel}}\cup\xi_{2}^{\text{dist.,vel}}\cup\xi_{3}^{\text{dist.,vel}},\quad\text{Pressure Grid: }\xi_{1}^{\text{dist.,pres}}\cup\xi_{2}^{\text{dist.,pres}}\cup\xi_{3}^{\text{dist.,pres}}, \tag{66}\] where \[\begin{split}\xi_{1,i}^{\text{dist.,vel}}=&\ \xi_{1}^{\text{surface}}+\frac{\xi_{1}^{\infty}-\xi_{1}^{\text{surface}}}{N_{1}-1}(i-1)\Big{[}c_{1}\frac{(i-1)(i-0.5)}{(N_{1}-1)^{2}}+1\Big{]},\quad i=[1,2,\cdots,N_{1}],\\ \xi_{1,1}^{\text{dist.,pres}}=&\ \xi_{1}^{\text{surface}},\quad\xi_{1,i}^{\text{dist.,pres}}=\frac{\xi_{1,i}^{\text{dist.,vel}}+\xi_{1,i-1}^{\text{dist.,vel}}}{2},\quad i=[2,3,\cdots,N_{1}],\quad\xi_{1,N_{1}+1}^{\text{dist.,pres}}=\xi_{1}^{\infty},\\ \xi_{2,j}^{\text{dist.,pres}}=&\ \pi\frac{j-1}{N_{2}}\Big{[}c_{2}\frac{(j-1)(j-0.5)}{(N_{2}-1)^{2}}+1\Big{]},\quad j=[2,3,\cdots,N_{2}+1],\\ \xi_{2,j}^{\text{dist.,vel}}=&\ \frac{\xi_{2,j}^{\text{dist.,pres}}+\xi_{2,j+1}^{\text{dist.,pres}}}{2},\quad j=[1,2,\cdots,N_{2}],\\ \xi_{3,k}^{\text{dist.,vel}}=&\ 2\pi\frac{k-1}{N_{3}-1},\quad k=[1,2,\cdots,N_{3}],\quad\xi_{3,k}^{\text{dist.,pres}}=\frac{\xi_{3,k}^{\text{dist.,vel}}+\xi_{3,k+1}^{\text{dist.,vel}}}{2},\quad k=[1,2,\cdots,N_{3}-1].\end{split} \tag{67}\] When \(c_{1}\) and \(c_{2}\) are zero, we obtain a uniform grid in spheroidal coordinates. A uniform spheroidal grid is naturally more clustered near the particle surface in the Euclidean sense than a uniform Cartesian grid. When \(c_{1}<0\), further grid clustering towards the particle and the outer surface is obtained. Another useful grid, which clusters points near the particle surface only (with \(c_{1}\neq 0\)), is \(\xi_{1,i}^{\text{dist.,vel}}=\xi_{1}^{\text{surface}}+(\xi_{1}^{\infty}-\xi_{1}^{\text{surface}})\{\exp[-c_{1}(i-1)/(N_{1}-1)]-1\}/\{\exp(-c_{1})-1\},\ \ i=[1,2,\cdots,N_{1}]\). Setting \(c_{2}<0\) allows us to cluster the grid points (in the \(\xi_{2}\) coordinate) towards the major axis of the particle, which is useful in studies such as extensional flow around the particle with its major axis aligned with the extensional axis. We only consider a uniform grid in the azimuthal, \(\xi_{3}\), direction, as the particle is axisymmetric and there is no a priori preference of gradients at a particular \(\xi_{3}\) location for a general linear flow around the particle. Any other function describing a non-uniform grid can be used with our method, as the finite difference and interpolation schemes described in section 4.3 do not assume a grid type. Figure 2 shows the pressure grid for a spheroid with \(\kappa=4\), \(\|\mathbf{r}_{\infty}\|_{2}\approx\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}=80\), \(N_{1}=90\), \(N_{2}=N_{3}=71\) and \(c_{1}=c_{2}=0\).
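As an illustration, the stretched \(\xi_1\) grids of equation (67) can be generated as follows; this sketch (our variable names) reproduces the velocity grid and its staggered pressure counterpart in the \(\xi_1\) direction only:

```python
import numpy as np

def radial_velocity_grid(xi1_surf, xi1_inf, N1, c1=0.0):
    """xi1 velocity-grid points of equation (67); c1 < 0 clusters points
    towards the particle and outer surfaces, c1 = 0 gives a uniform grid."""
    i = np.arange(1, N1 + 1, dtype=float)
    s = (i - 1) / (N1 - 1)
    stretch = c1 * (i - 1) * (i - 0.5) / (N1 - 1)**2 + 1.0
    return xi1_surf + (xi1_inf - xi1_surf) * s * stretch

def radial_pressure_grid(xi1_vel, xi1_surf, xi1_inf):
    """Staggered xi1 pressure grid of equation (67): the surface point,
    midpoints of consecutive velocity points, then the outer boundary."""
    mid = 0.5 * (xi1_vel[1:] + xi1_vel[:-1])
    return np.concatenate(([xi1_surf], mid, [xi1_inf]))
```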
### Cartesian basis for vectors and tensors and discrete representation of spatial operators

We use the Cartesian basis on this prolate spheroidal grid to represent the velocity vector or the polymer configuration tensor at each location. This is done to prevent the coupling of different velocity vector components in the vector Laplacian operator (also see equation (31) and the following discussion) appearing in the momentum equations.

Figure 2: Discretized computational domain (\(\kappa=4\), \(r_{\text{minor}}=1\), \(\|\mathbf{r}_{\infty}\|_{2}\approx\|\mathbf{r}_{\infty}^{\text{minor}}\|_{2}=80\), \(N_{1}=90\), \(N_{2}=N_{3}=71\) and \(c_{1}=c_{2}=0\)): the left panel shows the full domain, and the right panel shows a zoomed region near the particle surface (red). The pressure grid is shown here; the velocity grid is staggered relative to it according to the definitions in equation (67).

The momentum equation written in the Cartesian basis on the prolate spheroidal grid is, \[\rho_{f}\big{(}\frac{\partial\widetilde{u}_{i}}{\partial t}+\widetilde{\text{ADV}}(\widetilde{\mathbf{u}},\mathbf{u}_{\infty},\boldsymbol{\omega}_{p};\mathbf{r})_{i}\big{)}=-\frac{\partial\xi_{k}}{\partial x_{i}}\frac{\partial\widetilde{p}}{\partial\xi_{k}}+\nabla^{2}_{\text{SphInCart}}\widetilde{u}_{i}+\frac{\partial\xi_{k}}{\partial x_{j}}\frac{\partial(\Pi_{ji}-\Pi_{ji,\infty})}{\partial\xi_{k}}, \tag{68}\] where, \[\nabla^{2}_{\text{SphInCart}}=\frac{1}{h_{1}^{2}}\Big{(}\frac{1}{\text{tanh}(\xi_{1})}\frac{\partial}{\partial\xi_{1}}+\frac{\partial^{2}}{\partial\xi_{1}^{2}}\Big{)}+\frac{1}{h_{2}^{2}}\Big{(}\frac{1}{\text{tan}(\xi_{2})}\frac{\partial}{\partial\xi_{2}}+\frac{\partial^{2}}{\partial\xi_{2}^{2}}\Big{)}+\frac{1}{h_{3}^{2}}\frac{\partial^{2}}{\partial\xi_{3}^{2}}, \tag{69}\] is a Laplacian operator that acts on an individual Cartesian velocity component, \(u_{i}\), with derivatives defined in prolate spheroidal coordinates. In the above equations, \[h_{1}=h_{2}=f\sqrt{\sinh^{2}(\xi_{1})+\sin^{2}(\xi_{2})},\quad h_{3}=f\sinh(\xi_{1})\sin(\xi_{2}). \tag{70}\] An advantage of this operator is that \(\nabla^{2}_{\text{SphInCart}}\widetilde{u}_{i}\) has no cross-terms with other velocity components. This decoupling allows us to solve the mass-momentum system of equations more efficiently. As mentioned in section 3.2, the second step of the matrix-vector product in the Krylov subspace procedure (equation (36)) within the Schur complement method involves inversion of a discrete Laplacian operator acting on the velocity vector. Unlike in a curvilinear basis, the vector Laplacian in a Cartesian basis can be treated as a set of three independent scalar Laplacians acting on the three velocity components. Hence, it allows us to use algebraic multigrid methods such as AGMG [54], developed for scalar elliptic equations, to obtain the efficient inversion of the vector Laplacian operator. In other words, this choice of basis gives a smaller bandwidth and fewer off-diagonal components of the discrete matrix operators in equation (31) compared to the more natural choice of a spheroidal basis. As mentioned earlier in section 4.1, we use a staggered grid arrangement to represent different variables: \(\widetilde{\mathbf{u}}\) is stored on the velocity grid and \(\widetilde{p}\), \(\boldsymbol{\Psi}\) and \(\gamma\) on the pressure grid. The momentum equation (29) is discretized on the velocity grid. \(\mathbf{u}_{\infty}\) is an analytically known function that can be evaluated at any location. The mass conservation equation (28), the polymer constitutive equation (30) and the \(\gamma\) equation (41) for FENE models are discretized on the pressure grid.
Similar to the finite difference representation of the viscous terms by [24], we use a straightforward local Lagrange polynomial representation of the spatial operators on a discretized quantity. This allows flexibility in choosing the order of accuracy of the finite difference schemes in different directions and for different interpolation and spatial derivative operators, and is described in section 4.3. The spatially and temporally discretized governing equations at each grid point and at time step \(n+1\) are, \[\widehat{\frac{\partial\xi_{k}}{\partial x_{i}}}\widehat{\frac{\delta\widetilde{u}_{i}^{n+1}}{\delta\xi_{k}}}=0, \tag{71}\] \[\Big{(}\frac{\rho_{f}}{\Delta t}-\nabla^{2}_{\text{SphInCart}}\Big{)}\widetilde{u}_{i}^{n+1}+\overline{\frac{\partial\xi_{k}}{\partial x_{i}}}\,\overline{\frac{\delta\widetilde{p}^{n+1}}{\delta\xi_{k}}}=g_{i}^{\text{disc.}}, \tag{72}\] \[\frac{\Psi_{ij}^{m+1}}{\Delta t}+\frac{1}{2}\hat{u}_{l}^{n+1}\widehat{\frac{\partial\xi_{k}}{\partial x_{l}}}\frac{\delta(\Psi_{ij}^{m+1}-\Psi_{ij,\infty}^{n+1})}{\delta\xi_{k}}-\frac{\text{SR}(\boldsymbol{\Psi}^{m+1},\,\hat{\mathbf{u}}^{n+1})_{ij}^{\text{disc.}}}{2}=\frac{\text{SR}(\boldsymbol{\Psi}^{m},\hat{\mathbf{u}}^{n})_{ij}^{\text{disc.}}}{2}-\frac{1}{2}\hat{u}_{l}^{n}\widehat{\frac{\partial\xi_{k}}{\partial x_{l}}}\frac{\delta\Psi_{ij}^{m}}{\delta\xi_{k}}, \tag{73}\] \[(\gamma^{n+1})^{2}+\gamma^{n+1}\Delta t\Big{(}-\frac{\gamma^{n}}{\Delta t}+\hat{u}_{j}^{n}\widehat{\frac{\partial\xi_{k}}{\partial x_{j}}}\frac{\delta\gamma^{n}}{\delta\xi_{k}}+\frac{2\gamma^{n}}{L^{2}}\widehat{\frac{\partial\xi_{k}}{\partial x_{i}}}\frac{\delta\hat{u}_{j}^{n}}{\delta\xi_{k}}+\frac{\gamma^{n}-0.5}{De\gamma^{n}}+\frac{1}{DeL^{2}}\alpha_{1}^{n}\Big{)}-\frac{\Delta t}{2De}\alpha_{2}=0. \tag{74}\] \(\mathbf{u}^{n+1}\) in the polymer equations (73) and (74) is the total velocity, i.e. the sum of the velocity deviation \(\widetilde{\mathbf{u}}^{n+1}\) and the undisturbed velocity \(\mathbf{u}_{\infty}^{n+1}\), evaluated on the pressure grid. In these equations, the hat, \(\hat{\ }\), represents the case when the velocity or its spatial gradients are evaluated on the pressure grid. Similarly, the overbar, \(\bar{\ }\), represents the case when a pressure grid variable or its gradient is evaluated on the velocity grid. Various terms appearing in these equations are now defined. \(\alpha_{1}^{n}=3/(L^{2}-3)\) for the FENE-P and \(1.5/\gamma^{n}\) for the FENE-CR model, and \(\alpha_{2}=1\) for FENE-P and \(1-3/L^{2}\) for FENE-CR. These variables are irrelevant for the Oldroyd-B or Giesekus models, for which the \(\gamma\) equation is unnecessary. \[\nabla_{\mathbf{x}}\boldsymbol{\xi}=\begin{bmatrix}\frac{\partial\xi_{1}}{\partial x_{1}}&\frac{\partial\xi_{2}}{\partial x_{1}}&\frac{\partial\xi_{3}}{\partial x_{1}}\\ \frac{\partial\xi_{1}}{\partial x_{2}}&\frac{\partial\xi_{2}}{\partial x_{2}}&\frac{\partial\xi_{3}}{\partial x_{2}}\\ \frac{\partial\xi_{1}}{\partial x_{3}}&\frac{\partial\xi_{2}}{\partial x_{3}}&\frac{\partial\xi_{3}}{\partial x_{3}}\end{bmatrix}=\begin{bmatrix}\frac{f\cosh(\xi_{1})\sin(\xi_{2})\sin(\xi_{3})}{h_{1}^{2}}&\frac{f\sinh(\xi_{1})\cos(\xi_{2})\sin(\xi_{3})}{h_{2}^{2}}&\frac{\cos(\xi_{3})}{h_{3}}\\ \frac{f\cosh(\xi_{1})\sin(\xi_{2})\cos(\xi_{3})}{h_{1}^{2}}&\frac{f\sinh(\xi_{1})\cos(\xi_{2})\cos(\xi_{3})}{h_{2}^{2}}&\frac{-\sin(\xi_{3})}{h_{3}}\\ \frac{f\sinh(\xi_{1})\cos(\xi_{2})}{h_{1}^{2}}&\frac{-f\cosh(\xi_{1})\sin(\xi_{2})}{h_{2}^{2}}&0\end{bmatrix}, \tag{75}\] are spatial functions of the coordinate system that remain constant with time.
\(\overline{\frac{\partial\xi_{i}}{\partial x_{j}}}\) and \(\widehat{\frac{\partial\xi_{i}}{\partial x_{j}}}\) are temporally constant analytical functions evaluated at the velocity and pressure grids respectively. \[\nabla^{2}_{\text{SphInCart}}|^{\text{disc.}}\widetilde{u}_{i}^{n+1}=\frac{1}{h_{1}^{2}}\Big{(}\frac{1}{\tanh(\xi_{1})}\frac{\delta\widetilde{u}_{i}^{n+1}}{\delta\xi_{1}}+\frac{\delta^{2}\widetilde{u}_{i}^{n+1}}{\delta\xi_{1}^{2}}\Big{)}+\frac{1}{h_{2}^{2}}\Big{(}\frac{1}{\tan(\xi_{2})}\frac{\delta\widetilde{u}_{i}^{n+1}}{\delta\xi_{2}}+\frac{\delta^{2}\widetilde{u}_{i}^{n+1}}{\delta\xi_{2}^{2}}\Big{)}+\frac{1}{h_{3}^{2}}\frac{\delta^{2}\widetilde{u}_{i}^{n+1}}{\delta\xi_{3}^{2}}, \tag{76}\] is the discrete Laplacian operator acting on the velocity grid (\(\xi_{1}\), \(\xi_{2}\), \(h_{1}\), \(h_{2}\) and \(h_{3}\) are evaluated at each grid point of the velocity grid). \[g_{i}^{\text{disc.}}=\rho_{f}\Big{(}\frac{\widetilde{u}_{i}^{n}}{\Delta t}-\frac{3}{2}\text{ADV}(\widetilde{\mathbf{u}}^{n},\mathbf{u}_{\infty}^{n},\boldsymbol{\omega}_{p}^{n};\mathbf{r})_{i}+\frac{1}{2}\text{ADV}(\widetilde{\mathbf{u}}^{n-1},\mathbf{u}_{\infty}^{n-1},\boldsymbol{\omega}_{p}^{n-1};\mathbf{r})_{i}\Big{)}+\overline{\frac{\partial\xi_{k}}{\partial x_{j}}}\,\overline{\frac{\delta(\Pi_{ji}^{n+1}-\Pi_{ji,\infty}^{n+1})}{\delta\xi_{k}}}, \tag{77}\] \[\text{ADV}(\widetilde{\mathbf{u}}^{n},\mathbf{u}_{\infty}^{n},\boldsymbol{\omega}_{p}^{n};\mathbf{r})_{i}=(\widetilde{u}_{j}^{n}+u_{\infty,j}^{n})\overline{\frac{\partial\xi_{k}}{\partial x_{j}}}\frac{\delta\widetilde{u}_{i}^{n}}{\delta\xi_{k}}+\widetilde{u}_{j}^{n}(\nabla\mathbf{u}_{\infty})_{ji}+2\epsilon_{ijk}\omega_{p,j}^{n}\widetilde{u}_{k}^{n}. \tag{78}\] \(\nabla\mathbf{u}_{\infty}\) is a spatially constant tensor that is analytically determined from the imposed flow and particle velocities along with the particle orientation (equation (45)). \(\text{SR}(\boldsymbol{\Psi}^{m},\hat{\mathbf{u}}^{n})_{ij}^{\text{disc.}}\) is defined in equation (26) with \(\boldsymbol{\Psi}=\boldsymbol{\Psi}^{m}\) and \(\mathbf{u}=\hat{\mathbf{u}}^{n}\), where the velocity dependence appears through \(\mathbf{L}=\nabla\mathbf{u}^{n}\) defined as, \[L_{ij}=L_{ij}^{n}=\widehat{\frac{\partial\xi_{k}}{\partial x_{i}}}\,\widehat{\frac{\delta u_{j}^{n}}{\delta\xi_{k}}}. \tag{79}\]

### Finite Difference Schemes for spatial derivatives and interpolations

Values of the various spatial derivatives and interpolated quantities appearing in the discretized equations (71)-(79) are approximated by higher-order finite difference or interpolation schemes.
Lagrange polynomials provide a convenient and flexible way to generate finite difference weights, irrespective of the grid spacing and order of accuracy [62]. Each variable, \(\widetilde{u}_{i}\), \(\widetilde{p}\), \(\Psi_{ij}\), \(\Pi_{ij}\) and \(\gamma\), is locally represented by an \((M-1)^{\text{th}}\)-order Lagrange polynomial by fitting the variable's data on the \(M\) neighboring grid points on which it is stored. Consider any of these variables to be represented by \(\phi(\xi_{1},\xi_{2},\xi_{3})\), which is stored at locations \([\xi_{1,i}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,k}^{\text{dist.}}]\), \(i,j,k\in\mathbb{N}\) (which represent either the pressure or the velocity grid defined in equation (67)). Interpolation along coordinate \(\xi_{i}\), \(i=[1,3]\), to a location \(\widehat{\xi}_{i}^{\text{dist.}}\), in Lagrange basis polynomials is, \[\begin{split}\phi(\widehat{\xi}_{1}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,k}^{\text{dist.}})&=\sum_{q=n_{1}}^{M-n_{1}}\ell_{q}^{(0)}(\widehat{\xi}_{1}^{\text{dist.}},\xi_{1,(\cdot)}^{\text{dist.}})\phi(\xi_{1,q}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,k}^{\text{dist.}}),\\ \phi(\xi_{1,i}^{\text{dist.}},\widehat{\xi}_{2}^{\text{dist.}},\xi_{3,k}^{\text{dist.}})&=\sum_{q=n_{1}}^{M-n_{1}}\ell_{q}^{(0)}(\widehat{\xi}_{2}^{\text{dist.}},\xi_{2,(\cdot)}^{\text{dist.}})\phi(\xi_{1,i}^{\text{dist.}},\xi_{2,q}^{\text{dist.}},\xi_{3,k}^{\text{dist.}}),\\ \phi(\xi_{1,i}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\widehat{\xi}_{3}^{\text{dist.}})&=\sum_{q=n_{1}}^{M-n_{1}}\ell_{q}^{(0)}(\widehat{\xi}_{3}^{\text{dist.}},\xi_{3,(\cdot)}^{\text{dist.}})\phi(\xi_{1,i}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,q}^{\text{dist.}}).\end{split} \tag{80}\] The interpolation weights and the discrete first and second derivatives along \(\xi_{1}\) are, \[\ell_{q}^{(0)}(\widehat{\xi}_{i}^{\text{dist.}},\xi_{i,(\cdot)}^{\text{dist.}})=\prod_{j=n_{1},j\neq q}^{M+1-n_{1}}\frac{\widehat{\xi}_{i}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}}{\xi_{i,q}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}},\quad\frac{\delta\phi}{\delta\xi_{1}}=\sum_{q=n_{1}}^{M-n_{1}}\ell_{q}^{(1)}(\widehat{\xi}_{1}^{\text{dist.}},\xi_{1,(\cdot)}^{\text{dist.}})\phi(\xi_{1,q}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,k}^{\text{dist.}}),\quad\frac{\delta^{2}\phi}{\delta\xi_{1}^{2}}=\sum_{q=n_{1}}^{M-n_{1}}\ell_{q}^{(2)}(\widehat{\xi}_{1}^{\text{dist.}},\xi_{1,(\cdot)}^{\text{dist.}})\phi(\xi_{1,q}^{\text{dist.}},\xi_{2,j}^{\text{dist.}},\xi_{3,k}^{\text{dist.}}), \tag{81}\] where, \[\begin{split}\ell_{q}^{(1)}(\widehat{\xi}_{i}^{\text{dist.}},\xi_{i,(\cdot)}^{\text{dist.}})&=\sum_{p=n_{1},p\neq q}^{M+1-n_{1}}\frac{1}{\xi_{i,q}^{\text{dist.}}-\xi_{i,p}^{\text{dist.}}}\prod_{j=n_{1},j\neq(q,p)}^{M+1-n_{1}}\frac{\widehat{\xi}_{i}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}}{\xi_{i,q}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}},\\ \ell_{q}^{(2)}(\widehat{\xi}_{i}^{\text{dist.}},\xi_{i,(\cdot)}^{\text{dist.}})&=\sum_{r=n_{1},r\neq q}^{M+1-n_{1}}\frac{1}{\xi_{i,q}^{\text{dist.}}-\xi_{i,r}^{\text{dist.}}}\sum_{p=n_{1},p\neq(q,r)}^{M+1-n_{1}}\frac{1}{\xi_{i,q}^{\text{dist.}}-\xi_{i,p}^{\text{dist.}}}\prod_{j=n_{1},j\neq(q,p,r)}^{M+1-n_{1}}\frac{\widehat{\xi}_{i}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}}{\xi_{i,q}^{\text{dist.}}-\xi_{i,j}^{\text{dist.}}}.\end{split} \tag{82}\] The derivatives in \(\xi_{2}\) and \(\xi_{3}\) are similarly defined. In the first and second order derivative operators of equation (82), \(\widehat{\xi}_{i}^{\text{dist.}}\) may or may not be one of the \(\xi_{i,q}^{\text{dist.}}\) locations. In the latter case it is somewhere within these \(\xi_{i,q}^{\text{dist.}}\) locations in the continuous space. The interpolation and discrete differentiation in the above equations are expressed as discrete operators acting on the variable \(\phi\).
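A direct transcription of the interpolation and first-derivative weights of equations (80)-(82) is given below. This is an illustrative sketch (quadratic-cost evaluation, adequate for pre-computing the operators once), not the production implementation:

```python
import numpy as np

def lagrange_weight(x, nodes, q):
    """Interpolation weight l_q^(0)(x) of equation (81) on stencil 'nodes'."""
    w = 1.0
    for j, xj in enumerate(nodes):
        if j != q:
            w *= (x - xj) / (nodes[q] - xj)
    return w

def lagrange_derivative_weight(x, nodes, q):
    """First-derivative weight l_q^(1)(x) of equation (82)."""
    w = 0.0
    for p, xp in enumerate(nodes):
        if p == q:
            continue
        term = 1.0 / (nodes[q] - xp)
        for j, xj in enumerate(nodes):
            if j not in (q, p):
                term *= (x - xj) / (nodes[q] - xj)
        w += term
    return w

# Interpolate and differentiate phi(x) = x**3 on a 4-point stencil at x = 0.3;
# both are exact for a cubic: 0.027 and 0.27.
nodes = np.array([0.0, 0.2, 0.4, 0.6])
phi = nodes**3
interp = sum(lagrange_weight(0.3, nodes, q) * phi[q] for q in range(4))
deriv = sum(lagrange_derivative_weight(0.3, nodes, q) * phi[q] for q in range(4))
print(interp, deriv)
```

Since the weights depend only on the grid, they are assembled into the discrete operators once, before the equations are evolved in time.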
**Terms including variables on the other grid:** \(\widehat{\frac{\delta u_{i}^{n}}{\delta\xi_{j}}}\) is obtained by multiplying the first-order discrete differential operator, \(\ell_{q}^{(1)}(\xi_{j}^{\text{dist.,pres}},\xi_{j,(\cdot)}^{\text{dist.,vel}})\), and two interpolation operators, \(\ell_{q}^{(0)}(\xi_{f}^{\text{dist.,pres}},\xi_{f,(\cdot)}^{\text{dist.,vel}})\) for \(\{f=1,2,3,f\neq j\}\), with \(u_{i}^{n}\) in any order. Each operator in this case acts on the velocity grid and produces an output on the pressure grid in a particular dimension. Thus, \(\widehat{\frac{\delta u_{i}^{n}}{\delta\xi_{j}}}\) is the derivative of the velocity component \(u_{i}\) at time step \(n\) with respect to the \(\xi_{j}\) coordinate, evaluated on the pressure grid. Similarly, \(\overline{\frac{\delta\widetilde{p}^{n+1}}{\delta\xi_{j}}}\) is obtained by multiplication of \(\ell_{q}^{(1)}(\xi_{j}^{\text{dist.,vel}},\xi_{j,(\cdot)}^{\text{dist.,pres}})\) and \(\ell_{q}^{(0)}(\xi_{f}^{\text{dist.,vel}},\xi_{f,(\cdot)}^{\text{dist.,pres}})\) for \(\{f=1,2,3,f\neq j\}\) with \(\widetilde{p}^{n+1}\) in any order, to give the pressure derivative in the \(\xi_{j}\) direction evaluated on the velocity grid. \(\overline{\frac{\delta(\Pi_{ji}^{n+1}-\Pi_{ji,\infty}^{n+1})}{\delta\xi_{k}}}\) is obtained in a way similar to the pressure derivative, after the polymer stresses \(\Pi_{ji}^{n+1}\) and \(\Pi_{ji,\infty}^{n+1}\) are obtained from the matrix logarithms of the polymer configurations, \(\Psi_{ji}^{n+1}\) and \(\Psi_{ji,\infty}^{n+1}\) respectively, using the method described just before section 3.1. \(\hat{u}_{i}^{n}\), the velocity component in the \(i^{th}\) coordinate evaluated on the pressure grid, is obtained by three successive interpolations of the velocity deviation from the undisturbed value, in each direction from the velocity to the pressure grid, and then adding the interpolated value to the undisturbed velocity evaluated at the pressure grid. Therefore, the interpolation errors are only incurred on the velocity deviation from the far field. Since centered schemes introduce no numerical diffusion, the stencils in these cases consist of an even number of grid points centered (equal on each side) around the output location, \(\xi_{j}^{\text{dist.,vel}}\) or \(\xi_{f}^{\text{dist.,pres}}\), i.e. \(M\) is an even integer. Near the boundaries of the computational domain, a centered stencil is not possible. In this case we still use an \(M\)-point stencil, but with more points on one side of the output location so as to keep all the stencil points within the computational domain and avoid using any ghost nodes. This issue arises only in the \(\xi_{1}\) direction. The \(\xi_{3}\) coordinate is periodic about \(\xi_{3}=0\) or \(2\pi\) and we utilize this periodicity. The \(\xi_{2}\) coordinate also has no external boundaries in the computational domain, and we discuss the special treatment required near the coordinate-system-dependent internal boundaries at \(\xi_{2}=0\) and \(\pi\) in section 4.3.1. **Non-convective terms including variables on the same grid:** The terms \(\frac{\delta\widetilde{u}_{i}^{n+1}}{\delta\xi_{j}}\) and \(\frac{\delta^{2}\widetilde{u}_{i}^{n+1}}{\delta\xi_{j}^{2}}\) appearing in the Laplacian operator, \(\nabla^{2}_{\text{SphInCart}}\widetilde{u}_{i}^{n+1}\), require a single multiplication of the discrete derivative operator (first or second order) with the respective variable. In this case, the input and output grid are the same, i.e., the velocity grid. For these terms we use a stencil of odd length, as the output location is one of the stencil points and there is an equal number of points on either side of it. We deal with the \(\xi_{1}\) boundaries in a way similar to that described above for terms including variables on the other grid.
**Convective terms (HOUC schemes of Nourgaliev et al. (2007) [63]):** The spatial gradients appearing in the convective terms must be treated carefully to prevent numerical instability. These include \(\frac{\delta(\Psi_{ij}^{m+1}-\Psi_{ij,\infty}^{n+1})}{\delta\xi_{k}}\) and \(\frac{\delta\gamma^{n}}{\delta\xi_{k}}\) in equations (73) and (74), and \(\frac{\delta\widetilde{u}_{i}^{n}}{\delta\xi_{k}}\) within \(\text{ADV}(\widetilde{\mathbf{u}}^{n},\mathbf{u}_{\infty}^{n},\boldsymbol{\omega}_{p}^{n};\mathbf{r})_{i}\) of the discrete momentum equation (equations (72) and (77)). Using centered difference schemes such as the ones noted above for non-convective terms leads to a numerical instability. A slight upwinding, where one more point upstream is used relative to the downstream direction, allows one to maintain high spatial accuracy while obtaining a numerically smooth and stable solution [24; 63]. This is termed a higher-order upwind central (HOUC) scheme. It is straightforward to implement and introduces much less diffusion error than the WENO schemes used to discretize the convective term [63]. Near the boundaries of the computational domain (\(\mathbf{r}_{p}\) and \(\mathbf{r}_{\infty}\)) we reduce the length of the stencil while maintaining upwinding. The direction of the velocity term multiplying the convective spatial gradient, \(u_{j}^{n}\frac{\partial\xi_{k}}{\partial x_{j}}\), determines the shape of the stencil used at time step \(n\). This term is equivalent to the velocity component in the spheroidal basis for \(k=1\) and \(2\), i.e. the velocity along the \(\xi_{1}\) and \(\xi_{2}\) directions respectively. For \(k=3\) this term differs from the velocity in the azimuthal, \(\xi_{3}\), direction by the factor \(h_{3}\) of equation (70), and we instead use the direction of the velocity in \(\xi_{3}\) to determine the stencil shape. Therefore, similar to the original HOUC proposition of [63], we choose the stencil shape based on the direction of convection in each direction. HOUC was originally proposed for interface tracking in multiphase flows [63], but we have found it to improve the numerical stability of the flow of a viscoelastic fluid around a prolate spheroid as well.
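The stencil-selection rule just described (one extra upwind point, with shortening near the \(\xi_1\) boundaries) can be sketched as follows; this is our illustrative implementation, not code from the paper:

```python
def houc_stencil(i, vel_sign, n_points, n_total):
    """Grid-point indices for a HOUC convective derivative at point i.
    For an n_points stencil, one more neighbour is taken on the upwind
    side than on the downwind side; e.g. n_points = 6 with positive
    velocity gives i-3 .. i+2.  The stencil is shortened near the domain
    boundaries while keeping the upwind bias."""
    down = (n_points - 1) // 2        # points on the downwind side
    up = n_points - 1 - down          # one extra point on the upwind side
    if vel_sign >= 0:                 # convection towards larger index
        lo, hi = i - up, i + down
    else:                             # convection towards smaller index
        lo, hi = i - down, i + up
    lo, hi = max(lo, 0), min(hi, n_total - 1)
    return list(range(lo, hi + 1))
```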
#### 4.3.1 Internal boundaries in the \(\xi_{2}\) direction

The only physical boundaries of the computational domain are the two \(\xi_{1}\) surfaces at \(\xi_{1}^{\text{surface}}\) and \(\xi_{1}^{\infty}\). Due to the choice of the coordinate system, \(\xi_{2}\) is bounded by \(0\) and \(\pi\) and \(\xi_{3}\) by \(0\) and \(2\pi\), which requires one to consider appropriate boundary conditions at these computational boundaries. As mentioned above, \(\xi_{3}\) is a periodic coordinate, and the points \(\xi_{3}=0\) and \(2\pi\) represent the same physical location. Therefore the periodic boundary condition in \(\xi_{3}\) allows the interpolation and discrete derivative operators in the \(\xi_{3}\) direction to be implemented near the boundaries in a way similar to the internal \(\xi_{3}\) points. The boundary treatment of the \(\xi_{1}\) operators is already described above. The \(\xi_{2}\) coordinate has non-periodic internal boundaries. Such a scenario appears at \(r=0\) in the cylindrical coordinate system and has previously been treated in the context of finite difference schemes by several researchers [22; 23; 24; 64]. In this scenario the finite difference stencil near \(r=0\) requires information at an unphysical location \(r<0\). In prolate spheroidal coordinates, the information is required in the unphysical regions \(\xi_{2}<0\) and \(\xi_{2}>\pi\). Similar to these previous works, a variable \(\phi\) (representing \(p\), \(\gamma\) or a component of \(\mathbf{u}\) or \(\boldsymbol{\Psi}\)) at these locations is defined as, \[\begin{split}\phi(\xi_{1},\xi_{2},\xi_{3})&=\phi(\xi_{1},-\xi_{2},\xi_{3}+\pi),\quad\xi_{2}\leq 0\\ \phi(\xi_{1},\xi_{2},\xi_{3})&=\phi(\xi_{1},2\pi-\xi_{2},\xi_{3}+\pi),\quad\xi_{2}\geq\pi.\end{split} \tag{83}\] In previous works, the coordinate system and the choice of basis for the vectors and tensors are the same, and care must be taken with the sign of certain components of vectors or tensors when using these transformations around the internal points/axes [24]. However, in a Cartesian basis the transformations of equation (83) are valid for any component of \(\mathbf{u}\) or \(\boldsymbol{\Psi}\) (as well as \(p\) or \(\gamma\)). The issue of non-physical internal boundaries also appears in the spherical coordinate system near the origin at \(r=0\) and at the polar axis (\(\theta=0\) and \(\pi\)). This has been treated in a recent work of Santelli et al. [21] by writing momentum equations for a transformed vector \([u_{\phi}\ u_{r}r^{2}\ u_{\theta}\sin(\theta)]^{T}\) obtained from the original velocity vector \([u_{\phi}\ u_{r}\ u_{\theta}]^{T}\); zero boundary conditions are imposed on the transformed vector at the internal boundaries.
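For a uniform grid in \(\xi_2\) and \(\xi_3\), the mapping of equation (83) amounts to simple index arithmetic. A sketch, assuming \(N_3\) points spanning \(2\pi\) so that a shift by \(\pi\) is \(N_3/2\) points and the grid includes the axis points:

```python
def ghost_value(phi, i2, i3):
    """Value of a scalar (or Cartesian vector/tensor component) phi at an
    unphysical xi2 index, using equation (83): reflect in xi2 and shift
    xi3 by pi.  phi is indexed as phi[xi2 index, xi3 index]."""
    N2, N3 = phi.shape
    if i2 < 0:
        i2 = -i2                      # xi2 -> -xi2
    elif i2 > N2 - 1:
        i2 = 2 * (N2 - 1) - i2        # xi2 -> 2*pi - xi2
    i3 = (i3 + N3 // 2) % N3          # xi3 -> xi3 + pi (periodic)
    return phi[i2, i3]
```

Because the vector and tensor components are Cartesian, no sign changes are needed in this mapping, in contrast to the curvilinear-basis treatments of [24].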
#### 4.3.2 Singular axis and boundary conditions

Another non-physical issue that arises due to the choice of the coordinate system is the appearance of singular terms. \(1/h_{3}\) (defined in equation (70)), and \(\frac{\partial\xi_{3}}{\partial x_{1}}\) and \(\frac{\partial\xi_{3}}{\partial x_{2}}\) (defined in equation (75)), have \(\sin(\xi_{2})\) in the denominator. These terms appear as a factor in front of several terms in the governing equations. As expressed so far, these terms lead to a coordinate-system-generated singularity at the axis on which \(\xi_{2}=0\) or \(\pi\). However, these factors appear along with a spatial gradient in the \(\xi_{3}\) direction. The singular part of these terms can be expressed as \[\text{Sing}=\frac{1}{\sin(\xi_{2})}\frac{\partial\phi}{\partial\xi_{3}}. \tag{84}\] On the singular axes \(\phi\), representing either a scalar \(p\) or \(\gamma\) or a Cartesian component of \(\mathbf{u}\) or \(\boldsymbol{\Psi}\), is unique. Therefore, \[\frac{\partial\phi}{\partial\xi_{3}}\Big{|}_{\xi_{2}=0}=\frac{\partial\phi}{\partial\xi_{3}}\Big{|}_{\xi_{2}=\pi}=0. \tag{85}\] It can be checked from the governing equations that wherever terms of the type in equation (84) appear there is no other \(\xi_{2}\) dependence in the multiplicative factors. Therefore, using L'Hôpital's theorem, \[\lim_{\xi_{2}\to(0,\pi)}\text{Sing}=\frac{1}{\cos(\xi_{2})}\frac{\partial^{2}\phi}{\partial\xi_{2}\partial\xi_{3}}, \tag{86}\] and the non-physical coordinate-system-generated singularity is removed at the expense of introducing an additional mixed derivative in the \(\xi_{2}\) and \(\xi_{3}\) directions. This treatment of coordinate-system-generated axis singularities is motivated by the treatment of the \(1/r\) singularity at \(r=0\) in the cylindrical coordinate system by [23] and [24]. The axis singularity appears on the pressure grid, where the points \(\xi_{2,1}^{\text{dist.,pres}}=0\) and \(\xi_{2,N_{2}}^{\text{dist.,pres}}=\pi\) are defined. Therefore, the singularity axis treatment is implemented in the discretization of the mass conservation and polymer constitutive equations. Such a singularity does not appear on the velocity grid, as it is staggered relative to the pressure grid, and this treatment is not required to discretize the momentum equation. The previous methods of [23] and [24] that used L'Hôpital's theorem to account for the coordinate-system-generated singularity used a curvilinear (cylindrical) basis for the vectors. These vector components are multi-valued at the singularity (\(r=0\)) and, as a result, the axis values of these components were obtained using the relation between curvilinear and Cartesian components on the axis. The Cartesian components were averaged/interpolated using the points off the singular axis. This leads to a difference between the analytical equation obtained after L'Hôpital's theorem and that obtained using the Taylor series expansion of the actual discretized equations (compare equations (37) and (51) of [24]). As mentioned in those studies, strict energy conservation cannot be obtained. Here, we use Cartesian components for the vectors and tensors and do not encounter this problem. This completes the description of the numerical method. In the next section, we implement this method on several flows of inertia-less Newtonian and viscoelastic fluids, along with examples of Newtonian fluids with inertia.

## 5 Numerical Tests

In this section, we use the numerical method described above to compute the flow field around particles such as a sphere and a prolate spheroid in a Newtonian fluid with and without (Stokes flow) inertia and in an inertia-less viscoelastic fluid. Several fluid-particle interaction problems are considered for each class by changing the imposed velocity boundary conditions and the torque constraint on the particle. These include fixed and freely rotating spheres and spheroids (including high aspect ratios) in uniform flow, simple shear flow, extensional flow, and combinations of the first two. These studies show forces, torques and stresslets, fluid streamlines, and the orientational trajectories of freely suspended spheroids. As mentioned in section 4.3, using Lagrange polynomials to represent finite difference spatial discretization and interpolation allows us great flexibility in choosing the order of the spatial discretization schemes. In all the examples presented, we have used a four-point stencil in \(\xi_{1}\), a six-point stencil in \(\xi_{2}\) and an eight-point stencil in the \(\xi_{3}\) direction. The higher-order discretization in \(\xi_{2}\) and \(\xi_{3}\) is useful because for most cases (excluding section 5.3.2) we use fewer mesh points in these directions as compared to \(\xi_{1}\). After initial testing of different orders of accuracy (not shown), we have found these orders of accuracy to be adequate for validating the cases we present while obtaining numerically stable simulations. In all the examples presented, \(r_{\text{minor}}=1\).
### Stokes flow: Motion of particles in inertia-less Newtonian fluid

One of our primary objectives in developing the numerical method described in the previous sections is to study flows with zero to moderate inertia. The interest in the former is in the presence of viscoelasticity. As discussed in sections 1, 3.2 and 3.5.2, zero inertia studies are important in isolating the effect of viscoelastic fluids, and our numerical method is suitable for studying flow with zero particle and fluid inertia. In this section, we compare our numerical method against the analytically available solutions of Stokes flow (\(\rho_{f}=\rho_{p}=0\)) of a Newtonian fluid (\(c=0\)) around prolate spheroids.

#### 5.1.1 Jeffery Orbits: A spheroid rotating in an inertia-less Newtonian fluid

The orientation trajectory of a freely rotating prolate spheroid in a simple shear flow of an inertia-less Newtonian fluid in the absence of particle inertia is available from the work of Jeffery (1922) [65]. At a given time, \(t\), the polar angle, \(\theta_{\text{vort}}\), with the vorticity axis of the imposed shear flow, and the azimuthal angle, \(\phi_{\text{grad}}\), with the gradient direction in the flow-gradient plane, subtended by a prolate spheroid of aspect ratio \(\kappa\) are given by \[\tan(\phi_{\text{grad}})=\kappa\tan\Big{(}\frac{\dot{\gamma}t}{\kappa+\kappa^{-1}}\Big{)},\quad\tan(\theta_{\text{vort}})=\frac{C\kappa}{\sqrt{\kappa^{2}\cos^{2}(\phi_{\text{grad}})+\sin^{2}(\phi_{\text{grad}})}}, \tag{87}\] where \(\dot{\gamma}\) is the shear rate of the imposed flow and \(C\) is an orbital constant that depends on the initial orientation. Hence, a prolate spheroid undergoes three-dimensional periodic motion in a shear flow of a Newtonian fluid, and the orbit's shape is determined by the initial orientation. In figures 3(a) and 3(b), we show these Jeffery orbits and the evolution of the azimuthal angle, \(\phi_{grad}\), respectively, computed from our code along with the analytical expressions from equation (87) for \(\kappa=20\) at four starting orientations. In the numerical solution, the domain size used is \(10\kappa\) and a uniform mesh is used with \(N_{1}=120\), \(N_{2}=N_{3}=71\) (these are defined in equation (67)). Terms multiplying the fluid and particle densities are ignored. At each time step, the resistivity formulation method described in section 3.5.2 is used to determine the particle's angular velocity, which leads to zero net torque on the particle. The necessary Stokes flows (the motion-induced part of the momentum equation) required in this formulation (as mentioned in section 3.5.2) are pre-calculated numerically using the Schur complement approach of section 3.2 before time evolving the particle's orientation. The numerical and the analytical curves shown in figure 3 are indistinguishable.
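For reference, the Jeffery-orbit angles of equation (87), against which the numerical trajectories are compared, can be evaluated as in the following sketch (branch unwrapping of the tangent is ignored for brevity):

```python
import numpy as np

def jeffery_angles(t, kappa, gamma_dot, C):
    """Jeffery-orbit angles of equation (87): phi_grad is measured from the
    gradient direction in the flow-gradient plane, theta_vort from the
    vorticity axis; C is the orbit constant set by the initial orientation."""
    phi = np.arctan(kappa * np.tan(gamma_dot * t / (kappa + 1.0 / kappa)))
    theta = np.arctan(C * kappa / np.sqrt(kappa**2 * np.cos(phi)**2
                                          + np.sin(phi)**2))
    return phi, theta
```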
#### 5.1.2 Forces on a fixed spheroid in a uniform flow

The drag, \(C_{D}\), and the lift coefficient, \(C_{L}\), of a prolate spheroid with aspect ratio, \(\kappa\), fixed in a uniform flow are given by [66, 67, 68], \[C_{D}=\frac{\text{Drag}}{\mu U_{0}r_{\text{minor}}}=16\pi\kappa((K_{zz}-K_{xx})\cos^{2}(\theta)+K_{xx}),\quad C_{L}=\frac{\text{Lift}}{\mu U_{0}r_{\text{minor}}}=16\pi\kappa(K_{xx}-K_{zz})\cos(\theta)\sin(\theta), \tag{88}\] where drag and lift are the hydrodynamic forces acting parallel and perpendicular to the imposed flow in the plane of the imposed flow and the particle center line (major axis). \(\mu\) and \(U_{0}\) are the fluid viscosity and imposed speed, and \(\theta\) is the angle of the imposed flow relative to the major axis of the particle. \[\begin{gathered}K_{xx}=\frac{1}{\xi+\alpha},\quad K_{zz}=\frac{1}{\xi+\kappa^{2}\gamma},\\ \alpha=\frac{\kappa^{2}}{\kappa^{2}-1}+\frac{\kappa}{2(\kappa^{2}-1)^{1.5}}\eta,\quad\gamma=-\frac{2}{\kappa^{2}-1}-\frac{\kappa}{(\kappa^{2}-1)^{1.5}}\eta,\quad\eta=\log\Big{(}\frac{\kappa-\sqrt{\kappa^{2}-1}}{\kappa+\sqrt{\kappa^{2}-1}}\Big{)}.\end{gathered} \tag{89}\] We show the lift and drag coefficients for the flows past spheroids of three different aspect ratios, \(\kappa=6\), \(10\) and \(20\), in figure 4 for seven different angles of attack between the imposed uniform flow and the particle's major axis. Good quantitative agreement between our numerical results and the analytical expressions from equations (88) and (89) is obtained. The domain size used is \(100\kappa\). A large computational domain is required for low inertia flows past a solid particle because the velocity disturbance created by the particle decays as \(1/r\) at large distances (\(r\)) from the particle [59; 45].

Figure 3: Jeffery Orbits: Prolate spheroid of aspect ratio \(\kappa=20\) rotating in simple shear flow of a Newtonian fluid. Four different starting orientations are plotted in (a), and the orientation trajectory follows one of the degenerate periodic orbits depending upon the starting orientation. (b) shares the same legend as (a) and shows the time evolution of the azimuthal angle \(\phi_{grad}\) for the orbits of (a) (curves are indistinguishable as \(\phi_{grad}\) is independent of the orbit constant). We obtain good agreement with the analytical prediction of Jeffery [65] for all orientations.

This issue has marred previous computations of Andersson et al. [69], who mentioned the requirement of a large computational domain as the Reynolds number in their study of a \(\kappa=6\) particle was reduced to about 0.1. They used a rectangular computational domain of size \(64\kappa\times 64\kappa\times 42.7\kappa\). Depending upon the angle of attack, they reported a deviation of 0.6 to 15% from the analytical \(C_{D}\) and 10.3 to 24.9% from the analytical \(C_{L}\) values for zero inertia. In contrast, using a very large computational domain of size \(100\kappa\) with \(N_{1}=250\), \(N_{2}=80\) and \(N_{3}=51\), we obtain highly accurate results for \(\kappa=6\), 10 and 20, shown in figure 4. The largest absolute error is at \(\theta=45^{\circ}\) for \(\kappa=20\); it is only 1.09% in \(C_{L}\) and 0.23% in \(C_{D}\). An accurate simulation of these cases with a reasonable number of mesh points is possible because, while the required size of the computational domain is large, the grid can be relatively sparse in the far field and the prolate spheroidal grid is naturally more clustered near the particle surface. Therefore, the prolate spheroidal coordinates employed in our simulations are an appropriate choice for such studies, particularly when we test theories designed to be a perturbation from Stokes flow. This example will be shown in section 5.2.1.
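The analytical benchmark of equations (88) and (89) is straightforward to evaluate; a sketch that takes the resistance functions \(K_{xx}\) and \(K_{zz}\) of equation (89) as inputs:

```python
import numpy as np

def lift_drag_coefficients(K_xx, K_zz, theta, kappa):
    """Drag and lift coefficients of equation (88) for a prolate spheroid
    fixed in a uniform flow at angle of attack theta (radians)."""
    C_D = 16.0 * np.pi * kappa * ((K_zz - K_xx) * np.cos(theta)**2 + K_xx)
    C_L = 16.0 * np.pi * kappa * (K_xx - K_zz) * np.cos(theta) * np.sin(theta)
    return C_D, C_L
```

Note that the lift vanishes at \(\theta=0\) and \(90^{\circ}\), as required by symmetry, while the drag interpolates between the axial and transverse resistances.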
### Particles in Newtonian fluid with finite inertia

#### 5.2.1 Uniform flow past a fixed prolate spheroid

A prolate spheroid sediments in an inertia-less Newtonian fluid without any change in orientation. Equivalently, a uniform flow of an inertia-less Newtonian fluid past a fixed spheroid, i.e., the cases of section 5.1.2, exerts no hydrodynamic torque on the particle. Inertia, however, leads to a finite torque on a fixed spheroid. In the low inertia limit, at steady state, Dabade et al. (2015) [70] calculated the coefficient of the inertial torque, \(C_{T}\), on a particle with aspect ratio, \(\kappa\), fixed in a Newtonian fluid. When the angle of attack of the oncoming flow relative to the particle axis is \(\theta\), they find, \[C_{T}=\frac{Re}{2}F(\kappa)\sin(2\theta), \tag{90}\] where \(C_{T}\) and the Reynolds number, \(Re\), are, \[C_{T}=\frac{\text{Torque}}{\mu U_{0}r_{\text{minor}}^{2}\kappa^{2}},\quad Re=\frac{\rho_{f}U_{0}r_{\text{minor}}\kappa}{\mu}, \tag{91}\] and \(U_{0}\) is the imposed velocity. \(F(\kappa)\) is a non-dimensional parameter that depends on the particle aspect ratio, and \(F(6)=0.5458\) [70]. In figure 5, we show the steady-state values of \(C_{T}/Re\) at various \(\theta\) for \(\kappa=6\) and three different \(Re=0.3\), 3.0 and 30.0 from our numerical results, along with those of Andersson and Jiang (2019) [69] and Jiang et al. (2021) [71]. We also show the analytical expression \(F(6)\sin(2\theta)/2\) of Dabade et al. (2015) [70]. Due to inertial screening (see [72; 73; 74] for numerical and experimental evidence of this mechanism in a dilute suspension of sedimenting spheres), the velocity disturbance due to the particle in the presence of fluid inertia decays at a rate faster than the \(1/r\) (\(r\) is the distance from the particle) decay characteristic of the inertia-less or Stokes limit (see section 5.1.2 and [59; 45]).

Figure 4: Lift and drag coefficients (equations (88) and (89)) on a prolate spheroid fixed in a uniform flow at various angles of attack (\(\theta\)) and particle aspect ratios (\(\kappa\)) in the inertia-less limit (\(\rho_{f}=0\)). We obtain good agreement with the analytical predictions available in [66; 67; 68] for each \(\kappa\) and \(\theta\).

However, the inertial screening length may still be large if the Reynolds number is small, and a large computational domain may be required to quantitatively assess the validity of theories developed for small inertial corrections. In figure 5 we observe that our numerical results, performed with a large computational domain of size \(\|\mathbf{r}_{\infty}\|_{2}\approx\|\mathbf{r}_{\infty}^{\mathrm{minor}}\|_{2}=100\kappa\), are closer to the analytical prediction of Dabade et al. (2015) [70] for \(Re=0.3\) than the numerical results of Andersson, Jiang and co-workers (2019, 2021) [69; 71] performed with a rectangular domain of size \(64\kappa\times 64\kappa\times 42.7\kappa\) [69] or \(34\kappa\times 34\kappa\times 34\kappa\) [71]. For \(45^{\circ}\), the simulations of [69; 71] have a deviation of \(17\%\) from the analytical prediction of [70] at \(Re=0.3\); we have a deviation of \(2.7\%\). In addition to the larger domain size, we use a straightforward boundary condition of a constant uniform velocity on the outer boundary, whereas the discussion in [69] points to a more involved boundary condition treatment. At higher \(Re=3.0\) and \(30.0\) our computations agree with those of [69; 71], as shown in figure 5. For \(Re=3.0\) we show another simulation result at \(\|\mathbf{r}_{\infty}\|_{2}\approx\|\mathbf{r}_{\infty}^{\mathrm{minor}}\|_{2}=50\kappa\). The resolution of our computational grid is \(N_{1}=200\), \(N_{2}=131\) and \(N_{3}=65\) for both computational domains. The agreement with the results of [69; 71] is better for the smaller of our computational domains at \(Re=3.0\), a further indication of the importance of domain size at small to moderate inertia.
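The low-inertia benchmark of equation (90) used in figure 5 can likewise be evaluated directly; a short sketch with \(F(6)=0.5458\) from [70]:

```python
import numpy as np

def inertial_torque_coefficient(Re, theta, F_kappa=0.5458):
    """Steady-state inertial torque coefficient of equation (90); the default
    F_kappa is the tabulated value F(6) from Dabade et al. (2015) [70]."""
    return 0.5 * Re * F_kappa * np.sin(2.0 * theta)
```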
For \(\theta=45^{\circ}\) at \(Re=3.0\) and \(30.0\) we simulated additional cases with increased resolution, \(N_{1}=300\), \(N_{2}=201\) and \(N_{3}=91\) (not shown), and found results similar to those at our lower resolution. Due to the nature of spheroidal coordinates (see equation (63) and the more detailed discussion in sections 1 and 4.1 about the Euclidean spacing of the spheroidal grid), the domain size can be greatly increased without significantly decreasing the resolution near the particle surface. Therefore, our method is well equipped to study flow around a sedimenting particle or a particle fixed in a uniform flow in the limit of small to moderate inertia. We also show the steady-state streamlines of the flow for \(Re=0\) and \(Re=30\) in figure 6 for \(\theta=90^{\circ}\). The presence of inertia leads to the formation of trailing edge vortices. The streamline pictures for \(Re=30\) are qualitatively similar to those shown by Andersson et al. [69].

#### 5.2.2 Effect of inertia on a freely rotating sphere in simple shear flow

In the previous section, we demonstrated the validity of our method to capture the inertial effect on a fixed particle. In this section, we show an example where particle and fluid inertia are moderate, and the particle is allowed to rotate due to the hydrodynamic stresses acting on the particle surface, as treated by the method described in section 3.5.1. For this purpose, we validate our code against the simulations of Bagchi and Balachandar (2002) [12], where the particle is a sphere. We use \(\kappa=1.001\) to represent a sphere in the prolate spheroidal coordinate system. The sphere is not allowed to translate but can rotate freely under the influence of the hydrodynamic torque. The imposed fluid velocity is, \[\mathbf{u}_{\mathrm{imposed}}^{\mathrm{inertial}}=[U_{0}+\dot{\gamma}y\ \ \ 0\ \ 0]^{T}, \tag{92}\] in the laboratory frame.

Figure 5: Uniform flow past a fixed spheroid: steady-state torque coefficient, \(C_{T}\), normalized with the Reynolds number, \(Re\), from our numerical simulations and those of Andersson and Jiang (2019) [69] and Jiang et al. (2021) [71] at different angles of attack, \(\theta\), and Reynolds numbers, \(Re=0.3\), \(3.0\) and \(30\), along with the analytical calculation of Dabade et al. [70]. Our results agree with those of [69; 71] at larger \(Re\) and with [70] at \(Re=0.3\).

Two different Reynolds numbers, based on the uniform flow speed, \(U_{0}\), or the shear rate, \(\dot{\gamma}\), can be defined in this case. One of them, the translational Reynolds number (\(Re\)), is the same as defined earlier in equation (91), and the shear Reynolds number is, \[Re_{\dot{\gamma}}=\frac{\rho_{f}\dot{\gamma}r^{2}}{\mu}, \tag{93}\] where \(r\) (\(=r_{major}\)) is the radius of the sphere. \(Re\) and \(Re_{\dot{\gamma}}\) are related by a non-dimensional factor, \(\dot{\gamma}r/U_{0}\). In Stokes flow (\(Re=Re_{\dot{\gamma}}=0\)) the sphere rotates at the angular velocity of the fluid, \(\omega_{f}=\dot{\gamma}/2\). The presence of inertia lowers the rotation rate of the sphere. In figure 7a we show the steady-state rotation rate of the sphere, \(\omega_{st}\), normalized with \(\omega_{f}\) for a range of \(Re\) at two different \(\dot{\gamma}r/U_{0}=0.05\) and \(0.1\) from our simulations and those of Bagchi and Balachandar (2002) [12]. We find a good, albeit not exact, agreement of \(\omega_{st}/\omega_{f}\) between the two simulations for all the cases shown up to \(Re=100\).
Bagchi and Balachandar (2002) [12] found the inertial slow-down, \(\omega_{st}/\omega_{f}\), of a sphere's rotation to be independent of \(\dot{\gamma}r/U_{0}\) for a constant \(Re\). This is also captured by our results in figure 7a, as the results for the two \(\dot{\gamma}r/U_{0}\) shown nearly collapse. In figure 7b we show the streamlines of the fluid flow around the sphere for \(Re=100\) and \(\dot{\gamma}r/U_{0}=0.1\). These are qualitatively similar to the streamlines shown in figure 8(b) of Bagchi and Balachandar (2002) [12]. This example shows the capability of our numerical methodology to handle moving particles in the presence of inertia.

### Motion of particles in inertia-less viscoelastic fluid

In this section, we compare our numerical method with other numerical and semi-analytical results for the flow of viscoelastic fluids around spheres and prolate spheroids in various linear flows.

Figure 6: Streamlines of the Newtonian fluid flow around a fixed prolate spheroid of aspect ratio, \(\kappa=6\), at Reynolds number \(Re=0\) (Stokes flow) and \(Re=30\) with the major axis at \(90^{\circ}\) to the oncoming flow (left to right). For each \(Re\), two different views are shown. The \(Re=30\) streamlines are the same as those observed in figures 5(d) and 6(d) of [69].

#### 5.3.1 Torque-free rotation in simple shear flow

In section 5.2.2, we discussed the influence of inertia in slowing down the rotation of a sphere in simple shear flow from its value in the Stokes limit. D'Avino et al. (2008) [75] reported a similar effect due to viscoelasticity, where increasing \(De\) in an inertia-less viscoelastic fluid lowers the rotation rate of a spherical particle. A subset of these authors reported the steady-state angular velocity of the sphere in D'Avino et al. (2014) [26]. Here \(De\) is the non-dimensional parameter known as the Deborah number, representing the product of the imposed shear rate and the polymer relaxation time. The fluid and particle inertia are ignored, and the net torque due to the fluid stresses acting on the surface of a freely moving particle is zero. Hence the particle motion in the cases considered in this section is obtained through the method described in section 3.5.2. The Giesekus constitutive relation (equations (18) and (19)) with \(c=10.0\) and \(\alpha=0.2\) is used to model the polymer stress in the viscoelastic fluid. In figure 8 we show that the results from our simulations for a sphere rotating in a simple shear flow of this fluid are almost identical to those of D'Avino et al. (2014) [26]. In our simulations, a prolate spheroid with aspect ratio \(\kappa=1.001\) represents the sphere. D'Avino et al. (2014) [26] also reported the average angular velocity for the case when the major axis of a prolate spheroid rotates in the flow-gradient plane of a simple shear flow of the same viscoelastic fluid. We compare the results for \(\kappa=2.0\) from our simulations with those of D'Avino et al. (2014) [26] in figure 8 and find excellent agreement at all \(De\) shown. For a spheroid in a Newtonian fluid, i.e. at \(De=0.0\), according to Jeffery (1922) [65], \(\bar{\omega}/\omega_{f}=2.0\kappa/(1+\kappa^{2})=1.0\) and \(0.8\) for \(\kappa=1.0\) (spheres) and \(2.0\), respectively. These analytical estimates are the numerical values shown in figure 8 at \(De=0.0\). We show the streamlines around a freely rotating sphere in a Newtonian fluid and a viscoelastic fluid with \(De=1.0\) in figures 9a and 9b.
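The Jeffery values quoted above are straightforward to reproduce; a minimal check in Python, using only the stated formula:

```python
def jeffery_mean_rotation(kappa):
    # Period-averaged angular velocity of a spheroid tumbling in the
    # flow-gradient plane of a Newtonian simple shear flow, normalized by
    # the fluid rotation rate omega_f = gamma_dot/2 (Jeffery 1922 [65]):
    # omega_bar/omega_f = 2*kappa/(1 + kappa**2).
    return 2.0 * kappa / (1.0 + kappa ** 2)

for kappa in (1.0, 2.0, 4.0):
    print(f"kappa = {kappa}: omega_bar/omega_f = {jeffery_mean_rotation(kappa):.3f}")
# kappa = 1 gives 1.0 (a sphere co-rotates with the fluid) and kappa = 2
# gives 0.8, the De = 0 values of figure 8.
```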
The streamlines in figures 9a and 9b are qualitatively similar to those shown in figures 9(a) and 9(d) of D'Avino et al. (2008) [75]. The effect of viscoelasticity is to distort the region of closed streamlines around the sphere that extends to infinity for a Newtonian fluid. At \(De=1.0\), both simulations find stagnation points on either side of the sphere in the flow direction, which mark the end of the closed streamline region. By showing streamlines for \(De=0.1\), \(0.3\) and \(1.0\), D'Avino et al. (2008) [75] established the movement of the stagnation points closer to the sphere upon increasing \(De\). We also show the continuation of this trend to \(De=3.0\). At this higher \(De\), an asymmetry of the region of closed streamlines and a reverse wake about the flow (horizontal) direction is also observed. Now we consider the three-dimensional rotation of a \(\kappa=4.0\) prolate spheroid in shear flow of the same viscoelastic fluid. As shown by the numerical simulations of D'Avino et al. (2014) [26], in the presence of viscoelasticity the orientation of a prolate spheroid no longer follows the Jeffery orbits as in Newtonian Stokes flow [65] (section 5.1.1). In figure 10 we compare the orientation trajectory of a \(\kappa=4.0\) prolate spheroid rotating in an inertia-less viscoelastic fluid from our simulations with that of D'Avino et al. (2014) [26]. We show two different starting orientations for \(De=1.0\) and \(3.0\) in figure 10. The resolution used in these simulations is \(N_{1}=150\), \(N_{2}=71\) and \(N_{3}=57\) (equation (67)). The size of the computational domain used is \(\|\mathbf{r}_{\infty}\|_{2}\approx\|\mathbf{r}_{\infty}^{\rm minor}\|_{2}=20\kappa\). Our simulation results qualitatively agree with those of D'Avino et al. (2014) [26], with small, subtle differences. Similar to D'Avino et al. (2014) [26], we find the final orientation behavior of the particle at \(De=1.0\) to be spiraling towards the vorticity direction of the imposed simple shear flow, irrespective of the initial orientation. At \(De=3.0\), the particle in both simulations settles to an orientation very close to the flow direction, irrespective of the starting orientation.

Figure 7: Comparison of the inertial slow-down of a freely rotating sphere in a flow \(\mathbf{u}=[U_{0}+\dot{\gamma}y\quad 0\quad 0]^{T}\) from our simulations and those of Bagchi and Balachandar (2002) [12] (\(U_{0}\) and \(\dot{\gamma}\) are constants). The sphere is allowed to rotate freely but not translate. The left figure shows the ratio of the steady-state rotation rate of a sphere, \(\omega_{st}\), to the fluid rotation rate, \(\omega_{f}=\dot{\gamma}/2\), at various translational Reynolds numbers and at two different \(\dot{\gamma}r/U_{0}=0.05\) and \(0.1\). Quantitative agreement in \(\omega_{st}/\omega_{f}\) between our simulations and those of [12] is found at the \(Re\) and \(\dot{\gamma}r/U_{0}\) shown. The right figure shows the streamlines for \(Re=100\) and \(\dot{\gamma}r/U_{0}=0.1\), which are qualitatively similar to the respective streamlines in figure 8(b) of [12].

#### 5.3.2 Rheology of dilute suspensions of spheres

As mentioned in section 1, fluid flow around a particle in an unbounded fluid is useful in studying the rheology of a dilute suspension of particles where inter-particle interactions are rare. In Newtonian fluids, the presence of particles leads to an additional stress \(n\mathbf{S}\) in the suspension, where \(n\) is the number of particles per unit volume and \(\mathbf{S}\) is termed the stresslet.
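To connect the stresslet to a familiar quantity: in the dilute Newtonian limit the extra stress \(n\mathbf{S}\) reproduces Einstein's effective shear viscosity. A minimal sketch (Python), using the classical normalized value \(\mathrm{S}_{12}=2.5\,V_{p}\mu\dot{\gamma}\) quoted later in this section:

```python
def suspension_shear_viscosity(mu, phi, S12_normalized=2.5):
    # Extra shear stress = n * S12 = S12_normalized * phi * mu * gamma_dot,
    # since n * V_p = phi (particle volume fraction).  This gives
    # mu_eff = mu * (1 + 2.5 * phi), Einstein's (1906) [78] dilute result.
    return mu * (1.0 + S12_normalized * phi)

for phi in (0.01, 0.05, 0.10):
    print(f"phi = {phi:4.2f}: mu_eff/mu = {suspension_shear_viscosity(1.0, phi):.3f}")
```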
In rheology studies of incompressible suspensions, the deviatoric or traceless part of the suspension stress is most interesting, as the trace of the stress can be absorbed into the modified pressure. The deviatoric part of the stresslet, \(\hat{\mathbf{S}}\), is an area integral over the particle surface, \(\mathbf{r}_{p}\), of a tensor product of the fluid stress [76], \[\hat{\mathbf{S}}(\boldsymbol{\sigma})=\int_{\mathbf{r}\in\mathbf{r}_{p}}\mathrm{d}A\,\Big\{\frac{1}{2}\big[\mathbf{n}\,(\mathbf{n}\cdot\boldsymbol{\sigma})+(\mathbf{n}\cdot\boldsymbol{\sigma})\,\mathbf{n}\big]-\frac{1}{3}\boldsymbol{\delta}\,\mathbf{n}\cdot\boldsymbol{\sigma}\cdot\mathbf{n}\Big\}, \tag{94}\] where \(\mathbf{n}\) is the unit surface normal pointing into the fluid. The stresslet also appears in particle suspensions in viscoelastic fluids, where \(\boldsymbol{\sigma}\) is the sum of the Newtonian solvent and polymer stresses. The deviation of the polymeric stress \(\boldsymbol{\Pi}\) (equation (18)) from its undisturbed value \(\boldsymbol{\Pi}_{\infty}\) leads to an additional component of the suspension stress known as the ensemble average polymeric stress. This requires more careful treatment, as pointed out by Koch et al. (2016) [77].

Figure 8: Slower particle rotation due to viscoelasticity. Period-averaged angular velocity, \(\bar{\omega}\) (normalized with the fluid rotation rate, \(\omega_{f}=\dot{\gamma}/2\)), of a torque-free sphere and a prolate spheroid with aspect ratio, \(\kappa=2.0\), in the flow-gradient plane of a simple shear flow of a Giesekus viscoelastic fluid (\(c=10\) and \(\alpha=0.2\)) at various \(De\). Our results are quantitatively similar to those of D'Avino et al. (2014) [26] at all \(De\).

Figure 9: Streamlines around a torque-free sphere rotating in a simple shear flow of a Newtonian fluid (left) and of a Giesekus viscoelastic fluid (right) with \(c=10.0\), \(\alpha=0.2\) and \(De=1.0\). The changes in streamlines due to viscoelasticity are consistent with the observations of D'Avino et al. (2008) [75] (figures 9(a) and 9(d) of [75]). Streamlines for a Giesekus fluid with \(De=3.0\) are also shown, extending the conclusions of D'Avino et al. (2008) [75] to higher \(De\) than explored in their study.

In this section, we will compare the stresslet due to a sphere (represented by a prolate spheroid with \(\kappa=1.001\)) in a simple shear flow and a uniaxial extensional flow of a viscoelastic fluid with the previously available results. Jain & Shaqfeh (2021) [16] considered the shear rheology of a suspension of spheres in a Giesekus viscoelastic fluid (equations (18) and (19)) with \(c=0.471\) and \(\alpha=0.0039\). In the dilute particle limit, this is obtained by studying simple shear flow around a sphere. The evolution with strain of the shear or \(\hat{\mathrm{S}}_{12}\) component (where 1 and 2 represent the flow and gradient directions of the imposed shear flow) of the stresslet (normalized with the product of the particle volume, \(V_{p}\), the solvent's dynamic viscosity, \(\mu\), and the imposed shear rate, \(\dot{\gamma}\)) from our numerical simulations for this case is compared with that of [16] in figure 11 for four different \(De\). Here, \(De\) or Deborah number is the non-dimensional product of the polymer relaxation time and the imposed shear rate. The simulations reported in [16] were not converged with mesh size. Through personal communication with the authors, we obtained results from their numerical method at a finer grid and used these refined values to compare with our results here.
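Before examining those curves, the surface integral itself can be made concrete. Below is a minimal midpoint-rule quadrature of equation (94) on a sphere (Python); the stress field passed in is a placeholder, and the constant-\(\boldsymbol{\sigma}\) check at the end, for which the integral reduces analytically to \((4\pi/3)r^{2}\) times the deviatoric symmetric part of \(\boldsymbol{\sigma}\), is included only to validate the quadrature:

```python
import numpy as np

def deviatoric_stresslet(sigma_at, n_theta=64, n_phi=128, radius=1.0):
    """Quadrature of equation (94) over a sphere:
    S_hat = int dA { 0.5*[n (n.sigma) + (n.sigma) n] - (1/3) I (n.sigma.n) }.
    `sigma_at(x)` must return the 3x3 fluid stress at surface point x."""
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phis = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    dA0 = radius**2 * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    S = np.zeros((3, 3))
    for t in thetas:
        for p in phis:
            n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
            ns = sigma_at(radius * n) @ n            # the vector n . sigma
            S += dA0 * np.sin(t) * (0.5 * (np.outer(n, ns) + np.outer(ns, n))
                                    - np.eye(3) * (n @ ns) / 3.0)
    return S

sigma0 = np.diag([1.0, 0.0, 0.0])                    # toy, spatially constant stress
S_num = deviatoric_stresslet(lambda x: sigma0)
S_ref = (4.0 * np.pi / 3.0) * (sigma0 - np.trace(sigma0) * np.eye(3) / 3.0)
print("max quadrature error:", np.abs(S_num - S_ref).max())  # shrinks with resolution
```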
At the first instant of time, the normalized stresslet value is 2.5, representing the constant stresslet in a Newtonian fluid originally calculated by Einstein (1906) [78], since initially the polymers in the viscoelastic fluid are at equilibrium and carry zero stress. The normalized particle stresslet increases with the strain in the suspension. Stresslet values from the two simulations are in good quantitative agreement at all \(De\) and strain. Upon increasing the mesh resolution, their evolution of the normalized stresslet remained qualitatively similar but increased in magnitude by about 0.06 for all \(De\) values. We have found that our values converged with the mesh size near their most refined values. In the results presented we use 1.72 million mesh points (\(N_{1}=150\), \(N_{2}=201\) and \(N_{3}=57\)), and the refined results of Jain & Shaqfeh (2021) [16] (computed using a finite volume method) are from simulations using 1.75 million volume elements. Jain et al. (2019) [79] considered the extensional rheology of a dilute suspension of spheres in a viscoelastic fluid, where the stresslet can be obtained by studying uniaxial extensional flow around a sphere. As shown in figure 12a, the extensional component of the deviatoric stresslet, \(\hat{\mathrm{S}}_{11}\) (normalized with the product of the particle volume, \(V_{p}\), the solvent's dynamic viscosity, \(\mu\), and the imposed extension rate, \(\dot{\epsilon}\)), for a FENE-P (equations (18) and (19)) viscoelastic fluid with \(L=100\) and \(c=0.471\) from our simulations qualitatively agrees with that of Jain et al. (2019) [79] at all strain values for \(De=0.4,0.6\), and \(0.8\). In this case, \(De\) is the product of the polymer relaxation time and the imposed extension rate. For the same reasons discussed above for shear rheology, the normalized extensional stresslet in these cases starts at 2.5, but then it reduces in magnitude as strain increases. In [80] we have used a semi-analytical method for the extensional rheology of spheres in a FENE-P fluid that allows us to perform a wider parameter study at much lower computational cost. It is valid at small polymer concentration, \(c\), and we validate the numerical method described here against this semi-analytical method in figures 12b and 12c. We consider \(c=10^{-5}\) and \(De=0.4\), \(2.0\) and \(5.0\) at two different \(L=10\) and \(100\) and show the variation with Hencky strain of the non-Newtonian component of the deviatoric stresslet, \(\hat{\mathrm{S}}_{zz}-2.5V_{p}\mu\dot{\epsilon}\), normalized with the particle volume times the extensional component of the deviatoric undisturbed polymer stress, \(\hat{\Pi}_{zz,\infty}V_{p}\). At zero Hencky strain, both the numerator and the denominator in this normalized stresslet are zero, but it has a finite limit of 2.5.

Figure 10: Orientation trajectory of a torque-free prolate spheroid with aspect ratio, \(\kappa=4\), released in a simple shear flow of an inertia-less Giesekus viscoelastic fluid (\(c=10\) and \(\alpha=0.2\)) at two different initial orientations in the gradient-vorticity plane at \(De=1.0\) and \(3.0\). Solid orange and purple lines are from our simulations and the dashed grey and black lines are from D'Avino et al. (2014) [26]. Grey arrows indicate the imposed flow in the shearing plane. Good qualitative agreement is obtained between the two results. The orientation trajectories of D'Avino et al. (2014) [26] were obtained through personal communication with the authors.
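Returning to the extensional-rheology normalization: the undisturbed polymer stress \(\hat{\Pi}_{zz,\infty}\) grows in time under startup extension. A minimal sketch of that homogeneous (particle-free) problem in Python; since the paper's exact equations (18)-(19) are not reproduced in this excerpt, the sketch uses the standard FENE-P closure, so the prefactors and the form of the FENE function are assumptions:

```python
def fene_p_uniaxial(De, L, strain_max=5.0, ds=1e-3):
    """Startup of uniaxial extension for a FENE-P fluid with no particle.
    Evolves the (diagonal) conformation tensor A in Hencky strain
    s = eps_dot * t, with De = lambda * eps_dot and f = L^2/(L^2 - tr A).
    Returns the dimensionless polymer stress difference
    (f*A - I)_zz - (f*A - I)_rr; concentration prefactors are omitted."""
    Azz, Arr = 1.0, 1.0                     # near-equilibrium start (large L)
    for _ in range(int(strain_max / ds)):   # explicit Euler in strain
        f = L**2 / (L**2 - (Azz + 2.0 * Arr))
        Azz += ds * (2.0 * Azz - (f * Azz - 1.0) / De)
        Arr += ds * (-Arr - (f * Arr - 1.0) / De)
    f = L**2 / (L**2 - (Azz + 2.0 * Arr))
    return (f * Azz - 1.0) - (f * Arr - 1.0)

for De in (0.4, 2.0, 5.0):
    for L in (10.0, 100.0):
        print(f"De={De}, L={L}: dimensionless stress difference at strain 5 "
              f"= {fene_p_uniaxial(De, L):.1f}")
```

Above the coil-stretch threshold (\(De>0.5\)) the stress grows strongly with strain before the finite extensibility \(L\) caps it, which is the regime probed by the \(De=2.0\) and \(5.0\) cases.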
Our numerical results capture all the features of the stresslet from this semi-analytical method, and the two are almost identical at all \(De\), \(L\), and Hencky strains shown. For all the preceding simulations concerning uniaxial extensional flow we have used \(N_{1}=251\), \(N_{2}=351\) and \(N_{3}=25\). The flow is axisymmetric; hence, we do not need many points in the azimuthal (\(\xi_{3}\)) direction. Therefore, another benefit of using prolate spheroidal coordinates is that in simulating a strong but axisymmetric flow around an aligned axisymmetric particle, we can keep the CPU time of the simulation small by having fewer points in \(\xi_{3}\) while increasing the resolution in the radial (\(\xi_{1}\)) and polar (\(\xi_{2}\)) directions. In this case, good resolution in the polar (\(\xi_{2}\)) direction is essential to capture the large polymer stress gradients around the extensional axis. Simulating viscoelastic fluids around particles is a challenging numerical problem, particularly at large polymer relaxation times or Deborah numbers, \(De\). This is because of the large polymer stretch possible in this scenario, which creates large gradients in polymer stress that require good resolution both around the particle surface and in specific regions where large polymer stretch is observed. By comparing and showing agreement of results from our numerical method with the state-of-the-art simulation results available for relatively low aspect ratio spheroids (\(\kappa=4\)) and spheres, we have demonstrated that our numerical method is suitable for studying such flows. Furthermore, due to the chosen coordinate system, we are well poised to investigate higher particle aspect ratios, where the required numerical simulations will be more challenging but the physical effects due to viscoelasticity are expected to be more interesting.

## 6 Conclusions

We have presented a finite difference numerical method to solve the flow of viscoelastic liquids around a prolate spheroid in a body-conforming coordinate system. We can simulate much larger particle aspect ratios than previous computational studies, as the particle surface is exactly modeled as one of the coordinate surfaces in the prolate spheroidal coordinates used. This is the inner boundary of the computational domain where the required no-slip/no-penetration condition on the particle is imposed. The outer boundary of the computational domain is nearly spherical. It represents the far field, where appropriate boundary conditions can be imposed for any constant or time-varying combination of linear flows. This allows us to study a wide range of highly resolved particle shapes ranging from a spherical particle to a large aspect ratio prolate spheroidal fiber. Our method is valid for zero to moderate fluid inertia. Various components within the numerical method are inspired by existing numerical techniques originally developed for other applications. The Schur complement method developed in [41] to solve zero-inertia, large viscosity gradient flows in the Earth's mantle is used to solve the mass and momentum equations. To remove the singularity generated by the coordinate system on the polar axes of the prolate spheroidal coordinate system, we used L'Hôpital's rule as first demonstrated for finite difference methods developed in the cylindrical coordinate system [22; 23; 24].
Figure 11: Comparison of the stresslet due to a sphere in simple shear flow of a Giesekus viscoelastic fluid with \(c=0.471\) and \(\alpha=0.0039\) from our simulations and that of Jain & Shaqfeh (2021) [16]. The curves of Jain & Shaqfeh's (2021) study were obtained via personal communication with the authors.

Stability of the convective derivatives is obtained by using the higher-order upwinding central schemes [63] that were originally used for interface tracking in multiphase flows and lead to low numerical diffusion. To overcome the violation of free-stream preservation in a curvilinear coordinate system, we simulate the deviation of the relevant flow variables from the known far-field flow that is undisturbed by the particle's presence. This also simplifies the governing equations. For the case of zero particle and fluid inertia in viscoelastic liquids, using a novel resistivity formulation, we develop a computational technique to satisfy the torque- and force-free constraints on the particle in a non-iterative manner, thus saving computational resources. We demonstrate our method on a variety of flows of Newtonian (with and without inertia) and viscoelastic fluids around spheres and prolate spheroids and find good agreement with existing numerical and theoretical results. The capability to handle a variety of fluids, boundary conditions, and particle shapes opens numerous avenues that can be explored with our numerical methodology. Flexibility in choosing different imposed flows on the outer boundary within a simulation allows our numerical method to study the flow around a particle or dilute suspensions of particles in industrial processes where they experience different linear flows in time. Asymptotic theories for flows around particles are generally developed for parameter ranges that are difficult to test numerically due to requirements such as very large domain size or large particle aspect ratio. Such challenges are overcome by our numerical method.

Figure 12: Stresslet vs. strain due to a sphere in uniaxial extensional flow of a FENE-P viscoelastic fluid. Comparison with the results of Jain et al. (2019) [79] is shown in (a) with the viscoelastic fluid parameters: \(L=100\), \(c=0.471\) and various \(De\). We compare our numerical results at \(c=10^{-5}\) with those of our low-\(c\) semi-analytical method described in our forthcoming publication [80] at various \(De\) and \(L\) in (b) and (c).

## 7 Acknowledgment

The authors would like to thank Gaojin Li, Olivier Desjardins and Mahdi Esmaily for fruitful discussions. Funding: This work was supported by the National Science Foundation [grant number 2206851] and the National Aeronautics and Space Administration [grant number 80NSSC23K0348].
2301.01893
GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods
A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region. In fact, a significant proportion of knowledge is locally shared by people from certain regions but may not apply equally in other regions because of cultural differences. If a model is unaware of regional characteristics, it may lead to performance disparity across regions and result in bias against underrepresented groups. We propose GIVL, a Geographically Inclusive Vision-and-Language Pre-trained model. There are two attributes of geo-diverse visual concepts which can help to learn geo-diverse knowledge: 1) concepts under similar categories have unique knowledge and visual characteristics, 2) concepts with similar visual features may fall in completely different categories. Motivated by these attributes, we design new pre-training objectives Image-Knowledge Matching (IKM) and Image Edit Checking (IEC) to pre-train GIVL. Compared with similar-size models pre-trained with a similar scale of data, GIVL achieves state-of-the-art (SOTA) and more balanced performance on geo-diverse V&L tasks.
Da Yin, Feng Gao, Govind Thattai, Michael Johnston, Kai-Wei Chang
2023-01-05T03:43:45Z
http://arxiv.org/abs/2301.01893v1
# GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods

###### Abstract

A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region. In fact, a significant proportion of knowledge is locally shared by people from certain regions but may not apply equally in other regions because of cultural differences. If a model is unaware of regional characteristics, it may lead to performance disparity across regions and result in bias against underrepresented groups. We propose GIVL, a Geographically Inclusive Vision-and-Language Pre-trained model. There are two attributes of geo-diverse visual concepts which can help to learn geo-diverse knowledge: 1) concepts under similar categories have unique knowledge and visual characteristics, 2) concepts with similar visual features may fall in completely different categories. Motivated by these attributes, we design new pre-training objectives Image-Knowledge Matching (IKM) and Image Edit Checking (IEC) to pre-train GIVL. Compared with similar-size models pre-trained with a similar scale of data, GIVL achieves state-of-the-art (SOTA) and more balanced performance on geo-diverse V&L tasks.

## 1 Introduction

Vision-Language Pre-trained Models (VLPs) [23, 24, 9, 52, 29] have achieved remarkable performance on Vision-Language (V&L) tasks including visual question answering [11, 12, 15], image-text retrieval [22], and image captioning [19, 27]. Pre-trained with large-scale corpora of image-text pairs, e.g., COCO [27] and OpenImages [21], VLPs are capable of learning multi-modal representations and can be effectively fine-tuned on downstream V&L tasks. While VLPs can solve a broad range of V&L tasks, to deploy VLPs in real-world applications, it is essential to consider the geographical inclusivity1 of VLPs. Because of geographic differences, images from different regions embody a large amount of knowledge that is locally shared but cannot be applied in other regions, i.e., knowledge is geographically diverse. For example, in Figure 1, the festivals in different regions look different. Footnote 1: We use regions as a proxy to estimate inclusivity of V&L models. People in the same regions may have different cultures and traditions. Ideally, a geographically inclusive VLP should be capable of achieving comparable performance over all the images, regardless of their origins. However, current VLPs do not perform equally well on data from different regions. For example, prior works [28, 48] show that on geo-diverse V&L tasks, there is nearly a 20% performance discrepancy between Western and East Asian images when current VLPs are applied. To combat such geographical bias, we aim to design methods to make VLPs achieve more balanced performance across regions. One solution to mitigating bias is to obtain diverse task-specific annotations for each region and fine-tune VLPs on the new annotations. However, according to [17], most Amazon MTurk annotators are from the US and India, and may be unfamiliar with the cultures of other regions. Thus, it is unrealistic to obtain large-scale geo-diverse annotations even on such a popular crowdsourcing platform. Pre-training a _unified_ VLP with large-scale _unannotated_ geo-diverse images and corresponding knowledge could make the VLP a foundation that provides more generalizable representations and transfers more easily to comprehending images from various regions.
In this paper, we propose **GIVL**, a **Ge**ographically **I**nclusive **V**ision-and-**L**anguage Pre-trained model. We focus on **how to encourage GIVL to better learn geo-diverse knowledge on images from different regions during its pre-training stage**. We observe two attributes of geo-diverse visual concepts that can contribute to learning geo-diverse knowledge:

**A1: Concepts under similar categories have unique knowledge and visual characteristics.** For example, traditional Western and Chinese festivals, like _Christmas_ and _Chinese New Year_ in Figure 1, are held with different rituals, and their decoration styles differ as well. It is necessary for GIVL to learn the difference between their corresponding knowledge and precisely distinguish these visual concepts. On the other hand, _Christmas_ and _Chinese New Year_ are both festivals. Learning the commonalities of visual concepts (e.g., both images in Figure 1 belong to the same category "festival") would help the model connect Western and non-Western concepts and contribute to more effective transfer on geo-diverse images.

**A2: Concepts with similar visual features may lie in completely different categories.** In Figure 2, _Chinese paper cuttings_ share visual features (e.g., color, shape) with _red frisbee_. Similarly, _sugar cane_ and _flute_ share visual features. However, these concepts are not related to each other. Since geo-diverse images cover a broader range of visual concepts, differentiating visually similar concepts given visual contexts is also essential.

To this end, besides the common objectives Masked Language Modeling (MLM) and Image-Text Matching (ITM) for pre-training VLPs, we propose two additional pre-training objectives, **Image-Knowledge Matching (IKM)** and **Image Edit Checking (IEC)**. IKM is used to learn the alignment between images and corresponding textual knowledge in Wikipedia. It requires GIVL to not only judge if the input textual knowledge matches the input images, but also identify whether the visual concepts described in the input knowledge fall into similar categories as the concepts in the input images. This encourages GIVL to learn the corresponding relationship between knowledge and images as well as recognize similarity among geo-diverse visual concepts. IEC is proposed to identify whether a visual concept in an input image is replaced by another concept that is visually similar but lies in an irrelevant category (see Figure 3 for an example). It enables GIVL to capture nuances between visually similar concepts after the replacement given visual contexts. Our contributions and empirical results are as follows:

* By considering the attributes of geo-diverse visual concepts, we propose two novel V&L pre-training objectives Image-Knowledge Matching (IKM) and Image Edit Checking (IEC) that can greatly improve the geographical inclusivity of VLPs.
* Compared with similar-size VLPs pre-trained with a similar scale of data, GIVL2 achieves state-of-the-art (SOTA) and more balanced performance over different regions on geo-diverse V&L tasks including MaRVL [28], GD-VCR [48] and WIT Image-Text Retrieval [37]. For geo-diverse zero-shot image classification on the Dollar Street dataset3, GIVL outperforms VinVL [52] by 26%.

Footnote 2: Code and model checkpoint will be released. Footnote 3: Dollar Street dataset is available at [https://github.com/greentfrapp/dollar-street-images](https://github.com/greentfrapp/dollar-street-images).
## 2 Related Work

**Vision-Language Pre-Trained Models (VLPs).** VLPs [29, 35, 41, 4, 22, 24, 52, 4, 26] are proposed to tackle tasks that require understanding of both images and texts. Following the paradigm of pre-training language models [30, 32, 8], in common practice, VLPs use Transformer [42] as the backbone and pre-train them with large-scale image-caption pairs. The commonly used image-text parallel data are from multiple sources including the COCO [27], Flickr30K [49], Conceptual Captions [34] and OpenImages [21] datasets. Currently, VLPs have achieved remarkable performance on various V&L tasks including visual question answering [15, 12], visual reasoning [39], image captioning [27], and image-text retrieval [27, 49]. Most recent works focus on scaling up VLPs; this paper studies an orthogonal but important concern - how to leverage diverse knowledge to improve the inclusivity of VLPs.

Figure 2: Example of Chinese paper cuttings and red frisbee (left) and sugar cane and flute (right). Different visual concepts may share similar visual characteristics, but they may have completely different functionalities.

**Geographical Bias.** Geographical bias [7, 33, 43, 47] is a severe problem that AI applications have. Previous works [28, 48] reveal the fact that on geo-diverse V&L tasks, the performance gap between non-Western and Western images is significant when using VLPs. Similarly, object recognition models' performance greatly drops on non-Western images [7, 33]. Researchers [7, 33, 43] find that one factor behind the geographical bias is an imbalanced distribution of training data with respect to geographical location. They [7] observe that COCO and OpenImages, two widely used pre-training corpora for VLPs, are amero-centric and euro-centric. Another reason behind the performance drop is that VLPs can understand basic visual information in images from different regions, but are less able to leverage geo-diverse knowledge and reason [48].

**Geo-Diverse V&L Tasks.** GD-VCR [48] studies whether a model can understand commonsense on geo-diverse images. V&L models are required to select the correct answer from four answer choices given textual questions involving geo-diverse commonsense and the corresponding images. MaRVL [28] is another V&L task that requires visual reasoning with cultural knowledge of images from non-Western regions. It is formulated as a binary classification problem in which the model needs to judge whether a sentence correctly describes two images from non-Western regions. WIT image-text retrieval [3, 37] is a standard multimodal retrieval task on geo-diverse Wikipedia images.

## 3 Methods

In this section, we introduce the pre-training method of GIVL in detail. Section 3.1 provides preliminaries of the GIVL pre-training method, including the definition of visual concept and category. Section 3.2 describes the four pre-training objectives. Sections 3.3 and 3.4 illustrate the process of acquiring the essential information used to construct input contents for the objectives Image-Knowledge Matching (IKM) and Image Edit Checking (IEC). Specifically, Section 3.3 shows how to extract a visual concept name from an image caption and its category information from Wikipedia. Section 3.4 shows how to map a visual concept to the corresponding detected objects in an input image.

### 3.1 Preliminary

**Definition of Visual Concept and Category.** A visual concept is an object or scenario that an image mainly involves. For example, Figure 3 shows the visual concept of _Chinese paper cuttings_.
Each specific visual concept corresponds to one general category. Each category covers various visual concepts having particular shared characteristics. For example, the category of the visual concept _Chinese paper cuttings_ is _art_. The _art_ category includes other visual concepts such as _Jewish paper cuttings_. The extraction pipeline for a visual concept and its category information will be introduced in Section 3.3.

**Pre-Training Corpus.** To improve the geographical inclusivity of VLPs, we use the Wikipedia Image-Text (WIT) dataset [37] as a primary source of geo-diverse images. WIT contains 2.95M images in total4. We also incorporate 0.22M commonly used V&L pre-training images from COCO [27], Flickr30k [49], and GQA. Images in the WIT dataset come with the corresponding Wikipedia sections that include the knowledge related to those images. This knowledge5, such as customs and history, is usually culturally related and not explicitly described in the images. Such knowledge plays a crucial role in helping VLPs understand visual concepts in geo-diverse images more comprehensively. Footnote 4: GIVL focuses on English-only V&L tasks. We only consider images with English captions, which make up only 30% of the entire WIT. Footnote 5: The knowledge of COCO, Flickr30K and GQA images is the first sentence in the Wikipedia pages of the objects mentioned in captions.

**Input for Pre-Training.** We organize the input for GIVL pre-training as follows: \[[\mathrm{CLS}]\;\mathbf{c}\;[\mathrm{SEP}]\;\mathbf{k}\;[\mathrm{SEP}]\;\mathbf{t}\;[\mathrm{SEP}]\;\mathbf{v}, \tag{1}\] where \(\mathbf{c}\) is either an image caption or a GQA question; \(\mathbf{k}\) denotes the corresponding knowledge of the visual concept in input image \(\mathbf{I}\); \(\mathbf{t}\) is either tags of detected objects or a GQA answer; \(\mathbf{v}\) is a list of visual embeddings generated from input image \(\mathbf{I}\) by a ResNeXt-152 C4 detection model [52]. \(p_{v}\) is the name of the visual concept contained in image \(\mathbf{I}\).

### 3.2 Pre-Training Objectives for GIVL

We pre-train GIVL with four objectives: Masked Language Modeling (MLM), Image-Text Matching (ITM), Image-Knowledge Matching (IKM), and Image Edit Checking (IEC). We introduce each pre-training objective as follows.

#### 3.2.1 MLM and ITM Objectives

Masked Language Modeling (MLM) is a learning objective prevalently used in V&L pre-training. Given the context of the model inputs, GIVL needs to recover the tokens masked by \([\mathrm{MASK}]\). The MLM loss \(\mathcal{L}_{MLM}\) is the average of all cross-entropy losses with respect to the probability of predicting the correct masked tokens given a vocabulary. Image-Text Matching (ITM) is another commonly applied objective that enables GIVL to learn the alignment between texts and images. Following VinVL [52], given an input image \(\mathbf{I}\), we construct three types of input contents for \(\mathbf{c}\) and \(\mathbf{t}\). It is formulated as a 3-way classification task, \(y^{c,t}\in\{0,1,2\}\), where 0 represents that \(\mathbf{c}\) and \(\mathbf{t}\) both match the input image \(\mathbf{I}\); 1 means \(\mathbf{t}\) matches image \(\mathbf{I}\) whereas \(\mathbf{c}\) mismatches it; 2 indicates \(\mathbf{c}\) matches \(\mathbf{I}\) but \(\mathbf{t}\) mismatches \(\mathbf{I}\). The ITM loss is the cross-entropy loss with respect to the probability of predicting the type of input contents upon the \([\mathrm{CLS}]\) representation.
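As a schematic of how such an input and its ITM label could be assembled (this is an illustrative sketch, not the authors' released code; the field names and data layout are hypothetical):

```python
import random

CLS, SEP = "[CLS]", "[SEP]"

def build_input(caption, knowledge, tags, visual_embeds):
    # Assemble the pre-training input of equation (1):
    # [CLS] c [SEP] k [SEP] t [SEP] v.  The visual embeddings v are kept
    # separate since they are vectors, not text tokens.
    text = f"{CLS} {caption} {SEP} {knowledge} {SEP} {' '.join(tags)} {SEP}"
    return text, visual_embeds

def itm_example(image, corpus):
    # 3-way ITM labels: 0 keeps both c and t, 1 swaps in a mismatched
    # caption c, 2 swaps in mismatched tags t (the mismatch is drawn from a
    # random other entry; a real implementation would exclude `image` itself).
    label = random.choice([0, 1, 2])
    other = random.choice(corpus)
    c = image["caption"] if label != 1 else other["caption"]
    t = image["tags"] if label != 2 else other["tags"]
    text, v = build_input(c, image["knowledge"], t, image["visual_embeds"])
    return text, v, label

corpus = [
    {"caption": "Chinese paper cuttings in a shop",
     "knowledge": "Paper cutting is the art of ...",
     "tags": ["poster", "wall"], "visual_embeds": [[0.1] * 4, [0.2] * 4]},
    {"caption": "A torii by the sea",
     "knowledge": "A torii is a traditional Japanese gate ...",
     "tags": ["gate", "sea"], "visual_embeds": [[0.3] * 4]},
]
print(itm_example(corpus[0], corpus)[0])
```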
#### 3.2.2 Image-Knowledge Matching (IKM)

We propose Image-Knowledge Matching (IKM) to assist GIVL in learning knowledge of geo-diverse visual concepts. With the help of IKM, we encourage GIVL to learn the corresponding knowledge of the visual concepts and discover connections between geo-diverse visual concepts. Although the visual characteristics of the geo-diverse visual concepts in GIVL's pre-training corpus may be poles apart, they could be clustered into similar categories. For example, in Figure 1, the visual characteristics of traditional Western and non-Western festivals are different, but these scenarios all belong to the same category _festival_. Learning to identify category similarity can connect diverse visual concepts under similar categories and make it easier to generalize to understanding related concepts across regions. On the other hand, each of the visual concepts in similar categories is associated with unique knowledge. Therefore, it is also crucial for GIVL to precisely distinguish whether the input knowledge aligns with the input image. To this end, we construct three types of input contents and formulate IKM as a 3-way classification task that enforces GIVL to identify the input type:

* Type 1: \(\mathbf{k}\) matches input image \(\mathbf{I}\);
* Type 2: \(\mathbf{k}\) mismatches input image \(\mathbf{I}\) and the visual concept described by \(\mathbf{k}\) does **NOT** fall into a similar category as the visual concept \(p_{v}\) in \(\mathbf{I}\);
* Type 3: \(\mathbf{k}\) mismatches input image \(\mathbf{I}\) but the visual concept described by \(\mathbf{k}\) falls into a similar category as the visual concept \(p_{v}\) in \(\mathbf{I}\).

To select knowledge \(\mathbf{k}\) for Type 3 input in IKM, we need to conduct two steps: (i) extracting the name of the visual concept \(p_{v}\) of input image \(\mathbf{I}\) from its caption (for GQA data, see supplementary) and (ii) looking for visual concepts under similar categories. More details of extracting \(p_{v}\) from the image caption together with its category information will be introduced in Section 3.3. After the visual concept name \(p_{v}\) is extracted from the caption, to find a visual concept which falls in the most relevant category to \(p_{v}\)'s, we randomly sample 200 visual concepts as candidates. Then we select the candidate concept whose category is most semantically similar to \(p_{v}\)'s category. Specifically, the sampled candidates are ranked by the cosine similarity between the text embeddings6 of their category names and the embedding of \(p_{v}\)'s category name. The process of selecting the most relevant visual concept \(p^{*}\) is illustrated in Eq. (2), \[p^{*}=\operatorname*{arg\,max}_{p_{i}}\mathbf{CosineSim}(z_{p_{i}},z_{p_{v}}), \tag{2}\] where \(z_{p_{i}}\) is the embedding of the \(i\)-th sampled visual concept \(p_{i}\)'s category, \(z_{p_{v}}\) is the embedding of \(p_{v}\)'s category, and \(\mathbf{CosineSim}\) denotes the function quantifying the cosine similarity between two embeddings. Footnote 6: We utilize FastText [2] embeddings pre-trained on Wikipedia. Phrase embeddings are mean-pooled embeddings of the words in the phrases. The corresponding knowledge of \(p^{*}\) can be regarded as \(\mathbf{k}\) in Type 3 input. For preparing \(\mathbf{k}\) in Type 2 input content, we first set up a threshold for the cosine similarity between the embeddings of category names (\(\tau=0.3\)) to filter out the visual concepts relevant to \(p_{v}\). Then we randomly pick one of the retained visual concepts.
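A minimal sketch of this selection recipe (Python). The embeddings below are randomly generated stand-ins; in the paper the category embeddings come from FastText, and whether Type 2 draws from the same 200-candidate sample is our assumption:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_ikm_knowledge(pv_cat_emb, pool, rng, n_sample=200, tau=0.3):
    # `pool` holds (knowledge_text, category_embedding) pairs.
    # Type 3: sampled concept with the most similar category (equation (2)).
    # Type 2: a random sampled concept with category similarity below tau.
    idx = rng.choice(len(pool), size=min(n_sample, len(pool)), replace=False)
    sims = [(cosine(pool[i][1], pv_cat_emb), i) for i in idx]
    type3 = pool[max(sims)[1]][0]                 # arg max of eq. (2)
    far = [i for s, i in sims if s < tau]         # category-irrelevant ones
    type2 = pool[int(rng.choice(far))][0] if far else None
    return type2, type3

rng = np.random.default_rng(0)
pool = [(f"knowledge text for concept {i}", rng.standard_normal(300))
        for i in range(500)]                      # 300-d, like FastText
k_type2, k_type3 = pick_ikm_knowledge(rng.standard_normal(300), pool, rng)
```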
The selected visual concept is thus one whose category information is irrelevant to \(p_{v}\)'s. Its corresponding knowledge can be used as \(\mathbf{k}\) in Type 2 input.

Figure 3: GIVL pre-training method with four pre-training objectives. The input image is about the visual concept _Chinese paper cuttings_. The input knowledge is about _Jewish paper cuttings_ rather than _Chinese paper cuttings_, but it is also knowledge describing a visual concept that shares a similar category with _Chinese paper cuttings_. Hence, for the Image-Knowledge Matching (IKM) objective, the input contents belong to Type 3. Also, the visual concept _Chinese paper cuttings_ is replaced with the visually similar concept _red frisbee_. Thus, for the Image Edit Checking (IEC) objective, the input contents belong to Type 2.

The IKM loss is a cross-entropy loss with respect to the probability of predicting the type of relationship between the input image and knowledge upon the \([\mathrm{CLS}]\) representation, \[\mathcal{L}_{IKM}=-\frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}\log p(y_{i}^{k}|\mathbf{c},\mathbf{k},\mathbf{t},\mathbf{v}), \tag{3}\] where \(\mathcal{D}\) indicates the entire pre-training corpus7 and \(y_{i}^{k}\) is the label for the input type in IKM. Footnote 7: The proportion of Type 1, 2 and 3 input for IKM in \(\mathcal{D}\) is \(2:1:1\).

#### 3.2.3 Image Edit Checking (IEC)

To better differentiate visually similar but irrelevant concepts, we propose another pre-training objective, Image Edit Checking (IEC). In a geo-diverse setting, it is highly likely that visual concepts share similar visual characteristics but fall into completely different categories. For example, in Figure 2, _Chinese paper cuttings_ are red and circular, which aligns with the visual characteristics of _red frisbee_. IEC is designed to identify whether a specific visual concept \(p_{v}\) in input image \(\mathbf{I}\) is replaced by another visually similar one from an irrelevant category. We consider two types of input contents for IEC:

* Type 1: Input image \(\mathbf{I}\) remains the same;
* Type 2: The visual embedding of the visual concept \(p_{v}\) in input image \(\mathbf{I}\) is replaced with the embedding of another concept that is visually similar but falls into an irrelevant category with respect to \(p_{v}\).

In Figure 3, since the visual concept _Chinese paper cuttings_ is replaced with _red frisbee_, the input type is Type 2. Footnote 8: The proportion of Type 1 and 2 input for IEC in \(\mathcal{D}\) is \(1:1\). To prepare input contents of Type 2 data, we need to accomplish two steps: (i) seeking the detected objects corresponding to the visual concept \(p_{v}\) in input image \(\mathbf{I}\) from its caption, and (ii) looking for visually similar concepts for replacement. The pipeline for locating the visual concept \(p_{v}\) is introduced in Section 3.4. After the visual concept \(p_{v}\) is located, to select the proper visual concept for replacement in Type 2 input, we randomly sample 20 images, and then collect the visual embeddings and tag names of all the detected objects in the sampled images as candidates. The visual concept for replacement is selected according to two criteria: (i) its category is dissimilar9 to the category information of concept \(p_{v}\) and (ii) its visual embedding is closest to \(p_{v}\)'s visual embedding. We select visual concepts irrelevant to \(p_{v}\) to guarantee that the replacement is unreasonable given the image context. Footnote 9: We use the cosine similarity between embeddings of the candidate visual concept's category and \(p_{v}\)'s category. Any candidate concepts with a similarity lower than 0.3 are treated as dissimilar ones.
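A minimal sketch of this candidate selection (Python; the data layout is hypothetical, and the fallback when no dissimilar candidate exists is our assumption):

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_iec_replacement(pv_vis_emb, pv_cat_emb, candidates, tau=0.3):
    # `candidates` pools the detected objects of ~20 randomly sampled images
    # as (visual_embedding, category_embedding, name) triples.  Keep only
    # category-dissimilar candidates (cosine < tau, Footnote 9), then take
    # the one whose visual embedding is closest to p_v's.
    dissimilar = [c for c in candidates if cos(c[1], pv_cat_emb) < tau]
    if not dissimilar:
        return None                     # fall back to an unedited (Type 1) example
    return max(dissimilar, key=lambda c: cos(c[0], pv_vis_emb))
```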
The IEC loss is a binary cross-entropy loss with respect to the probability of predicting whether the input image is modified, upon the \([\mathrm{CLS}]\) representation, \[\mathcal{L}_{IEC}=-\frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}\log p(y_{i}^{v}|\mathbf{c},\mathbf{k},\mathbf{t},\mathbf{v}), \tag{4}\] where \(y_{i}^{v}\) is the label for the input type in IEC. The final loss \(\mathcal{L}\) is the sum of all losses mentioned above: \[\mathcal{L}=\mathcal{L}_{MLM}+\mathcal{L}_{ITM}+\mathcal{L}_{IKM}+\mathcal{L}_{IEC}. \tag{5}\]

### Acquiring Categories of Visual Concepts

Acquiring the categories of visual concepts is a prerequisite step to construct GIVL inputs for IKM and IEC. We first need to extract the visual concept name \(p_{v}\) in input image \(\mathbf{I}\) from its image caption. We achieve this by parsing the caption with [31]. \(p_{v}\) is the composition of the head noun and its modifiers in the parse tree. For example, given a caption "_Chinese paper cuttings in a shop_", \(p_{v}\) is _Chinese paper cuttings_, which is composed of the head noun "_cuttings_" and its modifiers "_Chinese paper_" in its parse tree. To acquire \(p_{v}\)'s category, we then search Wikipedia with the keyword \(p_{v}\). If \(p_{v}\) is an entry of Wikipedia, we find that the category information can usually be found in the first sentence of the Wikipedia introduction paragraph. As shown in Figure 4, the category of _torii_ (i.e., _traditional Japanese gate_) is present in the first sentence "_A torii... is a traditional Japanese gate most commonly..._" We then notice that the category name is the phrase consisting of the head noun and its modifiers in the first sentence. In this example, the head noun of the first sentence is "_gate_" and its modifier words are "_traditional_" and "_Japanese_". The final concatenation, "_traditional Japanese gate_", is the category of _torii_. Though the category information mined with these simple heuristics is imperfect, the extraction method is easy to implement and efficient in acquiring categories of large quantities of visual concepts.

Figure 4: Steps to mine the category information of visual concepts. The composition of the head noun ("_gate_", root of the parse tree) and its modifiers ("_traditional Japanese_", words with an "_amod_" relation to "_gate_") can be treated as the category of _torii_ ("_traditional Japanese gate_").

### Locating Visual Concepts in Images

With a limited amount of object class labels, it is difficult for current object detectors to detect a geo-diverse visual concept \(p_{v}\). Therefore, we introduce a simple approach to efficiently locate the corresponding object given a visual concept \(p_{v}\). We find that a visual concept \(p_{v}\) is commonly (i) classified as a tag name that has similar semantics to \(p_{v}\)'s category, and (ii) its image patch occupies a large portion of the image. To this end, we design heuristics to locate novel visual concepts according to our empirical findings. First, only the top-10 largest detected objects from each image will be considered. Second, we calculate the similarity between their object tags and \(p_{v}\)'s category. The one with the highest similarity score will be treated as the object corresponding to \(p_{v}\).
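Both mining steps, the head-noun heuristic of Section 3.3 and the locating heuristic just described, lend themselves to a compact sketch (Python). Here spaCy is a stand-in for the parser cited as [31] (the small English model is assumed installed), the similarity function is injected, and the dependency-label choices are our approximation of the described heuristic:

```python
import spacy  # stand-in for the parser cited as [31]

nlp = spacy.load("en_core_web_sm")

def head_noun_phrase(sentence):
    # Head noun plus its modifiers: applied to a caption this yields the
    # concept name p_v; applied to the first Wikipedia sentence it yields
    # the category, e.g. "traditional Japanese gate" for "torii".
    doc = nlp(sentence)
    nominals = [t for t in doc if t.pos_ in ("NOUN", "PROPN")
                and t.dep_ in ("ROOT", "attr", "nsubj", "dobj")]
    if not nominals:
        return None
    # Prefer the predicate nominal ("... is a ... gate") when one exists.
    head = next((t for t in nominals if t.dep_ == "attr"), nominals[0])
    mods = [t.text for left in head.lefts for t in left.subtree
            if t.dep_ in ("amod", "compound")]
    return " ".join(mods + [head.text])

def locate_concept(pv_name, pv_category, detections, similarity, top_k=10):
    # Among the top-10 largest detected boxes, rename the one whose tag is
    # most similar to p_v's category, then propagate the new name to other
    # objects sharing that original tag (the multi-instance case below).
    largest = sorted(detections, key=lambda d: d["area"], reverse=True)[:top_k]
    best = max(largest, key=lambda d: similarity(d["tag"], pv_category))
    old_tag = best["tag"]
    for d in detections:
        if d["tag"] == old_tag:
            d["tag"] = pv_name
    return detections

print(head_noun_phrase("Chinese paper cuttings in a shop"))
print(head_noun_phrase("A torii is a traditional Japanese gate."))
```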
We take Figure 5 as an example. The visual concept \(p_{v}\) to be located is _Chinese paper cuttings_. Suppose that one of the _Chinese paper cuttings_ (the object in the top right corner) is among the top-10 largest detected objects. Besides, its original detected object tag is _poster_, which is the tag most semantically similar to the category of _Chinese paper cuttings_. Hence, we can replace its original object tag with _Chinese paper cuttings_, as it is the corresponding object we are looking for. The method above only locates one visual concept per image. However, it is possible that one image may contain multiple identical visual concepts. For example, in Figure 5, there are a couple of _Chinese paper cuttings_. To solve this problem, we simply propagate the visual concept name of _Chinese paper cuttings_ to other objects that share the same original detection labels, as in the sketch above.

## 4 Experiments

We conduct two sets of experiments to evaluate GIVL. We first evaluate GIVL on multiple geo-diverse V&L tasks including zero-shot image classification, V&L reasoning and image-text retrieval. This helps us verify the effectiveness of GIVL under geo-diverse settings. On the other hand, experiments on common V&L tasks are conducted to prove the generalizability of GIVL's pre-training method.

### Baselines for Ablation Study

Five baselines are described below. For fair comparison, the pre-training corpus, number of pre-training steps, and hyper-parameters are all identical to those of GIVL10. Since V&L pre-training is extremely time-consuming, all baselines are pre-trained with 500K steps in the ablation study. Footnote 10: Details of experimental setups are described in Appendix A.

**GIVL w/o \(\mathcal{L}_{IKM}\) & GIVL w/o \(\mathcal{L}_{IEC}\).** GIVL w/o \(\mathcal{L}_{IKM}\) and GIVL w/o \(\mathcal{L}_{IEC}\) are the models pre-trained without the Image-Knowledge Matching (IKM) objective and the Image Edit Checking (IEC) objective, respectively. We demonstrate the effectiveness of our proposed pre-training objectives with these two baselines.

**VinVL\({}^{*}\).** VinVL\({}^{*}\) is pre-trained only with the MLM and ITM objectives, as in VinVL [52]. It also shares the same pre-training corpus with GIVL. The only difference between GIVL and VinVL\({}^{*}\) is the objectives: GIVL is pre-trained with Image-Knowledge Matching (IKM) and Image Edit Checking (IEC) but VinVL\({}^{*}\) is not. Comparing GIVL and VinVL\({}^{*}\) demonstrates the improvement from introducing the IKM and IEC objectives on geo-diverse V&L tasks. The comparison is also fair for the pre-training methods of GIVL and VinVL on common V&L tasks.

**GIVL w/ CLIP.** Some recent VLPs utilize CLIP [32] as the vision encoder. We replace the object-level visual encoder in GIVL with CLIP to check if it can further improve performance. CLIP provides grid-level visual representations instead of object-level ones. Therefore, the IEC objective is removed because it involves object-level replacements.

**GIVL-B.** The only difference between GIVL and GIVL-B is that the IKM objective of GIVL-B is a binary classification objective instead of a 3-way classification. For IKM, GIVL-B is only required to identify whether the input knowledge matches the image contents; it does not need to judge whether the input knowledge describes a visual concept that shares a similar category with the concept in the input image. The comparison between GIVL and GIVL-B demonstrates the effect of incorporating category information for learning the knowledge of geo-diverse visual concepts.
### Results on Geo-Diverse Benchmarks

**Geo-Diverse Zero-Shot Image Classification.** Geo-diverse zero-shot image classification is a downstream geo-diverse V&L task that directly evaluates the effectiveness of the pre-training methods. We evaluate models on the Dollar Street dataset11. It is labeled with 127 classes, each of which contains images from around the world. For classification of one image, we compose 127 inputs, each of which is the concatenation of one class name, the class's corresponding knowledge12, and the tag names and visual embeddings of the detected objects. We compare the probability of predicting that each class name matches the input image via the ITM objective for all 127 classes. The class with the highest probability is treated as the final classification result. Footnote 11: Images in Dollar Street are labeled with country information. The proxy to categorize Western and non-Western countries is based on [16]. Footnote 12: The knowledge of each class comes from Wikipedia and Wordhoard.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & \#Param & Acc. & Western/non-Western \\ \hline \multicolumn{4}{l}{**Prior VLPs**} \\ \hline VinVL\({}^{*}\) & 112M & 1.21 & 1.77/1.01 \\ VinVL [52] & 112M & 1.29 & 1.25/1.30 \\ \hline **Ours** & & & \\ \hline GIVL w/o \(\mathcal{L}_{IKM}\) & 112M & 21.37 & 25.31/20.37 \\ GIVL w/o \(\mathcal{L}_{IEC}\) & 112M & 12.96 & 12.71/13.02 \\ GIVL w/ CLIP & 199M & 18.04 & 22.89/16.82 \\ GIVL-B & 112M & 20.35 & 23.93/19.45 \\ GIVL & 112M & **27.25** & **31.65/26.15** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on geo-diverse zero-shot image classification on the Dollar Street dataset. We also show the respective performance on Western and non-Western images.

Figure 5: Steps to locate novel visual concepts in input images.

As shown in Table 1, GIVL outperforms both VinVL and VinVL\({}^{*}\) by a significant margin of around 26%. GIVL achieves a 6%-20% improvement in the ablation studies, demonstrating the effectiveness of the proposed IKM and IEC objectives. We also find that GIVL outperforms GIVL w/ CLIP, which involves a strong vision encoder. It further demonstrates that object-level visual representations and the object-level pre-training objective IEC are effective for learning geo-diverse visual concepts.

**Multicultural Visual Reasoning (MaRVL).** Following NLVR2 [39], MaRVL [28] is a V&L task that requires models to identify whether a sentence correctly describes the contents of two input images. MaRVL images involve diverse visual concepts in non-Western regions. Since MaRVL13 is merely a testing set, following [28], we fine-tune models on the NLVR2 training set and then select the best checkpoint on the dev set of NLVR2 to evaluate on MaRVL. Footnote 13: We use the translated English version of the MaRVL dataset in [54]. From Table 2, we observe that GIVL outperforms the ablated baselines pre-trained without our proposed objectives IKM and IEC, respectively. Also, similar to the observations on the Dollar Street dataset, compared with VinVL\({}^{*}\) pre-trained with the same corpus as GIVL, GIVL achieves higher performance. It further demonstrates that the pre-training objectives of GIVL can help VLPs learn geo-diverse visual concepts better than VinVL's. We also compare GIVL with VLPs that are (i) \(3\times\) larger in model size (METER) and (ii) pre-trained with a \(2-5\times\) larger corpus (VinVL, X-VLM and ALBEF). GIVL achieves competitive performance with much less data and a smaller model size.
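The Dollar Street zero-shot protocol described above reduces to a simple scoring loop; a minimal sketch (Python), where `itm_match_prob` is a hypothetical stand-in for a forward pass of the pre-trained model:

```python
def zero_shot_classify(image, classes, itm_match_prob):
    # One input per class (class name + class knowledge + detected tags +
    # visual features), scored by the ITM head's probability that text and
    # image match; the argmax class is the prediction.
    best_name, best_p = None, -1.0
    for cls in classes:                  # 127 classes in the paper's setup
        p = itm_match_prob(caption=cls["name"],
                           knowledge=cls["knowledge"],
                           tags=image["tags"],
                           visual_embeds=image["visual_embeds"])
        if p > best_p:
            best_name, best_p = cls["name"], p
    return best_name, best_p
```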
Additionally, we emphasize the comparison of GIVL's performance between NLVR2 and MaRVL. [28] demonstrate that the visual concepts in the NLVR2 dataset are Western-centric. A smaller performance gap between NLVR2 and MaRVL means less bias against non-Western regions. We observe that GIVL achieves more balanced performance on both datasets, while other VLPs including METER, X-VLM and ALBEF have a larger performance discrepancy.

**Geo-Diverse Visual Commonsense Reasoning (GD-VCR).** GD-VCR is a testing set to evaluate multi-modal models' ability to understand geo-diverse commonsense knowledge in images. It is a multiple-choice QA task which requires geo-diverse commonsense reasoning. We fine-tune models on the VCR [50] training set and select the best checkpoint on VCR's dev set to evaluate on GD-VCR. As shown in Table 3, GIVL outperforms all prior similar-size VLPs trained with a similar number of images. GIVL also outperforms all ablated baselines except for GIVL w/ CLIP, which uses a much stronger visual encoder and achieves only a subtle 0.1% improvement. Besides, we highlight the performance gap between Western and non-Western data in GD-VCR. GIVL has a significantly smaller gap than any of the ablated baselines. While GIVL w/ CLIP has a marginal improvement over GIVL, the performance gap of GIVL is 4.06% smaller than that of GIVL w/ CLIP.

**Wikipedia Image-Text Retrieval (WIT).** WIT image-text retrieval is a standard retrieval task on geo-diverse Wikipedia images14. Table 4 shows that GIVL achieves superior performance compared to the baselines except GIVL-B. Pre-trained with 1M steps, GIVL obtains SOTA performance on the WIT image-text retrieval task.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & Data/Steps & \#Param & Acc. & \(\Delta\) \\ \hline \hline **Prior VLPs** & & & \\ \hline ViLBERT[29] & 3.3M/- & 274M & 66.53 & 10.87 \\ VinVL[25] & 5.65M/2M & 112M & 72.48 & 8.55 \\ VinVL\({}^{*}\) & 3.17M/500K & 112M & 69.66 & 8.27 \\ X-VLM[51] & 16M/- & 216M & 73.02 & 11.39 \\ ALBEF[23] & 14M/- & 210M & 73.17 & 9.37 \\ METER[9] & 4M/- & 352M & 73.47 & 8.86 \\ \hline **Ours** & & & & \\ \hline GIVL w/o \(\mathcal{L}_{IKM}\) & 3.17M/500K & 112M & 72.11 & - \\ GIVL w/o \(\mathcal{L}_{IEC}\) & 3.17M/500K & 112M & 68.58 & - \\ GIVL w/ CLIP & 3.17M/500K & 199M & 71.78 & - \\ GIVL-B & 3.17M/500K & 112M & 70.26 & - \\ GIVL & 3.17M/500K & 112M & 72.50 & **6.56** \\ \hline GIVL & 3.17M/900K & 112M & **72.70** & 7.17 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the MaRVL testing set. We also show the performance discrepancy \(\Delta\) between NLVR2 and MaRVL. \(\dagger\) denotes the results reported in [54].

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & \#Param & Acc. & Non-West & \(\Delta\) \\ \hline **Prior VLPs** & & & \\ \hline VisualBERT[29] & 135M & 53.95 & - & 10.42 \\ ViLBERT[29] & 274M & 59.99 & - & 7.28 \\ VinVL\({}^{*}\) & 112M & 69.07 & 66.45 & 8.46 \\ VinVL [52] & 112M & 70.20 & 66.78 & 11.04 \\ \hline **Ours** & & & & \\ \hline GIVL w/o \(\mathcal{L}_{IKM}\) & 112M & 69.56 & 65.96 & - \\ GIVL w/o \(\mathcal{L}_{IEC}\) & 112M & 69.89 & 66.92 & - \\ GIVL w/ CLIP & 199M & 70.43 & 67.25 & 10.20 \\ GIVL-B & 112M & 69.56 & 65.96 & - \\ GIVL & 112M & 70.32 & 68.41 & 6.14 \\ \hline GIVL (1M) & 112M & **72.01** & **70.4** & **4.97** \\ \hline \hline \end{tabular} \end{table} Table 3: Results on GD-VCR. We also show the results on all the non-Western images in GD-VCR and the discrepancy \(\Delta\) between Western and non-Western images.
Footnote 14: We use the translated English WIT retrieval data in [3].

### Results on Common V&L Benchmarks

Besides testing GIVL on geo-diverse V&L tasks, we benchmark GIVL on common V&L tasks to investigate whether the pre-training method of GIVL is competitive among existing VLPs. We don't expect GIVL to perform the best among SOTA VLPs on these V&L benchmarks, because they are annotated with Western-centric data and the SOTA models are trained with much larger amounts of similar data as well. We aim to answer two questions. Q1: _Is GIVL able to obtain comparable performance with VLPs pre-trained with a similar scale of data?_ Q2: _Can GIVL perform as strongly as SOTA VLPs pre-trained with the same corpus?_ To answer Q1, we evaluate GIVL on common V&L benchmarks including NLVR2, GQA and COCO captioning. For NLVR2, GIVL is able to beat 11 VLPs with many more parameters and pre-trained with more data. For GQA, GIVL performs better than most of the VLPs. For COCO image captioning, it can even obtain performance close to SimVLM-base, a VLP pre-trained with 1.8B images. Overall, even though GIVL is pre-trained with a corpus whose domain is not similar to that of the common V&L benchmarks, it can still achieve competitive results. This demonstrates the effectiveness of the GIVL pre-training method. For Q2, we target VinVL, a strong VLP that once swept the leaderboards of multiple V&L tasks. For fair comparison, we reproduce the pre-training process of VinVL with the GIVL pre-training corpus. As mentioned in Section 4.1, we denote the reproduced pre-training as VinVL\({}^{*}\). On the above three V&L datasets, the performance difference between GIVL and VinVL\({}^{*}\) is subtle. We argue that GIVL could achieve equally good performance as VinVL on common V&L benchmarks if it were pre-trained with the VinVL corpus.

### Qualitative Study on Geo-Diverse Categories

We showcase examples from GD-VCR and the Dollar Street dataset to better demonstrate GIVL's advantages. As shown in Figure 7, non-Western _festivals_, _servants_ and _religions_ are quite different from those in Western regions. We find that GIVL's performance gap on the images involving these categories is significantly smaller than VinVL's on GD-VCR. Moreover, GIVL's performance on non-Western images is 5-8% higher than VinVL's. For the Dollar Street dataset, while the overall performance of GIVL is around 30%, GIVL can achieve above 55% accuracy when recognizing _vegetables_ and _drying clothes_, which vary greatly across data from around the world. GIVL even outperforms VinVL by 50% on those categories. GIVL's strong performance on these highly geo-diverse categories further demonstrates its effectiveness.

## 5 Conclusion

We propose GIVL, a geographically inclusive vision-and-language pre-trained model. GIVL achieves strong and more balanced results on multiple geo-diverse V&L tasks. It can also produce competitive performance on common V&L tasks. By proposing GIVL, we call upon researchers to devise methods that can further improve the geographical inclusivity of VLPs and popularize their applications for all.
\begin{table} \begin{tabular}{l c c c c} \hline **Model** & Data & \#Param & I/R & T/R \\ \hline \multicolumn{5}{l}{**Prior VLPs**} \\ \hline LXMERT [41] & - & 240M & 14.28 & 14.86 \\ VisualBERT [24] & - & 135M & 15.36 & 15.75 \\ UNITER [4] & - & - & 15.43 & 16.01 \\ VL-BERT [38] & 3.3M & - & 15.11 & 16.09 \\ ViLBERT [29] & 3.3M & 274M & 15.40 & 16.93 \\ VinVL [52] & 5.65M & 112M & 27.78 & 28.65 \\ VinVL\({}^{*}\) & 3.17M & 112M & 25.44 & 25.50 \\ \hline \multicolumn{5}{l}{**Ours**} \\ \hline GIVL w/o \(\mathcal{L}_{\textit{IKM}}\) & 3.17M & 112M & 26.21 & 26.97 \\ GIVL w/o \(\mathcal{L}_{\textit{IEC}}\) & 3.17M & 112M & 28.08 & 28.18 \\ GIVL w/ CLIP & 3.17M & 199M & 27.94 & 28.17 \\ GIVL-B & 3.17M & 112M & 29.97 & 29.86 \\ GIVL & 3.17M & 112M & 28.00 & 28.79 \\ \hline GIVL (1M) & 3.17M & 112M & **29.98** & **30.79** \\ \hline \end{tabular} \end{table} Table 4: Results on the WIT image-text retrieval task. I/R and T/R denote image retrieval and text retrieval. The evaluation metric is Recall@1. 1M denotes the number of pre-training steps. Figure 6: GIVL performance on common V&L tasks. Complete results are shown in Appendix B. Figure 7: GIVL and VinVL's performance on non-Western and Western images related to geo-diverse categories.
2302.11108
Magnetically Actuated Millimeter-Scale Biped
This paper introduces a new approach to studying bipedal locomotion. The approach is based on magnetically actuated miniature robots. Building prototypes of bipedal locomotion machines has been very costly and overly complicated. We demonstrate that a magnetically actuated 0.3 g robot, which we call Big Foot, can be used to test fundamental ideas without necessitating very complex and expensive bipedal machines. We explore analytically and experimentally two age-old questions in bipedal locomotion: 1. Can such robots be driven with pure hip actuation? 2. Is it better to use continuous or impulsive actuation schemes? First, a numerical model has been developed in order to study the dynamics and stability of a magnetically actuated miniature robot. We particularly focus on stability and performance metrics. Then, these results are tested using Big Foot. Pure hip actuation has been successful in generating gait on uphill surfaces. In addition, complex tasks such as following prescribed gait trajectories and navigating through a maze have been successfully performed by the experimental prototype. The nature and timing of hip torques are also studied. Two actuation schemes are used: Heel Strike Actuation and Constant Pulse Wave Actuation. With each scheme, we also vary the time duration of the applied magnetic field. Heel Strike actuation is found to have superior stability, more uniform gait generation, and faster locomotion than the Constant Pulse Wave option. But Constant Pulse Wave actuation achieves locomotion on steeper slopes.
Adam Cox, Sinan Beskok, Yildirim Hurmuzlu
2023-02-22T03:07:05Z
http://arxiv.org/abs/2302.11108v1
# Magnetically Actuated Millimeter-Scale Biped ###### Abstract This paper introduces a new approach to studying bipedal locomotion. The approach is based on magnetically actuated miniature robots. Building prototypes of bipedal locomotion machines has been very costly and overly complicated. We demonstrate that a magnetically actuated 0.3 g robot, which we call Big Foot, can be used to test fundamental ideas without necessitating very complex and expensive bipedal machines. We explore analytically and experimentally two age-old questions in bipedal locomotion: 1. Can such robots be driven with pure hip actuation? 2. Is it better to use continuous or impulsive actuation schemes? First, a numerical model has been developed in order to study the dynamics and stability of a magnetically actuated miniature robot. We particularly focus on stability and performance metrics. Then, these results are tested using Big Foot. Pure hip actuation has been successful in generating gait on uphill surfaces. In addition, complex tasks such as following prescribed gait trajectories and navigating through a maze have been successfully performed by the experimental prototype. The nature and timing of hip torques are also studied. Two actuation schemes are used: Heel Strike Actuation and Constant Pulse Wave Actuation. With each scheme, we also vary the time duration of the applied magnetic field. Heel Strike actuation is found to have superior stability, more uniform gait generation, and faster locomotion than the Constant Pulse Wave option. But Constant Pulse Wave actuation achieves locomotion on steeper slopes. Bipedal Locomotion, Magnetic Actuation, Impulsive Actuation, Millirobot, Gait Dynamics, Impact Mechanics ## Introduction The earliest studies of bipedal walkers began with inverted pendulum models. As research in bipedal locomotion advanced, investigators faced difficulties due to prohibitively large energy demands and very high component costs when attempting to develop experimental prototypes. Complex numerical models of bipedal walkers have been created to better understand the dynamics behind such systems. Major concerns in modelling have been the model discontinuities due to foot impact with the walking surface and the stability analysis of these highly nonlinear systems. The main idea of the present paper is to answer key questions regarding the stability and actuation of a very simple bipedal walker. In addition, we develop a magnetically actuated miniature robot as an experimental platform to produce a low-cost, 3D printed prototype. Using this platform, we apply the results of the theoretical analyses to a real system and demonstrate its physical utility. To the best of our knowledge, such an approach to verifying theoretical results using a magnetically actuated miniature bipedal prototype has never been taken before. The robotic bipedal locomotion literature is vast; it is impossible to refer to all work performed on this subject in the confined space of the present article. We refer readers to books and survey articles written on the subject and references within (Hurmuzlu et al. (2004); Yamamoto et al. (2020); Grizzle et al. (2014); Tazaki and Murooka (2020); Mikolajczyk et al. (2022); Hobbelen and Wisse (2007); Brogliato (2018)). Past research has mainly concerned the stability, nonlinear dynamics, and control of robotic bipedal locomotion.
In terms of stability, defining periodic gait as a limit cycle and analyzing its stability using Poincaré sections, Floquet multipliers, and Lyapunov exponents has become the standard approach in the field (Hurmuzlu and Moskowitz (1986); Hurmuzlu and Moskowitz (1987b); Hurmuzlu and Basdogan (1994); Hurmuzlu et al. (1996); Garcia et al. (1998); Goswami et al. (1997); Ahn et al. (2021); Bruijn et al. (2010); Ekizos et al. (2018); Saeed et al. (2018); Rameani et al. (2013); Dingwell et al. (2000)). Devising an actuation scheme for a bipedal robot is not a straightforward task. One of the obvious approaches is to place electric/hydraulic motors and sensors at the joints. Yet, this renders the robots excessively heavy and significantly complicates the design. In addition, it results in very high energy requirements. Passive locomotion (McGeer (1990)), inspired by earlier works in ballistic walking (Mochon and McMahon (1980)), has emerged as a promising idea to simplify the design and reduce energy demand. These mechanisms produce gait without joint actuation. They can walk down an inclined plane utilizing gravitational energy. Yet, these bipeds have extremely limited applicability, since they can only walk down inclined planes. In addition, it is very well known that they have an extremely limited stability range. Nevertheless, researchers have used the basic concept of passive walking in order to generate active walkers (Tavakoli and Hurmuzlu (2009, 2010, 2013); Goswami et al. (1997); Garcia et al. (1998); Spong and Bullo (2005)). Despite these efforts, the presently known walking robots are extremely complex and have very high price tags (Honda's Asimo (Honda (2023)), Boston Dynamics's Atlas (Dynamics (2023)), the Biped Robotics Lab at Michigan's Cassie Blue (Grizzle (2021)), Engineered Arts' Ameca (Arts Limited (2022)), and NASA's R5 (Hall (2015))). Consequently, it has become very difficult for the majority of researchers in the field to experiment with physical prototypes. 3D-printed miniature robots can offer an attractive option to develop and experiment with low-cost bipeds using rapid prototyping methods. One can experimentally study the dynamics and stability of these systems, and develop control methods and strategies to regulate their motion. Miniature walking robots have been produced before using soft and flexible actuators. Ijaz et al. (2020) developed a simple walking robot that uses embedded magnets to flex the robot's abdomen to produce locomotion. Baisch et al. (Baisch et al. (2014, 2011); Baisch and Wood (2011); Baisch et al. (2010)) created a family of walking robots called the Harvard Ambulatory MicroRobot (HAMR). These microrobots use custom-machined piezoelectric boards that flex with an applied voltage. HAMR was inspired by cockroaches. Miniaturizing actuators for rigid-body robots is difficult. Parker (1995) provides ideas for miniaturizing actuators. But for actuators that perform like electric motors, these systems often use compliant surfaces to allow motion (Buzzin et al. (2022); Fettweis et al. (2021)). The full extent of these "muscle-like" walking robots can be seen in the LCE microbots from Zeng et al. (2018). These microrobots are basically a single torsional muscle, whose design has no other control or means of locomotion. With rigid actuation, Islam et al. (2022) developed a minimally actuated, rigid bipedal walker, comprising five 3D-printed rigid bodies and a single actuator per leg.
Each actuator is controlled via an open-loop sinusoidal profile, thereby eliminating the need for feedback. Their reasonably small robot (15 cm) was able to walk on level surfaces and produce turns. Magnetic actuation of miniature robots has many useful applications in robotic surgery, drug delivery, small-scale manufacturing, and other related engineering topics (Pawashe et al. (2009)). The use of electromagnetic fields as a form of robotic actuation allows untethered, low-power mechanisms to regulate the motion of robots. Yesin et al. (2006) used a combination of Helmholtz and Maxwell coils to control cubic microrobots in one dimension, followed by Choi et al. (2009) in two dimensions and eventually Jeong et al. (2010) in three dimensions. Kim et al. (2013) reduced the number of coils needed for two-dimensional control by using a pair of coils similar to Helmholtz coils that could receive independently controlled currents. In addition, Mahoney et al. (2012) explored various magnetic actuation methods for microrobots with helical screw propulsion and rolling locomotion. Li et al. (2020) created a completely rigid, two-legged magnetic microrobot that uses periodic magnetic fields to induce pivot walking to indirectly manipulate cell aggregates, though it is limited to this one mode of locomotion. While not bipedal, Al Khatib et al. (2020) developed a rectangular millirobot capable of various modes of locomotion, including pivot-walking, tapping, and tumbling, with high maneuverability in tight spaces by capitalizing purely on the torque created by the magnetic field. They also presented "stag beetle" and "carbot" designs for improved applicability. A special novelty of magnetic actuation is that these robots are powered externally, yet untethered. This removes the need for power storage (batteries, compressed air tanks, hydraulic and pneumatic lines, and any other traditional power sources) or onboard electronics (sensors and controllers). This allows for novel miniaturization: all that is required on the robot body itself is a magnet. This opens the door for rapid prototyping, as simpler designs permit more tractable analysis. In this paper we present the first magnetically actuated bipedal walking robot. To the best of our knowledge, this robot is also the world's smallest bipedal robot. The robot weighs 0.3 g and is 9 mm tall. We call this robot Big Foot. The actuation of Big Foot is accomplished by simply placing a set of magnets in the upper body of a standard passive walker. A magnetic field then applies a moment that drives the foot-lifting dynamics to generate gait. The prototype can be produced within hours using 3D printing techniques at a cost of a few dollars. The manuscript begins with a description of the robot design. Next, the dynamics are presented, including the impact mechanics. Then, the experimental setup is explained. Finally, the numerical and experimental analyses are presented. ## Robot Design (Figure 1: Isometric view of the multi-body diagram of Big Foot.) The central features of Big Foot are the feet, the Shaft Body, and the magnets. Actuation is achieved through the magnets in the Shaft Body shown in Fig. 1. A magnetic field is pulsed around Big Foot to create locomotion. The Shaft Body's cylinder is shown in Fig. 2. This cylinder contains outer holes that magnets are press-fit into and a central hole that a low-friction shaft is pressed into. Coincident to the cylinder are the legs. These legs have holes with a very loose slip fit onto the shaft.
There are also collars pressed onto the outside of the shaft assembly to prevent disassembly during operation. The cylinder, the legs, and the collars are all made from a very low-friction material to allow motion. Finally, at the ends of the legs are the feet. The purpose of the feet is to enhance stability. Due to the lack of available feedback for such a small walker, stability could only be achieved through physical design. More complicated designs may be considered in future work, but for this version of Big Foot, the curvature of the feet was defined as a circle centered at the centroid of the Shaft Body (see Fig. 4). ## Robot Dynamics ### Coordinate Frames A generalized model of Big Foot is shown in Fig. 1 and Fig. 3. The components of Big Foot include two legs with integrated feet and a central shaft. The legs are designed such that the center of mass is close to the ground, and the feet give Big Foot a vertical static equilibrium orientation. Six coordinate systems are required to fully model Big Foot's dynamics: 1. \(\{\hat{\mathbf{i}}_{s},\hat{\mathbf{j}}_{s},\hat{\mathbf{k}}_{s}\}=\mathbf{I}\). 2. \(\{\hat{\mathbf{i}},\hat{\mathbf{j}},\hat{\mathbf{k}}\}=[\mathbf{R}_{\mathbf{y}}(\beta)]^{T}\{\hat{\mathbf{i}}_{s},\hat{\mathbf{j}}_{s},\hat{\mathbf{k}}_{s}\}\). 3. \(\{\hat{\mathbf{i}}_{1},\hat{\mathbf{j}}_{1},\hat{\mathbf{k}}_{1}\}=[\mathbf{R}_{\mathbf{z}}(\psi)]\{\hat{\mathbf{i}},\hat{\mathbf{j}},\hat{\mathbf{k}}\}\). 4. \(\{\hat{\mathbf{i}}_{2},\hat{\mathbf{j}}_{2},\hat{\mathbf{k}}_{2}\}=[\mathbf{R}_{\mathbf{x}}(\phi)]\{\hat{\mathbf{i}}_{1},\hat{\mathbf{j}}_{1},\hat{\mathbf{k}}_{1}\}\). 5. \(\{\hat{\mathbf{i}}_{3},\hat{\mathbf{j}}_{3},\hat{\mathbf{k}}_{3}\}=[\mathbf{R}_{\mathbf{y}}(\theta_{1})]\{\hat{\mathbf{i}}_{2},\hat{\mathbf{j}}_{2},\hat{\mathbf{k}}_{2}\}\). 6. \(\{\hat{\mathbf{i}}_{4},\hat{\mathbf{j}}_{4},\hat{\mathbf{k}}_{4}\}=[\mathbf{R}_{\mathbf{y}}(\theta_{2})]\{\hat{\mathbf{i}}_{2},\hat{\mathbf{j}}_{2},\hat{\mathbf{k}}_{2}\}\). These coordinate transformations fully define the coordinate frames. The simple rotation transformations \(\{[\mathbf{R}_{\mathbf{x}}],[\mathbf{R}_{\mathbf{y}}],[\mathbf{R}_{\mathbf{z}}]\}\) are from Ginsberg's foundational textbook (Ginsberg (2008)). First, a rotation of \(\beta\) about \(\hat{\mathbf{j}}_{s}\) transforms the Space Fixed coordinate system to the Platform Fixed coordinate system. In Fig. 1 and Fig. 3, the Space Fixed coordinate system is not shown. The Platform Fixed coordinate system is aligned with the surface Big Foot is walking on. Next, a rotation of \(\psi\) about \(\hat{\mathbf{k}}\), followed by a rotation of \(\phi\) about \(\hat{\mathbf{i}}_{1}\), transforms the Platform Fixed coordinate system to the Shaft Fixed coordinate system. The Platform Fixed coordinate system is chosen as the global frame for the final equations of motion. Subsequently, rotations of \(\theta_{1}\) and \(\theta_{2}\) about \(\hat{\mathbf{j}}_{2}\) transform the Shaft Fixed coordinate system to the Leg-A and Leg-B coordinate systems. Figure 1 shows these transformations. ### Hybrid System As a biped robot, Big Foot is a hybrid system (Hurmuzlu et al. (2004)). In general, bipedal walkers have two states \(\{a,b\}\). These states correspond to which foot the walker is standing on. Big Foot actually has four states corresponding to the contact point between Big Foot and the platform, \(\{a,b,c,d\}\): 1. Leg-A foot inner edge 2. Leg-B foot inner edge 3. Leg-A foot curved surface 4. Leg-B foot curved surface
These states are a result of the unique foot design. In numerical simulations, the switching between states is achieved by toggling the respective state to 1 and the other states to 0. ### Position Vectors To model the kinematic constraints of motion and the centers of mass, the following position vectors have been defined. These positions are shown in Fig. 3. Point \(\mathbf{r}_{C_{i}/O}\) is the position vector of the foot/platform contact point. Point \(\mathbf{r}_{A_{i}/O}\) is the position of the center of the joint connecting the leg with the shaft. Finally, \(\mathbf{r}_{D/O}\) is the position of the center of the shaft: \[\mathbf{r}_{D/O}=\mathbf{r}_{A_{i}/O}+(a-b)L\hat{\mathbf{j}}_{2}, \tag{1}\] where \[\mathbf{r}_{A_{i}/O}=\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}. \tag{2}\] The coordinates of the centers of mass of the rigid bodies are as follows. The position vector of the center of mass (COM) of Leg-A is \(\mathbf{r}_{G_{A}/O}\) and that of Leg-B is \(\mathbf{r}_{G_{B}/O}\). The COM of the Shaft Body is \(\mathbf{r}_{G_{D}/O}\). These vectors are defined as: \[\mathbf{r}_{G_{A}/O}=\mathbf{r}_{D/O}-L_{G}\hat{\mathbf{j}}_{2}-H_{G}\hat{\mathbf{k}}_{3},\qquad\mathbf{r}_{G_{B}/O}=\mathbf{r}_{D/O}+L_{G}\hat{\mathbf{j}}_{2}-H_{G}\hat{\mathbf{k}}_{4},\qquad\mathbf{r}_{G_{D}/O}=\mathbf{r}_{D/O}. \tag{3}\] ### Angular Velocities Next, the angular velocities are defined. The angular velocities of Leg-A, Leg-B, and the Shaft Body are \(\mathbf{\omega}_{1}\), \(\mathbf{\omega}_{2}\), and \(\mathbf{\omega}_{3}\), respectively: \[\mathbf{\omega}_{1}=\dot{\psi}\hat{\mathbf{k}}+\dot{\phi}\hat{\mathbf{i}}_{1}+\dot{\theta}_{1}\hat{\mathbf{j}}_{2},\qquad\mathbf{\omega}_{2}=\dot{\psi}\hat{\mathbf{k}}+\dot{\phi}\hat{\mathbf{i}}_{2}+\dot{\theta}_{2}\hat{\mathbf{j}}_{2},\qquad\mathbf{\omega}_{3}=\dot{\psi}\hat{\mathbf{k}}+\dot{\phi}\hat{\mathbf{i}}_{1}. \tag{4}\] There are only two angular velocity components for the Shaft Body because no moments act about the \(\hat{\mathbf{j}}_{2}\) direction, which makes that degree of freedom of the Shaft Body static, so it can be ignored. (Figure 2: Indexed shaft cylinder.) ### Velocity Constraint Next, velocity constraints must be defined for the position of Big Foot with respect to the walking surface. The feet are designed to be circular, as shown in Fig. 4. Two velocity constraints exist for each state. The first velocity constraint is the requirement that the foot is in contact with the ground. This is modeled by requiring that the velocity of the foot contact point with the platform, \(\dot{C}_{i}\), is zero. Next, a velocity constraint exists that represents the roll of the circular foot on the platform as Big Foot moves. There are four equations for this constraint, one for each state: * \(\dot{\mathbf{r}}_{A_{i}/O}=\mathbf{\omega}_{1}\times\mathbf{r}_{A_{A}/C_{A}}\) * \(\dot{\mathbf{r}}_{A_{i}/O}=\mathbf{\omega}_{2}\times\mathbf{r}_{A_{B}/C_{B}}\) * \(\dot{\mathbf{r}}_{A_{i}/O}=\mathbf{\omega}_{1}\times(H_{B}\hat{\mathbf{k}}-L\hat{\mathbf{j}}_{2})\) * \(\dot{\mathbf{r}}_{A_{i}/O}=\mathbf{\omega}_{2}\times(H_{B}\hat{\mathbf{k}}+L\hat{\mathbf{j}}_{2})\) where \[\mathbf{r}_{A_{i}/C_{i}}=H\hat{\mathbf{k}}_{2} \tag{5}\] and \(H_{B}\) is the radius of the spherical surface of the feet.
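[Illustrative sketch, not part of the original paper: the frame chain of items 1-6 above, composed numerically with numpy. The rotation-matrix convention (rows of each elementary matrix are the unit vectors of the rotated frame) and all variable names are our assumptions.]

```python
import numpy as np

# Elementary frame-rotation matrices: rows of each matrix are the unit
# vectors of the rotated frame expressed in the previous frame.
def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

# Sample angles (radians); beta is the walking-surface inclination.
beta, psi, phi, th1, th2 = 0.05, 0.2, 0.1, 0.3, -0.3

space = np.eye(3)                 # item 1: space-fixed {i_s, j_s, k_s}
platform = Ry(beta).T @ space     # item 2: platform-fixed {i, j, k}
frame1 = Rz(psi) @ platform       # item 3: {i1, j1, k1}
shaft = Rx(phi) @ frame1          # item 4: shaft-fixed {i2, j2, k2}
legA = Ry(th1) @ shaft            # item 5: Leg-A frame {i3, j3, k3}
legB = Ry(th2) @ shaft            # item 6: Leg-B frame {i4, j4, k4}

# Every frame in the chain remains orthonormal.
for F in (platform, frame1, shaft, legA, legB):
    assert np.allclose(F @ F.T, np.eye(3))

# Both leg frames share the shaft's j2-axis (the leg rotations are about
# j2), consistent with the angular velocities in Eq. (4).
assert np.allclose(legA[1], shaft[1]) and np.allclose(legB[1], shaft[1])
```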
The \(\hat{\mathbf{k}}\)-direction of the rolling constraint is holonomic and, when integrated for each state, gives: * \(Z=H\cos\phi\) (states \(a\) and \(b\)) * \(Z=H_{B}-L\sin\phi\) * \(Z=H_{B}+L\sin\phi\) ### Energy Definitions Now that the position vectors have been fully defined, the Lagrangian can be derived. We start with the different types of energy. First, we have the translational kinetic energy: \[T_{t}=\frac{1}{2}\big{(}m_{A}\,\dot{\mathbf{r}}_{G_{A}/O}\cdot\dot{\mathbf{r}}_{G_{A}/O}+m_{B}\,\dot{\mathbf{r}}_{G_{B}/O}\cdot\dot{\mathbf{r}}_{G_{B}/O}+m_{C}\,\dot{\mathbf{r}}_{G_{D}/O}\cdot\dot{\mathbf{r}}_{G_{D}/O}\big{)}, \tag{6}\] where \(m_{A}\), \(m_{B}\), and \(m_{C}\) are the masses of the two legs and the Shaft Body, respectively. Next, the rotational kinetic energy: \[T_{r}=\frac{1}{2}\big{(}\mathbf{\omega}_{1}^{T}[\mathbf{R}_{3}]^{T}\mathbf{I}_{1}[\mathbf{R}_{3}]\mathbf{\omega}_{1}+\mathbf{\omega}_{2}^{T}[\mathbf{R}_{4}]^{T}\mathbf{I}_{2}[\mathbf{R}_{4}]\mathbf{\omega}_{2}+\mathbf{\omega}_{3}^{T}[\mathbf{R}_{2}]^{T}\mathbf{I}_{3}[\mathbf{R}_{2}]\mathbf{\omega}_{3}\big{)}, \tag{7}\] where \(\mathbf{I}_{1}\) is the inertia matrix of Leg-A with respect to \(\{\hat{\mathbf{i}}_{3},\hat{\mathbf{j}}_{3},\hat{\mathbf{k}}_{3}\}\), \(\mathbf{I}_{2}\) is the inertia matrix of Leg-B with respect to \(\{\hat{\mathbf{i}}_{4},\hat{\mathbf{j}}_{4},\hat{\mathbf{k}}_{4}\}\), \(\mathbf{I}_{3}\) is the inertia matrix of the Shaft Body with respect to \(\{\hat{\mathbf{i}}_{2},\hat{\mathbf{j}}_{2},\hat{\mathbf{k}}_{2}\}\), and the rotation matrices are defined below: \[[\mathbf{R}_{2}]=\begin{bmatrix}\hat{\mathbf{i}}_{2}^{T}\\ \hat{\mathbf{j}}_{2}^{T}\\ \hat{\mathbf{k}}_{2}^{T}\end{bmatrix},\qquad[\mathbf{R}_{3}]=\begin{bmatrix}\hat{\mathbf{i}}_{3}^{T}\\ \hat{\mathbf{j}}_{3}^{T}\\ \hat{\mathbf{k}}_{3}^{T}\end{bmatrix},\qquad[\mathbf{R}_{4}]=\begin{bmatrix}\hat{\mathbf{i}}_{4}^{T}\\ \hat{\mathbf{j}}_{4}^{T}\\ \hat{\mathbf{k}}_{4}^{T}\end{bmatrix}. \tag{8}\] Finally, the potential energy of the system is: \[V=g\,(m_{A}\mathbf{r}_{G_{A}/O}+m_{B}\mathbf{r}_{G_{B}/O}+m_{C}\mathbf{r}_{G_{D}/O})\cdot\hat{\mathbf{K}}. \tag{9}\] The Lagrangian can be written as: \[\mathcal{L}=T_{t}+T_{r}-V. \tag{10}\] ### Constraint Forces The constrained generalized coordinates for the system are: \[\mathbf{q}=\{\theta_{1},\theta_{2},\phi,\psi,x,y\}. \tag{11}\] As a result of the rolling velocity constraint, there are two constraint forces, \(\{r_{x},r_{y}\}\). (Figure 3: The multi-body diagram of Big Foot from a frontal view. Figure 4: Side view of foot curvature.) These forces interact with the system through the generalized forces and moments (\(Q_{ic}\)):
The model also includes the normal force between the foot and the platform, but this normal force varies insignificantly and can be assumed to be constant. The damping friction term is: \[Q_{1f}= -c_{f1}\dot{\theta}_{1} \tag{14}\] \[Q_{2f}= -c_{f1}\dot{\theta}_{2}\] where, \(c_{fi}\) is the damping coefficient for the respective dynamics. The damping friction of \(\theta_{i}\) is the result of the constraint forces between the legs and the Shaft Body. ### Magnetic Moment The active control of Big Foot is achieved through an arbitrary uniform magnetic field (Al Khatib et al. (2020)). The magnetic field is generated with 3 sets of orthogonal Helmholtz coils. The magnetic field produced by in the inner set of coils is shown in Fig. 5. The yellow rectangle represents the foam platform the experiments are run on. After calibration, the magnetic field can be represented by a vector \(p_{m}\hat{\boldsymbol{j}}_{m}\), where \(p_{m}\) is the power level of the field and \(\hat{\boldsymbol{j}}_{m}\) is the \(y\)-direction of the coordinate frame as defined below: \[\{\hat{\boldsymbol{i}}_{m},\hat{\boldsymbol{j}}_{m},\hat{\boldsymbol{k}}_{m} \}=[\boldsymbol{R}_{x}(\phi_{m})][\boldsymbol{R}_{x}(\psi_{m})]\{\hat{ \boldsymbol{i}},\hat{\boldsymbol{j}},\hat{\boldsymbol{k}}\} \tag{15}\] Once the magnetic field has been created, the magnet in Big Foot interacts with the field with the following magnetic moment equation: \[\boldsymbol{\tau}_{m}=(k_{m}\hat{\boldsymbol{j}}_{2})\times(p_{m}\hat{ \boldsymbol{j}}_{m}) \tag{16}\] where, \(k\) is a magnet strength coefficient found through tuning the model to experimental results. Next, to find the applied load of the magnetic moment on the shaft assembly's generalized coordinates, the moment is projected on each coordinate's rotation axis: \[Q_{3m}= \boldsymbol{\tau}_{m}.\hat{\boldsymbol{i}}_{1} \tag{17}\] \[= k_{m}p_{m}(\sin\phi_{m}\cos\phi-\cos\phi_{m}\sin\phi\cos\left( \psi_{m}-\psi\right))\] \[Q_{4m}= \boldsymbol{\tau}_{m}.\hat{\boldsymbol{k}}\] \[= k_{m}p_{m}\cos\phi_{m}\cos\phi\sin\left(\psi_{m}-\psi\right)\] Finally, using the Lagrangian method, the equations of motion for Big Foot are: \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\theta_{i} }\Big{)}-\frac{\partial\mathcal{L}}{\partial\theta_{i}} =Q_{1c}+(a+c)\ Q_{1r}+Q_{1f} \tag{18}\] \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\theta_{i} }\Big{)}-\frac{\partial\mathcal{L}}{\partial\theta_{i}} =Q_{2c}+(b+d)\ Q_{2r}+Q_{2f}\] \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\phi} \Big{)}-\frac{\partial\mathcal{L}}{\partial\phi} =Q_{3c}+Q_{3r}+Q_{3m}\] \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\psi} \Big{)}-\frac{\partial\mathcal{L}}{\partial\psi} =Q_{4c}+Q_{4r}+Q_{4m}\] \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\dot{x}} \Big{)}-\frac{\partial\mathcal{L}}{\partial x} =Q_{5c}\] \[\frac{d}{dt}\Big{(}\frac{\partial\mathcal{L}}{\partial\dot{y}} \Big{)}-\frac{\partial\mathcal{L}}{\partial y} =Q_{6c}\] ### Impact Map Every time Big Foot switches stance legs, there is an impact. This impact is observed through a loss of kinetic energy and audible sound produced. 
### Impact Map Every time Big Foot switches stance legs, there is an impact. This impact is observed through a loss of kinetic energy and an audible sound. Hurmuzlu (2020) notes that the Lagrangian formalism for impact problems can be written as: \[\left[\frac{\partial T}{\partial\dot{q}_{i}}\right]^{+}-\left[\frac{\partial T}{\partial\dot{q}_{i}}\right]^{-}-\hat{Q}_{i}-\sum_{j=1}^{K}\lambda_{j}\frac{\partial\psi_{j}}{\partial\dot{q}_{i}}=0,\qquad i=1,2,\ldots,N+2, \tag{19}\] where \(T\) is the total kinetic energy, the superscripts \(-\) and \(+\) correspond to the states immediately before and after impact, respectively, \(\dot{q}_{i}\) is the \(i\)th generalized velocity, \(\hat{Q}_{i}\) is the generalized impulse in the direction of the \(i\)th generalized coordinate, and \(\psi_{j}\) and \(\lambda_{j}\) are the \(j\)th constraint and the corresponding Lagrange multiplier, respectively. (Figure 5: Cross-section view of the magnetic field generated by the inner Helmholtz coils.) In using this impact model, once \(\left[\frac{\partial T}{\partial\dot{q}_{i}}\right]^{+}\) and \(\left[\frac{\partial T}{\partial\dot{q}_{i}}\right]^{-}\) are evaluated, the velocity constraints are substituted back in. This eliminates the constrained coordinates from the impact model and simplifies the map. Also, to model the movement of the foot/platform contact point of the pre-impact stance leg, the generalized coordinates are expanded as follows: \[\mathbf{q}_{impact}=\{\theta_{1},\theta_{2},\phi,\psi,c_{x},c_{y},c_{z}\}. \tag{20}\] The position of these contact points is incorporated by defining the velocity constraint at impact as: \[\dot{\mathbf{r}}_{A_{i}/O}=\dot{\mathbf{c}}_{i}+a(\mathbf{\omega}_{1}\times\mathbf{r}_{A_{A}/C_{A}})+b(\mathbf{\omega}_{2}\times\mathbf{r}_{A_{B}/C_{B}}), \tag{21}\] where \(\dot{\mathbf{c}}_{i}\) is the velocity of point \(c_{i}\), and state \(a\) is used when the stance leg switches from Leg-A to Leg-B and vice versa. The generalized impulses, \(\hat{Q}_{i}\), are found as: \[\begin{split}\hat{Q}_{1c}&=(a-b)H(\tau_{x}\cos\psi+\tau_{y}\sin\psi)\\ \hat{Q}_{2c}&=(b-a)H(\tau_{x}\cos\psi+\tau_{y}\sin\psi)\\ \hat{Q}_{3c}&=2(a-b)L\,\tau_{z}\\ \hat{Q}_{4c}&=-2(a-b)L(\tau_{x}\cos\psi+\tau_{y}\sin\psi)\\ \hat{Q}_{5c}&=\tau_{x}\\ \hat{Q}_{6c}&=\tau_{y}\\ \hat{Q}_{7c}&=\tau_{z}\end{split} \tag{22}\] where \(\tau_{i}\) are the contact impulses at the striking foot, and \(\phi\) is set equal to 0 because this is the stance-switching angle. Finally, we have a set of equations to solve: the Lagrangian formalism of Eq. 19 with the generalized impulses of Eq. 22 and the substituted impact velocity constraints of Eq. 21, together with the post- and pre-impact velocity constraints. Before impact, the contact point on the stance leg is static, meaning: \[\dot{\mathbf{c}}^{-}=0. \tag{23}\] After impact, the post-impact stance contact point is static. This is represented kinematically as: \[\begin{split}\dot{c}_{x}^{+}&=-(a-b)\cos\psi\big{(}H\dot{\theta}_{1}^{+}-H\dot{\theta}_{2}^{+}-2L\dot{\psi}^{+}\big{)}\\ \dot{c}_{y}^{+}&=(a-b)\sin\psi\big{(}-H\dot{\theta}_{1}^{+}+H\dot{\theta}_{2}^{+}+2L\dot{\psi}^{+}\big{)}\\ \dot{c}_{z}^{+}&=2L(b-a)\dot{\phi}^{+}\end{split} \tag{24}\] This post-impact constraint comes from assuming that the coefficient of restitution is 0. Next, these equations are fed into a numerical solver, and the post-impact velocities can be found from the solution. This solution is called the impact map. Once the equations of motion and the impact map have been derived, the system can be simulated. ## Actuation Schemes We tailor the actuation such that we can study the following questions regarding the application of the magnetic torque:
1. The timing of the external torque: whether it should be applied at the end of the heel strike, or whether it should be in the form of an external beat, letting the biped adjust to it. 2. The nature of the pulse: whether it should be a high-magnitude, short-duration (impact-like) pulse or a longer-duration but lower-magnitude pulse. In both approaches, a magnetic field in the form of a square wave is considered. The applied field has five control parameters: * \(\psi_{m}\) is the yaw angle of the magnetic field vector. * \(\phi_{m}\) is the roll angle of the magnetic field vector. * \(P_{L}\) is the time duration of the applied magnetic field. * \(P_{m}\) is the height of the applied magnetic field. * \(P_{Area}\) is the magnitude of the applied magnetic impulse. Here, \(P_{m}\) is unitless; it is a multiplier of a magnetic field power defined in the equations of motion. The rotational impulse acting on the biped by the magnetic field is defined as: \[I_{m}=\int_{t_{i}}^{t_{i}+P_{L}}||\mathbf{\tau}_{m}||\,dt, \tag{25}\] where \(t_{i}\) and \(t_{i}+P_{L}\) are the times of onset and termination of the magnetic pulse, respectively. The consequence of considering a square pulse is: \[P_{Area}=P_{L}\times P_{m}. \tag{26}\] With \(P_{Area}\), regimes can be defined that correspond to the interaction of the input magnetic field with the dynamical response. **I. Impact regime:** This can be described as a very short duration and high magnitude pulse. During this regime, the generalized coordinates and velocities remain relatively unchanged during application of the magnetic torque. Thus, the applied magnetic torque has a similar effect to that of an external impact. **II. Impulsive regime:** This can be described as a moderate magnitude and duration pulse. During this regime, the generalized coordinates and velocities change significantly during the application of the magnetic torque. Thus, the applied magnetic torque cannot be characterized as an external impact. Responses illustrating these regimes are shown in Figs. 6(a) and 6(b). In addition, as mentioned before, two actuation timing approaches will be considered. **I. Heel strike based actuation:** The first scheme is based on triggering the pulse immediately after the heel strike: \[\phi_{m}[t]=\begin{cases}\phi_{in}&\phi>0\\ -\phi_{in}&\phi<0\end{cases}, \tag{27}\] \[\psi_{m}[t]=\begin{cases}\psi_{in}&\phi>0\\ -\psi_{in}&\phi<0\end{cases},\quad\text{and} \tag{28}\] \[P_{m}=\begin{cases}P_{in}&t_{s}<t<t_{s}+P_{L}\\ 0&\text{otherwise}\end{cases} \tag{29}\] where \(t_{s}\) is the time of heel strike. An example run of this heel strike actuation is shown in Fig. 6(a) and Fig. 6(b). **II. Constant period actuation:** The second actuation scheme is completely open-loop and is simply a pattern of periodically applied pulses: \[\phi_{m}=\begin{cases}\phi_{in}&0<\text{mod}(t,t_{4})<t_{1}\\ -\phi_{in}&t_{2}<\text{mod}(t,t_{4})<t_{3}\end{cases}, \tag{30}\] \[\psi_{m}=\begin{cases}\psi_{in}&0<\text{mod}(t,t_{4})<t_{1}\\ -\psi_{in}&t_{2}<\text{mod}(t,t_{4})<t_{3}\end{cases},\quad\text{and} \tag{31}\] \[P_{m}=\begin{cases}P_{in}&(0<\text{mod}(t,t_{4})<t_{1})\,||\,(t_{2}<\text{mod}(t,t_{4})<t_{3})\\ 0&\text{otherwise}\end{cases} \tag{32}\] where \(t_{1}=P_{L}\), \(t_{2}=P_{L}+t_{off}\), \(t_{3}=2P_{L}+t_{off}\), \(t_{4}=2P_{L}+2t_{off}\), and \(t_{off}\) is the time period during which the pulse is turned off. An example run of this constant pulse wave actuation is shown in Fig. 6(c); a minimal script generating these pulse trains is sketched below.
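[Illustrative sketch, not part of the original paper: the constant-period pulse trains of Eqs. (30)-(32). When the field is off, the attitude angles are immaterial and are set to zero here; parameter values follow the simulations (\(P_{L}=22.5\) ms, \(t_{off}=60\) ms).]

```python
import numpy as np

def constant_pulse_wave(t, P_in, phi_in, psi_in, P_L, t_off):
    # Eqs. (30)-(32): positive pulse for 0 < mod(t, t4) < t1, negative
    # (contra-lateral) pulse for t2 < mod(t, t4) < t3.
    t1, t2 = P_L, P_L + t_off
    t3, t4 = 2 * P_L + t_off, 2 * P_L + 2 * t_off
    m = np.mod(t, t4)
    pos = m < t1
    neg = (m > t2) & (m < t3)
    P_m = np.where(pos | neg, P_in, 0.0)
    # With the field off, the attitude angles are immaterial; zero here.
    phi_m = np.where(pos, phi_in, np.where(neg, -phi_in, 0.0))
    psi_m = np.where(pos, psi_in, np.where(neg, -psi_in, 0.0))
    return P_m, phi_m, psi_m

t = np.linspace(0.0, 0.5, 2001)  # seconds
P_m, phi_m, psi_m = constant_pulse_wave(
    t, P_in=1.0, phi_in=np.deg2rad(60), psi_in=np.deg2rad(20),
    P_L=0.0225, t_off=0.060)     # P_L = 22.5 ms, t_off = 60 ms
print("duty fraction:", (P_m > 0).mean())  # ~ 2*P_L / t4 = 0.27
```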
In Fig. 6, the plots show steady-state responses. Notice the timing between the heel strike and the switching of the input pulse. For the heel strike actuation, as enforced by the magnetic torque input, the heel strike and the input pulse are synchronous. But for the constant pulse wave, the pulse does not necessarily initiate at the heel strike. This shows how the gait pattern can be initiated without syncing the pulse wave and the heel strike. The selection of actuation parameters depends on the desired gait characteristics. ## Stability Analysis Let \(\phi_{p}(t)=(\theta_{1},\theta_{2},\phi,\psi,\dot{\theta}_{1},\dot{\theta}_{2},\dot{\phi},\dot{\psi})\in\mathbb{R}^{8}\) be a periodic solution of Eq. 18 (Kashki et al. (2017)). The one-sided hyperplane Poincaré section is defined as \[\mathcal{S}\equiv(\theta_{1},\theta_{2},\psi,\dot{\theta}_{1},\dot{\theta}_{2},\dot{\phi},\dot{\psi})\in\mathbb{R}^{7}:\phi=0,\;\dot{\phi}<0. \tag{33}\] If \(\mathbf{\xi}[k]\in\mathcal{S}\) denotes the \(k\)th intersection of \(\mathcal{S}\) by the flow of \(\phi_{p}\), the discrete-time Poincaré map \(\mathcal{P}:\mathcal{S}\rightarrow\mathcal{S}\) can be expressed as \[\mathbf{\xi}[k+1]=\mathcal{P}(\mathbf{\xi}[k]). \tag{34}\] Subsequently, if \(\mathbf{\xi}^{*}\) stands for a fixed point of the Poincaré map, then local exponential stability of \(\mathbf{\xi}^{*}\) on \(\mathcal{S}\) is equivalent to local exponential stability of the underlying limit cycle (Westervelt et al. (2007)). We used Floquet theory for the stability analysis of a specific limit cycle (Hurmuzlu and Moskowitz (1987a,b)). Therefore, the local linearization of the Poincaré map about \(\mathbf{\xi}^{*}\) gives: \[\mathcal{P}(\mathbf{\xi})\simeq\mathcal{J}(\mathbf{\xi}^{*})(\mathbf{\xi}-\mathbf{\xi}^{*}),\qquad\mathcal{J}(\mathbf{\xi})=\frac{\partial\mathcal{P}(\mathbf{\xi})}{\partial\mathbf{\xi}}, \tag{35}\] where \(\mathcal{J}(\mathbf{\xi})\) is the \(7\times 7\) linearized Jacobian matrix of \(\mathcal{P}(\mathbf{\xi})\). Next, the Floquet multipliers are defined as: \[\rho_{i}=|Re(\lambda_{i})|:\lambda_{i}=\text{eig}(\mathcal{J}(\mathbf{\xi})), \tag{36}\] where \(\rho_{i}\) and \(\lambda_{i}\) are the \(i\)th Floquet multiplier and eigenvalue of \(\mathcal{J}(\mathbf{\xi})\). Accordingly, stability of the limit cycle can be defined as: \[\phi_{p}(t)=\begin{cases}\text{stable}:&\forall\rho_{i}<1\\ \text{unstable}:&\exists\rho_{i}\geq 1\end{cases} \tag{37}\] The algorithm used to find the Floquet multipliers is sketched below:
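[Illustrative sketch, not part of the original paper, standing in for Algorithm 1, which is missing from this extraction: a finite-difference estimate of the Poincaré-map Jacobian of Eq. (35) and the Floquet multipliers of Eq. (36). The function poincare_map is an assumed black box that integrates Eq. (18), applying the impact map at heel strike, between successive crossings of the section of Eq. (33).]

```python
import numpy as np

def floquet_multipliers(poincare_map, xi_star, delta=0.05):
    # Finite-difference Jacobian of the Poincare map about the fixed
    # point xi_star (Eq. (35)); n = 7 for Big Foot, per Eq. (33).
    n = xi_star.size
    J = np.zeros((n, n))
    P0 = poincare_map(xi_star)          # equals xi_star at a fixed point
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        J[:, i] = (poincare_map(xi_star + e) - P0) / delta
    lam = np.linalg.eigvals(J)
    return np.abs(lam.real)             # Floquet multipliers, Eq. (36)

# Per Eq. (37), the limit cycle is stable if all multipliers are < 1.
# Averaging the maximum multiplier over delta in [0.01, 0.1] gives the
# stability number quoted next to each sample point in Figs. 7 and 9.
```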
Next, we explore the effect of the input parameter values on the resulting gait patterns. (Figure 6: The two actuation schemes in different regimes.) ## Simulations The simulations were conducted using Wolfram Research Mathematica software. In the next subsections we present the simulation results. We study the dynamic behavior for the two actuation schemes considering the following outcomes: 1. Existence and type of resulting limit cycles. 2. Stability of the period-1 limit cycles. 3. Step length and progression speed of the biped. We vary the following parameters when we conduct the simulations: 1. Pulse duration \(P_{L}\) 2. Pulse area \(P_{Area}\) 3. Walking surface inclination \(\beta\) We have two objectives here: a) first, to identify the parameter ranges that result in the most stable period-1 cycles with the highest possible progression speed over the steepest walking surface; b) second, to compare the dynamic response for the impact and impulsive regimes. ### Heel Strike Scheme Simulation Results In Fig. 7, the results of the simulations for the heel strike actuation scheme are shown. The results are composed of 3 sets of different \(P_{L}\) values: \(\{5\:\mathrm{ms},22.5\:\mathrm{ms},40\:\mathrm{ms}\}\). In each set, five pulse areas were used: \(\{2\:\mathrm{ms},4\:\mathrm{ms},6\:\mathrm{ms},8\:\mathrm{ms},10\:\mathrm{ms}\}\). For simplicity, the direction of the applied magnetic pulse is kept constant at \(\{\phi_{m}=60^{\circ},\psi_{m}=20^{\circ}\}\). The simulations were initiated by running the model at a \(0^{\circ}\) slope for each \(P_{Area}\) until either a limit cycle is identified, a pre-determined simulation time has expired, or locomotion is lost. Once steady-state locomotion is identified for a flat slope, the nature of the gait is added to the bifurcation map, the average maximum Floquet multiplier is calculated, and the slope is incremented by \(0.5^{\circ}\) for the next run. The increasing of the slope continued until either the robot begins walking backward or locomotion is lost. The calculation of the Floquet multipliers follows Algorithm 1 (sketched above). The perturbation, \(\delta\), is varied between 0.01 and 0.1. Then, for each \(\delta\), the maximum Floquet multiplier is selected. Finally, the average of the maximum Floquet multipliers over \(\delta\) is calculated to represent the average local stability of the fixed point. This value is printed next to the corresponding sample point in Fig. 7 and Fig. 9. From the map, the maximum slope the robot can walk and the corresponding gait patterns and stability have been identified. As stated previously, we conduct the simulations in order to identify the optimal operating region for Big Foot. We have chosen three goals for locomotion. The first goal is to walk the steepest slope. Based on our results, longer pulse lengths with higher pulse areas are best (larger \(P_{m}\)). In our runs, a maximum slope of \(4.5^{\circ}\) at a pulse length of \(40\:\mathrm{ms}\) and a pulse area of \(10\:\mathrm{ms}\) is achieved. The second goal is to walk the steepest slope with a period-1 gait. This occurred at \(4^{\circ}\) at the same parameter values. The period-1 gait is useful for controllability, as the displacement per step remains unchanged during locomotion. The third goal is to achieve maximum stability. Based on the average maximum Floquet multipliers, smaller pulse areas result in higher stability. Several overall trends in the results were observed as well. Quasi-periodic gaits were observed as, while increasing the slope, the gait transitioned from one periodic gait to another. (Figure 7: Forward gait patterns in heel strike actuation.) In Fig. 8, the Poincaré return map for \(\psi\) on a quasi-periodic orbit is shown as an example. For \(P_{L}=5\) ms and \(P_{L}=22.5\) ms, the bifurcation plots have similar trends as \(P_{Area}\) is increased. Initially, for \(P_{L}=5\) ms and \(P_{L}=22.5\) ms, as \(P_{Area}\) is increased, the maximum slope increases. The maximum for these ranges of \(P_{Area}\) is observed as the point at which Big Foot would start walking backwards. For \(P_{L}=5\) ms at \(P_{Area}=6\) ms and for \(P_{L}=22.5\) ms at \(P_{Area}=8\) ms, this point switches to being the point of instability. That is, for \(P_{Area}\) greater than this point, the maximum slope is limited by stability. At these \(P_{Area}\), Big Foot falls and loses locomotion at steeper slopes. ### Constant Pulse Wave Scheme Simulation Results Next, simulations for the constant pulse wave actuation scheme were conducted.
A bifurcation map with the same structure as Fig. 7 is set up for the constant pulse wave scheme. The results of this map are shown in Fig. 9. For this map, the same parameters as Fig. 7 with \(t_{off}=60\) ms are chosen. The differences in the results of the heel strike scheme and the constant pulse wave scheme are as follows. The maximum slope achieved by the constant pulse wave scheme is significantly higher, with \(6^{\circ}\) at \(P_{L}=40\) ms and \(P_{Area}=10\) ms. Another difference is the generation of different periodic gaits. The heel strike scheme predictably performed period-1 gaits at lower slopes, whereas the gait patterns of the constant pulse wave do not appear to be predictable from this study. For the constant pulse wave scheme, a gait pattern called "Non-Periodic" is identified. "Non-Periodic" refers to a gait that appeared to have a random gait pattern and could not be classified as periodic or quasi-periodic. Another observation is that the performance did not vary greatly with different pulse lengths for the constant pulse wave, whereas it does for the heel strike. For the constant pulse wave, the robot is only operational at pulse areas of \(6\) ms through \(10\) ms for all pulse lengths. For a control engineer, the heel strike scheme's period-1 gait appeared more desirable, but the constant pulse wave scheme achieves significantly greater slopes. Another difference is stability. For the constant pulse wave with \(P_{L}=22.5\) ms, lower pulse areas were found to be less stable. This is believed to be a result of the same dynamics that cause the lack of predictability in the gait pattern, as different gait patterns developed at instability. In Fig. 10, the maximum slopes achieved in the simulations are shown. These plots show the behavior of the performance as the actuation scheme and input parameters are changed. For the heel strike scheme, the maximum achieved slope appears to increase as the pulse length is increased. However, for the constant pulse wave actuation, the maximum slope occurred at \(P_{Area}=10\) ms for all pulse durations. The maximum achieved slope for the constant pulse wave scheme increased with larger pulse areas. Another performance metric of Big Foot is the locomotion travel, or more specifically the velocity and stride length of the gaits. In Fig. 11 and Fig. 12, these metrics are presented. To find the velocity and stride length, each fixed point is run for 10 seconds. (Figure 8: Poincaré return map for the yaw angle in a quasi-periodic gait. Figure 9: Forward gait patterns in constant pulse wave actuation.) Then, the velocity is calculated by dividing the distance covered by 10 seconds. The stride length is calculated by dividing the distance covered by the number of steps taken. These calculations provide averages, which are required as many of the fixed points were not period-1. The results of the travel study show that the velocities and stride lengths match in their relationship to slope, pulse area, and pulse duration. Also, the velocities and stride lengths appear to be independent of the gait patterns generated. Finally, the highest travel velocity (50 mm/s) and stride length (8 mm) are achieved by the heel strike actuation scheme at \(P_{L}=22.5\) ms and \(P_{Area}=10\) ms. ## Experimental Setup The experimental setup includes 3 pairs of Helmholtz coils, a test platform, a light, a camera, a microcontroller, a motor driver, and a PC.
Each pair of Helmholtz coils requires two identical, parallel coils separated by a distance equal to their radius. Both coils are wired such that their current runs in the same direction. The experimental setup follows the outline set by Abbott (2015) for nested circular coils, which was also used by Al Khatib et al. (2022). The setup involves three pairs of Helmholtz coils positioned orthogonally around a platform on which Big Foot walks, as pictured in Fig. 13. The outer diameters of the \(y\), \(x\), and \(z\) coils are approximately 46 cm, 40 cm, and 31 cm, respectively. The inner diameters are approximately 40 cm, 34 cm, and 25.5 cm, and the coil thicknesses are approximately 4.47 cm, 3.61 cm, and 3.21 cm. The platform is machined out of engineering foam, and an ultra-thin 10A durometer silicone rubber sheet is placed on top of the platform's surface to prevent slipping. At the top of the setup, a Razer Kiyo Pro camera is secured with an XJ-19 Selfie Ring Light around it. Reflective tape is applied to the Shaft Body of Big Foot. Testing of various incline angles is done with the use of 3D printed ramps that hook onto the bottom \(z\) coil. The approximately uniform magnetic fields created by the three pairs of coils induce torques, as noted in Eq. 16. The input control of the coils contains three variables: the attitude angles and the power of the overall created magnetic field. These variables are yaw (\(\psi_{m}\)), pitch (\(\phi_{m}\)), and power (\(P\)). The equations used to convert these inputs into actual coil voltages are given as follows: \[\begin{split}P_{x}&=P\cos\phi_{m}\cos\psi_{m}k_{x}\\ P_{y}&=P\cos\phi_{m}\sin\psi_{m}k_{y}\\ P_{z}&=P\sin\phi_{m}k_{z}\end{split} \tag{38}\] where \(\{P_{x},P_{y},P_{z}\}\) are the separate voltages applied to each orthogonal coil and \(\{k_{x},k_{y},k_{z}\}\) are correction factors that account for unit conversions and intermediate experimental parameters.
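[Illustrative sketch, not part of the original paper: the input mapping of Eq. (38) from field attitude and power to per-axis coil voltages. The default correction factors of 1 are hypothetical placeholders for the calibrated values.]

```python
import numpy as np

def coil_voltages(P, psi_m, phi_m, kx=1.0, ky=1.0, kz=1.0):
    # Eq. (38): attitude angles and power -> voltages of the x, y, z coils
    Px = P * np.cos(phi_m) * np.cos(psi_m) * kx
    Py = P * np.cos(phi_m) * np.sin(psi_m) * ky
    Pz = P * np.sin(phi_m) * kz
    return Px, Py, Pz

# Attitude used in the uphill tests: psi_m = 20 deg, phi_m = 60 deg, P = 33%.
print(coil_voltages(0.33, np.deg2rad(20), np.deg2rad(60)))
```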
### Big Foot Prototype The prototype of Big Foot is shown in Fig. 14. The Shaft Cylinder is machined from Delrin. The collars and legs were laser cut using the P548 laser cutter and engraver. The leg is 6.5 x 0.79 x 1.75 mm. The feet are 3D printed using a Photon S SLA 3D printer. The feet are a section of an 8 mm sphere, and they are glued to the legs. The shaft is a 0.794 mm non-magnetic stainless steel shaft. The magnets used are 1 mm diameter neodymium magnets. ## Experimental Results A set of walking tasks is defined to validate the performance of the Big Foot prototype. First, Big Foot is placed along a straight path with different slopes to compare the prototype's dynamics to the model. Second, a variety of two-dimensional paths are prescribed for Big Foot to demonstrate maneuvering capabilities. ### Uphill Walking Experiments To compare the response of the Big Foot prototype to the model, the prototype is placed on a series of slopes. As no sensors were available to detect the heel strikes, experiments are limited to the constant pulse wave actuation scheme. The parameters of this test were as follows: \(\{\psi_{m}=20^{\circ},\phi_{m}=60^{\circ},P_{m}=33\%,P_{L}=300\text{ ms},T_{off}=60\text{ ms}\}\). A long pulse width is used because very short pulse lengths are not possible: the cut-off frequency of the largest Helmholtz coil was found experimentally to be 5 Hz. Video footage of the experiment is shown in Extension 1. Figure 15 shows the stride length's variation with respect to the walking surface slope for both experiments and simulations. The simulation results agree with the experiments. Experimental outcomes are similar to the theoretical ones. The stride length is greater in the experiment, but this can be explained by experimental uncertainties such as manufacturing tolerances and inconsistent surface contact. (Figure 10: Maximum slopes achieved. Figure 11: Simulation velocities. Figure 12: Simulation stride lengths. Figure 13: Experimental setup of three orthogonally positioned pairs of Helmholtz coils. Figure 14: The Big Foot. Figure 15: Experimental results compared to simulations. Figure 16: Stride length with respect to input frequency.) However, the results in Fig. 15 are consistent with other runs and show the same relationship in experiments and simulations. In constant pulse wave actuation, the stride length, and consequently the forward velocity, monotonically decreases as the walking surface slope increases. The next experiment is designed to capture the effect of pulse frequency on the stride length. In the experiment with results in Fig. 16, the prototype walks across a flat surface. The input pulse duration is incremented by 10% between each run. We observe two different gait characteristics at the two extremes of the input frequencies. For high frequencies, the gait is more dynamic because of the short-duration pulses. For low frequencies, however, the gait becomes more quasi-static. This is due to the long-duration pulse, where the biped becomes almost static towards the end of each step cycle. When the contra-lateral pulse arrives, the biped transitions to the next quasi-static posture, achieving forward locomotion. For this reason, at lower frequency values, the step length remains almost unchanged. There is a clear transition from impulsive actuation to continuous actuation as the input frequency exceeds a certain value (approximately 1.4 Hz). Experimentally, we observe that the longest step lengths are achieved at the highest input frequencies. Consequently, we observe that impulsive actuation produces longer step lengths than continuous actuation. ### Miscellaneous Maneuvers So far in this paper, we have focused on locomotion along a single direction. It is possible to walk in two dimensions by adding a new input parameter through the following substitution: \[\psi_{m}\rightarrow\psi_{m}+\psi_{d}, \tag{39}\] where \(\psi_{d}\) is a direction angle. Big Foot can be steered using \(\psi_{d}\). Each time a pulse is administered, a magnetic field is generated in a given direction. This results in an applied moment vector in that direction. Hence, the locomotion direction can be altered by varying this \(\psi_{d}\) angle. In order to demonstrate this feature, four sets of maneuvers were performed. These sets can be viewed in Extensions 2-5. Figure 17 depicts the four tasks that we put to Big Foot. The first maneuver is programmed to have Big Foot follow a square path. The maneuver is simply 36 forward steps, followed by a \(90^{\circ}\) turn. The turn is accomplished by adding \(90^{\circ}\) to \(\psi_{d}\). The sequence is repeated until the square is complete. This initial maneuver shows the ability of Big Foot to make sharp turns.
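[Illustrative sketch, not part of the original paper: step-by-step heading commands \(\psi_{d}\) for the square maneuver just described and for the circular maneuver described next; step counts and angle increments are taken from the text.]

```python
def square_path():
    # 36 forward steps per side, then a sharp 90-degree turn, four times.
    psi_d, cmds = 0.0, []
    for _ in range(4):
        cmds += [psi_d] * 36
        psi_d += 90.0
    return cmds

def circular_path():
    # 80 steps, alternating +4 and +5 degree heading increments;
    # the total turn is 40*4 + 40*5 = 360 degrees, closing the loop.
    psi_d, cmds = 0.0, []
    for k in range(80):
        cmds.append(psi_d)
        psi_d += 4.0 if k % 2 == 0 else 5.0
    return cmds
```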
Except, after one loop, one accomplishes the second loop by subtracting from \(\psi_{d}\) instead of adding to it. The final maneuver is a maze. Lego(r) bricks are used in assembling the maze. This maneuver is accomplished using right turns and straight walks similar to the square maneuver. These maneuvers show the practical potential of hip actuated bipedal locomotors. The walking direction can be easily modified by inducing hip rotations. The work in this paper has shown that gait regulation can be achieved through hip action. ## Conclusion The work in this paper presents a new magnetically actuated bipedal walker: Big Foot. Two important questions regarding bipedal locomotion are answered. First, Big Foot has shown that bipedal locomotion and maneuvering can be achieved through pure hip actuation. Second, the analysis shows that it is preferable to used impulsive actuation rather than continuous actuation because it results in longer step lengths. In addition, the performance of this walker is also analyzed through simulation. After defining two distinct actuation schemes; heel strike and continuous pulse wave actuation schemes. It is found that heel strike actuation achieves better stability, more predictable gait generation, and greater locomotion velocities. However, constant pulse wave actuation achieves locomotion on steeper slopes. What is also observed is that, even though increasing the pulse duration in heel strike increases the maximum slope, it also requires more magnetic power. Interestingly more magnetic power in heel strike results in shorter stride lengths as the pulse duration is increased. The work opens the door for experimentation with new designs of bipedal walkers using hip actuation (magnetic and traditional actuation). The work also shows that magnetic actuation promises to speed up locomotion research as prototypes are no longer limited by financial resources and production time. Future research includes studying more complex legged machines and gait patterns. We will explore tasks that include stair climbing, obstacle avoidance, and closed-loop control. ## Acknowledgements We would like to thank Kenny Sangston and Necdet Yildirimer for providing machining services and space in manufacturing Big Foot. We would also like to thank The SMU Deason Innovation Gymnasium, led by JT Ringer and Seth Orsborn, for allowing us to use their laser cutter. Finally, we are acknowledging Ying-Chu Chen for her work in the development of prototypes that led to the idea of hip actuation.
2303.08770
Elastocaloric effect of the heavy-fermion system YbPtBi
YbPtBi is one of the heavy-fermion systems with largest Sommerfeld coefficient $\gamma$ and is thus classified as a `super'-heavy fermion material. In this work, we resolve the long-debated question about the hierarchy of relevant energy scales, such as crystal-electric field (CEF) levels, Kondo and magnetic ordering temperature, in YbPtBi. Through measurements of the a.c. elastocaloric effect and generic symmetry arguments, we identify an \textit{elastic level splitting} that is uniquely associated with the symmetry-allowed splitting of a quartet CEF level. This quartet, which we identify to be the first excited state at $\Delta/k_\text B\approx1.6\,\rm K$ above the doublet ground state at ambient pressure, is well below the Kondo temperature $T_\text K\approx10\,\rm K$. Thus, our analysis provides strong support for models that predict that the heavy electron mass is a result of an enhanced degeneracy of the CEF ground state, i.e., a quasi-sextet in YbPtBi. At the same time, our study shows the potential of the a.c. elastocaloric effect to control and quantify strain-induced changes of the CEF schemes, opening a different route to disentangle the CEF energy scales from other relevant energy scales in correlated quantum materials.
Elena Gati, Burkhard Schmidt, Sergey L. Bud'ko, Andrew P. Mackenzie, Paul C. Canfield
2023-03-15T17:11:39Z
http://arxiv.org/abs/2303.08770v1
# Elastocaloric effect of the heavy-fermion system YbPtBi ###### Abstract YbPtBi is one of the heavy-fermion systems with the largest Sommerfeld coefficient \(\gamma\) and is thus classified as a 'super'-heavy fermion material. In this work, we resolve the long-debated question about the hierarchy of relevant energy scales, such as crystal-electric field (CEF) levels, Kondo and magnetic ordering temperature, in YbPtBi. Through measurements of the a.c. elastocaloric effect and generic symmetry arguments, we identify an _elastic level splitting_ that is uniquely associated with the symmetry-allowed splitting of a quartet CEF level. This quartet, which we identify to be the first excited state at \(\Delta/k_{\rm B}\approx 1.6\,\)K above the doublet ground state at ambient pressure, is well below the Kondo temperature \(T_{\rm K}\approx 10\,\)K. Thus, our analysis provides strong support for models that predict that the heavy electron mass is a result of an enhanced degeneracy of the CEF ground state, i.e., a quasi-sextet in YbPtBi. At the same time, our study shows the potential of the a.c. elastocaloric effect to control and quantify strain-induced changes of the CEF schemes, opening a different route to disentangle the CEF energy scales from other relevant energy scales in correlated quantum materials. An enhanced effective electron mass \(m^{*}\) is considered a hallmark of a large class of strongly correlated metals. In such heavy-electron systems, many exotic quantum phenomena, including unconventional superconductivity, non-Fermi-liquid behavior, quantum criticality and even topological semimetals [1; 2; 3; 4; 5; 6; 7], occur. Among the materials with the largest Sommerfeld coefficients \(\gamma\propto m^{*}\) are the rare-earth based heavy fermion systems YbPtBi [8; 9], YbT\({}_{2}\)Zn\({}_{20}\) (\(T={\rm Fe,Co}\)) [10] and PrAg\({}_{2}\)In [11], also dubbed 'super' heavy electron systems [12; 13]. In these cubic systems, \(\gamma\) reaches record values as high as \(10\,\)J/(mol K\({}^{2}\)). The Yb variants have in common that they are characterized by small characteristic energy scales [10; 14] of the order of \(k_{\rm B}\cdot 1\ldots 10\,\)K, including the Kondo scale \(k_{\rm B}T_{\rm K}\) as well as excited crystal electric field (CEF) levels at \(k_{\rm B}T_{\rm CEF}\). Correspondingly, it has been suggested that the hybridization of the conduction electrons [1] with a large number of degenerate CEF states is the source of the high electronic mass [10; 15; 16]. However, given the multitude of small energy scales, a definite determination of the energy scales, and thus a quantitative description of the unusually high \(\gamma\), has proven difficult [17; 14; 18]. For YbPtBi, specifically, the following characteristic temperature scales [8; 9; 14; 17; 19; 20] have been found so far (see Fig. 1 (a), (c) and (d)). Since the Yb\({}^{3+}\) Kramers ion resides on a site with cubic symmetry, the CEF is expected to split the \(J=7/2\) multiplet into two doublets of type \(\Gamma_{6}\) and \(\Gamma_{7}\) and a \(\Gamma_{8}\) quartet [21]. The analyses of several experiments consistently find that the highest CEF excited state is a doublet (likely \(\Gamma_{6}\)) with \(T_{\rm CEF,2}\sim 60\ldots 100\,\)K.
Even though various studies [20] favor \(\Gamma_{7}\) to be the ground state and \(\Gamma_{8}\) to be the first excited state with \(T_{\rm CEF,1}\sim 1\ldots 10\,\)K, the reverse assignment with a \(\Gamma_{8}\) ground state was also found to be compatible with a number of experimental results [20]. In addition, symmetry-breaking distortions at low temperature, giving rise to additional level splittings, could not be conclusively ruled out so far [22; 23; 17; 20]. These uncertainties have not only hampered estimates of the absolute value of \(T_{\rm CEF,1}\), but also of the second important temperature scale, \(T_{\rm K}\) [14], which is believed to be in the same energy range. Finally, at very low temperatures, below \(T_{\rm N}\approx 400\,\)mK, YbPtBi orders antiferromagnetically. This order is fragile [24; 25; 26; 22], as it can be suppressed towards a quantum critical point by a small external magnetic field, and non Fermi-liquid behavior was discovered in the quantum critical region [24]. In this paper, we clarify the hierarchy of energy scales in YbPtBi, something made possible only by thermodynamic measurements of the elastocaloric effect under well-controlled, symmetry-breaking uniaxial pressure \(p\). In contrast to a magnetic field, which breaks time-reversal symmetry and thus affects all Kramers-degenerate states, the lattice strain \(\epsilon\) associated with uniaxial pressure can only lift degeneracies stabilized by crystallographic symmetries. As we will show below, it is through this _elastic level splitting_ that the application of uniaxial pressure can be used to sensitively probe the single-ion physics [27] associated with the \(\Gamma_{8}\) state. As a result, we successfully disentangle the thermodynamic features resulting from CEF excitations and the Kondo effect in YbPtBi. Overall, this analysis places the 'super'-heavy YbPtBi in the limit of \(T_{\rm K}>T_{\rm CEF,1}\) and provides strong support for the notion that the extremely high \(\gamma\) value results from the hybridization of conduction electrons with a quasi-sextet CEF ground state. The elastocaloric effect \(\Delta T/\Delta\epsilon\) describes a temperature change \(\Delta T\) that is induced by varying the strain by \(\Delta\epsilon\). Thermodynamically, it is given by \[\frac{\Delta T}{\Delta\epsilon}=-\frac{\partial S/\partial\epsilon\big{|}_{T}}{ \partial S/\partial T\big{|}_{\epsilon}}=-\frac{T}{C_{V}}\left.\frac{\partial S }{\partial\epsilon}\right|_{T}, \tag{1}\] with \(S\) being the entropy and \(C_{V}\) the heat capacity at constant volume \(V\). As recently established in Ref. [28], \(\Delta T/\Delta\epsilon\) can be determined with high precision in an a.c. version of the technique by applying an oscillation with amplitude \(\Delta\epsilon\) in piezo-driven uniaxial pressure cells [29; 30] and measuring the resulting \(\Delta T\) using a thermocouple (see Fig. 1 (b)). \(\Delta\epsilon\) is determined through a capacitive displacement sensor (not shown). A constant, finite strain \(\epsilon\) can be superimposed, so that \(\Delta T/\Delta\epsilon\) can be mapped out as a function of \(\epsilon\). In this work, we follow this novel experimental method to determine \(\Delta T/\Delta\epsilon\) for YbPtBi with uniaxial pressure \(p\) applied along the crystallographic [1 0 0] direction, resulting in a finite \(\epsilon\).
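To make the content of Eq. (1) concrete, the following minimal Python sketch (ours, not part of the original paper) evaluates \(\Delta T/\Delta\epsilon\) by finite differences for a toy two-level Schottky system with a doublet ground state and a quartet excited state. The working units (\(k_{\rm B}=1\), energies in kelvin), the degeneracies, and the example shift rate of \(\Delta(\epsilon)\) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def entropy(T, delta, g0=2, g1=4):
    """Entropy per ion (in units of k_B) of a two-level system:
    g0-fold ground state at E=0, g1-fold excited state at E=delta (K)."""
    Z = g0 + g1 * np.exp(-delta / T)            # partition function
    E_mean = g1 * delta * np.exp(-delta / T) / Z  # mean energy
    return np.log(Z) + E_mean / T               # S/k_B = ln Z + <E>/T

def elastocaloric(T, eps, delta_of_eps, d_eps=1e-5, d_T=1e-4):
    """Eq. (1): dT/deps = -T (dS/deps)_T / C_V, via central differences."""
    dS_de = (entropy(T, delta_of_eps(eps + d_eps)) -
             entropy(T, delta_of_eps(eps - d_eps))) / (2 * d_eps)
    C = T * (entropy(T + d_T, delta_of_eps(eps)) -
             entropy(T - d_T, delta_of_eps(eps))) / (2 * d_T)  # C_V = T dS/dT
    return -T * dS_de / C

# Scenario (i): a pure level shift, Delta(eps) = 1.6 K * (1 + 50*eps),
# where the slope 50 is an arbitrary illustrative number. The result is
# finite already at eps = 0, i.e., a large intercept of dT/deps.
print(elastocaloric(T=2.0, eps=0.0, delta_of_eps=lambda e: 1.6 * (1 + 50 * e)))
```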
We implemented the following important modifications to the technique: first, we use a uniaxial pressure cell that also incorporates a force sensor [29], since the applied force/pressure \(p\) is a better control parameter than the conjugate strain \(\epsilon\) in these types of devices. Second, we mount the sample free-standing in a sample carrier which is designed in such a way that only compression (denoted by a negative sign of \(p\)) can be applied when the mechanical contact in the carrier is closed (see Fig. 1 (b), Ref. [31]). This allows us to determine precisely the neutral point \(p=0\) (and \(\epsilon=0\)) at any given temperature, which is important for the symmetry arguments presented below.

Figure 1: (a) Schematic representation of proposed characteristic energy scales of YbPtBi. \(T_{\rm N}\) indicates the position of magnetic ordering, \(T_{\rm K}\) the possible range of the Kondo crossover, and \(T_{\rm CEF,1}\) and \(T_{\rm CEF,2}\) the possible temperature range of the first and second excited crystal-electric field levels. (b) Schematic of the setup to measure the elastocaloric effect [28]. A sample with attached thermocouple is placed across a gap on the sample carrier, which is screwed into the pressure cell (not shown). Different compressive uniaxial pressures can be exerted when the mechanical contact is closed by a uniaxial pressure cell. The direction of applied pressure is indicated by the big black arrows. The strain-induced temperature changes \(\Delta T\) are recorded with the thermocouple. (c-e) Ambient-pressure properties of YbPtBi: total specific heat, \(C_{\rm p}\), and magnetic specific heat, \(C_{\rm m}\) [24], (c); thermal expansion, \(\alpha\) [24], (d); and elastocaloric temperature amplitude, \(\Delta T\) (e), from this work.

Figure 2: Elastocaloric temperature amplitude \(\Delta T\) induced by a small oscillating uniaxial strain with amplitude \(\Delta\epsilon\approx\text{const.}<0\) at constant offset uniaxial pressures, \(p\), as a function of temperature, \(T\), (a) and as a function of \(p\) at constant \(T\) (b). Note that the \(p\approx 0\) data is also shown in Fig. 1 on a logarithmic \(T\) scale. The data spacing in (a) is \(\sim 0.1\,\)GPa and in (b) 1 K (for \(T<10\,\)K) and 2 K (for \(T>10\,\)K). The arrows in panel (a) indicate the position of the characteristic temperatures \(T_{\rm ext}\) and \(T^{*}\). The inset to (a) shows the evolution of \(T_{\rm ext}\) and \(T^{*}\) with \(p\) (for criteria, see SI).

To illustrate the fingerprint of the relevant energy scales in YbPtBi at ambient pressure in our data, we first compare on the right of Fig. 1 the elastocaloric temperature amplitude \(\Delta T(T)\) for \(T\gtrsim 1.2\,\)K (panel (e)) at ambient pressure \(p_{a}\approx 0\) and \(\Delta\epsilon\approx\) const. with literature data [24] on the molar specific heat \(C_{p}(T)\propto T\left.\partial S/\partial T\right|_{p_{a}}\) (panel (c)) and the thermal expansion \(\alpha(T)\propto\left.\partial S/\partial p\right|_{T}\propto\left.\partial S /\partial\epsilon\right|_{T}\) (panel (d)) (see SI for a discussion of the equivalence of different thermodynamic quantities at ambient pressure). Upon cooling, \(\Delta T\) shows a clear feature around \(T^{*}\approx 7.6\,\)K (see SI for the criterion) with a concomitant sign change of \(\Delta T\). \(\alpha(T)\) also exhibits a similar feature as \(\Delta T(T)\), including a sign change, at a slightly lower temperature.
Simultaneously, the magnetic contribution \(C_{\rm m}\) to the specific heat \(C_{p}\), obtained after subtracting the specific heat of the non-moment-bearing Lu analogue [24], shows a clear peak at \(T^{*}\). Previously, these prominent features in \(\alpha\) and \(C_{\rm m}\) were interpreted either to be solely due to CEF effects or to combined CEF/Kondo effects. Here, we will provide an alternative interpretation, and we will show that the first excited CEF level is in fact located much lower in energy. Upon further cooling down to \(\approx 1.2\,\)K, the lowest temperature of our experiment, \(\Delta T(T)\) remains featureless and small, similar to \(\alpha(T)\). At even lower temperatures, \(C_{p}(T)\) and \(\alpha(T)\) show clear features associated with a magnetic phase transition at \(T_{\rm N}\approx 0.4\,\)K. Now we turn to the behavior of \(\Delta T\) under finite, symmetry-breaking uniaxial pressure \(p\) up to \(\approx-1.8\,\)GPa compression, shown in Fig. 2. Clearly, the data sets as a function of \(T\) (Fig. 2 (a)) and \(p\) (Fig. 2 (b)) reveal significant changes of \(|\Delta T|\). Key observations can be summarized as follows: first, as \(|p|\) is increased, a low-temperature extremum emerges at \(T_{\rm ext}\approx 2.7\,\)K for \(p\approx-0.3\,\)GPa, which increases to \(4.0\,\)K by \(p\approx-1.8\,\)GPa. Second, the feature at \(T^{*}\approx 7.6\,\)K remains visible for all pressures and its position is only barely affected by \(p\) (see inset of Fig. 2 (a)). Third, \(\Delta T\) increases strongly, from almost zero, in a monotonic and, to first approximation, near-linear manner with \(|p|\) for any given \(T\). Only for larger \(|p|\gtrsim 0.5\,\)GPa are some deviations from this \(p\)-linear behavior observed, in particular at the lowest \(T\). In general, these elastocaloric data contain contributions from all relevant energy scales, in particular the CEF and Kondo energy scales. In the following, we will use generic qualitative arguments and explicit modelling of the elastocaloric effect of single-ion CEF states to disentangle these contributions. In particular, we will demonstrate that the strong change of the temperature amplitude \(\Delta T\) with uniaxial pressure \(p\) results from the response of the first excited quartet CEF level to symmetry breaking, whereas the behavior of \(\Delta T\) at \(p=0\) (including the anomaly at \(T^{*}\)) most likely originates from the formation of the coherent Kondo state. To facilitate the discussion of the elastocaloric effect of CEF levels, we will from now on focus on the notion of strain \(\epsilon\), since Young's modulus \(Y_{100}:=\partial p/\partial\epsilon|_{T}\approx\) const. (see SI), and assume temperatures \(T=\mathcal{O}(\Delta_{\rm CEF})/k_{\rm B}\), where \(\Delta_{\rm CEF}\) is the energy difference between the ground state and the first excited CEF energy level. The two dominant effects of a finite \(\epsilon\) are expected to be (i) a shift of the CEF levels and (ii) a possible lifting of degenerate CEF levels due to the lowering of the crystal symmetry. The latter scenario can only occur when the degenerate state is not the ground state, since the degeneracy would otherwise be lifted at zero strain through a spontaneous Jahn-Teller distortion. The elastocaloric effect \(\Delta T/\Delta\epsilon\) is expected to be significantly distinct in these two cases. The two scenarios are visualized separately in Fig. 3 (a).
In case (i) (left sketch of the figure), the CEF energy level is uniquely associated with \(\epsilon\), which is necessary for the applicability of Grüneisen scaling \(\partial p/\partial T|_{V}\propto C_{V}/V\) (see SI). We obtain \(\partial S/\partial\epsilon|_{T}\propto T\partial S/\partial T|_{\epsilon}\), and therefore \(\Delta T/\Delta\epsilon\approx\) const. as a function of strain, see Eq. (1). Thus, in this case, we expect a large intercept of \(\Delta T/\Delta\epsilon\) at \(\epsilon=0\) and no significant change with \(\epsilon\). In case (ii), the strain-induced symmetry lowering leads to a splitting of the first excited CEF energy level for both compressive and tensile strains. Hence, at \(\epsilon=0\) we must have \(\partial S/\partial\epsilon|_{T}=0\), correspondingly \(\Delta T/\Delta\epsilon=0\), and Grüneisen scaling is no longer applicable. We also must have \(\partial S/\partial\epsilon|_{T}\propto\epsilon\). Therefore, in scenario (ii), we expect the magnitude of \(\Delta T/\Delta\epsilon\) to increase rapidly from a small value at \(\epsilon=0\) with increasing tension or compression (see right sketch of Fig. 3 (a)). Our data (see Figs. 2 and 3 (c)) are characterized by a large change of \(\Delta T/\Delta\epsilon\) with \(\epsilon\) and only a small finite intercept at \(\epsilon=0\). We can therefore conclude that the elastocaloric effect under finite strains is dominated by a strain-induced splitting of a first excited CEF level. This CEF level has to be the \(\Gamma_{8}\) quartet, since the Yb\({}^{3+}\) Kramers doublets are protected by time-reversal symmetry and cannot be split by the application of strain. We note that these considerations also imply that measurements of the thermal expansion \(\alpha\) at ambient pressure will only display anomalies of excited CEF levels when these levels are doublets that only shift with strain. In contrast, the excited CEF \(\Gamma_{8}\) level will leave almost no fingerprint in \(\alpha(p=0)\propto\left.\partial S/\partial\epsilon\right|_{T}(\epsilon=0)\approx 0\) when the level splitting is the dominant effect. Therefore, a finite \(\alpha\) seems unlikely to be related to the physics of the \(\Gamma_{8}\) level, contrary to what has been discussed in previous studies on YbPtBi [17] under the assumption of the validity of Grüneisen scaling (see SI). To extract quantitative information on the CEF states from the elastocaloric data, in particular an estimate of the energy of the \(\Gamma_{8}\) level in YbPtBi, we performed model calculations of \(\Delta T/\Delta\epsilon\) using a Schottky-type specific heat \(C_{p}\) of a two-level system with a four-fold degenerate first excited state at energy \(\Delta_{\rm CEF}\). In addition, since our measured signal is \(\Delta T\propto 1/C_{p}\), we added an electronic contribution of the form \(\gamma T\) to the total model specific heat \(C_{p}\). To reduce the number of parameters, we omitted phononic contributions because they are small below \(\sim 10\,\)K (see Fig. 1 (c)). The full model specific heat at \(\epsilon=0\) is shown in the left panel of Fig. 3 (b). Using a value of \(\Delta_{\rm CEF}/k_{\rm B}=1.6\,\)K, this model reproduces the broad hump in the literature \(C_{P}(T)\) data [8; 24] around \(T\approx 800\,\)mK. To parameterize the response to strain, we use two approximations, which we call model 1 and model 2. Model 1 comprises a linear splitting by strain via \(\Delta_{\rm{CEF}}(\epsilon)=\Delta_{\rm{CEF}}(0)\,(1\pm\beta_{1}|\epsilon|)\).
The choice for this model is motivated by considering the effect of a tetragonal distortion on the CEF Hamiltonian [21], \(\mathcal{H}_{\rm{CEF}}^{\rm{cubic}}\to\mathcal{H}_{\rm{CEF}}^{\rm{cubic}}+g_{zz}\epsilon O_{2}^{0}\), treated perturbatively, with an elastic constant \(g_{zz}\) that characterizes the distortion and \(O_{2}^{0}\) the Stevens operator that emerges in tetragonal symmetry (see SI). The energy of the \(\Gamma_{8}\) state then changes as \(E_{\Gamma_{8}}\to E_{\Gamma_{8}}\pm 6g_{zz}\epsilon\) for small \(\epsilon\). Naturally, deviations from the \(\epsilon\)-linear behavior of the CEF excitation energy will arise for larger \(|\epsilon|\). In an attempt to better describe the magnitude of our experimental data, we include a second-order term of the expansion in \(\epsilon\) in model 2. The corresponding energy splitting then reads \(\Delta_{\rm{CEF}}(\epsilon)=\Delta_{\rm{CEF}}(0)\,\big{(}1\pm\beta_{1}|\epsilon|\pm\beta_{2}|\epsilon|^{2}\big{)}\). Figure 3 (c) shows a comparison of the experimental data for \(\Delta T/\Delta\epsilon\) (left column) to the calculations for model 1 (middle column) and model 2 (right column). For both models we use \(\Delta_{\rm{CEF}}(0)/k_{\rm{B}}=1.6\,\)K and \(\beta_{1}=70\). Since the lowest temperature of our experiments is \(T_{\rm{min}}\approx 1.2\,\)K, we restrict the calculations to this temperature range. Therefore, the model results reflect the effects associated with the higher-energy branch (blue dotted lines in the right panel of Fig. 3 (b)). Clearly, the results for model 1 already capture many of the essential observations of the experiment on a qualitative level. It reproduces the low-temperature minimum of \(\Delta T/\Delta\epsilon\) as a function of \(T\) around \(T\approx 3\,\)K under finite \(\epsilon\), as well as the approximately linear change of \(\Delta T/\Delta\epsilon\) with \(\epsilon\). However, model 1 is not sufficient to account for the data on a quantitative level as well. Taking into account a second-order term with model 2 and repeating our calculations with \(\beta_{2}=1750\), we obtain the results shown in the right column of Fig. 3 (c). Now much of the experimental data can be very well reproduced over the full strain range. It is important to note that the good agreement between experiment and model calculations is only achieved when \(\Gamma_{8}\) is the first excited state (scenario (ii)). In the SI, we also show model calculations for the reverse scenario (we call it scenario (iii)), assuming \(\Gamma_{8}\) is the ground state and the first excited state is a doublet, even though this scenario is unlikely due to the inherent instability of a symmetry-protected degeneracy of the ground state towards Jahn-Teller distortions.

Figure 3: (a) Modelling of the impact of a strain \(\epsilon\) on the CEF energy level difference \(\Delta_{\rm{CEF}}\) and the resulting behavior of \(\Delta T/\Delta\epsilon\) at constant temperature \(T=\mathcal{O}(\Delta_{\rm{CEF}})/k_{\rm{B}}\). Left: scenario (i), induced shift of the first excited level. Right: scenario (ii), induced splitting of the first excited level. (b) Left: model specific heat \(C_{V}(T)\) at \(\epsilon=0\). Right: splitting of \(\Delta_{\rm{CEF}}\) as a function of \(\epsilon\) used in model 1 and model 2, both within scenario (ii) (see text for details). (c) Comparison of experimental data of the elastocaloric effect and the results of the two model calculations. For the experimental data, \(Y_{100}\approx 120\,\)GPa was used [32] to express \(p\) in terms of \(\epsilon\) (see text and SI). The data spacing for both experimental data and model calculations in the top panel is \(\sim 0.08\%\) and in the bottom panel 1 K.
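For illustration only, models 1 and 2 can be implemented along the following lines (our sketch, not the authors' code). The quartet is split into two doublets at \(\Delta_{\pm}(\epsilon)\), an electronic term \(\gamma T\) is added to the specific heat, and \(\Delta T/\Delta\epsilon\) follows from Eq. (1). The \(\gamma\) value in \(k_{\rm B}\) units per Yb and the clipping of the lower branch at zero energy are our assumptions.

```python
import numpy as np

D0, BETA1, BETA2 = 1.6, 70.0, 1750.0  # Delta_CEF(0)/k_B in K, splitting coefficients
GAMMA = 0.9                           # electronic gamma in k_B per Yb per K (rough assumption)

def levels(eps, model=2):
    """Strain-split quartet, scenario (ii): doublets at D_minus and D_plus."""
    b2 = BETA2 if model == 2 else 0.0
    d_plus = D0 * (1 + BETA1 * abs(eps) + b2 * eps**2)
    d_minus = D0 * max(1 - BETA1 * abs(eps) - b2 * eps**2, 0.0)  # clipped at zero
    return np.array([0.0, d_minus, d_plus]), np.array([2, 2, 2])

def S_cef(T, eps, model=2):
    """CEF entropy per Yb in units of k_B."""
    E, g = levels(eps, model)
    w = g * np.exp(-E / T)
    Z = w.sum()
    return np.log(Z) + (w * E).sum() / (Z * T)

def dT_deps(T, eps, model=2, de=1e-6, dT=1e-4):
    """Eq. (1) with the electronic gamma*T added to the total specific heat."""
    dS_de = (S_cef(T, eps + de, model) - S_cef(T, eps - de, model)) / (2 * de)
    C_cef = T * (S_cef(T + dT, eps, model) - S_cef(T - dT, eps, model)) / (2 * dT)
    return -T * dS_de / (C_cef + GAMMA * T)

# By symmetry in |eps|, dT_deps vanishes at eps = 0 and grows under compression,
# the qualitative signature of scenario (ii) discussed in the text:
for T in (1.5, 3.0, 6.0):
    print(T, dT_deps(T, -0.005))
```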
We find that scenarios (ii) and (iii) can be clearly distinguished based on our experimental data by considering the value of \(T_{\rm ext}\) for \(\epsilon\to 0\). Specifically, we find that the finite value of \(T_{\rm ext}\) for \(\epsilon\to 0\), shown in the inset of Fig. 2 (a), is only compatible with scenario (ii). Overall, our analysis establishes the \(\Gamma_{8}\) quartet to be the first excited state, with an energy difference to the doublet ground state of \(\Delta_{\rm CEF}\approx k_{\rm B}\cdot 1.6\,{\rm K}\). However, the physics of single-ion CEF levels does not capture the feature at \(T^{\star}\sim 7.5\,{\rm K}\) in \(\Delta T/\Delta\epsilon\), which persists for all uniaxial pressures (see Fig. 2 (a) and Fig. 3 (c)). The model would also predict that \(\Delta T/\Delta\epsilon\to 0\) for \(\epsilon\to 0\), as required by the restored cubic crystal symmetry. Instead, we observe a small but finite \(\Delta T/\Delta\epsilon\), which we attribute to contributions from energy scales other than the CEF \(\Gamma_{8}\) one. Since the remaining doublet CEF level excitation is located at much higher temperatures, around \(60\ldots 100\,{\rm K}\), it can most likely be excluded as the source of the anomaly at \(T^{\star}\). The obvious temperature scale which is known to be relevant in YbPtBi is set by the Kondo temperature \(T_{\rm K}\). Remarkably, the thermal expansion data [24] shown in Fig. 1 (d) reveal the onset of a negative \(\alpha\) below \(T\approx 20\,{\rm K}\) that persists down to \(T\approx T^{\star}\). In general, a negative thermal expansion in cubic systems is exceptional [33]. To rationalize this, we note that the volume of Yb\({}^{3+}\) is smaller than that of Yb\({}^{2+}\). Therefore, even tiny hybridization-induced changes [35; 34] of the strictly trivalent state of Yb can be the origin of a negative \(\alpha\). Thus, all experimental data are consistent with the expectations for the formation of the Kondo lattice with \(T_{K}\approx T^{\star}\approx 10\,{\rm K}\). The hierarchy of temperature scales in YbPtBi can now be clearly assigned to \(T_{\rm CEF,2}>T_{\rm K}>T_{\rm CEF,1}\). Therefore, this opens the possibility that the conduction electrons do not only hybridize with the Yb\({}^{3+}\) CEF \(\Gamma_{7}\) doublet ground state, but also with the first excited \(\Gamma_{8}\) quartet state. Effectively, hybridization then takes place with an Yb quasi-sextet (\(N=6\)). In fact, the analysis [15] of the generalized Kadowaki-Woods ratio \(A/\gamma^{2}\), with \(A\) being the Fermi-liquid coefficient of the resistivity \(\rho(T)\), shows that \(A\) for YbPtBi [10; 8] falls between the values expected for \(N=6\) and \(N=8\). This is nonetheless remarkable, since the hybridization strength can also depend on the symmetry of the underlying CEF level [36; 37; 38], an aspect which has so far been rarely considered. Our study shows that YbPtBi, with two CEF levels of different symmetry below \(T_{\rm K}\), might be an interesting reference system to quantify the relevance of a symmetry-dependent hybridization strength. In summary, through measurements and analyses of the elastocaloric effect, we have firmly established the hierarchy of energy scales in the 'super'-heavy fermion material YbPtBi.
We find that the Kondo energy \(k_{\rm B}T_{\rm K}\approx k_{\rm B}\cdot 10\,{\rm K}\) is higher than the energy difference \(\Delta_{\rm CEF}\) between the ground state and the first excited quartet CEF level, with \(\Delta_{\rm CEF}\approx k_{\rm B}\cdot 1.5\,{\rm K}\), putting both the \(\Gamma_{7}\) ground state doublet and the \(\Gamma_{8}\) quartet below \(k_{\rm B}T_{\rm K}\). This allows for the possibility that conduction electrons hybridize with a quasi-sextet (\(N=6\)) Yb\({}^{3+}\) ground state, providing strong support for theoretical models that assign the anomalously large electron mass to an enhanced degeneracy of the CEF levels. At a more general level, our work demonstrates that measurements of the elastocaloric effect under finite pressures [39; 40; 41; 42] enable us to control and quantify strain-induced changes of crystal-electric field schemes and to disentangle relevant low-energy scales in correlated electron systems in a novel way. This approach will also be particularly relevant for the field of quantum magnets, in which the unambiguous determination of single-ion CEF states is essential for a microscopic description of their unusual magnetic properties. _Methods_ - Single crystals of YbPtBi were grown from a Bi-rich ternary melt following the procedure described in Refs. [43; 9; 24] and in the SI. The samples were polished for measurements under finite uniaxial pressures [29] into a bar with dimensions of \(100\,\mu{\rm m}\times 140\,\mu{\rm m}\times 1000\,\mu{\rm m}\), with the long axis being the strain axis. For the a.c. elastocaloric measurements, the d.c. voltages on the piezoelectric actuators were modulated by a small a.c. voltage on the tension stack. For the measurements of the induced temperature change \(\Delta T\), a chromel-AuFe\({}_{0.07\%}\) thermocouple [44] was fixed to the sample with a tiny amount of Stycast 1266. The thermocouple was anchored on the cell body. The voltage on the thermocouple was amplified by a low-temperature transformer mounted on the low-temperature stage and subsequently read out by a lock-in amplifier. Further descriptions of the uniaxial pressure cell and of the modelling of the elastocaloric effect are included both in the main text (see Fig. 1) and in the SI. _Acknowledgments_ - We acknowledge useful discussions with P. Thalmeier and thank E. Mun and B. Kothanazhi for providing the ambient-pressure thermodynamic data. We also acknowledge the Gordon and Betty Moore foundation for funding the International Workshop "Experimental Advances in the Use of Pressure and Strain to Probe and Control Quantum Matter", which initiated the idea for this project. PCC acknowledges G. Wells for not letting YbPtBi be called a "morbidly obese Fermion". Financial support by the Max Planck Society is gratefully acknowledged. In addition, we gratefully acknowledge funding through the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through TRR 288--422213477 and the SFB 1143 (project-id 247310070). Research in Dresden benefits from the environment provided by the DFG Cluster of Excellence ct.qmat (EXC 2147, project ID 390858940). Work at the Ames National Laboratory was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358.
2304.03682
BenCoref: A Multi-Domain Dataset of Nominal Phrases and Pronominal Reference Annotations
Coreference Resolution is a well-studied problem in NLP. While widely studied for English and other resource-rich languages, research on coreference resolution in Bengali largely remains unexplored due to the absence of relevant datasets. Bengali, being a low-resource language, exhibits greater morphological richness compared to English. In this article, we introduce a new dataset, BenCoref, comprising coreference annotations for Bengali texts gathered from four distinct domains. This relatively small dataset contains 5200 mention annotations forming 502 mention clusters within 48,569 tokens. We describe the process of creating this dataset and report the performance of multiple models trained using BenCoref. We expect that our work provides some valuable insights into the variations in coreference phenomena across several domains in Bengali and encourages the development of additional resources for Bengali. Furthermore, we found poor cross-lingual performance in the zero-shot setting from English, highlighting the need for more language-specific resources for this task.
Shadman Rohan, Mojammel Hossain, Mohammad Mamun Or Rashid, Nabeel Mohammed
2023-04-07T15:08:46Z
http://arxiv.org/abs/2304.03682v3
# BenCoref: A Multi-Domain Dataset of Nominal Phrases and Pronominal Reference Annotations ###### Abstract Coreference Resolution is a well-studied problem in NLP. While widely studied for English and other resource-rich languages, research on coreference resolution in Bengali largely remains unexplored due to the absence of relevant datasets. Bengali, being a low-resource language, exhibits greater morphological richness compared to English. In this article, we introduce a new dataset, BenCoref, comprising coreference annotations for Bengali texts gathered from four distinct domains. This relatively small dataset contains 5200 mention annotations forming 502 mention clusters within 48,569 tokens. We describe the process of creating this dataset and report the performance of multiple models trained using BenCoref. We expect that our work provides some valuable insights into the variations in coreference phenomena across several domains in Bengali and encourages the development of additional resources for Bengali. Furthermore, we found poor cross-lingual performance in the zero-shot setting from English, highlighting the need for more language-specific resources for this task. ## 1 Introduction Coreference resolution is the task of identifying all references to the same entity in a document. This task originally started as a sub-task of information extraction. The Message Understanding Conferences (Grishman and Sundheim, 1996) first introduced three tasks, collectively referred to as SemEval, designed to measure the deeper understanding of any information extraction (IE) system. One of the three tasks proposed in the event was coreferential noun phrase identification. The Automatic Content Extraction (ACE) Program (Doddington et al., 2004) was the first major initiative that created a large dataset with entity, event, and relation annotations. This project revealed some major complexities behind creating such a dataset. Some of the significant challenges reported by the annotators include the coreference of generic entities, the use of metonymy, the characterization of Geo-Political Entities, distinguishing certain complex relations, and recognizing implicit vs. explicit relations. Since then, coreference resolution, anaphoric & cataphoric relation identification, and event reference detection have been studied widely. As a result, large datasets like ACE (Doddington et al., 2004), OntoNotes (Pradhan et al., 2012), WikiCoref (Ghaddar and Langlais, 2016), and LitBank (Bamman et al., 2020) were made public. Some datasets, like ACE (Doddington et al., 2004) and OntoNotes, expanded this task beyond English to include more languages, like Arabic and Chinese. The coreference resolution task has shown potential in improving many downstream NLP tasks like machine translation (Miculicich Werlen and Popescu-Belis, 2017; Ohtani et al., 2019), literary analysis (Bamman et al., 2014), question answering (Morton, 1999), and text summarization (Steinberger et al., 2007). However, Bengali, despite being a popular language, has seen very little work in this direction due to the lack of public datasets.

Figure 1: BenCoref annotations with color-coded coreference chains.

Figure 1 shows a sample from our dataset, with each color representing a unique entity. The main contributions of this work are: * We introduce a new Bengali coreference annotated dataset, consisting of 48,569 tokens collected from four diverse domains.
Our dataset creation process is shared along with the annotators' guidelines, which we believe are the first of their kind for Bengali coreference annotation. * We characterize the behaviour and distribution of nominal and pronominal coreference mentions across the four domains with the necessary statistics. Furthermore, we report the performance of an end-to-end neural coreference resolution system that was built solely using our data. * We empirically demonstrate the necessity for more language-specific datasets, particularly for low-resource languages, by comparing our results with zero-shot cross-lingual learning from English. ### Related Datasets To the best of our knowledge, no coreference dataset in Bengali exists. Most of the work related to Bengali (Sikdar et al., 2013; Senapati and Garain, 2013; Sikdar et al., 2015) uses data from the ICON2011 shared task, which was never publicly shared. Most of the major coreference datasets are in English. OntoNotes (Pradhan et al., 2012) is a well-annotated and large dataset with over 1.6M words. This dataset does not contain any singleton mentions. Later, LitBank (Bamman et al., 2020) was published, which is almost 10 times larger than OntoNotes (12.3M words). ## 2 Challenges in Bengali One of the main challenges we faced was the absence of preexisting coreference annotation guidelines tailored for the Bengali language. To overcome this obstacle, we adapted the OntoNotes coreference annotation guideline to suit our objectives. This highlighted several distinctive linguistic characteristics of Bengali, such as zero anaphora, non-anaphoric pronouns, and case-marking, that need to be carefully considered when performing coreference annotation in Bengali. Each of these is discussed with more details and examples in Figure 9 in the Appendix. Moreover, we discovered that existing annotation software is ill-equipped to manage Bengali text, occasionally leading to inaccurate rendering and unstable character display. This underscores the importance of advancing normalization techniques and the standardization of Bengali digital representation. ## 3 Data Domain Description The Bengali language can be broadly categorized into two primary literary dialects, namely "Shadhubhasha" and "Choltibhasha." "Shadhubhasha" was commonly used by Bangla writers and individuals in the 19th and early 20th centuries, while "Choltibhasha" is currently the more prevalent and colloquial dialect. This dataset contains both domains of Bengali text, with story and novel texts sourced from copyright-free books of the 19th and 20th centuries, and biography and descriptive texts obtained from modern sources, primarily in "Choltibhasha." A brief description of each domain is given below: ### Biography A biography presents a comprehensive account of an individual's life, character, accomplishments, and works, spanning from birth to death or the present time. Although the number of references per document in biographical texts is comparable to other genres, they primarily focus on a single subject throughout the entire narrative. Additionally, the dialect employed in biographies in BenCoref is typically "Choltibhasha." ### Descriptive By descriptive text we refer to Wikipedia-like articles. They cover a broad range of subjects that span various fields, such as technology, professions, travel, economics, and numerous related subtopics. These comprehensive texts try to accurately portray and convey holistic information about real-world objects or experiences.
### Story BenCoref is primarily composed of short stories, each with a word count of 1000 words or less, which was an arbitrary decision. These stories typically feature 3-4 characters on average. The language used in the stories varies, with some being exclusively in "Shadhubhasha," while others use a mix of "Shadhubhasha" and "Choltibhasha." ### Novel The Bengali novels in our dataset typically consist of more than 1200 words and feature an average of over 5 characters. These novels primarily employ "Shadhubhasha". The next section discusses the coreference behaviour across each domain in more detail. ## 4 Domain Specific Coreference Behaviour Characterization In this section, some statistics are presented to better understand the coreference phenomenon across each domain. Each coreference cluster may refer to a different type of entity, such as an object, a person, a location, or an event. An arbitrary design choice was made to not explicitly mark the type of entity. We start by analyzing the mean and standard deviation of the distance between mentions across the domains. Table 1 shows that biographies and novels exhibit a low standard deviation but have noticeably different mean distances between mentions. On the other hand, stories and descriptive texts fall in the middle, exhibiting a similar coreference distribution. For mentions that span more than one token, only the first token was used for the calculation. The majority of texts in BenCoref belong to the story domain, while the biography domain has the smallest contribution. The distribution of mentions, clusters, and tokens across the categories in BenCoref is presented in Figure 4.

Figure 4: Distribution of Clusters, Mentions, and Tokens across the categories.

Figure 2 depicts the distribution of cluster sizes across each domain. The cluster size refers to the total number of mentions in each coreference chain. It is worth noting that singletons were not annotated in BenCoref. The story domain has the highest number of coreference chains with only two mentions. Since the story domain contributes the most data to the dataset, this may be a contributing factor to its high frequency at each cluster size. Besides story, the descriptive domain also seems to have more of the larger coreference chains. Figure 3 compares the spread of coreference chains in each domain, where the spread refers to the token-level distance between the beginning and end of a coreference chain. A general trend can be observed that as the size of the coreference chain increases, its corresponding frequency decreases in each domain. \begin{table} \begin{tabular}{c c c} \hline **Categories** & **Mean** & **Std. Dev** \\ \hline Novel & 29.17 & 3.70 \\ Story & 24.10 & 8.46 \\ Biography & 15.67 & 3.81 \\ Descriptive & 22.35 & 5.42 \\ \hline \end{tabular} \end{table} Table 1: Mean and Std. Deviation of the distance between mentions in each domain. Figure 3: Spread in BenCoref across each domain. The spread is measured by the token-level distance between the first and last mention of an entity. Figure 2: Cluster size comparison between the Story, Novel, Biography and wiki-like Descriptive domains. An additional "index.csv" file is included within the dataset, which serves as an index to all the included documents, organized by title and author. A partial view of this file is presented in Appendix Figure 6.
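As an illustration of how the statistics in Table 1 and the chain spreads in Figure 3 can be computed, consider the following sketch (ours, not part of the released dataset; the representation of a chain as a list of mention-start token indices is a hypothetical format, not necessarily BenCoref's actual file layout):

```python
from statistics import mean, stdev

def mention_stats(clusters):
    """clusters: one list of mention-start token indices per coreference chain
    (singletons excluded, as in BenCoref). Returns the mean/std of the
    token-level distance between consecutive mentions (cf. Table 1) and the
    per-chain spreads, i.e., last minus first mention position (cf. Fig. 3)."""
    gaps, spreads = [], []
    for chain in clusters:
        starts = sorted(chain)  # first token of each mention only
        gaps += [b - a for a, b in zip(starts, starts[1:])]
        spreads.append(starts[-1] - starts[0])
    return mean(gaps), stdev(gaps), spreads

# toy example: two chains in one document
m, s, spr = mention_stats([[3, 25, 60], [10, 42]])
print(f"mean gap {m:.1f}, std {s:.1f}, spreads {spr}")
```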
## 5 Methodology We used BnWikiSource2 and Banglapedia (Islam et al., 2003) as sources of copyright-free Bengali text for our dataset. Banglapedia was used for biographies and wiki-like descriptive texts. The dataset creation process is discussed in detail in the following paragraphs. Footnote 2: https://bn.wikipedia.org ### Annotation Phase The WebAnno annotator (Eckart de Castilho et al., 2016) was the chosen tool for annotation. To accommodate WebAnno's limited capacity to work with large texts, the articles were partitioned according to Table 2. Each partition ends in a complete sentence, and any incomplete portion of a sentence was moved to the next fragment. The partition size was chosen arbitrarily to reduce the number of data fragments. \begin{table} \begin{tabular}{c c} \hline **Tokens** & **Partitions** \\ \hline \textless{}699 & 1 \\ \textless{}1000 & 2 \\ \textgreater{}1000 & 3 \\ \hline \end{tabular} \end{table} Table 2: Documents with more than 699 and fewer than 1000 tokens were divided into 2 parts, and those with more than 1000 tokens were divided into 3. In Appendix Figure 5, a screenshot of the WebAnno interface used during this phase is displayed. A post-annotation sample from the Biography domain is provided in Figure 7 in the Appendix. Since there is no existing guideline for coreference annotation, the annotators were initially instructed to annotate the noun phrases and their coreferences, which were predominantly pronouns. The primary noun phrase references were tagged as "entity" and their corresponding coreferences were tagged as "ref". While determining what forms an entity is an important linguistic problem, it is not the primary challenge we are trying to address in our work. Annotators were free to mark any token or span that they considered an entity. After the annotation phase was completed, the data was exported and the character-level annotations were converted to token-level annotations. For every exceptional case encountered, a new rule was established and enforced during further annotation of the dataset. The rules are further discussed in the next section. ### Annotation Strategy/Guideline This coreference annotation guide (refer to A in the Appendix) was prepared concurrently with the annotation phase to ensure consistency throughout the annotation process. We mirrored the overall structure of the OntoNotes annotation guidelines, tailoring them to our specific use case. Initially, we did not impose any specific restrictions on the definition of an entity during the annotation process. The annotators were instructed to annotate any span they deemed an entity. However, this approach resulted in an annotator bias, with a strong focus on nominal and pronominal mentions. Subsequently, we made the decision to prioritize and concentrate solely on these types of mentions. Furthermore, as part of our design decision, we chose not to tag singleton mentions. Consequently, any singletons were removed during the post-annotation processing phase. ### Annotation Criteria The general rule used for annotation is to annotate mentions in any form, including nested mentions or those referring to multiple entities. The characterization of mention and coreference link types was conducted after annotating the entire dataset. Annotating coreference link types was kept optional due to the significant training required for the task. This strategy was followed to accelerate the annotation process. The rules with corresponding examples are illustrated in a more detailed manner in Figure 10 in the Appendix. Furthermore, the coreference link types have been categorized into two groups, namely identical and appositive, and they are discussed in detail in Figures 11 and 12 in the Appendix. However, the task of annotating coreference link types is currently pending and will be addressed in future work. While this guideline is incomplete and limited in scope, it can play an important role in encouraging the next generation of coreference datasets in Bengali. The OntoNotes coreference guideline (2007) is currently in its 7th edition, which is a strong indication that a first attempt at making such a guideline will be imperfect and will require further revisions. It may take several iterations before we can have a robust guideline for coreference annotation in Bengali.
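The character-to-token conversion step mentioned in the Annotation Phase can be sketched as follows (our illustration; whitespace tokenization is a simplifying assumption, not necessarily the tokenizer used in the actual pipeline):

```python
def char_to_token_spans(text, char_spans):
    """Map character-offset annotations (start, end), as exported from the
    annotation tool, to token-level spans under whitespace tokenization."""
    tokens, offsets, pos = text.split(), [], 0
    for tok in tokens:
        start = text.index(tok, pos)          # character offset of this token
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    spans = []
    for cs, ce in char_spans:
        idx = [i for i, (s, e) in enumerate(offsets) if s < ce and e > cs]
        spans.append((idx[0], idx[-1]))       # first/last overlapping token
    return spans

# toy example with two single-token mentions
print(char_to_token_spans("X went home . X slept", [(0, 1), (14, 15)]))  # [(0, 0), (4, 4)]
```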
### Inter-Annotator Agreement The OntoNotes strategy was roughly employed to assess inter-annotator agreement in this work. Specifically, two annotators independently annotated the documents, and only in cases of disagreement was a third annotator consulted to arrive at a final decision. These ultimate annotations were deemed the gold-standard annotations. With the adjudicated version as the ground truth, the individual annotations achieved an average MUC score of 78.3 on the combined dataset, while the combined inter-annotator MUC score was 67.6. However, it is important to acknowledge that the process of resolving disagreements was not adequately documented and will be addressed in greater detail in future endeavors. ## 6 Experiments We took an end-to-end neural-network-based modeling approach. The following section discusses the algorithm, followed by the experimental setup, evaluation strategy, and analysis of results. We used the 300-dimensional FastText and GloVe embeddings (Grave et al., 2018) as word representations. To generate contextual representations, the embeddings were passed through a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) for some experiments and a variation of the popular transformer-based (Vaswani et al., 2017) pretrained model BERT (Devlin et al., 2019) for other experiments. For the task of coreference resolution, the contextual representations from these base models were passed on to a span-ranking model head, originally proposed in Lee et al. (2018). For the cross-lingual experiment, a multilingual BERT was fine-tuned on the OntoNotes dataset. For hyperparameter optimization, we tuned the maximum number of words in a span (s), the maximum number of antecedents per span (a), and the coref layer depth (CL). ### Experimental Setup The data was separated into train and dev sets at a ratio of 95% to 5%. An additional test set was carefully prepared, completely disjoint from the train and dev sets, which contains 37 documents. An overview of the dataset is given in Table 3. \begin{table} \begin{tabular}{l l c c c} \hline \hline & **categories** & **documents** & **mentions** & **clusters** \\ \hline train + dev & biography & 17 & 421 & 38 \\ & descriptive & 36 & 1157 & 108 \\ & novel & 13 & 601 & 78 \\ & story & 56 & 3021 & 278 \\ \hline test & biography & 10 & 303 & 22 \\ & descriptive & 9 & 290 & 33 \\ & novel & 3 & 191 & 15 \\ & story & 15 & 697 & 53 \\ \hline \hline \end{tabular} \end{table} Table 3: Dataset distribution For evaluating our system, we used the CoNLL-2012 official evaluation scripts, which calculate four metrics: Identification of Mentions, MUC, \(B^{3}\), and CEAF. The following section analyzes the performance of our model.
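Since MUC is used here both for inter-annotator agreement and for evaluation, a rough sketch of the link-based MUC computation (Vilain et al., 1995) may be helpful. This is our illustrative implementation, not the official CoNLL-2012 scorer:

```python
def muc(key, response):
    """MUC link-based score. key/response: lists of clusters (sets of mention ids)."""
    def muc_recall(gold, pred):
        num = den = 0
        for chain in gold:
            # |p(chain)|: predicted clusters intersecting the chain, plus
            # one singleton part for each mention missing from the prediction
            parts = {id(c) for m in chain for c in pred if m in c}
            missing = sum(1 for m in chain if not any(m in c for c in pred))
            num += len(chain) - (len(parts) + missing)
            den += len(chain) - 1
        return num / den if den else 0.0
    r = muc_recall(key, response)
    p = muc_recall(response, key)  # precision is recall with roles swapped
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# sanity check: a perfect prediction scores (1.0, 1.0, 1.0)
print(muc([{1, 2, 3}], [{1, 2, 3}]))
```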
### Results and Analysis \begin{table} \begin{tabular}{l l l c c c} \hline \hline **category** & **model** & **parameters** & Pre. & Rec. & F1 \\ \hline biography & c2f-Glove & s=30, a=50, CL=2 & 93.83 & 65.34 & 77.04 \\ & c2f-Fasttext & s=20, a=50, CL=2 & 96.51 & 64.02 & 76.98 \\ & BERT-base & s=30, a=50, CL=2 & 94.22 & 86.13 & 90.00 \\ & M-BERT (Zero-Shot) & s=30, a=50, CL=2 & 7.14 & 4.62 & 5.61 \\ \hline story & c2f-Glove & s=30, a=50, CL=2 & 73.78 & 58.96 & 65.55 \\ & c2f-Fasttext & s=20, a=50, CL=2 & 74.80 & 54.08 & 62.78 \\ & BERT-base & s=30, a=50, CL=2 & 83.91 & 65.88 & 72.39 \\ & M-BERT (Zero-Shot) & s=30, a=50, CL=2 & 7.40 & 3.73 & 4.96 \\ \hline novel & c2f-Glove & s=30, a=50, CL=2 & 78.00 & 40.83 & 53.60 \\ & c2f-Fasttext & s=30, a=50, CL=2 & 87.50 & 43.97 & 58.53 \\ & BERT-base & s=30, a=50, CL=2 & 85.41 & 65.90 & 72.43 \\ & M-BERT (Zero-Shot) & s=30, a=50, CL=2 & 8.51 & 4.18 & 5.61 \\ \hline descriptive & c2f-Glove & s=30, a=50, CL=2 & 66.39 & 27.93 & 39.32 \\ & c2f-Fasttext & s=20, a=50, CL=2 & 72.16 & 24.13 & 36.17 \\ & BERT-base & s=30, a=50, CL=2 & 82.98 & 50.34 & 62.66 \\ & M-BERT (Zero-Shot) & s=30, a=50, CL=2 & 7.47 & 5.51 & 6.34 \\ \hline \hline \end{tabular} \end{table} Table 4: Identification of mentions The performance of the model was reasonable given the size of our dataset. As neural networks tend to achieve optimal performance with larger datasets, we hypothesize that our results could be enhanced by expanding our dataset. The model demonstrated good performance in identifying individual mentions, as evidenced by the scores presented in Table 4. However, we observed a decrease in performance during the second phase of clustering the mentions, as shown in Table 5. This highlights the challenge of accurately identifying coreference clusters, particularly in languages with complex sentence structures and a high degree of lexical ambiguity. Further innovation is needed to address these challenges and improve the overall performance of coreference resolution models. Upon closer inspection, one recurring problem was discovered. The model failed to do basic common-sense reasoning on long coreference clusters, often breaking them up into several clusters. As demonstrated in Figure 8 in the Appendix, the model failed to merge clusters 0 and 1, which should have been a single cluster. Furthermore, it can be observed that the coreference resolution model performs significantly better on the biography domain as compared to other domains. The relatively low mean and standard deviation of the distance between mentions reported in Table 1 may have contributed to this result. However, despite forming the major portion of the dataset, the story domain did not show any significant improvement.
The high standard deviation in the distance between mentions reported in Table 1 for the story domain may have contributed to this lack of improvement. Qualitative analysis is needed to investigate the underlying causes of this performance gap. The zero-shot cross-lingual experiment demonstrated that coreference knowledge does not easily transfer through multilingual training. This clearly demonstrates the need for language-specific datasets. Some studies (Novak and Zabokrtsky, 2014) report developing projection techniques to improve cross-lingual coreference resolution. There may be scope for further work in this direction. ## 7 Conclusion This paper presented BenCoref, the first publicly available dataset of coreference annotations in Bengali. The creation process and annotation guidelines were described in detail to facilitate future work in this area. We then used the dataset to develop an end-to-end coreference resolution system and reported its performance across different domains. Our findings indicate that a lower mean and standard deviation of token-distance between mentions may lead to better results, but further experiments on other datasets are needed to confirm this hypothesis. We also observed a higher tendency for breakage in longer coreference chains. Our zero-shot cross-lingual experiment demonstrated that coreference knowledge does not easily transfer through multilingual training, highlighting the importance of language-specific datasets. While some studies (Novak and Zabokrtsky, 2014) have reported success in developing projection techniques to improve cross-lingual coreference resolution, further research is required to explore this area. \begin{table} \begin{tabular}{l l l l l l l l l l l l l l l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{category}} & \multicolumn{4}{c}{\(B^{3}\)} & \multicolumn{4}{c}{MUC} & \multicolumn{4}{c}{\(CEAF_{\text{cal}}\)} & \multicolumn{4}{c}{Avg} \\ \cline{3-13} & & parameters & Pre. & Rec. & F1 & Pre. & Rec. & F1 & Pre. & Rec. & F1 & Pre. & Rec.
& F1 \\ \hline \multirow{4}{*}{biography} & c2f + Gure & s=30,s=50,CL=2 & 84.52 & 44.74 & 58.51 & 92.26 & 64.41 & 76.05 & 55.33 & 40.24 & 46.60 & 77.37 & 49.80 & 60.39 \\ & c2f + Fasttext & w=20,s=50,CL=2 & 88.09 & 43.99 & 89.50 & 87.44 & 40.75 & 76.12 & 86.19 & 45.49 & 87.02 & 48.08 & 60.38 \\ & BERT-base & s=30,s=50,CL=2 & 85.37 & 72.59 & 79.04 & 93.79 & 86.12 & 89.29 & 57.48 & 49.64 & 53.22 & 78.88 & 69.76 & 74.05 \\ & c2f + Gure & s=30,s=50,CL=2 & 4.28 & 0.25 & 0.46 & 0.67 & 0.35 & 0.46 & 1.22 & 2.93 & 1.72 & 2.06 & 1.18 & 0.88 \\ \hline \multirow{4}{*}{story} & c2f + Gure & s=30,s=50,CL=2 & 46.92 & 23.95 & 31.72 & 63.41 & 44.40 & 52.23 & 20.24 & 36.99 & 26.16 & 43.52 & 35.11 & 36.70 \\ & c2f + Fasttext & w=20,s=50,CL=2 & 47.31 & 22.80 & 30.77 & 65.23 & 42.54 & 51.50 & 23.49 & 34.02 & 27.79 & 45.34 & 33.12 & 36.69 \\ & BERT-base & s=30,s=50,CL=2 & 84.61 & 31.62 & 40.06 & 74.46 & 31.88 & 60.52 & 28.80 & 40.23 & 33.87 & 52.61 & 41.91 & 45.18 \\ & M-BERT(Zero-Shot) & s=30,s=50,CL=2 & 2.32 & 0.25 & 0.45 & 1.42 & 0.62 & 0.86 & 2.00 & 2.59 & 2.26 & 1.91 & 1.15 & 1.19 \\ \hline \multirow{4}{*}{novel} & c2f + Gure & s=30,s=50,CL=2 & 42.97 & 7.98 & 17.77 & 95.45 & 25.00 & 35.20 & 16.37 & 26.61 & 20.27 & 41.90 & 19.86 & 23.08 \\ & c2f + Fasttext & w=20,s=50,CL=2 & 60.30 & 10.45 & 17.82 & 72.72 & 31.81 & 44.26 & 23.36 & 27.74 & 25.37 & 52.18 & 23.33 & 29.15 \\ & BERT-base & s=30,s=50,CL=2 & 43.55 & 30.38 & 30.81 & 74.13 & 31.22 & 60.32 & 34.98 & 20.80 & 33.58 & 49.95 & 36.61 & 41.06 \\ & M-BERT(Zero-Shot) & s=30,s=50,CL=2 & 3.54 & 0.25 & 0.47 & 26.6 & 1.13 & 1.59 & 1.90 & 2.49 & 2.15 & 2.70 & 1.29 & 1.40 \\ \hline \multirow{4}{*}{descriptive} & c2f + Gure & s=30,s=50,CL=2 & 84.33 & 11.91 & 19.12 & 58.24 & 20.62 & 30.45 & 31.16 & 26.83 & 28.83 & 45.91 & 19.79 & 26.13 \\ & c2f + Fasttext & w=20,s=50,CL=2 & 58.32 & 9.74 & 16.70 & 66.66 & 18.67 & 29.17 & 29.98 & 20.81 & 24.57 & 51.65 & 16.41 & 23.48 \\ \cline{1-1} & BERT-base & s=30,s=50,CL=2 & 62.81 & 26.88 & 30.76 & 76.62 & 48.91 & 57.42 & 46.12 & 26.18 & 34.99 & 61.85 & 33.66 & 40.38 \\ \cline{1-1} & M-BERT(Zero-Shot) & s=30,s=50,CL=2 & 2.01 & 0.56 & 0.88 & 1.16 & 0.77 & 0.93 & 2.30 & 3.00 & 2.60 & 1.82 & 1.44 & 1.47 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance on test data. The main evaluation metric is the average F1 score of \(MUC\), \(B^{3}\), and \(CEAF_{\phi 4}\). The best scores are highlighted. ## Acknowledgements We would like to express our sincere gratitude to everyone who contributed to this research. We would also like to thank our colleagues and collaborators for their valuable insights and feedback throughout the project. We are especially grateful to our annotators for their hard work and dedication to ensure the quality of this dataset. We also appreciate the feedback and suggestions from the anonymous reviewers, which helped to improve the quality of this manuscript. No financial assistance was provided by any organization for the completion of this project.
2310.10007
Broadband radio study of the North Polar Spur: Origin of the spectral turnover with insights into the X-ray and Gamma-ray spectra
The North Polar Spur (NPS) is a giant structure that is clearly visible in both radio and X-ray all-sky maps. We analyzed broadband radio observations covering a range between 22 MHz and 70 GHz to systematically analyze the thermal/non-thermal emissions associated with the NPS. We demonstrate that the radio emission of the NPS comprises synchrotron, free-free, and dust emission; however, synchrotron emission dominates over the other emissions, especially at high galactic latitudes. Moreover, the synchrotron spectra exhibit a power-law behavior with $N(\gamma)\propto\gamma^{-s}$ ($s\simeq1.8-2.4$) up to a few GHz, moderated by a turnover at $\nu_{\rm brk} \simeq 1$ GHz, above which the spectral index $s$ decreases by one. Assuming that the turnover is due to the electrons being cooled by synchrotron radiation before escaping (or advecting) from the emission region, the magnetic field strength can be estimated to be $B\sim 8 \rm\mu G$ if the NPS is a distant structure that is near the Galactic Center (GC). However, an unreasonably strong $B\sim 114\rm\mu G$ is required if the NPS is near the local supernova remnant (SNR). The corresponding non-thermal energy stored in the NPS is $E_{\rm n/th}\simeq 4.4\times 10^{55}$ erg in the GC scenario, whereas $E_{\rm n/th}\simeq 4.1\times 10^{52}$ erg is difficult to explain with a single local SNR. We also estimated the gamma-ray emission associated with the NPS through inverse Comptonization of the cosmic microwave background (CMB), which peaks at 100 - 1000 keV with a flux of $\nu F_{\nu}\sim 10^{-9}$ $\rm erg\,cm^{-2}s^{-1}sr^{-1}$ in the GC model, and may be a good candidate for detection by future X-ray/gamma-ray observatories.
Iwashita Ryoji, Kataoka Jun, Sofue Yoshiaki
2023-10-16T02:03:42Z
http://arxiv.org/abs/2310.10007v1
# Broadband radio study of the North Polar Spur: Origin of the spectral turnover with insights into the X-ray and Gamma-ray spectra ###### Abstract The North Polar Spur (NPS) is a giant structure that is clearly visible in both radio and X-ray all-sky maps. We analyzed broadband radio observations covering a range between 22 MHz and 70 GHz to systematically analyze the thermal/non-thermal emissions associated with the NPS. We demonstrate that the radio emission of the NPS comprises synchrotron, free-free, and dust emission; however, synchrotron emission dominates over the other emissions, especially at high galactic latitudes. Moreover, the synchrotron spectra exhibit a power-law behavior with \(N(\gamma)\propto\gamma^{-s}\) (\(s\simeq 1.8-2.4\)) up to a few GHz, moderated by a turnover at \(\nu_{\rm brk}\simeq 1\) GHz, above which the spectral index \(s\) decreases by one. Assuming that the turnover is due to the electrons being cooled by synchrotron radiation before escaping (or advecting) from the emission region, the magnetic field strength can be estimated to be \(B\sim 8\mu\)G if the NPS is a distant structure that is near the Galactic Center (GC). However, an unreasonably strong \(B\sim 114\mu\)G is required if the NPS is near the local supernova remnant (SNR). The corresponding non-thermal energy stored in the NPS is \(E_{\rm n/th}\simeq 4.4\times 10^{55}\) erg in the GC scenario, whereas \(E_{\rm n/th}\simeq 4.1\times 10^{52}\) erg is difficult to explain with a single local SNR. We also estimated the gamma-ray emission associated with the NPS through inverse Comptonization of the cosmic microwave background (CMB), which peaks at 100 - 1000 keV with a flux of \(\nu F_{\nu}\sim 10^{-9}\) erg cm\({}^{-2}\)s\({}^{-1}\)sr\({}^{-1}\) in the GC model, and may be a good candidate for detection by future X-ray/gamma-ray observatories. Radio astronomy(1338) -- Galaxy stellar halos(598) -- Interstellar medium(847) -- Stellar wind bubbles(1635) Ryoji Iwashita, Jun Kataoka, Yoshiaki Sofue ## 1 Introduction Galactic all-sky survey observations have identified numerous giant structures across multiple wavelengths. In the microwave bands, the nature of the diffuse Galactic emission in the WMAP temperature anisotropy data was investigated, and a "Haze" component was observed by Finkbeiner (2004) and Dobler (2012). Similar Haze structures have been observed in all-sky surveys conducted by the Planck satellite (Planck Collaboration et al., 2013), and they have been confirmed to extend from the Galactic Center (GC) to within the range of \(|b|\sim 35^{\circ}-50^{\circ}\) and \(|l|\sim 15^{\circ}-20^{\circ}\) (Dobler & Finkbeiner, 2008). It has also been suggested that the emission from the GC is hard-spectrum synchrotron radiation. In the gamma-ray bands, the Fermi Gamma-ray Space Telescope discovered the "Fermi bubbles", a giant bubble structure extending from the center of the galaxy toward the north and south (Su et al., 2010). This structure has sharp radiation edges that extend approximately \(50^{\circ}\) above and below the GC, with a longitudinal width of \(\sim 40^{\circ}\), exhibiting a bipolar symmetric structure. In the X-ray band, similar bubble structures were discovered by ROSAT and eROSITA. According to Predehl et al. (2020), the soft X-ray bubbles that extend approximately \(14\,\)kpc above and below the GC are not remnants of a local supernova, but a galactic-scale giant structure closely related to the features observed in the gamma-ray bubbles.
The authors estimate the energy of the X-ray bubbles to be \(\sim 10^{56}\,\)erg, which is sufficient to perturb the structure, energy content, and chemical enrichment of the circumgalactic medium of the Milky Way. A common origin has been proposed for the two phenomena, the WMAP-Planck Haze and the high-energy bubbles, and the spatial dimensions and locations of the haze in the microwave and the Fermi bubbles are indeed compatible within the limits of the experimental error (Rubtsov & Zhezher, 2018). The North Polar Spur (NPS)/Loop I is one of the most characteristic structures in Galactic all-sky maps and is observed in both the radio and X-ray bands. Loop I is the largest northward emission that spans \(\sim 120^{\circ}\) in the sky, and the brightest part of this region is called the North Polar Spur (NPS). Although half a century has passed since its discovery, two competing ideas have been actively debated to postulate its origin. One of these is a local bubble near the solar system (\(100\sim 200\) pc) (Berkhuijsen et al., 1971). According to this claim, the NPS/Loop I is attributed to supernova activity from the Sco-Cen OB association (Egger and Aschenbach, 1996; Krause et al., 2018), and several authors have concluded that it is a collection of gas and dust expanded by supernova explosions and stellar winds. In addition, the spatial nonuniformity of NPS/Loop I is another factor that supports this theory. Another idea is the remnant of an active galactic nucleus (AGN) and/or starburst outflow from the GC over 10 Myr ago (Sofue, 1977). The recent discovery of a series of structures from the GC, such as the Fermi bubbles, popularized this theory. It has been suggested that NPS/Loop I is located along the edges of these galactic structures, indicating that the bubble structures and NPS/Loop I have the same origin in past galactic explosions. In X-ray observations by the Suzaku satellite, the emission from the NPS is well reproduced by three thermal components: (1) the Local Bubble and solar wind charge exchange (SWCX), (2) thermal emission from the Galactic halo (GH), and (3) the cosmic X-ray background (CXB) (Kataoka et al., 2013; Akita et al., 2018). The NPS itself was represented by thin thermal emission with \(kT=0.3\) keV in ionization equilibrium. This is above the temperature of a typical galactic halo (\(kT=0.2\) keV) and can be interpreted as shock-heated GH gas. These authors concluded that the results suggest past activity in and around the GC. In contrast, in the radio band, various analyses have been conducted since the discovery of the NPS. Spoelstra (1972) analyzed the linear polarization at 1415 MHz and estimated the distance to the NPS to be 50-100 pc based on the coincidence of the optical and radio polarization directions. Sun et al. (2015) created a Faraday rotation measure (RM) map of the NPS using data sets from 1280-1750 MHz, which indicated that a part of the NPS is a local structure within several hundred parsecs. In Kataoka et al. (2021), the relationship between the radio and X-ray emissions of the NPS was discussed. The X-ray emission is closer to the GC by \(\sim 5^{\circ}\) than the corresponding radio emission, and the radio and X-ray offsets in the NPS are attributed to the shock compression and heating of the halo gas during past galactic explosions. Although many NPS analyses have been performed at single or a few wavelengths (the literature above) in the radio band, broadband analysis over multiple wavelengths has not yet been performed.
In this study, we aim to clarify the NPS emission mechanism and obtain closer insight into its origin by combining NPS radio data over multiple wavelengths and analyzing the corresponding spectra.

## 2 Radio Data

### Data Processing

The 6 all-sky maps from 22 MHz to 2.3 GHz were taken on the ground (Table 1), while the data at 23 GHz were obtained with the WMAP satellite. The 23 GHz WMAP data were downloaded from NASA ([https://lambda.gsfc.nasa.gov](https://lambda.gsfc.nasa.gov)), while the \(30\,\mathrm{GHz}\), \(44\,\mathrm{GHz}\), and \(70\,\mathrm{GHz}\) data by the Planck satellite were obtained from NASA/IPAC ([https://irsa.ipac.caltech.edu/Missions/planck.html](https://irsa.ipac.caltech.edu/Missions/planck.html)). The 23 GHz map was already separated into synchrotron and free-free radiation using the Maximum Entropy Method (MEM) analysis in Gold et al. (2011). Table 1 lists the data used in this study.

\begin{table}
\begin{tabular}{c c c c l l}
\hline
Product & Release & Observatory & Resolution & HEALPix Resolution & Projection \\
\hline
DRAO 22 MHz Map & 2019 & DRAO & \(1.2^{\circ}\times 1.5^{\circ}\) & nested, res 8 (Nside=256) & HEALPix \\
All-sky 150 MHz Map & 2019 & Parkes, AUS & \(5^{\circ}\) & nested, res 8 (Nside=256) & HEALPix \\
Haslam 408 MHz & 2014 & GER, AUS, ENG & \(56^{\prime}\) & ring, res 9 (Nside=512) & Mollweide \\
Dwingeloo 820 MHz Map & 2019 & Dwingeloo, NLD & \(1.2^{\circ}\) & nested, res 8 (Nside=256) & HEALPix \\
1.4 GHz Continuum Map & 2001 & Stockert, Villa-Elisa & \(35.4^{\prime}\) & nested, res 8 (Nside=256) & Mollweide \\
2.3 GHz Continuum Map & 2019 & Hartebeesthoek, ZAF & \(20^{\prime}\) & ring, res 8 (Nside=256) & Mollweide \\
23 GHz & - & WMAP & - & - & Mollweide \\
30 GHz Full Channel Map & 2015 & Planck & - & nested, res 10 (Nside=1024) & HEALPix \\
44 GHz Full Channel Map & 2015 & Planck & - & nested, res 10 (Nside=1024) & HEALPix \\
70 GHz Full Channel Map & 2015 & Planck & - & nested, res 10 (Nside=1024) & HEALPix \\
\hline
\end{tabular}
References. 22MHz: Roger et al. (1999), 150MHz: Landecker and Wielebinski (1970), 408MHz: Remazeilles et al. (2015), 820MHz: Paradis et al. (2012), 1420MHz: Paradis et al. (2012), 2300MHz: Jonas et al. (1998), 23GHz: Gold et al. (2011), Planck: Planck Collaboration et al. (2014)
\end{table} Table 1: Radio all-sky data used in this study

The unit for each all-sky data point is the brightness temperature \(T_{b}=I_{\nu}c^{2}/(2k_{B}\nu^{2})\), where \(I_{\nu}\) and \(k_{B}\) denote the intensity and the Boltzmann constant, respectively. The brightness temperature is a physical quantity that describes the radiation intensity of an astronomical object. As shown in Table 1, the all-sky maps are displayed differently depending on the frequency, such as Mollweide and HEALPix. Therefore, in this study, the all-sky maps were converted to a Mercator diagram (Appendix: Figure 5). The mean of the observed data within a pixel was taken as the measured value, and the standard deviation was the error at that pixel. In addition, the cosmic microwave background (CMB; \(T_{\rm CMB}=2.7\) K) was uniformly subtracted from the 22, 150, 408, 820, and 1420 MHz observed data. Because some of the brightness temperature data of Planck contained negative values that were consistent with zero within the uncertainty, all negative values were set to zero. When the lower limit of the error bar was negative, the upper limit was considered for analysis.
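The brightness-temperature conversion used above is a one-line Rayleigh-Jeans formula; the following minimal Python sketch illustrates it (the intensity value is illustrative, not a measured one):

```python
C = 2.99792458e10       # speed of light [cm/s]
K_B = 1.380649e-16      # Boltzmann constant [erg/K]
T_CMB = 2.7             # CMB temperature [K], subtracted from the 22-1420 MHz maps

def brightness_temperature(i_nu, nu_hz):
    """Rayleigh-Jeans conversion T_b = I_nu * c**2 / (2 k_B nu**2), CGS units."""
    return i_nu * C ** 2 / (2.0 * K_B * nu_hz ** 2)

# Illustrative intensity (not a measured value) at 408 MHz
t_b = brightness_temperature(1.0e-18, 408e6)
print(f"T_b = {t_b:.1f} K, {t_b - T_CMB:.1f} K after CMB subtraction")
```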
### Observation Error

The radio data for each frequency in the ground-based observations contained some errors, which, in addition to the errors introduced by data processing in Section 2.1, are mainly classified as scale and zero-level errors. Table 2 summarizes these errors in the observed data. However, Roger et al. (1999) do not mention zero-level or scale errors for the 22 MHz observations; instead, a value of \(\pm 5000\) K was adopted as a typical error. For the 150 MHz observations, Patra et al. (2015) calibrated the 150 MHz map by comparing the absolutely calibrated sky brightness measurements between 110 and 175 MHz obtained with the SARAS spectrometer, and Monsalve et al. (2021) proposed a calibration by comparing with absolutely calibrated measurements from the Experiment to Detect the Global EoR Signature (EDGES).

\begin{table}
\begin{tabular}{l l l l}
\hline
Frequency & Scale Error [\%] & Zero-level Error & Supplement \\
\hline
22 MHz & - & - & \(\pm 5000\) K \\
\(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
\hline
\end{tabular}
\end{table} Table 2: Errors in the radio data

We considered which of the three errors (standard deviation during data processing, scale error, and zero-level error) dominated the analyzed domain. Consequently, the standard deviation for each frequency was \(\lesssim 10\,\%\) in the NPS region and dominant at 22, 150, 408, and 820 MHz; however, at 1420 and 2300 MHz, the zero-level error was dominant. Therefore, standard deviations were used as errors for the ground-based observations from 22 to 820 MHz and for WMAP and Planck, whereas zero-level errors were used for 1420 and 2300 MHz. The original data were used at 150 MHz because the two proposed corrections exhibited only a minor effect. The WMAP separation of thermal/non-thermal emissions was performed using the maximum entropy method (MEM) of Bennett et al. (2003). This study states that the total observed galactic emission matched the MEM model to within 1%, whereas the emission was separated into individual components with lower accuracy. Nevertheless, we consider these separations to be sufficiently reliable.

## 3 Analysis and Results

### Turnover Frequency

To compare the differences in the spectral shape between high and low frequencies, we compared the power laws using frequency data at 22 MHz, 150 MHz, 408 MHz, 1420 MHz, 2300 MHz, and 23 GHz. All data were analyzed at a resolution of \(5^{\circ}\) in both the galactic-longitude and galactic-latitude to align with the 150 MHz resolution. First, we obtained \(\beta\) maps (\(T_{b}=A\nu^{-\beta}\)) using the least squares method with the brightness temperatures measured at different frequencies,

\[\beta=-\frac{\sum_{i=1}^{n}\left(\ln\nu_{i}-\overline{\ln\nu}\right)\left(\ln T_{i}-\overline{\ln T}\right)}{\sum_{i=1}^{n}\left(\ln\nu_{i}-\overline{\ln\nu}\right)^{2}} \tag{1}\]

where \(\nu_{i}\) = 22 MHz, 150 MHz, and 408 MHz for a low-frequency map (Figure 1, top), whereas \(\nu_{i}\) = 1420 MHz, 2300 MHz, and 23 GHz for a high-frequency map (Figure 1, middle). As shown in Figure 1, the spectral power law of the NPS region (white dotted line in Figure 1) was flatter than that in the other areas for both high and low frequencies. We obtained \(\beta\simeq 2.4\) to 2.7 at low frequencies and \(\beta\simeq 2.7\) to 3.2 at high frequencies in the NPS region, indicating that the spectral power law at high frequencies was steeper than at low frequencies.
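Equation (1) is an ordinary least-squares slope in log-log space; a minimal sketch with synthetic brightness temperatures (not NPS data) confirms it recovers the input index:

```python
import numpy as np

def spectral_index_beta(freqs_hz, temps_k):
    """Least-squares slope of ln(T_b) versus ln(nu), Eq. (1), for T_b = A*nu**-beta."""
    x = np.log(np.asarray(freqs_hz, dtype=float))
    y = np.log(np.asarray(temps_k, dtype=float))
    return -np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Synthetic brightness temperatures following T_b ~ nu**-2.55 (illustrative only)
nu_low = np.array([22e6, 150e6, 408e6])
t_low = 1.0e5 * (nu_low / 408e6) ** -2.55
print(f"beta_low = {spectral_index_beta(nu_low, t_low):.2f}")   # 2.55
```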
Second, to understand the difference in the cutoff frequency of the radio spectrum across the sky, we obtained a turnover frequency map, as shown in the bottom of Figure 1. This figure plots the frequency at which the two power-law spectra obtained as shown at the top and middle of Figure 1 intersect. For the two power-law functions obtained from 22 MHz to 408 MHz (\(T_{1}=A_{1}\nu_{1}^{-\beta_{1}}\)) and from 1420 MHz to 23 GHz (\(T_{2}=A_{2}\nu_{2}^{-\beta_{2}}\)), the turnover frequency at which the two functions intersect is

\[\nu_{\rm turnover}=\left(\frac{A_{1}}{A_{2}}\right)^{\frac{1}{\beta_{1}-\beta_{2}}}[{\rm Hz}]. \tag{2}\]

As shown in the figure, the NPS spectral turnover occurred at a lower frequency (\(<1\) GHz) than in the other regions (see also Mou et al. (2023)). Specifically, the turnover frequency in the NPS region for \(l=30^{\circ}-35^{\circ}\), \(b=55^{\circ}-60^{\circ}\) was 0.76 GHz, whereas it was 1.1 GHz for \(l=40^{\circ}-45^{\circ}\) at the same galactic-latitudes around the NPS. Next, in order to confirm the spectral turnover of the NPS region in more detail, the NPS spectra for each galactic-latitude are analyzed in the next section.

### Spectral Analysis

We extracted the spectrum for each galactic-latitude of the NPS. To eliminate the effects of linear polarization and ensure that the region was sufficiently larger than the beam width of all frequency maps, the spectral region was \(5^{\circ}\) in both the galactic-longitude and galactic-latitude. The galactic-longitude was fixed at \(l=30^{\circ}-35^{\circ}\), where the NPS radiation is the brightest, and spectra were produced for regions varying in galactic-latitude by \(10^{\circ}\). Figure 2 presents the typical spectra for every \(10^{\circ}\) of galactic-latitude in the NPS. The black, green, blue, and red points represent 22 - 2300 MHz, 23 GHz synchrotron radiation, 23 GHz free-free radiation, and Planck data (30 - 70 GHz), respectively. The NPS emissions were found to follow a declining power law with \(\beta\simeq 2.4-2.7\,(T_{b}\propto\nu^{-\beta})\), or \(\alpha\simeq-0.4\) to \(-0.7\) (\(I_{\nu}\propto\nu^{\alpha}\), where \(\alpha=2-\beta\)), up to a few GHz, regardless of the galactic-latitude. This result is consistent with the values in the literature, \(\beta\simeq 2.55-2.65\) (Guzman et al., 2011) or \(\beta\simeq 2.3-3.0\) (Reich & Reich, 1988). Consequently, the electron spectrum indicates a power-law relationship with index \(s\), \(N(\gamma)\propto\gamma^{-s}\) (\(s\simeq 1.8-2.4\), where \(s=1-2\alpha\)), if synchrotron radiation dominated up to a few GHz. In addition, cut-offs were observed around \(\nu_{\rm brk}\simeq 1\) GHz, especially at high galactic-latitudes. The synchrotron/free-free data at 23 GHz exhibited a larger fraction of free-free radiation at low galactic-latitudes, and the contribution of free-free radiation decreased with increasing galactic-latitude. This indicated that synchrotron radiation dominated at high galactic-latitudes. Free-free radiation exhibited a flat power law \(\alpha\simeq-0.1\) in the optically thin region, whereas synchrotron radiation exhibited a significantly steeper power law (\(\alpha\simeq-0.4\) to \(-0.7\)). Therefore, synchrotron radiation clearly dominates at high galactic-latitudes and up to a few GHz.
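The turnover values quoted above follow from Eq. (2); a minimal numerical check, where the amplitudes are illustrative and chosen only so that the two fits cross at the 0.76 GHz value quoted for the NPS region:

```python
def turnover_frequency(a1, beta1, a2, beta2):
    """Frequency where T1 = a1*nu**-beta1 and T2 = a2*nu**-beta2 intersect, Eq. (2)."""
    return (a1 / a2) ** (1.0 / (beta1 - beta2))

beta1, beta2 = 2.55, 2.95          # in the low/high frequency ranges quoted above
a1 = 1.0
a2 = a1 * 0.76e9 ** (beta2 - beta1)  # forces a crossing at 0.76 GHz
print(f"nu_turnover = {turnover_frequency(a1, beta1, a2, beta2) / 1e9:.2f} GHz")
```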
We consider the 30 - 70 GHz Planck data, for which the radiation is consistent with thermal radiation from dust with an \(\alpha\simeq 2.0\) power law. In the optically thin limit, the Spectral Energy Distribution (SED) of the emission from a uniform population of grains is well described empirically by a modified blackbody \(I_{\nu}=\tau_{\nu}B_{\nu}(T)\), where \(\tau_{\nu}\) is the frequency-dependent dust optical depth and \(B_{\nu}\) is the Planck function for dust at temperature \(T\).

Figure 1: [top and middle] Low- and high-frequency \(\beta\) maps. [bottom] Turnover frequency map. The figures show the derivation of the power law from 22 MHz to 408 MHz and from 1420 MHz to 23 GHz, along with the turnover frequency. The black dotted line represents the NPS region.

Figure 2: Spectra of the NPS in the radio band. The figure shows typical spectra for every \(10^{\circ}\) of galactic-latitude. The black, green, blue, and red (upper limit) points represent 22 - 2300 MHz, 23 GHz synchrotron radiation, 23 GHz free-free radiation, and Planck data (30 - 70 GHz), respectively. The green dashed line shows the synchrotron emission with the turnover obtained in Section 3.1. The blue dashed line shows the free-free emission with the \(\alpha_{ff}=-0.1\) power law. The yellow line is the dust emission assuming \(\tau=5.0\times 10^{-6}\) and \(T=20\) K.

## 4 Discussion

### Radio emission from NPS

In Section 3.2, we showed that the radio emission of the NPS consists of (i) synchrotron radiation, (ii) free-free radiation, and (iii) dust emission. The characteristics of these types of radiation are described below. Synchrotron radiation follows a declining power law up to a few GHz, with a cut-off at \(\nu_{\rm brk}\simeq 1\) GHz. Synchrotron radiation dominates up to a few GHz, whereas free-free radiation and dust radiation are almost negligible there. Free-free radiation theoretically declines with an \(\alpha_{\rm ff}\simeq-0.1\) power law and becomes noticeable above a few GHz. The contribution of free-free radiation increases, and the amount of radiation even exceeds that of synchrotron radiation, especially at low galactic-latitudes. This is clearly confirmed by the NPS spectra as shown in Figure 2. Dust radiation is dominant above several tens of GHz. In the optically thin limit, the SED of the emission from a uniform population of grains is well described empirically by a modified blackbody \(I_{\nu}=\tau_{\nu}B_{\nu}(T)\). Radio emissions of the NPS based on these characteristics are shown in Figure 3.

### Spectral turnover

The spectral turnover in the NPS can be discussed in relation to synchrotron cooling. First, we assume the simplest Leaky-box model, in which fresh, accelerated electrons are injected into an emission region with a magnetic field \(B\), followed by particle loss due to energy-independent advection and radiative energy losses. Approximating the transport equation for electrons passing through the shock by the Leaky-box model, the time evolution of the electron energy distribution \(N_{e}(\gamma)\) can be expressed as follows (Inoue and Takahara, 1996),

\[\frac{\partial N_{e}(\gamma)}{\partial t}=\frac{\partial}{\partial\gamma}\left(\frac{\gamma}{t_{\rm cool}}N_{e}(\gamma)\right)+Q_{0}(\gamma)-\frac{N_{e}(\gamma)}{t_{\rm adv}}, \tag{3}\]

where \(t_{\rm cool}\) is the electron cooling time; \(t_{\rm adv}\) is the advection time; and \(Q_{0}(\gamma)\) is the electron injection rate.
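The steady state of Eq. (3) can also be obtained numerically. The sketch below uses hypothetical parameters (chosen only so that \(\gamma_{\rm brk}=C_{\rm cool}/t_{\rm adv}=10^{3}\); they are not the NPS fit values) with synchrotron-like cooling \(t_{\rm cool}\propto 1/\gamma\), and recovers the steepening-by-one behavior stated next:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: synchrotron-like cooling t_cool = C_COOL / gamma,
# escape time T_ADV, power-law injection Q ~ gamma**-S.
C_COOL = 1.0e18                 # [s]
T_ADV = 1.0e15                  # [s], so gamma_brk = C_COOL / T_ADV = 1e3
S = 2.0                         # injection index
Q = lambda g: g ** -S           # arbitrary normalization

# Steady state of Eq. (3): 0 = d/dgamma(gamma**2 N / C_COOL) + Q - N / T_ADV.
# With F = gamma**2 N / C_COOL this is a linear first-order ODE in gamma:
def dF_dgamma(g, F):
    return C_COOL * F / (g ** 2 * T_ADV) - Q(g)

g_max, g_min = 1.0e6, 1.0e1
grid = np.logspace(np.log10(g_max), np.log10(g_min), 300)
sol = solve_ivp(dF_dgamma, (g_max, g_min), [0.0], t_eval=grid,
                method="LSODA", rtol=1e-8, atol=1e-30)

gam = sol.t[2:]                          # drop the zero boundary values
N = C_COOL * sol.y[0][2:] / gam ** 2

# Local slope d(ln N)/d(ln gamma): ~ -S below gamma_brk, ~ -(S+1) above it
slope = np.gradient(np.log(N), np.log(gam))
for g0 in (1.0e2, 1.0e5):
    print(f"slope near gamma = {g0:.0e}: {slope[np.argmin(np.abs(gam - g0))]:+.2f}")
```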
The electron energy steady-state solution can be expressed approximately as a broken power law of the form of Equation 4,

\[N_{e}(\gamma>\gamma_{\rm min})=N_{0}\gamma^{-s}\left(1+\frac{\gamma}{\gamma_{\rm brk}}\right)^{-1}\exp\left(-\frac{\gamma}{\gamma_{\rm max}}\right). \tag{4}\]

Here, \(s\), \(N_{0}\), and \(\gamma_{\rm min}\) (or \(\gamma_{\rm max}\)) denote the injection index, normalization constant, and minimum (or maximum) electron energy, respectively. Then, \(\gamma_{\rm brk}\) corresponds to the energy at which the electron cooling time and advection time are balanced (\(t_{\rm cool}\simeq t_{\rm adv}\)), resulting in an electron distribution with a turnover whose power-law index \(s\) steepens by one.

If the spectral turnover \(\gamma_{\rm brk}\) originates from synchrotron cooling, the magnetic field \(B\) can be estimated by balancing the advection and cooling time scales of electrons. In a non-relativistic strong shock wave, the velocity ratio is \(v_{2}/v_{1}=1/4\) (i.e., a compression ratio of 4), where \(v_{1}\) and \(v_{2}\) are the upstream and downstream velocities in the rest frame of the shock, respectively. The electron advection time is obtained as follows:

\[t_{\rm adv}\simeq\frac{R}{v_{2}}\simeq 1.23\times 10^{15}\left(\frac{R}{1\,{\rm kpc}}\right)\left(\frac{v_{1}}{100\,{\rm km/s}}\right)^{-1}\,[{\rm sec}], \tag{5}\]

where \(R\) denotes the thickness of the emission region. Using the literature value of \(v_{1}\simeq 320\) km/s (Kataoka et al., 2013) in the GC model, and assuming that the distance to the radiation source was 8000 pc and that the NPS had a spherical structure with an outer diameter \(R_{\rm out}\simeq 5\) kpc and an inner diameter \(R_{\rm in}\simeq 3\) kpc (\(R=R_{\rm out}-R_{\rm in}\simeq 2\) kpc) (Kataoka et al., 2015), the advection time of an electron was \(t_{\rm adv}\simeq 7.7\times 10^{14}\,{\rm sec}\simeq 24.4\) Myr. In the SNR model, assuming that the distance to the radiation source was 150 pc, we obtained \(R\simeq 2\,{\rm kpc}\times 150/8000=38\) pc, and the advection time of the electron was \(t_{\rm adv}\simeq 1.5\times 10^{13}\,{\rm sec}\simeq 4.6\times 10^{5}\) yr.

Figure 3: Characteristics of the NPS radio spectrum. (i) Synchrotron radiation: dominant emission at low frequencies and high galactic-latitudes, with a cut-off around \(\nu_{\rm brk}\simeq 1\) GHz. (ii) Free-free radiation: dominant emission at low galactic-latitudes. (iii) Dust emission: dominant emission at high frequencies.

Meanwhile, the electron cooling time owing to synchrotron radiation is

\[t_{\rm cool}\simeq\frac{E}{dE/dt}=\frac{\gamma m_{\rm e}c^{2}}{\frac{4}{3}\sigma_{\rm e}cU_{\rm B}\gamma^{2}}=\frac{6\pi m_{\rm e}c}{\sigma_{\rm e}B^{2}\gamma}\simeq\frac{5.1\times 10^{8}}{B^{2}\gamma}\,[{\rm sec}], \tag{6}\]

where \(\sigma_{\rm e}\), \(U_{B}\), and \(\gamma\) denote the Thomson cross-section, magnetic field energy density, and Lorentz factor, respectively. A typical synchrotron frequency can be expressed as

\[\nu_{\rm syn}=\frac{eB}{2\pi m_{\rm e}}\gamma^{2}\simeq 1.2\times 10^{6}B\gamma^{2}\,[{\rm Hz}], \tag{7}\]

where \(B\) denotes the magnetic field in units of Gauss.
Then, by substituting Equation 7 into Equation 6, we obtain

\[B\simeq 5.9\times\left(\frac{\nu_{\rm syn}}{1\,{\rm GHz}}\right)^{-1/3}\times\left(\frac{v_{1}}{100\,{\rm km/s}}\right)^{2/3}\left(\frac{R}{1\,{\rm kpc}}\right)^{-2/3}\,[\mu{\rm G}]. \tag{8}\]

Figure 2 shows the presence of a cut-off at \(\nu_{\rm brk}\simeq 1\) GHz, which indicates that the electron cooling time and advection time are balanced (\(t_{\rm cool}\simeq t_{\rm adv}\)). Thus, by substituting the cut-off frequency \(\nu_{\rm syn}\sim\nu_{\rm brk}\simeq 1\) GHz and the electron advection time \(t_{\rm adv}\) into Equation 8, the magnetic field of the NPS can be obtained, whose values are \(8\mu\)G and \(114\mu\)G for the GC and SNR models, respectively. Note that the NPS radius varies by a few kiloparsecs in the literature for the GC model (cf. Predehl et al. (2021)). When the shell thickness is doubled, \(B\simeq 5\,\mu\)G and \(B\simeq 72\,\mu\)G for the GC and SNR models, respectively, but we do not believe that these errors affect the discussion.

Lastly, we note that such a spectral turnover may be owing to various other mechanisms. Firstly, if the low-energy electrons do not have sufficient time to establish a steady state relative to synchrotron cooling, but the radiative loss time is sufficiently short for high-energy electrons, a similar turnover could be observed around the GHz band. In such a case, we can no longer assume an exquisite balance between \(t_{\rm adv}\) and \(t_{\rm cool}\); however, Equations 6 and 7 still hold, which leads to \(B\simeq 68\times(\nu_{\rm syn}/1\,{\rm GHz})^{-1/3}\times(t_{\rm cool}/1\,{\rm Myr})^{-2/3}\,\mu\)G. Therefore, if the NPS is a GC structure formed over \(\simeq 10\) Myr ago and the cooling time is comparable, \(B\simeq 10\,\mu\)G is naturally obtained. Similarly, we obtained \(B\simeq 100\,\mu\)G in the case of a nearby SNR. Hence, we infer that our discussion will not be affected provided the cooling time is comparable to the dynamical time scale (or age) of the NPS. Secondly, bremsstrahlung energy losses may play a significant role in inducing spectral breaks, particularly in regions with high gas densities (Crocker et al., 2010). However, the bremsstrahlung energy loss timescale in the case of the NPS is approximately three orders of magnitude longer than the synchrotron timescale, advection time, and age of the NPS because of its notably low gas density of \(n\simeq 10^{-3}\,{\rm cm}^{-3}\) (e.g., Sofue et al. (2019)) compared to the Galactic center. Consequently, the effect of the bremsstrahlung energy losses can be regarded as negligible. Finally, even when a steady state is established, bremsstrahlung collisions of the electrons with the ISM gas can be the dominant loss process for low-energy electrons. In fact, bremsstrahlung emission from non-thermal electrons is observed in a few SNRs, where a hard spectrum possibly arises from the interaction of electrons with a thick interstellar medium and a gas density of approximately \(n\simeq 10-100\,{\rm cm}^{-3}\) (Uchiyama et al. (2002); Tanaka et al. (2018)). Compared to this, the gas density in the NPS is sufficiently low, where \(n\simeq 10^{-3}\,{\rm cm}^{-3}\) and \(n\simeq 1\,{\rm cm}^{-3}\) for the GC and SNR scenarios, respectively. Therefore, we infer that the contamination of the bremsstrahlung emission from non-thermal electrons cannot account for the spectral turnover as observed in the GHz band.
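The two field estimates quoted above follow directly from Eq. (8); a minimal numerical check using \(\nu_{\rm brk}\simeq 1\) GHz and \(v_{1}\simeq 320\) km/s from the text:

```python
def b_field_microgauss(nu_syn_ghz, v1_kms, r_kpc):
    """Magnetic field from Eq. (8), i.e. t_cool = t_adv at the spectral break."""
    return (5.9 * nu_syn_ghz ** (-1.0 / 3.0)
                * (v1_kms / 100.0) ** (2.0 / 3.0)
                * r_kpc ** (-2.0 / 3.0))

# nu_brk ~ 1 GHz and v1 ~ 320 km/s as quoted in the text
print(f"GC  model (R = 2 kpc) : B = {b_field_microgauss(1.0, 320.0, 2.0):5.1f} uG")
print(f"SNR model (R = 38 pc) : B = {b_field_microgauss(1.0, 320.0, 0.038):5.1f} uG")
```

Running this reproduces the \(\simeq 8\,\mu\)G (GC) and \(\simeq 114\,\mu\)G (SNR) values quoted above.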
### Spectral Energy Distribution

\begin{table}
\begin{tabular}{l l l|l}
\hline
Parameter & NPS/GC & NPS/SNR & Bubbles \\
\hline
Distance [pc] & 8000 & 150 & 8000 \\
\(B\) [\(\mu\)G] & 8 & 114 & 10 \\
Region radius [cm] & \(2.2\times 10^{20}\) & \(4.0\times 10^{18}\) & \(1.2\times 10^{22}\) \\
Bulk Lorentz Factor & 1 & 1 & 1 \\
\(Q_{0}\,[{\rm cm}^{-3}\gamma^{-1}{\rm sr}^{-1}]\) & \(4.0\times 10^{-4}\) & \(5.2\times 10^{-4}\) & \(1.7\times 10^{-4}\) \\
\(\gamma_{\rm min}\) & 1.0 & 1.0 & 1.0 \\
\(\gamma_{\rm brk}\) & \(1.0\times 10^{4}\) & \(2.7\times 10^{3}\) & \(1.0\times 10^{6}\) \\
\(\gamma_{\rm max}\) & \(2.5\times 10^{4}\) & \(6.0\times 10^{3}\) & \(1.0\times 10^{7}\) \\
Index \(s\) & 1.9 & 1.9 & 2.2 \\
Brightness index \(\beta\) & 2.45 & 2.45 & 2.6 \\
Intensity index \(\alpha\) & -0.45 & -0.45 & -0.6 \\
\hline
\(U_{e}\) [erg cm\({}^{-3}\)] & \(1.2\times 10^{-12}\) & \(1.2\times 10^{-12}\) & \(1.8\times 10^{-13}\) \\
\(U_{B}\) [erg cm\({}^{-3}\)] & \(2.5\times 10^{-12}\) & \(5.2\times 10^{-10}\) & \(4.0\times 10^{-12}\) \\
\(P_{\rm n/th}\) [erg cm\({}^{-3}\)] & \(1.2\times 10^{-12}\) & \(1.7\times 10^{-10}\) & \(1.4\times 10^{-12}\) \\
\hline
\end{tabular}
\end{table} Table 3: Comparison of the fitting parameters of the SED

Figure 4: Spectral Energy Distribution of the NPS \((l,b)=(30^{\circ}-35^{\circ},55^{\circ}-60^{\circ})\) in the region of \(5^{\circ}\times 5^{\circ}\). Fitting was performed on the data from 22 MHz – 23 GHz, which are pure synchrotron radiation, assuming distances to the NPS for each of the GC (8000 pc) and SNR (150 pc) scenarios. The leptonic model was used for fitting (Inoue & Takahara, 1996; Kataoka et al., 1999). Green : synchrotron radiation, Blue : IC of the CMB, Red : Synchrotron-Self-Compton, Yellow : IC of the Dust, Purple : Fermi bubbles (Kataoka et al., 2013).

We performed an SED analysis for the high galactic-latitude region, where synchrotron radiation is dominant. We used the area \((l,b)=(30^{\circ}-35^{\circ},55^{\circ}-60^{\circ})\), where the spectral region is \(5^{\circ}\) in both the galactic-longitude and galactic-latitude. Fitting was performed on the data from 22 MHz - 23 GHz, which are pure synchrotron radiation, assuming distances to the NPS for each of the GC (8000 pc) and SNR (150 pc) scenarios. The leptonic model was used for the fitting (Inoue and Takahara, 1996; Kataoka et al., 1999), assuming the electron distribution of Equation 4 and using the magnetic field obtained in Section 4.2. The fitting results under these conditions are shown in Figure 4. The green, blue, and red lines represent synchrotron radiation, inverse-Compton (IC) scattering of the CMB photons, and synchrotron self-Compton (SSC) emission, respectively. The yellow dotted line indicates the dust radiation, represented as \(I_{\nu}=\tau_{\nu}B_{\nu}(T)\), whose optical thickness and temperature are \(\tau_{\nu}\sim 10^{-6}-10^{-5}\) and \(T\sim 20\) K, respectively, from the literature (Planck Collaboration et al., 2014). The yellow solid line shows the IC scattering of the dust photons. In addition, the purple dotted lines and plots depict the SED of the Fermi bubbles fitted with the one-zone leptonic model (Kataoka et al., 2013). The GeV data plots correspond to the emissions of the entire bubble structure, following Su et al. (2010). The radio data plots correspond to the WMAP haze emissions averaged over \(b=-20^{\circ}\) to \(-30^{\circ}\) for \(|l|<10^{\circ}\).
The bowtie centered at the 23 GHz K-band indicates the range of synchrotron spectral indices allowed for the WMAP haze, following Dobler and Finkbeiner (2008). The parameters used for the fitting and the corresponding results are listed in Table 3. Note that the best-fit hard electron spectrum, \(s\simeq 1.9\), cannot be easily explained by the standard shock model. Although this is a very interesting topic, we simply adopt this value, as its origin is beyond the scope of this study.

In the GC model, the IC of the CMB dominates the high-energy radiation, with a peak in the 100-1000 keV range and a flux of \(10^{-9}\) erg s\({}^{-1}\)cm\({}^{-2}\)sr\({}^{-1}\), whose brightness is almost equal to that of the high-energy band of the Fermi bubbles. In the SNR model, the IC of the CMB again dominates, similar to the GC model, but with a peak below 10 keV and a flux of approximately \(10^{-12}\) erg s\({}^{-1}\)cm\({}^{-2}\)sr\({}^{-1}\). This value is several orders of magnitude lower than the detection limits of modern astronomical satellites. More accurate fitting will be possible if all-sky gamma-ray observations progress in the future and the NPS is detected in this band.

### NPS non-thermal Energy

In Section 4.3, for the two assumed distances to the NPS, we obtained the parameters shown in Table 3, where \(U_{e}\) and \(U_{B}\) are the electron and magnetic field energy densities,

\[U_{e} =\int d\gamma\,m_{e}c^{2}\gamma N_{e}(\gamma) \tag{9}\]
\[U_{B} =B^{2}/8\pi \tag{10}\]

and \(P_{\mathrm{n/th}}\) is the non-thermal pressure, \(P_{\mathrm{n/th}}=(U_{e}+U_{B})/3\). The total energy can be estimated by assuming the NPS volume. In the GC model, the total volume of the NPS can be calculated as \(V=4\pi/3\times(R_{\mathrm{out}}^{3}-R_{\mathrm{in}}^{3})\sim 1.2\times 10^{67}\) cm\({}^{3}\) using \(R_{\mathrm{out}}\simeq 5\) kpc and \(R_{\mathrm{in}}\simeq 3\) kpc (Kataoka et al., 2015). Thus, we estimated the non-thermal energy of the NPS to be \(E_{\mathrm{n/th}}=V\times(U_{e}+U_{B})\simeq 4.3\times 10^{55}\) erg, which is consistent with the literature values \(E_{\mathrm{n/th}}\sim 10^{55-56}\) erg (Sofue, 1984; Kataoka et al., 2018; Sofue and Kataoka, 2021). It is notable that the NPS radius contains an error of several kiloparsecs in the GC model. When the shell thickness is doubled, some of the parameters listed in Table 3 change to \(Q=1.0\times 10^{-3}\) cm\({}^{-3}\gamma^{-1}\)sr\({}^{-1}\), \(\gamma_{brk}=1.2\times 10^{4}\), \(U_{e}=2.9\times 10^{-12}\) erg cm\({}^{-3}\), and \(U_{B}=9.9\times 10^{-13}\) erg cm\({}^{-3}\), and the non-thermal energy is estimated to be \(E_{n/th}\simeq 1.3\times 10^{56}\) erg. However, we do not believe that these errors affect the discussion.

The thermal pressure of the NPS was estimated by Kataoka et al. (2013) using Suzaku X-ray observations. It should be noted that it was estimated using observational data at \(8^{\circ}<l<16^{\circ},\,42^{\circ}<b<48^{\circ}\), yielding \(P_{\mathrm{th}}\simeq 2\times 10^{-12}\) erg cm\({}^{-3}\). This is in close agreement with the non-thermal pressure determined here, for the first time, from the radio spectrum of the NPS. Therefore, this can be interpreted as natural in the GC scenario. In contrast, in the SNR model, the volume of the NPS was \(V\sim 7.9\times 10^{61}\) cm\({}^{3}\), and the non-thermal energy was estimated to be \(E_{\mathrm{n/th}}\simeq 4.1\times 10^{52}\) erg.
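The volume and energy estimates above can be reproduced with a few lines of arithmetic, using only the geometry and the \(U_{e}\), \(U_{B}\) values already quoted from Table 3:

```python
import numpy as np

KPC = 3.0857e21                     # [cm]

def shell_volume(r_out, r_in):
    return 4.0 * np.pi / 3.0 * (r_out ** 3 - r_in ** 3)

# GC model: R_out = 5 kpc, R_in = 3 kpc; U_e and U_B from Table 3
v_gc = shell_volume(5 * KPC, 3 * KPC)
e_gc = v_gc * (1.2e-12 + 2.5e-12)   # E_n/th = V * (U_e + U_B)
print(f"GC : V = {v_gc:.1e} cm^3, E_n/th = {e_gc:.1e} erg")    # ~1.2e67, ~4e55

# SNR model: the same shell geometry rescaled from 8000 pc to 150 pc
scale = 150.0 / 8000.0
v_snr = shell_volume(5 * KPC * scale, 3 * KPC * scale)
e_snr = v_snr * (1.2e-12 + 5.2e-10)
print(f"SNR: V = {v_snr:.1e} cm^3, E_n/th = {e_snr:.1e} erg")  # ~7.9e61, ~4.1e52
```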
When the shell thickness is doubled, the parameters change to \(Q=1.2\times 10^{-3}\) cm\({}^{-3}\gamma^{-1}\)sr\({}^{-1}\), \(\gamma_{brk}=3.4\times 10^{3}\), \(U_{e}=2.8\times 10^{-12}\) erg cm\({}^{-3}\), and \(U_{B}=2.1\times 10^{-10}\) erg cm\({}^{-3}\), and the non-thermal energy is estimated to be \(E_{n/th}\simeq 4.8\times 10^{52}\) erg. Compared to the GC case, the distance to the NPS is \(\sim 1/50\) as large, and the gas number density is \(\sim 7\) times greater. Thus, the thermal pressure was estimated to be \(P_{\mathrm{th}}\simeq 1.4\times 10^{-11}\) erg cm\({}^{-3}\), which is approximately ten times smaller than the non-thermal pressure. Therefore, the pressure balance is found to be disturbed in the SNR model. In addition, the typical energy of a supernova remnant is \(E\sim 10^{51}\) erg, so the energy obtained here is unnatural, at least for a single SNR. Therefore, if the NPS is a local structure, it could be a massive structure associated with a super bubble (Krause et al., 2018).

## 5 Conclusions

We analyzed broadband radio observations covering a range between 22 MHz and 70 GHz to provide a systematic analysis of the thermal/non-thermal emissions associated with the NPS. Spectral analysis showed that the radio emission of the NPS is composed of (1) synchrotron radiation, (2) free-free radiation, and (3) dust emission. Up to a few GHz, the NPS emissions were found to follow a power law with \(\beta\simeq 2.4-2.7\) (\(T_{b}\propto\nu^{-\beta}\)), regardless of the galactic-latitude. Consequently, the electron spectrum exhibits a power-law relationship with index \(s\), \(N(\gamma)\propto\gamma^{-s}\) (\(s\simeq 1.8-2.4\)). In addition, the cut-off in the radio spectrum was found to be at approximately \(\nu_{\rm brk}\simeq 1\) GHz for the first time, which may indicate that the electron cooling time and advection time are balanced if the spectral turnover derives from synchrotron cooling. Moreover, the magnetic field of the NPS was estimated as \(B\simeq 8\mu\)G and \(B\simeq 114\mu\)G in the GC and SNR models, respectively. In the SED analysis of the high galactic-latitude region of the NPS in the GC model, we estimated that the gamma-ray emission associated with the NPS, through IC of the CMB, peaks at approximately 100 - 1000 keV with a flux of \(\sim 10^{-9}\) \(\rm erg\,cm^{-2}s^{-1}sr^{-1}\). In the SNR model, the IC of the CMB also dominates, but with a peak below 10 keV and a flux of approximately \(10^{-12}\) \(\rm erg\,cm^{-2}s^{-1}sr^{-1}\). Using the fitting results, the non-thermal energy of the NPS was calculated to be \(E_{\rm n/th}\simeq 4.3\times 10^{55}\) erg in the GC model, which is consistent with the literature values \(E_{\rm n/th}\sim 10^{55-56}\) erg (Sofue, 1984; Krause et al., 2018). Moreover, the thermal pressure confirmed by Kataoka et al. (2013) in the X-ray band was found to be almost balanced by the newly obtained non-thermal pressure in the radio bands. We obtained \(E_{\rm n/th}\simeq 4.1\times 10^{52}\) erg in the SNR model, which indicates that if the NPS radiation originates from a local bubble, it could be a super bubble (Krause et al., 2018). Future deep surveys in the MeV range, such as eASTROGAM (de Angelis et al., 2018) and DUAL (von Ballmoos et al., 2011), may further clarify the origin of the NPS.

## Acknowledgements

This work was supported by JST ERATO Grant Number JPMJER2102 and JSPS KAKENHI grant no. JP20K20923, Japan. We thank an anonymous referee for useful comments and suggestions that improved the manuscript.
## Appendix A Radio All-sky Maps The all-sky map used in this study is shown in Figure 5. The method for creating an all-sky map is described in Section 2.
2302.10216
Quantum Relativity
Starting with a consideration of the implications of Bell inequalities in quantum mechanics, a new quantum postulate is suggested in order to restore classical locality and causality to quantum physics: only the relative coordinates between detected quantum events are valid observables. This postulate supports the EPR view that quantum mechanics is incomplete, while also staying compatible with the Bohr view that nothing exists beyond the quantum. The new postulate follows from a more general principle of quantum relativity, which states that only correlations between experimental detections of quantum events have a real classical existence. Quantum relativity provides a framework to differentiate the quantum and classical worlds.
Michael Spanner
2023-02-04T02:05:25Z
http://arxiv.org/abs/2302.10216v1
# Quantum Relativity

###### Abstract

Starting with a consideration of the implications of Bell inequalities in quantum mechanics, a new quantum postulate is suggested in order to restore classical locality and causality to quantum physics: only the relative coordinates between detected quantum events are valid observables. This postulate supports the EPR view that quantum mechanics is incomplete, while also staying compatible with the Bohr view that nothing exists beyond the quantum. The new postulate follows from a more general principle of quantum relativity, which states that only correlations between experimental detections of quantum events have a real classical existence. Quantum relativity provides a framework to differentiate the quantum and classical worlds.

## I Bell inequalities in quantum mechanics

"For in fact what is man in nature? A Nothing in comparison with the Infinite, an All in comparison with the Nothing, a mean between nothing and everything." - Blaise Pascal

"It's a combination of both. I mean here is the natural instinct and here is control. You are to combine the two in harmony. [...] If you have one to the extreme you'll be very unscientific, if you have another to the extreme, you become all of a sudden a mechanical man. No longer a human being. [...] It is a successful combination of both. [...] So therefore it is not pure naturalness or un-naturalness. The ideal is: un-natural naturalness or natural un-naturalness." - Bruce Lee

"\(\hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar\)" - Heisenberg

"\(|x\rangle|p\rangle-|p\rangle|x\rangle\)" - EPR

Einstein, Podolsky, and Rosen [1] argued that quantum mechanics was an incomplete description of physical reality. Bohr [2] maintained that there was nothing more beyond the quantum. Bell [3] proposed a scenario that could test these perspectives.

Consider a traditional Bell scenario where some initial object with zero angular momentum breaks apart into two fragments moving in opposite directions along the same line. Each fragment is a 2D rotor that can spin in the plane perpendicular to the spatial motion. The singlet state for this system usable in a Bell scenario can be written as

\[|\psi\rangle=|V\rangle|H\rangle-|H\rangle|V\rangle \tag{1}\]

where the \(|V\rangle\) and \(|H\rangle\) states for a rotor are given by

\[\langle\theta|V\rangle=\cos(\theta),\ \langle\theta|H\rangle=\sin(\theta). \tag{2}\]

Standard normalization of the wavefunctions is ignored; it does not matter for the following discussion. These states are linear combinations of the \(m=\pm 1\) angular momentum states, meaning that they are compatible with the idea that which fragment received \(\pm 1\) angular momentum is not known. The double-angle wavefunction is given by

\[\psi(\theta_{1},\theta_{2})=\langle\theta_{1},\theta_{2}|\psi\rangle=\cos(\theta_{1})\sin(\theta_{2})-\sin(\theta_{1})\cos(\theta_{2}). \tag{3}\]

Now include the coordinate \(\Delta\theta\) defined by \(\theta_{2}=\theta_{1}+\Delta\theta\). With this coordinate the wavefunction becomes

\[\psi(\theta_{1},\Delta\theta)=\cos(\theta_{1})\sin(\theta_{1}+\Delta\theta)-\sin(\theta_{1})\cos(\theta_{1}+\Delta\theta)=\sin(\Delta\theta). \tag{4}\]

The correlated probability distribution of detecting the fragments is then given by

\[|\psi(\theta_{1},\Delta\theta)|^{2}=|\psi(\Delta\theta)|^{2}=\sin^{2}(\Delta\theta). \tag{5}\]

The wavefunction and distribution are plotted in Fig.1.
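The \(\Delta\theta\)-only dependence of Eqs. (4)-(5) is easy to verify numerically; the following minimal sketch (illustrative values) checks that \(|\psi|^{2}\) is constant along any line of fixed \(\Delta\theta\) and equals \(\sin^{2}(\Delta\theta)\):

```python
import numpy as np

def psi(theta1, theta2):
    """Double-angle singlet wavefunction of Eqs. (3)-(4)."""
    return np.cos(theta1) * np.sin(theta2) - np.sin(theta1) * np.cos(theta2)

rng = np.random.default_rng(0)
theta1 = rng.uniform(0.0, 2.0 * np.pi, size=8)
dtheta = 0.7                                 # any fixed relative angle

# |psi|^2 is the same for every theta1 at fixed dtheta, and equals sin^2(dtheta)
p = psi(theta1, theta1 + dtheta) ** 2
print(np.allclose(p, np.sin(dtheta) ** 2))   # True
```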
As can be seen from the previous two equations or from the plots, the wavefunction and correlated probability distribution depend only on the relative angle \(\Delta\theta\) between the coordinates at the isolated detectors. This is the core of the Bell inequalities. The original Bell inequality [3] and the CHSH [4] version are procedures to test for this property. I'll call the underlying symmetry, that the probability of correlated detection only depends on the relative angle \(\Delta\theta\) of the two detections, the Bell correlation. Assuming that the singlet state represents two independent classical objects with individual hidden variables, the specific symmetry of the corresponding classical probability distribution cannot reproduce the Bell correlation [3].

## II Quantum Relativity and Elements of Classical Reality

"We do not describe the world we see, we see the world we can describe." - Rene Descartes

"As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." - Gottfried Leibniz

If the perspective is taken that only the single relative coordinate between the two detection events _exists_ for quantum events, then constructing a classical distribution that exhibits the Bell correlation is trivial--it would satisfy the Bell correlation by construction. In order to make sense of the Bell situation, one way forward then is to add to quantum mechanics the additional postulate that only the relative coordinates between experimental settings are valid observables when comparing correlated measurements of quantum events. In this way, violations of Bell inequalities arise because the quantum world has fundamentally fewer degrees-of-freedom for the hidden variables than Bell had assumed. He was averaging over additional non-existent classical configurations in deriving the classical side of the inequality. This is the meaning of the Copenhagen interpretation--if you cannot measure it, it effectively does not exist. In this sense, EPR [1] and Bohr [2] are both correct. There really is nothing beyond the quantum, but quantum mechanics could be considered incomplete in that it was missing a postulate.

The postulate can also be thought of as a quantum relativity principle where only the relative coordinates of experimentally-detected quantum events have a real classical existence. This statement will be called "weak" quantum relativity. A "full" version will be introduced below.

Consider doing a Bell experiment in a completely empty universe where nothing exists except the detectors and the singlet state. Only the relative angle between the detection events _could_ exist in this universe since there is nothing else in it to define another angle against. Even if the two detectors are classically connected to the entire universe, there still isn't a reference to measure anything except a single relative coordinate.

Alternatively, think of measuring the position and momentum of a quantum particle. Detecting a quantum event at a single point on a position-sensitive screen gives you information about position, but absolutely no information about momentum. To be a full element of classical reality requires that a particle have both position and momentum simultaneously, but a single detection point clearly does not have a momentum. Therefore, a single quantum event does not carry a full element of classical reality. What then is a quantum event?
A single measured quantum event must be correlated with a second quantum event in order to be a valid element of classical reality. What have been called quantum particles are not full elements of classical reality. Rather, it is actually the correlations between the quantum particles that must be considered as the full elements of reality within the classical description of the world. One pair of correlated quantum events carries one element of correlation, and one element of correlation is only one element of classical reality.

## III Uncertainty Principles

For clarity, it is useful to start with sound as an analogy. Sound can be represented as a single value that depends on time, \(S(t)\), or as a single value that depends on frequency, \(\bar{S}(f)\), but not both. Gabor showed [5] how this property leads to an uncertainty principle between \(t\) and \(f\). He believed that his treatment of acoustics was simply a curious mathematical analogy and did not suggest a serious connection with quanta of the atomic world.

Frequencies and times do not exist independently of each other. We write music as a series of pitches that occur at different times, but this is not really what is happening. Music is defined not by absolute pitches occurring at different times, but more by relative pitch changes that occur at the correct time intervals. This is why any particular song is recognizable as long as all the frequency intervals remain the same. Transposing to a different pitch does not change the song (though it might change the mood of the song as you perceive it). Likewise, any particular song will be recognizable if it is shifted in time (though you might not be in the mood to perceive it at all times).

Figure 1: Left: Wavefunction of the singlet state from Eq.(4). Blue is negative, red is positive. Right: Correlated probability distribution of the singlet state from Eq.(5). White is zero, black is maximum.

The analogous situation holds true for position and momentum of quantum point particles. The path of a point object is fully characterized by a single coordinate, say \(x(t)\) or \(p(t)\), but not both. One can be derived from the other as long as we are only interested in intervals between positions and/or momenta at different points in time. In analogy with the uncertainty principle for sound, this leads to the existence of the Heisenberg uncertainty principle between \(x\) and \(p\) of a quantum coordinate. Uncertainty relations arise when we attempt a description of reality that is overcomplete. Only relative distances and relative momenta carry meaning when we measure the world.

The Heisenberg uncertainty principle also follows from the recognition that we are only really able to measure positions and times of events, that we live in space-time. Consider measuring the momentum of an object. To do so in practice requires measuring some property of that object at two points in space-time, say for example the center-of-mass coordinate at two different times

\[p(t)=M\frac{x_{cm}(t+\Delta t)-x_{cm}(t)}{\Delta t} \tag{6}\]

where \(M\) is the mass of the object. When momentum is measured, it is really relative distances and relative times that are being measured. Position and momentum are not independent of each other.
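Gabor's acoustic uncertainty relation mentioned above can be illustrated numerically: for a Gaussian pulse, the RMS widths of the intensity in time and in frequency saturate the bound \(\sigma_{t}\sigma_{f}=1/(4\pi)\). A minimal sketch (all signal parameters are illustrative):

```python
import numpy as np

# Gaussian pulse s(t) = exp(-t^2 / (2 sigma^2)); its spectrum is also Gaussian,
# and the RMS widths of the intensities satisfy Gabor's bound sigma_t*sigma_f = 1/(4 pi).
n, dt, sigma = 2 ** 14, 1.0e-3, 0.05     # illustrative sampling and pulse width
t = (np.arange(n) - n // 2) * dt
s = np.exp(-t ** 2 / (2.0 * sigma ** 2))

def rms_width(x, weight):
    p = weight / weight.sum()
    mean = (x * p).sum()
    return np.sqrt((((x - mean) ** 2) * p).sum())

freq = np.fft.fftshift(np.fft.fftfreq(n, dt))
power = np.abs(np.fft.fftshift(np.fft.fft(s))) ** 2

sigma_t = rms_width(t, s ** 2)
sigma_f = rms_width(freq, power)
print(f"sigma_t*sigma_f = {sigma_t * sigma_f:.4f}  (1/(4 pi) = {1.0 / (4.0 * np.pi):.4f})")
```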
## IV Building the classical world

"If you wish to make an apple pie from scratch, you must first invent the universe." - Carl Sagan

Can quantum relativity be reconciled with the apparent observation that classical correlations, and not Bell correlations, seem to exist all around us? That is, can the quantum relativity principle help us explain the quantum to classical transition? Let's invent the classical universe.

Returning to the double-rotor system, allow the fragments to carry more than one unit of angular momentum. Let the system break apart from some unknown initial state into the state

\[\langle\theta_{1},\theta_{2}|\psi\rangle=\cos(m_{1}\theta_{1})\sin(m_{2}\theta_{2})-\sin(m_{2}\theta_{1})\cos(m_{1}\theta_{2}) \tag{7}\]

where \(m_{1}\) and \(m_{2}\) are the total angular momenta of each fragment. These are like excited-state singlets. A physical scenario that creates excited-state singlets using spatial coordinates is presented below when considering EPR-type dissociated diatomic scenarios. The correlated probability distribution for measurements of the two observed angles \(\theta_{1}\) and \(\theta_{2}\) is

\[|\langle\theta_{1},\theta_{2}|\psi\rangle|^{2}=\big{|}\cos(m_{1}\theta_{1})\sin(m_{2}\theta_{2})-\sin(m_{2}\theta_{1})\cos(m_{1}\theta_{2})\big{|}^{2}. \tag{8}\]

The correlated distribution cannot in general be written as a function of just the relative angle \(\Delta\theta\) for all combinations of \(m_{1}\) and \(m_{2}\). Consequently, some combinations of \(m_{1}\) and \(m_{2}\) will not show Bell correlations nor be able to violate a Bell inequality.

To elaborate on this point and the symmetries that appear in the distributions defined by Eq.(8), Fig.(2) shows the double-angle wavefunctions and correlated distributions for a variety of fragment momenta \(m_{1}\) and \(m_{2}\). Two clear symmetries can be seen. One set has wavefunctions that can be written as a function of the single relative coordinate \(\Delta\theta\), with corresponding distributions that display Bell correlations. This set can violate a Bell inequality. The other set has wavefunctions that cannot be written as a function of a single relative coordinate and distributions that do not display Bell correlations. This set cannot violate a Bell inequality.

From these two classes of symmetries, two classes of coordinates can be recognized: quantum and classical. Quantum coordinates arise when \(m_{1}=m_{2}\) and display Bell correlations. Classical coordinates arise when \(m_{1}\neq m_{2}\) and do not have the Bell correlations. The quantum or classical nature of the coordinates being measured depends on the type of initial state prepared, and therefore the design of the experiment controls whether quantum or classical effects can be seen in the measured coordinates. Note also that the symmetry of the wavefunctions is fermionic-like under exchange of experimental coordinates.

Figure 2: Wavefunctions (blue negative, red positive) and correlated probability distributions (white zero, black maximum) for a variety of negative excited-state singlets.
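The two symmetry classes just described can be tested directly: for the negative singlets of Eqs. (7)-(8), \(|\psi|^{2}\) is constant along lines of fixed \(\Delta\theta\) only when \(m_{1}=m_{2}\). A minimal sketch (the \(m\) values are illustrative):

```python
import numpy as np

def psi_neg(t1, t2, m1, m2):
    """Negative excited-state singlet of Eqs. (7)-(8)."""
    return np.cos(m1 * t1) * np.sin(m2 * t2) - np.sin(m2 * t1) * np.cos(m1 * t2)

def bell_correlated(m1, m2):
    """True if |psi|^2 is constant along every line of fixed dtheta = t2 - t1."""
    t1 = np.linspace(0.0, 2.0 * np.pi, 400)
    for dth in (0.3, 1.1, 2.0):
        p = np.abs(psi_neg(t1, t1 + dth, m1, m2)) ** 2
        if not np.allclose(p, p[0]):
            return False
    return True

print(bell_correlated(2, 2))   # True : quantum coordinates, Bell correlation
print(bell_correlated(1, 3))   # False: classical coordinates, no Bell correlation
```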
## V Positive singlet states

Instead of negative singlets \(|A\rangle|B\rangle-|B\rangle|A\rangle\), consider now positive singlets \(|A\rangle|B\rangle+|B\rangle|A\rangle\). For the rotor system, the wavefunctions and correlated distributions of these states are

\[\langle\theta_{1},\theta_{2}|\psi\rangle=\cos(m_{1}\theta_{1})\sin(m_{2}\theta_{2})+\sin(m_{2}\theta_{1})\cos(m_{1}\theta_{2}) \tag{9}\]

and

\[|\langle\theta_{1},\theta_{2}|\psi\rangle|^{2}=\big{|}\cos(m_{1}\theta_{1})\sin(m_{2}\theta_{2})+\sin(m_{2}\theta_{1})\cos(m_{1}\theta_{2})\big{|}^{2}. \tag{10}\]

They are plotted in Fig.3. There are again classical-type distributions appearing for \(m_{1}\neq m_{2}\) that cannot be factorized into a single relative coordinate, but now the quantum-like \(m_{1}=m_{2}\) states depend only on the sum of the two experimental coordinates \(\theta_{sum}=\theta_{1}+\theta_{2}\) instead of \(\Delta\theta\). I'll call this symmetry the anti-Bell correlation. This symmetry could also violate a Bell inequality that was properly constructed specifically for this state.

Here is where full quantum relativity can be stated: There is only a single classically-real correlation contained in the coordinates of a pair of quantum events. This correlation can appear in either the negative (\(\theta_{1}-\theta_{2}\)) or positive (\(\theta_{1}+\theta_{2}\)) correlation of the detector coordinates, but not both. Two quantum events equal one element of correlation, one element of classical reality.

While the negative singlet wavefunctions had fermionic character, the positive singlets have bosonic character--they are symmetric under exchange of coordinates. The bosonic or fermionic character of correlated experimental coordinates depends on the character of the singlet state being measured.

## VI The dissociated diatomic

Consider now an example closer to the original EPR state [1], a dissociated homonuclear diatomic. Let \(\varphi_{n}(r)\) be the initial eigenstate of the bond length coordinate

\[r=r_{1}-r_{2} \tag{11}\]

before dissociation, and \(\psi(R)\) be the wavefunction of the center-of-mass coordinate

\[R=(r_{1}+r_{2})/2. \tag{12}\]

The total wavefunction is then

\[\Psi(r,R)=\psi(R)\varphi_{n}(r). \tag{13}\]

Let the diatomic dissociate into two identical neutral atoms, located at \(r_{1}\) and \(r_{2}\), that subsequently fly apart. In general, writing out the wavefunction explicitly in terms of \(r_{1}\) and \(r_{2}\) would yield a non-separable expression for all but a small set of functions \(\psi(R)\) and \(\varphi_{n}(r)\). However, the state is trivially always separable in \(R\) and \(r\) simply by construction, so the wavefunction can be written as

\[\Psi(r_{1},r_{2})=\psi(R(r_{1},r_{2}))\varphi_{n}(r(r_{1},r_{2})). \tag{14}\]

Since it was postulated that the two atoms at \(r_{1}\) and \(r_{2}\) are identical, which experimentally means that they each can be detected with the same type of detector, there should not be a difference if the two detectors are exchanged. Hence, the wavefunction must be written as

\[\Psi(r_{1},r_{2})=\psi(R(r_{1},r_{2}))\varphi_{n}(r(r_{1},r_{2}))\pm\psi(R(r_{2},r_{1}))\varphi_{n}(r(r_{2},r_{1})). \tag{15}\]

This is now in the form of an excited-state singlet similar to Eq.(7). Whether the dissociated fragments become a negative or positive singlet depends on the nature of the initial diatomic. As in Figs.2 and 3 for the double-rotor case, the classical or quantum nature of the atomic fragments following dissociation will depend on the nature of the singlet state Eq.(15). For all but a small set of \(\psi(R)\) and \(\varphi_{n}(r)\), Eq.(15) will not display Bell correlations and hence \(r_{1}\) and \(r_{2}\) will behave classically in most cases.
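This last claim can be illustrated with hypothetical ingredient states (a Gaussian \(\psi(R)\) and an odd, oscillator-like \(\varphi_{n}(r)\); neither is taken from the text): a localized \(\psi(R)\) makes \(|\Psi|^{2}\) depend on \(r_{1}+r_{2}\) as well, and the pure relative-coordinate (Bell) symmetry is recovered only in the limit of a flat center-of-mass wavefunction.

```python
import numpy as np

# Hypothetical ingredient states (illustrative, not taken from the text)
def psi_cm(R, width):
    return np.exp(-R ** 2 / (2.0 * width ** 2))

def phi_n(r):
    return r * np.exp(-r ** 2 / 2.0)          # odd, oscillator-like excited state

def Psi(r1, r2, cm_width):
    """Symmetrized dissociation state, Eq. (15) with the minus sign."""
    def term(a, b):
        return psi_cm((a + b) / 2.0, cm_width) * phi_n(a - b)
    return term(r1, r2) - term(r2, r1)

def bell_correlated(cm_width):
    """Is |Psi|^2 constant along a line of fixed r1 - r2?"""
    r1 = np.linspace(-3.0, 3.0, 200)
    p = np.abs(Psi(r1, r1 - 1.3, cm_width)) ** 2   # fixed r1 - r2 = 1.3
    return np.allclose(p, p[0])

print(bell_correlated(cm_width=1.0))   # False: localized psi(R) -> classical-like
print(bell_correlated(cm_width=1e6))   # True : flat psi(R) -> Bell correlation
```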
Figure 3: Wavefunctions (blue negative, red positive) and correlated probability distributions (white zero, black maximum) for a variety of positive excited-state singlets.

It should be possible experimentally to prepare various states of the singlet Eq.(15) by first exciting a beam of diatomics to specific energy eigenstates to control \(\varphi_{n}(r)\), passing the beam through a double-slit to imprint structure onto \(\psi(R)\), and then dissociating the diatomics after the slits.

## VII Double-slit experiments

What is generally being measured in double-slit experiments is the correlated probability distribution between the initial position \(\vec{x}_{i}\) at the particle jet and the final position on a detection screen \(\vec{x}_{f}\)

\[P(\vec{x}_{f},\vec{x}_{i})=\left\langle\vec{x}_{f}\left|\widehat{DS}\right|\vec{x}_{i}\right\rangle. \tag{16}\]

There are two orthogonal pathways for each possible combination of \(\vec{x}_{i}\) and \(\vec{x}_{f}\). The pathways are "through slit 1, miss slit 2" and "miss slit 1, through slit 2". The quantum operator for the double-slit process can then be written as

\[\widehat{DS}=|T\rangle|M\rangle\langle T|\langle M|+|M\rangle|T\rangle\langle M|\langle T| \tag{17}\]

where \(|T\rangle\) means the particle went through the slit, and \(|M\rangle\) means the particle missed the slit. \(\widehat{DS}\) can be further factorized into

\[\widehat{DS}=|\Psi_{DS}\rangle\langle\Psi_{DS}| \tag{18}\]

where

\[|\Psi_{DS}\rangle=|T\rangle|M\rangle+|M\rangle|T\rangle \tag{19}\]

is a positive singlet state. This is the origin of the quantum effects seen in the double-slit experiment--it is a measurement of a singlet state created by the slit. Within quantum relativity, all quantum effects come from an experimental realization of a singlet state.

If the screen is placed directly after the slits, then the measured distribution is

\[P_{near}(\vec{x}_{f},\vec{x}_{i})=\langle\vec{x}_{f}|\Psi_{DS}\rangle\langle\Psi_{DS}|\vec{x}_{i}\rangle=|\delta(x_{s}{-}d/2)+\delta(x_{s}{+}d/2)|^{2} \tag{20}\]

where the slits are represented with \(\delta\)-functions. In practice, finite slits would imprint a narrow but finite double-peak shape upon the wavefunction. When measured with the screen placed far from the slits, the measured distribution can be written as

\[P_{far}(\vec{x}_{f},\vec{x}_{i})=\langle\vec{x}_{f}|\Psi_{DS}\rangle\langle\Psi_{DS}|\vec{x}_{i}\rangle=\cos^{2}\left(\frac{d}{2}\frac{p_{0}}{\hbar}\sin\theta\right). \tag{21}\]

Both of these measured distributions are characterized only by relative parameters. In Eq.(20), \(x_{s}\) is the position along the axis that passes through each slit as measured relative to the center of the slits, and \(d\) is the distance between the slits. In Eq.(21), \(\theta\) is the angle of the detection position relative to the axis that intersects the center of the slits, and \(p_{0}\) is a prior characterization of the average momentum of the beam that relies on many relative measurements. Expressing the measured distribution of quantum detection events in terms of only relative coordinates of the detection events is the weak quantum relativistic perspective. Alternatively, the distribution Eq.(20) can be seen as depending only on the sum (\(x_{s}{+}d/2\)) and difference (\(x_{s}{-}d/2\)) of detection coordinates--this is the full quantum relativistic perspective. One can switch between the weak and full perspectives with a coordinate transformation.
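Equation (21) can be evaluated directly; the sketch below uses illustrative electron-beam numbers (de Broglie wavelength \(\lambda=50\) pm and slit separation \(d=1\,\mu\)m, neither taken from the text) and prints one full fringe cycle:

```python
import numpy as np

HBAR = 1.054571817e-34          # [J s]

def far_screen_pattern(theta, d, p0):
    """Far-field double-slit distribution, Eq. (21)."""
    return np.cos(0.5 * d * (p0 / HBAR) * np.sin(theta)) ** 2

# Illustrative numbers: p0 = h / lambda with lambda = 50 pm; slits d = 1 micron
p0 = 2.0 * np.pi * HBAR / 50.0e-12
theta = np.linspace(0.0, 1.0e-4, 5)   # small angles [rad]; fringe period ~ lambda/d
print(np.round(far_screen_pattern(theta, 1.0e-6, p0), 3))   # [1. 0. 1. 0. 1.]
```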
It is important to note that if the whole double-slit experiment is shifted in space, no one would expect the probability distributions to change. The double-slit experiment does not care about the absolute position of the experiment. This is analogous to what is happening in the traditional Bell scenario. In Bell experiments there is an angle invariance that reflects the fact that the measured probability distribution is independent of absolute angle.

When trying to measure which slit the particles passed through, the interference pattern at the far screen is removed. For example, in the case of a double-slit experiment with electrons, one could perhaps flip the spin as the electron passes one of the slits but not the other. Then the detections on the screen could be correlated with events where the spin-flipper triggers, which would give information about which slit the electron passed through. This experiment is now described by

\[P^{(s)}(\vec{x}_{f},s_{f},\vec{x}_{i},s_{i})=\left\langle\vec{x}_{f},s_{f}\left|\widehat{DS}\right|\vec{x}_{i},s_{i}\right\rangle \tag{22}\]

where \(s_{i}\) and \(s_{f}\) are the initial and final spin coordinates of the electron, and the (\(s\)) superscript implies this is for the spin-coupled version of the double-slit. The pathways are now "through slit 1, miss slit 2, no spin flip" and "miss slit 1, through slit 2, flip spin". Assuming that the initial spin of the electron beam is uniform, the quantum operator for the spin-coupled double-slit process is

\[\widehat{DS}^{(s)}=|T\rangle|M\rangle|\alpha\rangle\langle T|\langle M|\langle\alpha|+|M\rangle|T\rangle|\beta\rangle\langle M|\langle T|\langle\alpha| \tag{23}\]

where \(|\alpha\rangle\) and \(|\beta\rangle\) are the two possible spin states, and the incident electron beam was prepared in the \(|\alpha\rangle\) state. Unlike \(\widehat{DS}\), \(\widehat{DS}^{(s)}\) is not separable into a process built from a singlet state. This results in two possible processes as seen by the detection screen,

\[\widehat{DS}^{(\alpha)}=|T\rangle|M\rangle\langle T|\langle M| \tag{24}\]

and

\[\widehat{DS}^{(\beta)}=|M\rangle|T\rangle\langle M|\langle T|, \tag{25}\]

that do not interfere like in the singlet case but instead represent two different classical pathways that occur by chance. From the quantum relativity perspective, since this process does not require a singlet description it should not display any quantum effects.

When it is said that quantum particles of a given species are indistinguishable, what is really indistinguishable are particular dichotomic pathways that exist in the singlet state being prepared by experiments designed to measure those quantum particles. If these pathways are indistinguishable, then the related relative coordinates being measured behave like quantum coordinates. If the pathways are distinguishable through coupling to existing internal observed variables, then the coordinates behave like classical coordinates.

## VIII Additional thoughts and incomplete speculations

_Relative vs Absolute Reference Frames:_ There seems to have been a discussion between Newton and Leibniz regarding the nature of space-time. Newton argued for an absolute coordinate system, a world stage on which his physical laws played out. Leibniz argued that only relationships between physical objects exist, that the world is fundamentally relative.
While special and general relativity, as well as quantum field theory, removed the dependence of physical laws on an absolute reference frame for 4D macroscopic space-time, they still lack a fully relativistic view of the relationships between microscopic degrees-of-freedom. They are formulated in mixed reference frames. How are the relative and absolute world views related? Perhaps what appears local and causally-related in a relative world appears non-local and non-causal from an absolute frame, and vice versa.

_Gravity and QFT:_ Within quantum relativity, gravity could be an emergent phenomenon, not something fundamental. When we see an object or star with our eyes, we are not measuring the object or star. Rather, we are measuring the massless photons that were coupled to the object or star. Mass is directly related to the degrees-of-freedom that we do not see. This is consistent with \(M=E_{i}(\vec{q}_{i})/c^{2}\), where what we call internal energy \(E_{i}\) is expressible in terms of relative coordinates \(\vec{q}_{i}\) that are fully internal to the object. Massive objects derive their mass from the existence of potentially-observable but currently-unobserved degrees-of-freedom that they carry. Perhaps a photon is effectively massless because it carries no further possible internal structure, although this does seem to leave the origin of polarization as an open question.

Quantum relativity might help remove infinities in quantum field theory and general relativity. Maybe there is no longer a need for renormalization or black hole core infinities in our descriptions? Do the infinities in QFT and GR arise from assuming infinite degrees-of-freedom somewhere? A quantum relativistic description can be finite by construction.

Black hole information loss and Hawking radiation could be analogous to burning a page of writing to destroy the measurable classical information, with the combustion fumes being analogous to the radiation. The underlying fundamental degrees-of-freedom of the objects that fall into a black hole cannot be destroyed (quantum information cannot be destroyed), but the observed relations between them (the classically-measurable relationships between the degrees-of-freedom) are lost.

Thanks to Ben Sussman and Khabat Heshami, with whom I've had many discussions about the subtle aspects of quantum mechanics.
2308.05126
Data-Driven Intelligence can Revolutionize Today's Cybersecurity World: A Position Paper
As cyber threats evolve and grow progressively more sophisticated, cybersecurity is becoming a more significant concern in today's digital era. Traditional security measures tend to be insufficient to defend against these persistent and dynamic threats because they are mainly intuition-based. One of the most promising ways to handle this ongoing problem is utilizing the potential of data-driven intelligence, by leveraging AI and machine learning techniques. It can improve operational efficiency and reduce response times by automating repetitive operations, enabling real-time threat detection, and facilitating incident response. In addition, it augments human expertise with insightful information, predictive analytics, and enhanced decision-making, enabling experts to better understand and address evolving problems. Thus, data-driven intelligence could significantly improve real-world cybersecurity solutions in a wide range of application areas like critical infrastructure, smart cities, digital twins, industrial control systems and so on. In this position paper, we argue that data-driven intelligence can revolutionize the realm of cybersecurity, offering not only large-scale task automation but also assistance to human experts for better situation awareness and decision-making in real-world scenarios.
Iqbal H. Sarker, Helge Janicke, Leandros Maglaras, Seyit Camtepe
2023-08-09T04:48:55Z
http://arxiv.org/abs/2308.05126v1
# Data-Driven Intelligence can Revolutionize Today's Cybersecurity World: A Position Paper ###### Abstract As cyber threats evolve and grow progressively more sophisticated, cybersecurity is becoming a more significant concern in today's digital era. Traditional security measures tend to be insufficient to defend against these persistent and dynamic threats because they are mainly intuition-based. One of the most promising ways to handle this ongoing problem is utilizing the potential of _data-driven intelligence_, by leveraging AI and machine learning techniques. It can improve operational efficiency and reduce response times by automating repetitive operations, enabling real-time threat detection, and facilitating incident response. In addition, it augments human expertise with insightful information, predictive analytics, and enhanced decision-making, enabling experts to better understand and address evolving problems. Thus, data-driven intelligence could significantly improve real-world cybersecurity solutions in a wide range of application areas like critical infrastructure, smart cities, digital twins, industrial control systems and so on. In this position paper, we argue that data-driven intelligence can revolutionize the realm of cybersecurity, offering not only large-scale task _automation_ but also _assistance to human experts_ for better situation awareness and decision-making in real-world scenarios. Keywords: Cybersecurity, Data-Driven Intelligence, Automation, Human Assistance, Augmenting Experts Knowledge, AI, Machine Learning. ## 1 Introduction Cybersecurity has emerged as a major problem in today's hyperconnected world due to the growing cyber threat landscape and the increasing number of sophisticated malicious actors. According to the Telecommunication Standardization Sector of the International Telecommunication Union [1], "Cybersecurity is the collection of tools, policies, security concepts, safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber environment and organization and user's assets." In real-world scenarios, protecting sensitive data and digital assets from continuously evolving threats is a challenging task for businesses in a variety of application areas such as critical infrastructures, smart city applications, and information and operational technology networks. Traditional security solutions might not be sufficient to provide defense against today's persistent and constantly evolving threats in these areas. There is an urgent need for innovative approaches that can effectively counteract the dynamic nature of cyber threats. Therefore, in this paper, we focus on data-driven intelligence, which offers a powerful combination of automation and human assistance and could be one of the most promising strategies for solving this ongoing problem. Data-driven intelligence can typically be defined as the process of using data analysis and interpretation to derive insights or useful knowledge, and eventually make intelligent decisions. It thus involves identifying trends, patterns, correlations, and other pertinent information primarily through the use of data, which can then be applied to guide corporate operations and strategic decisions. The development of data-driven intelligence, powered by machine learning and artificial intelligence [2], has tremendous potential for revolutionizing cybersecurity in various application areas, discussed briefly in Section 4.
Data-driven intelligence has the capability to reveal hidden patterns, detect anomalies, and predict potential cyberattacks by utilizing the enormous amounts of data generated from numerous sources, such as network logs, system events, and user behavior. This enables the development of proactive and adaptive defense systems rather than simply relying on predefined rules and signatures, enhancing an organization's capacity to recognize, respond to, and mitigate cyber threats. In addition to automating tasks, data-driven intelligence enables cyber analysts to gain deeper insights into the tactics, techniques, and procedures employed by cyber adversaries through the insights extracted from data, discussed briefly in Section 3. In order to better understand the main focus of this position paper and its overall contributions, we formulate three major questions below: * Can data-driven intelligence _automate_ large-scale complex tasks in the context of cybersecurity? * Does data-driven intelligence have the potential to _augment_ human expertise or knowledge through in-depth understanding, as well as to _assist_ experts in their decision-making process in real-world scenarios? * Is it worthwhile to _rethink_ the present cyberspace across a variety of application areas while taking into account the power of data-driven intelligence, particularly in terms of automation and assisting human experts in the domain? In answering these questions, we argue that data-driven intelligence can revolutionize today's cybersecurity world. Towards this, we provide a clear understanding of the potential of data-driven intelligence as well as its applicability and impact from the perspective of next-generation cybersecurity solutions in the following sections. Thus, this paper contributes to the ongoing discussion about the role of data-driven modeling and the importance of ensuring that innovative methods are developed and deployed in a manner that maximizes their benefits while minimizing their risks. The ultimate purpose of this paper is not only to highlight data-driven intelligence but also to use the extracted insights or useful knowledge gained from data to make intelligent decisions that improve the current cybersecurity landscape. The rest of the paper is organized as follows: Section 2 highlights the significance of data intelligence considering both automating tasks and human experts' decision-making. We discuss data-driven modeling in Section 3. We also explore the potential of data-driven intelligence in various real-world application domains in Section 4. The key challenges and issues are highlighted in Section 5, and finally, Section 6 concludes this paper. ## 2 Why Data-Driven Intelligence for Next-Generation Cybersecurity? In the area of cybersecurity, data-driven intelligence offers a substantial contribution to _automation_ as well as to _assisting human expert decision-making_ in solving real-world problems. Human experts may not have the scalability and speed of automated systems, but they do have the capability for critical thought, intuition, and the ability to pursue bigger organizational goals as well as ethical concerns when making decisions. The symbiotic relationship between automation and human expertise enables businesses to develop strong cyber defense capabilities, react to threats promptly, and maintain a competitive advantage in the continually evolving landscape of cybersecurity concerns.
In this section, we discuss how data-driven intelligence can serve as a strength factor in cybersecurity by automating repetitive processes and anticipating threats, as well as by augmenting human expertise with useful information. Figure 1: An illustration highlighting the potential of data-driven intelligence for both automation and assisting human experts in the context of cybersecurity. 1. _Automation of Large-Scale Cyber Tasks:_ Cybersecurity tasks like log analysis, anomaly detection, and routine security checks can be automated using data-driven intelligence [3]. Data-driven automated systems use insights from raw data to drive decision-making. These tasks can be completed more quickly and accurately by utilizing machine learning and AI algorithms [2], freeing human experts for more complicated tasks. By continuously monitoring and analyzing enormous volumes of data from many sources, data-driven intelligence automates the process of threat detection. It instantly detects anomalies, suspicious activity, and potential threats in real time using machine learning techniques. The incident response process is sped up by automation, which also minimizes the risk of human error and ensures that cybersecurity teams act systematically. Through the extraction of insights from raw data, data-driven automated systems are able to continually learn, adapt, and make decisions in real time, and eventually boost operational effectiveness. 2. _Augmenting Human Understanding and Expertise for Improved Cyber Solutions:_ The capabilities of human cybersecurity experts are strengthened by data-driven intelligence in various ways, discussed below. These are: * _Assisting Human Experts' Decision-Making with Evidence-based Recommendations:_ Instead of depending exclusively on intuition or prior experience, cybersecurity professionals can establish comprehensive cybersecurity plans based on empirical evidence and data-informed recommendations. This data-driven approach allows them to conduct comprehensive risk assessments, understand the impact of different attack vectors, and identify critical areas for policy improvement. By providing context-sensitive information about particular incidents and attack tactics, data-driven intelligence improves human experts' knowledge of cyber risks. This deeper understanding aids analysts in determining the seriousness of a threat and developing appropriate countermeasures specific to the organization's particular security posture. Ultimately, data-driven intelligence empowers cybersecurity analysts to support evidence-based, dynamic, and robust policy recommendations that strengthen an organization's resilience against cyber threats. * _Enhancing Human Experts' Domain Knowledge for Advanced Thinking:_ Data-driven intelligence plays a pivotal role in enhancing cyber experts' domain knowledge, specifically for further modeling and analysis. By processing large volumes of cybersecurity data, data-driven tools can uncover valuable insights, patterns, and correlations that experts can use to build more accurate and sophisticated models. For instance, data insights can help determine which entities are essential for building an effective cybersecurity knowledge graph [4] or a cybersecurity taxonomy [5] by identifying common properties and characteristics of entities as well as their internal relationships.
These data-driven models can capture the complexities of the cyber landscape, simulate various attack scenarios, and predict potential outcomes with higher precision. As cyber experts integrate data-driven intelligence into their domain knowledge, they can continuously refine their models, improve their understanding of cyber threats, and develop more effective strategies to defend against evolving challenges. Ultimately, the fusion of data-driven intelligence with the expertise of cyber experts enables them to create advanced models that are both robust and adaptable, empowering organizations to stay ahead in the ever-changing cybersecurity landscape. * _Knowledge Retention and Transfer:_ Developing and maintaining efficient cybersecurity capabilities is a complex and continuing process that requires significant investment in terms of time, resources, and expertise. Professionals in the field of cybersecurity require not only technical skills but also a thorough awareness of the infrastructure, processes, and potential vulnerabilities within the organization. This knowledge is essential for quickly recognizing risks and taking appropriate action. In real-world scenarios, the expense of bringing in cyber professionals is not limited to salary and overheads; it also includes the economic loss from incidents that could have been better handled by experienced staff. Experience and such investments are lost when an experienced staff member leaves an organization. Consequently, this may result in a knowledge and expertise gap that is difficult to fill quickly. Numerous negative effects, such as increased vulnerability to cyber threats, decreased efficacy of incident response, and potential project disruptions, may result from this loss. Hiring a new professional with matching capabilities may not be sufficient, because acquiring the organizational context usually takes time, during which further incidents may occur. The data-driven approach creates new opportunities to retain this knowledge and experience and to transfer them to new professionals within an organization as needed. In summary, data-driven insights derived from raw data are crucial both for automating large-scale complex tasks and for assisting human experts in their decisions in the context of cybersecurity, as illustrated in Figure 1. Data-driven intelligence combines the strengths of data insights with AI and machine learning techniques for advanced modeling, highlighted in Section 3, to improve overall cyber defense capabilities and maximize teamwork between automated systems and human analysts. While each strategy has merits, a well-balanced approach that leverages both human expertise and data-driven automation to improve the overall security posture and incident response capabilities could be the most effective way forward in cybersecurity. It enables human analysts to focus their attention on tasks that need critical thinking, creativity, and strategic planning by providing a wealth of data and insights. ## 3 Data Insights and Modeling This section mainly consists of two parts. We initially focus on the different types of insights that are associated with data-driven intelligence, and then we concentrate on a general data-driven modeling workflow for further exploration to address a specific issue. ### Cyber Data Insights For a better understanding of the insights involved in the data-driven intelligence process, we highlight three key questions below.
The answers to these queries can aid human analysts in understanding and solving a specific problem in the context of cybersecurity more deeply, as well as in automating the necessary tasks. These are: * _What happened in the past?:_ This typically explores the happenings and incidents in the world of cybersecurity. It includes analyzing historical data, logs, and incident reports to determine the type of cyberattack, the methods employed by threat actors, the affected systems or data, and the overall impact on the organization. Experts in cybersecurity can react quickly, mitigate loss, and initiate the proper incident response procedures when they are aware of what happened. Building a strong defense strategy also involves identifying patterns and trends in cyber incidents. * _Why did it happen?:_ Here, the emphasis is on uncovering the root causes of and factors associated with cybersecurity events. Understanding the "why" requires an in-depth investigation of the security infrastructure's shortcomings, configuration issues, human errors, and vulnerabilities that led to the attack's success. Using this investigative process, analysts can find systemic issues and weaknesses in their security procedures, work processes, and employee awareness. Organizations may improve their defenses, reduce potential risks, and build a more resilient cybersecurity framework by tackling these root causes. * _What will happen in the future?:_ This element involves predicting and forecasting probable future cybersecurity threats and trends. Cyber threats are always changing, and threat actors are constantly coming up with new strategies to exploit vulnerabilities. Forecasting potential threats can be aided by data-driven intelligence, threat-intelligence sharing, and investigation of emerging technologies. Organizations can prepare for these challenges and better protect themselves against new and emerging cyber threats by understanding what may happen in the future. Thus, extracting these insights could be the key to building the foundation of a data-driven intelligence model, where various techniques within the broad area of data science can be used, as discussed in the following. ### Data-Driven Modeling with Explanation An effective modeling technique is essential to extract insights or useful knowledge, where various data-preprocessing and visualization techniques as well as AI and machine learning algorithms for advanced modeling can be used. The key components of this process are as follows: * _Data Collection and Preparation:_ Gathering broad and comprehensive datasets related to cybersecurity is the first step. These datasets may contain information from various sources such as logs, network traffic, system events, security alerts, and historical attack data. To ensure consistency and quality, the collected data should be preprocessed, cleansed, and transformed toward the target solution. Synthetic data generation as well as handling class imbalance using techniques like oversampling and undersampling [6] might be helpful depending on the nature of the data. * _Feature Selection and Engineering:_ This involves selecting or extracting meaningful features from the preprocessed data that can be used to build the model. It is essential to choose features carefully since traditional machine-learning methods, such as neural networks, SVMs, etc., are sensitive to the features used as inputs [7]. The most pertinent features can be found through statistical analysis or machine learning algorithms [3].
In some cases, feature extraction may require human expertise based on contextual information and awareness of cyber risks and vulnerabilities [8]. To identify relevant features and reduce the dimensionality of the data, both algorithmic approaches and domain experts may guide the feature engineering process toward an optimal result. * _Exploratory Analysis and Visualization:_ Before moving on to advanced modeling or decision-making, exploratory analysis helps in understanding the data structure and patterns in depth, and eventually in gaining insights into normal behavior and identifying patterns associated with cyber threats. Various statistical and visual techniques and tools such as histograms, scatter plots, bar charts, heatmaps, etc. [9] can be employed to analyze the distributions, correlations, and structure of the data. * _Model Development and Training:_ Models may vary depending on the characteristics of the data and the problem domain. This includes applying AI and machine learning techniques like decision trees, random forests, and neural network learning, as well as rule-based modeling and explanation [10][11]. To improve performance and generalization, optimizing model parameters is important. In several cases, innovative methods might need to be developed based on which insights need to be explored, as discussed earlier. Developing hybrid or ensemble models that aggregate outcomes from multiple base models may also need to be considered to improve model robustness and generalizability as well as overall accuracy. * _Model Evaluation:_ A comprehensive evaluation is necessary after building and training the model with the relevant cyber data. The efficiency of the model can be assessed using evaluation criteria like accuracy, precision, recall, or F1 score [3]. Validation methods like k-fold cross-validation aid in estimating the performance of the model on unseen data and evaluating its generalizability, which is important for handling diverse real-world conditions. * _Human-in-the-Loop Integration:_ While automated models are capable of detecting a wide range of threats, they may not be flawless and could sometimes generate false positives or false negatives. Experts in cybersecurity may contribute their knowledge and expertise by analyzing and verifying the outcomes of automated models. Thus, this module incorporates incident response teams and cybersecurity analysts in the process to provide domain expertise, interpret model outputs, and make critical decisions. * _Deployment and Continuous Improvement:_ The models can be deployed in a real-world cybersecurity context once they have been established to be satisfactory. To ensure the model's efficacy over time, it is essential to continuously assess its performance, detection rates, false positives, and false negatives. To keep the model realistic and up to date, regular updates, retraining, and adaptation to changing threats are required. A minimal end-to-end sketch of this workflow is given below. Overall, a comprehensive data-driven intelligence framework for cybersecurity modeling needs to be adaptable, resilient, and able to handle the constantly changing and evolving nature of cyber threats. To develop reliable and effective cybersecurity solutions, it thus needs to incorporate in-depth data analysis, machine learning, and domain expertise.
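The following is a minimal end-to-end sketch (ours, not a system described in this paper) of the workflow above using Python and scikit-learn: synthetic stand-ins for log-derived features, a random-forest detector, and the evaluation metrics mentioned in the text. All names, the synthetic data, and the choice of classifier are illustrative assumptions.

```python
# Minimal sketch of the modeling workflow: prepare data, train, evaluate.
# The synthetic "network log" features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=0)

# 1. Data collection/preparation: synthetic stand-in for preprocessed
#    log-derived features (e.g., packet rate, failed logins, bytes out).
n = 2000
X_benign = rng.normal(loc=0.0, scale=1.0, size=(n, 3))
X_attack = rng.normal(loc=3.0, scale=1.5, size=(n // 10, 3))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * n + [1] * (n // 10))  # 1 = malicious (imbalanced classes)

# 2. Model development and training, with class weighting for imbalance.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# 3. Evaluation: k-fold cross-validation plus precision/recall/F1, as
#    suggested in the text; flagged events would then be passed to a
#    human analyst for validation (human-in-the-loop).
print("5-fold CV F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
print(classification_report(y_te, clf.predict(X_te)))
```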
## 4 Real-World Cybersecurity Application Areas Data-driven intelligence can be employed in various application areas for effective cybersecurity solutions. In the following, we summarize and discuss some important fields where data-driven intelligence could play a key role, both in automation and in assisting human experts in their decision-making, across various real-world applications. ### Critical Infrastructure Critical infrastructure (CI) typically refers to the systems, assets, and networks that are essential for the functioning of a society and economy; for example, energy, water, transportation, communications, healthcare, and finance are some of the key sectors [12][13]. Thus, CI cybersecurity and resilience is among the most important concerns today, and data-driven intelligence could play a crucial role in practical solutions through data insights and sophisticated analytical modeling. The basis of such intelligence could involve analyzing and visualizing CI data gathered from various sources including network logs, system activity, and threat intelligence feeds. The insights extracted from data could provide a comprehensive picture of the security landscape, giving human professionals a better understanding of potential threats and vulnerabilities. Data-driven intelligence is also capable of predicting possible future cyber threats and attack trends using data patterns and AI algorithms [2]. These predictive insights could help human experts analyze potential attacks further and prepare countermeasures, enabling them to proactively strengthen CI defenses. In many cases, automation is necessary to speed up the investigation and management of incidents as well as to minimize the possibility of human error. For example, routine incident response tasks, such as anomaly detection, malware analysis, and containment processes, could be automated through a data-driven modeling process. Overall, the potential of data-driven intelligence could be the key to next-generation CI security, offering automation of large-scale tasks as well as assistance to CI professionals in making well-informed decisions in various real-world scenarios. ### Digital Twin Nowadays, more and more businesses are using digital twins, which are virtual replicas of physical assets or systems [14]. As physical, digital, and communication spaces are all associated with digital twin systems [15], effective security measures are necessary. Data-driven intelligence may keep track of the network traffic, user interactions, and behavior of the digital twin [16]. It thus enables real-time monitoring of digital twin systems, continuously collecting data to detect any deviations from normal behavior. Cyber professionals may gain deeper insights into the behavior of the physical and virtual components through this extensive data analysis. For instance, when any suspicious activity or possible security issue is identified, they may receive prompt notifications, enabling quick response and mitigation. It can also forecast potential cybersecurity risks and vulnerabilities based on the insights extracted from data. Overall, this could be a useful tool for automatically solving security issues as well as for enhancing human expertise and aiding decision-making in real-world applications. ### Smart Cities Smart cities are another potential area; they typically rely on interconnected digital systems and devices to enhance efficiency and improve the quality of life for residents.
Massive amounts of data are produced by smart cities from a variety of sources, including IoT devices, sensors, infrastructure, and human interactions [17]. This data can be analyzed by data-driven intelligence to find trends and abnormalities that could point to possible cyber threats. It can identify suspicious activity in real time and inform cybersecurity professionals, allowing them to take prompt action to stop cyberattacks. Data-driven intelligence may establish baseline behaviors for various parts of the smart city infrastructure by using AI and machine learning techniques [2]. This involves being aware of the typical data exchange patterns, user behavior with regard to smart devices, and network traffic flow. Automated incident response systems may be triggered when cyber threats are identified. Data-driven intelligence can also forecast potential future cyber threats through analysis of historical data and cyberattack trends. Decision-makers can understand how cybersecurity resources are used by conducting data analysis. Human experts could learn about emerging threats, observe trends, and make wise decisions about security practices and procedures based on this comprehensive picture. ### IoT The Internet of Things (IoT) enables communication and interaction with numerous devices and generates an enormous amount of data, which can then be utilized to identify trends and behaviors, make predictions, and conduct assessments [18]. Decision-making in IoT cybersecurity is thus facilitated by data-driven intelligence, which substantially enhances human expert knowledge as well. Data-driven systems have the ability to rapidly detect abnormalities, recognize potential threats, and anticipate emerging issues by analyzing the enormous volumes of data produced by IoT devices and networks. This proactive strategy and real-time monitoring enable human professionals to react to cyber incidents quickly and strategically, reducing their effects. A thorough understanding of the complex IoT ecosystem is made possible by data-driven insights, which provide important context and correlation across many data sources. This collaborative synergy enables cybersecurity experts to make well-informed decisions, allocate resources efficiently, and put in place efficient measures to protect IoT environments from emerging threats. ### ICS/OT ICS stands for "Industrial Control Systems", which are used to monitor and control physical processes and operations, typically connecting IT components with sensors, actuators, and other operational technology (OT) devices [19]. Supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), PLCs, and other ICS components are frequently targets of cyberattacks [20]. Potential threats to ICS include advanced persistent threats, supply chain compromise, distributed denial of service, etc., which data-driven intelligence can help detect and mitigate through extensive analysis. Utilizing real-time and historical data collected from numerous interconnected devices and networks within industrial infrastructure also enables human experts to gain an in-depth understanding of the evolving threat landscape. By analyzing patterns, anomalies, and potential vulnerabilities, experts may deal with cyber threats proactively before they escalate. Additionally, data-driven solutions enable routine and large-scale complex operations to be automated, reducing the burden on human experts.
Overall, the security and reliability of crucial industrial systems could be ensured by developing effective defense modeling with the fusion of data insights and human expertise. ### Metaverse The metaverse could be another potential area, aiming to create secure, scalable, and realistic virtual worlds on a reliable and always-on platform. Users can interact with each other and with digital objects in real time using technologies like virtual reality (VR) or augmented reality (AR) [21]. Due to the massive volume of data moving around in the metaverse, users constantly run a higher risk of misuse [22]. Businesses are investing heavily in building an artificially intelligent metaverse, which has increased the need for cybersecurity [23]. Data-driven cybersecurity solutions can track user behavior, interactions, and network traffic throughout the metaverse. These systems are capable of quickly identifying possible risks or unusual activities, such as unauthorized access attempts or malware activity, by analyzing patterns and anomalies. Data-driven intelligence could also provide automated incident response systems that react to known threats and attacks without requiring human involvement. Real-time monitoring and visualization of cybersecurity metrics and events within the metaverse can likewise be provided through data-driven intelligence. These visualizations enable human professionals to promptly comprehend the security posture and pinpoint areas of concern. While human experts contribute critical thinking, domain knowledge, and decision-making, data-driven intelligence enhances these capabilities with rapid analysis, real-time insights, and automation of large-scale tasks. This can secure the metaverse environment in a comprehensive and proactive manner. ### Advanced Networking and Communications Nowadays, data-driven technology is also popular in the area of advanced communications and networking [24]. Based on current demand and traffic patterns, it can optimize the allocation of resources like bandwidth, computing power, and spectrum. In terms of security, data-driven technologies are capable of analyzing user behavior and network traffic patterns to detect anomalies and possible security breaches [25]. Machine learning models can identify suspicious activity and trigger prompt countermeasures to stop intrusions. Predictive maintenance powered by data-driven intelligence enables proactive defense against evolving attack vectors, which can help prevent network downtime and improve overall reliability. Thus, it can ensure a balanced trade-off between security and usability by dynamically adjusting security configurations based on network conditions, user behavior, and threat levels [26]. An effective access control system can be implemented by investigating user behavior and contextual data to ensure secure and reliable authentication. Overall, advanced communications and network security have been significantly impacted by data-driven intelligence, as it provides insights, automation, and adaptation to address complex problems in real time. In summary, organizations could enhance their threat detection and prevention, improve incident response capabilities, and strengthen their overall cybersecurity posture by utilizing data-driven intelligence in various real-world application domains. Data-driven intelligence augments human expertise rather than substituting for it.
During security incidents, it gives security analysts and operators the additional information and context needed to make informed decisions. Data-driven intelligence helps human professionals respond quickly and effectively by providing pertinent information and potential courses of action. Organizations that have the capability to analyze massive datasets, uncover patterns, and make decisions based on data insights can remain resilient in the face of emerging cyber threats. ## 5 Challenges and Research Directions While the concept of data-driven intelligence revolutionizing the cybersecurity world holds promise, there are several challenges that researchers and practitioners need to address to fully realize its potential. These challenges, discussed below, encompass various aspects of research and development in the field: * _Data Quality and Availability:_ One of the major challenges in incorporating data-driven intelligence into cybersecurity research and applications is ensuring the quality and availability of relevant data. Obtaining comprehensive, accurate, and diverse datasets could be challenging, especially when dealing with sensitive information. Researchers need to overcome data limitations, address biases, and explore meaningful synthetic data generation to ensure the reliability and effectiveness of their research and its ultimate outcome. Methods that enable cybersecurity models to transfer knowledge from one domain or task to another could be useful. * _Algorithmic Transparency and Interpretability:_ The use of AI and complex machine learning algorithms, such as deep neural network learning [2], in data-driven intelligence may raise challenges in algorithmic transparency and interpretability. Understanding how algorithms make decisions and being able to interpret their outputs is crucial in the context of cybersecurity. Researchers need to focus on developing explainable AI techniques, e.g., rule-based modeling [11] or others, that can provide insights into the reasoning behind algorithmic decisions, allowing cybersecurity professionals to trust and validate the results generated by data-driven intelligence systems. * _Privacy Concerns:_ Data-driven intelligence might raise important privacy and ethical concerns. The collection and analysis of large amounts of personal and sensitive data need to be conducted responsibly and in compliance with privacy regulations. Researchers thus need to explore privacy-preserving techniques such as differential privacy, federated learning, data anonymization, etc. [27] to ensure that individuals' privacy is protected while still extracting meaningful insights from data. For instance, federated learning enables training models across numerous devices or organizations without sharing raw data, hence protecting data privacy. * _Adversarial Attacks and Defenses:_ Adversaries can manipulate or poison datasets to mislead machine learning algorithms, leading to erroneous decisions or bypassing detection mechanisms. Research is necessary for developing robust models that are resilient to adversarial attacks and maintain high reliability and accuracy in practical settings. Developing advanced anomaly detection techniques that can identify unusual behavior and respond to previously unknown threats and zero-day attacks is crucial. Hybrid models combining data-driven approaches, such as machine learning, and rule-based approaches with expert knowledge can enhance the overall effectiveness of cybersecurity models.
* _Generalizability and Scalability:_ The effectiveness of data-driven intelligence models in cybersecurity may vary across different contexts, environments, and evolving cyber threats. Thus, ensuring the generalizability and adaptability of research findings and models is crucial. Investigating transfer learning techniques can assist models in maintaining high detection accuracy while adapting rapidly to new attack patterns. To manage huge datasets in real time, it is also necessary to develop scalable algorithms, distributed computing frameworks, and optimized processing strategies. This is crucial to ensure scalability and efficiency given the exponential growth of cybersecurity data volume. * _Human-in-the-Loop and Accountability:_ While data-driven intelligence can provide valuable insights, the 'human' element in real-world applications must not be overlooked. Researchers need to take into account how human operators interact with data-driven systems, understand their decision-making processes, and design effective user interfaces and visualizations to aid decision-making. Combining AI and human expertise can also increase accountability. For instance, cybersecurity professionals can validate model outcomes, intervene when necessary, and provide explanations for actions taken by the AI system. Thus, a regulatory guiding framework comprising data science researchers, cybersecurity experts, legal professionals, and policymakers is crucial to bridge the gap between technological breakthroughs and actual application. In summary, data-driven intelligence for cybersecurity modeling is an area of study that involves resolving issues with data quality, processing techniques, model robustness, privacy, human expertise, and more. Addressing these challenges is crucial to fully realize the potential of data-driven intelligence in revolutionizing the cybersecurity landscape, and this should be the key focus of future research and improvement. ## 6 Conclusion This position paper has made the case for the revolutionary effects of data-driven intelligence in the cybersecurity area. To this end, we have explored and discussed in depth the potential of data-driven intelligence, particularly in terms of automating large-scale complex tasks as well as assisting human experts in making decisions in real-world scenarios. Organizations can improve their capability to recognize and address emerging threats by utilizing the power of data intelligence. The proactive and adaptable nature of data-driven intelligence also allows security professionals to stay one step ahead of malicious actors, significantly reducing risks. However, this paradigm shift also involves several challenges, such as data availability, algorithmic bias, and incorporating human expertise in the loop, that need to be resolved, as discussed in this paper. Building a well-balanced framework that leverages both human expertise and data-driven intelligence to improve the overall security posture is also highlighted. Overall, we believe that data-driven intelligence could be the key to next-generation cybersecurity if it is deployed wisely and ongoing research is undertaken. ## Acknowledgement This work is supported by Cyber Security Cooperative Research Centre (CSCRC), Australia.
2303.07443
On $C^0$-stability of compact leaves with amenable fundamental group
In his work on the generalization of the Reeb stability theorem, Thurston conjectured that if the fundamental group of a compact leaf $L$ in a codimension-one transversely orientable foliation is amenable and if the first cohomology group $H^1(L;\mathbb{R})$ is trivial, then $L$ has a neighborhood foliated as a product. This was later proved as a consequence of Witte-Morris' theorem on the local indicability of amenable left orderable groups and Navas' theorem on the left orderability of the group of germs of orientation-preserving homeomorphisms of the real line at the origin. In this note, we prove that Thurston's conjecture also holds for any foliation that is sufficiently close to the original foliation. Hence, if the fundamental group $\pi_1(L)$ is amenable and $H^1(L;\mathbb{R})=0$, then for every transversely orientable codimension-one foliation $\mathcal{F}$ having $L$ as a leaf, there is a neighborhood of $\mathcal{F}$ in the space of $C^{1,0}$ foliations with Epstein $C^0$ topology consisting entirely of foliations that are locally a product $L \times \mathbb{R}$.
Sam Nariman, Mehdi Yazdi
2023-03-13T19:52:00Z
http://arxiv.org/abs/2303.07443v2
# On \(C^{0}\)-stability of compact leaves with amenable fundamental group ###### Abstract. In his work on the generalization of the Reeb stability theorem ([21]), Thurston conjectured that if the fundamental group of a compact leaf \(L\) in a codimension-one transversely orientable foliation is amenable and if the first cohomology group \(H^{1}(L;\mathbb{R})\) is trivial, then \(L\) has a neighborhood foliated as a product. This was later proved as a consequence of Witte-Morris' theorem on the local indicability of amenable left orderable groups and Navas' theorem on the left orderability of the group of germs of orientation-preserving homeomorphisms of the real line at the origin. In this note, we prove that Thurston's conjecture also holds for any foliation that is sufficiently close to the original foliation. Hence, if the fundamental group \(\pi_{1}(L)\) is amenable and \(H^{1}(L;\mathbb{R})=0\), then for every transversely orientable codimension-one foliation \(\mathcal{F}\) having \(L\) as a leaf, there is a neighborhood of \(\mathcal{F}\) in the space of \(C^{1,0}\) foliations with the Epstein \(C^{0}\) topology consisting entirely of foliations that are locally a product \(L\times\mathbb{R}\). ## 1. Introduction A codimension-\(m\) _foliation_ of an \(n\)-dimensional manifold \(M\) is a decomposition of \(M\) into injectively immersed \((n-m)\)-dimensional submanifolds such that, locally, the submanifolds form a product \(\mathbb{R}^{n-m}\times\mathbb{R}^{m}\) by slices \(\mathbb{R}^{n-m}\times\{\text{point}\}\). The connected components of the submanifolds in this decomposition are called _leaves_. Thus, locally \(\mathcal{F}\) is given by an atlas whose transition functions preserve the planes \(\mathbb{R}^{n-m}\times\{\text{point}\}\). If the transition functions are \(C^{k}\) with respect to the \(\mathbb{R}^{n-m}\) coordinates and \(C^{r}\) with respect to the \(\mathbb{R}^{m}\) coordinates, then the foliation is of _class_ \(C^{k,r}\). By a \(C^{r}\) foliation, we mean a foliation of class \(C^{r,r}\). Just as the dynamics of a function heavily depend on its regularity class, the global properties of a foliation are influenced by its regularity class. For example, Denjoy's theorem ([1]) as well as Kopell's Lemma ([12]) can each be used to give examples of codimension-one foliations that are not topologically conjugate to any \(C^{2}\) foliation. Harrison's results on nonsmoothable diffeomorphisms ([14, 15]) imply, via the mapping torus construction, that for every integer \(r\geq 0\) and every codimension \(m\neq 1,4\), there are codimension-\(m\) \(C^{r}\) foliations that are not topologically conjugate to any \(C^{r+1}\) foliation. Cantwell and Conlon ([13]) showed that there are codimension-one \(C^{0}\) foliations that are not topologically conjugate to any \(C^{1}\) foliation. Independently, Tsuboi ([20]) showed that for each integer \(r\geq 0\), there are \(C^{r}\) actions of the finitely presented group \(\langle a,b\,|\,[a,[a,b]]=1\rangle\) on the interval that are not topologically conjugate to any \(C^{r+1}\) action. See [1, 12, 13, 14] for more recent results on the degree of regularity for group actions on \(1\)-manifolds. A compact leaf \(L\) of a codimension-\(m\) foliation \(\mathcal{F}\) is _globally stable_ if every leaf of \(\mathcal{F}\) is diffeomorphic to \(L\) and has a neighborhood that is foliated as a product \(L\times\mathbb{R}^{m}\) by leaves \(L\times\{\text{point}\}\).
For codimension-one transversely orientable foliations on compact connected manifolds, the existence of a globally stable leaf \(L\) implies that \(\mathcal{F}\) is either the product foliation \(L\times[0,1]\) or a fibration over the circle. The stability of compact or proper leaves of foliations, especially in codimension one, has been extensively studied. **Acknowledgment**.: SN was partially supported by a grant from the Simons Foundation (41000919, SN) and NSF CAREER Grant DMS-2239106. MY is supported by a UKRI Postdoctoral Research Fellowship. ## 2. Left-invariant orders on groups In this section we collect some background on left-orderable groups; a good reference is Clay and Rolfsen ([1]). We then prove that a certain group of germs of homeomorphisms is left-orderable, generalizing Navas' theorem.
A group \(G\) is _left-orderable_ if there is a total order \(\prec\) on the elements of \(G\) that is invariant under left multiplication by \(G\); i.e. for every \(f,g,h\in G\) \[g\prec h\implies fg\prec fh.\] Note that a left-orderable group is torsion-free, since if \(e\in G\) is the identity element and \(e\neq g\in G\) is such that \(g^{n}=e\) and \(g\succ e\), then \[g^{n}\succ g^{n-1}\succ\cdots\succ g\succ e,\] which is a contradiction. Similarly, \(g\prec e\) leads to a contradiction, implying that \(G\) is torsion-free. A subgroup of a left-orderable group is again a left-orderable group, which is easily seen by restricting the order. _Example 2.1_.: An important example of a left-orderable group is the group \(\mathrm{Homeo}_{+}(\mathbb{R})\) of orientation-preserving homeomorphisms of \(\mathbb{R}\). To see this, let \(a_{1},a_{2},a_{3},\cdots\) be a dense sequence in \(\mathbb{R}\). For every two distinct elements \(f,g\in\mathrm{Homeo}_{+}(\mathbb{R})\), let \(n=n(f,g)\) be the smallest number such that \(f(a_{n})\neq g(a_{n})\), and define an ordering \(\prec\) as \[f\prec g\iff f(a_{n})<g(a_{n}).\] Since the \(\{a_{i}\}\) are dense in \(\mathbb{R}\), for every \(f\neq g\) such an \(n\) exists. Moreover, since elements of \(\mathrm{Homeo}_{+}(\mathbb{R})\) are strictly increasing, the above ordering is invariant under left multiplication. The following theorem shows that \(\mathrm{Homeo}_{+}(\mathbb{R})\) is a fairly general example of a left-orderable group. **Theorem 2.2** (Dynamic realization).: _Let \(G\) be a countable group. Then \(G\) is left-orderable if and only if there is an injective homomorphism \(h\colon G\to\mathrm{Homeo}_{+}(\mathbb{R})\); i.e. \(G\) acts faithfully on the real line by orientation-preserving homeomorphisms._ Proof.: We recall the proof given in [11, Theorem 2.2.19] since we need to slightly modify it later for our application. Suppose that \(G\) admits a left-invariant order \(\prec\). First, define an embedding \(t\colon G\to\mathbb{R}\) as follows: Choose a numbering \(g_{0},g_{1},g_{2},\cdots\) for the elements of \(G\). Let \(t(g_{0})=0\in\mathbb{R}\). Assume that \(t(g_{0}),t(g_{1}),\cdots,t(g_{i})\) are defined and define \(t(g_{i+1})\) inductively as follows. If \(g_{i+1}\) is larger (in the order \(\prec\)) than all of \(g_{0},g_{1},\cdots,g_{i}\), then set \[t(g_{i+1})=\max\{t(g_{0}),\cdots,t(g_{i})\}+1.\] If \(g_{i+1}\) is smaller than all of \(g_{0},g_{1},\cdots,g_{i}\), then define \[t(g_{i+1})=\min\{t(g_{0}),\cdots,t(g_{i})\}-1.\] If there are \(0\leq m,n\leq i\) such that \(g_{n}\prec g_{i+1}\prec g_{m}\) and no other element of \(\{g_{0},g_{1},\cdots,g_{i}\}\) lies strictly between \(g_{n}\) and \(g_{m}\), then define \[t(g_{i+1})=\frac{t(g_{n})+t(g_{m})}{2}.\] This defines an embedding \(t\colon G\to\mathbb{R}\). Note that the image of \(t\) is unbounded, since if \(g\succ e\) then \(t(g^{n})\to+\infty\) as \(n\to+\infty\). Then \(G\) naturally acts on \(t(G)\) by setting \(g\cdot(t(g_{i})):=t(gg_{i})\) for every \(g\in G\), and this action continuously extends to the closure \(\overline{t(G)}\) of \(t(G)\) in \(\mathbb{R}\). The complement of the closure of \(t(G)\) in \(\mathbb{R}\) is an open subset of \(\mathbb{R}\), and so is a countable union of open intervals. Extend this action of \(G\) on \(\overline{t(G)}\) to an action of \(G\) on \(\mathbb{R}\) such that each \(g\in G\) acts by an affine map when restricted to each of the intervals of \(\mathbb{R}\setminus\overline{t(G)}\).
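The inductive placement step in this proof is completely algorithmic, so it may help to see it run. Below is a minimal sketch (ours, not from the paper) that places finitely many enumerated group elements on the real line, with the left order supplied as a comparison function; the function name and the use of floating-point midpoints are illustrative assumptions.

```python
from typing import Callable, Dict, List, TypeVar

G = TypeVar("G")

def dynamic_realization_points(elements: List[G],
                               less: Callable[[G, G], bool]) -> Dict[G, float]:
    """Inductively place t(g_0), t(g_1), ... on the real line following
    the rule in the proof of Theorem 2.2: a new maximum goes one unit
    above the current maximum, a new minimum one unit below the current
    minimum, and an element lying between previously placed elements
    goes to the midpoint of its nearest placed neighbours."""
    t: Dict[G, float] = {elements[0]: 0.0}
    for g in elements[1:]:
        smaller = [t[h] for h in t if less(h, g)]  # t-values of placed h with h < g
        larger = [t[h] for h in t if less(g, h)]   # t-values of placed h with g < h
        if not larger:                  # g exceeds everything placed so far
            t[g] = max(t.values()) + 1.0
        elif not smaller:               # g is below everything placed so far
            t[g] = min(t.values()) - 1.0
        else:                           # g sits strictly between two neighbours
            t[g] = (max(smaller) + min(larger)) / 2.0
    return t

# Example: (Z, <) enumerated as 0, 1, -1, 2, -2, 3, -3.
print(dynamic_realization_points([0, 1, -1, 2, -2, 3, -3], lambda a, b: a < b))
# {0: 0.0, 1: 1.0, -1: -1.0, 2: 2.0, -2: -2.0, 3: 3.0, -3: -3.0}
```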
_Remark 2.3_.: We show that in Theorem 2.2, one can assume that the following conditions are also satisfied: * each element in the image of \(h\) fixes the point \(0\in\mathbb{R}\); and * each element in the image of \(h\) has non-trivial germ at \(0\). These two conditions will be used later in the proof of Corollary 2.7. Let \(h\colon G\to\mathrm{Homeo}_{+}(\mathbb{R})\) be the action constructed in the proof of Theorem 2.2. Identify \(\mathbb{R}\) with the interval \((-\infty,0)\subset\mathbb{R}\) to obtain a new action \(h^{\prime}\colon G\to\mathrm{Homeo}_{+}(\mathbb{R})\). Then \(h^{\prime}\) has the desired properties. Clearly, every element in the image of \(h^{\prime}\) fixes the point \(0\). Moreover, each element in the image of \(h^{\prime}\) has a non-trivial germ at \(0\). This is because for every element \(g\succ e\) we have \(h^{\prime}(g)^{n}(t(g_{0}))\to 0\) as \(n\to+\infty\). If \(X\) is a subset of a group \(G\), denote by \(S(X)\) the semigroup generated by the elements of \(X\). If \(X=\{x_{1},\cdots,x_{n}\}\) is a finite set, we write \(S(X)\) as \(S(x_{1},\cdots,x_{n})\). For a proof of the following criterion for left-orderability, see Clay and Rolfsen [16, Theorem 1.48]. **Theorem 2.4**.: _A group \(G\) is left-orderable if and only if for every finite subset \(\{x_{1},\cdots,x_{n}\}\) of \(G\) which does not contain the identity, there exist \(\epsilon_{i}\in\{-1,1\}\) such that \(S(x_{1}^{\epsilon_{1}},\cdots,x_{n}^{\epsilon_{n}})\) does not contain the identity element of \(G\)._ Intuitively, the above criterion says that if there is no obstruction to defining a left-order on each _finitely generated_ subgroup of the group \(G\), then \(G\) is left-orderable. **Notation 2.5**.: _Let \(S\) be a topological space and \(0\in S\) be a base point. Let \(\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\) be the group of germs of local homeomorphisms of \(\mathbb{R}\times S\) of the form \((h_{s}(x),s)\) at the point \((0,0)\), where each \(h_{s}(x)\) is a local orientation-preserving homeomorphism of \(\mathbb{R}\). In particular, all elements of \(\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\) fix the point \((0,0)\)._ The following is an analog of Navas' theorem ([15, Proposition 3]). The case of \(S=\{0\}\) is the statement of Navas' theorem. The proof is also similar. **Proposition 2.6**.: _Let \((S,0)\) be a pointed topological space. The group \(\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\) is left-orderable._ Proof.: Let \(G=\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\). By Theorem 2.4, it is enough to show that for all non-identity elements \(f_{1},\cdots,f_{k}\) in \(G\), there are \(\epsilon_{1},\cdots,\epsilon_{k}\) in \(\{-1,+1\}\) such that the semigroup generated by the \(f_{i}^{\epsilon_{i}}\) does not contain the identity element. Note that the identity element, denoted by \(e\), is the germ of the identity homeomorphism of \(\mathbb{R}\times S\) at \((0,0)\). Hence we have the following criterion for detecting non-identity elements: \(f\in G\) is not equal to the identity element \(e\) if and only if there exists a sequence \((p_{j},q_{j})\in\mathbb{R}\times S\) of points converging to \((0,0)\) as \(j\to\infty\) such that \(f(p_{j},q_{j})\neq(p_{j},q_{j})\). For each \(1\leq i\leq k\), pick a representative homeomorphism for \(f_{i}\) and denote it by \(f_{i}\) again by abuse of notation. We first define \(\epsilon_{1},\cdots,\epsilon_{k}\), starting with \(\epsilon_{1}\).
Since \(f_{1}\neq e\), there is a sequence \((x_{r},s_{r})\to(0,0)\) such that for every \(r\) \[f_{1}(x_{r},s_{r})\neq(x_{r},s_{r}).\] Let \(\pi_{1}\colon\mathbb{R}\times S\to\mathbb{R}\) be the projection map onto the first factor. After passing to a subsequence of \((x_{r},s_{r})\) we may assume that either 1. for every \(r\) we have \(\pi_{1}\circ f_{1}(x_{r},s_{r})>x_{r}\); or 2. for every \(r\) we have \(\pi_{1}\circ f_{1}(x_{r},s_{r})<x_{r}\). In case (1) define \(\epsilon_{1}=1\), and in case (2) set \(\epsilon_{1}=-1\). Now consider \(f_{2}\). At least one of the following holds: 1. for infinitely many \(r\) we have \(\pi_{1}\circ f_{2}(x_{r},s_{r})>x_{r}\); or 2. for infinitely many \(r\) we have \(\pi_{1}\circ f_{2}(x_{r},s_{r})<x_{r}\); or 3. for all but finitely many \(r\) we have \(\pi_{1}\circ f_{2}(x_{r},s_{r})=x_{r}\). In case i) define \(\epsilon_{2}=1\), in case ii) set \(\epsilon_{2}=-1\), and in case iii) we declare \(\epsilon_{2}\) undefined for the moment. In cases i) and ii), after passing to a subsequence of \((x_{r},s_{r})\) we can assume that \(\pi_{1}\circ f_{2}^{\epsilon_{2}}(x_{r},s_{r})>x_{r}\) for all \(r\). Repeat this procedure for \(f_{3},\cdots,f_{k}\) to define a subset of \(\epsilon_{1},\cdots,\epsilon_{k}\). After reordering, assume that \(\epsilon_{1},\cdots,\epsilon_{\ell}\) are defined for some \(1\leq\ell\leq k\). Hence for every \(1\leq i\leq\ell\) and every \(r\) we have \(\pi_{1}\circ f_{i}^{\epsilon_{i}}(x_{r},s_{r})>x_{r}\), and for every \(i>\ell\) and every \(r\) we have \(\pi_{1}\circ f_{i}(x_{r},s_{r})=x_{r}\). Now start with \(f_{\ell+1}\neq e\) and choose a new sequence \((x_{r}^{\prime},s_{r}^{\prime})\) converging to \((0,0)\) as \(r\to\infty\) such that \(f_{\ell+1}(x_{r}^{\prime},s_{r}^{\prime})\neq(x_{r}^{\prime},s_{r}^{\prime})\) for every \(r\). We then repeat the previous procedure using the sign of \(\pi_{1}\circ f_{\ell+1}(x_{r}^{\prime},s_{r}^{\prime})-x_{r}^{\prime}\) to define a non-empty subset of \(\epsilon_{\ell+1},\cdots,\epsilon_{k}\). Repeating this procedure we define all of \(\epsilon_{1},\cdots,\epsilon_{k}\) using a finite number, say \(N\), of sequences \(\{(x_{r},s_{r})\}_{r=1}^{\infty},\{(x_{r}^{\prime},s_{r}^{\prime})\}_{r=1}^{\infty},\cdots\) with each sequence converging to \((0,0)\). Put the sequences \(\{(x_{r},s_{r})\}_{r=1}^{\infty},\{(x_{r}^{\prime},s_{r}^{\prime})\}_{r=1}^{\infty},\cdots\) in successive rows of a table with \(N\) rows and infinitely many columns, and define the sequence \((p_{j},q_{j})\) by following successive finite diagonals of this table. Let \(w\in S(f_{1}^{\epsilon_{1}},\cdots,f_{k}^{\epsilon_{k}})\) be a non-empty word. We show that there are infinitely many \(j\) such that \(w(p_{j},q_{j})\neq(p_{j},q_{j})\). Consider two cases: 1. At least one of \(f_{1},\cdots,f_{\ell}\) appears in \(w\). Recall that \(\ell\) was the index such that \(\epsilon_{1},\cdots,\epsilon_{\ell}\) were defined using the first sequence \((x_{r},s_{r})\). By construction, for each \(1\leq m\leq\ell\) we have \(\pi_{1}\circ f_{m}^{\epsilon_{m}}(x_{r},s_{r})>x_{r}\) and for every \(m>\ell\) we have \(\pi_{1}\circ f_{m}^{\epsilon_{m}}(x_{r},s_{r})=x_{r}\). Therefore we have \(\pi_{1}\circ w(x_{r},s_{r})>x_{r}\) for every \(r\). Since \((x_{r},s_{r})\) is an infinite subsequence of \((p_{j},q_{j})\) we are done. 2. None of \(f_{1},\cdots,f_{\ell}\) appear in \(w\). In this case, we have a word in \(f_{\ell+1},\cdots,f_{k}\) and we can use the next sequence \((x_{r}^{\prime},s_{r}^{\prime})\) to argue similarly.
**Corollary 2.7**.: _Let \(G\) be a countable group. The following statements are equivalent:_ 1. \(G\) _has a non-trivial homomorphism into_ \(\mathrm{Homeo}_{+}(\mathbb{R})\)_;_ 2. \(G\) _has a non-trivial homomorphism into_ \(\widetilde{\mathrm{Homeo}}_{+}(\mathbb{R},0)\)_;_ 3. \(G\) _has a non-trivial homomorphism into_ \(\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\)_, where_ \(S=\{0\}\cup\{\frac{1}{n}|n\in\mathbb{N}\}\subset\mathbb{R}\)_;_ 4. \(G\) _has a non-trivial homomorphism into_ \(\widetilde{\mathrm{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\) _for some pointed topological space_ \((S,0)\)_._ Proof.: The implications (2) \(\implies\) (3) \(\implies\) (4) are immediate. By Navas' theorem, we have (2) \(\implies\) (1). To see this, let \[\phi\colon G\to\widetilde{\mathrm{Homeo}}_{+}(\mathbb{R},0)\] be a homomorphism with a non-trivial image. Then the image \(\phi(G)\) is countable and left-orderable. Therefore, \(\phi(G)\) is a subgroup of \(\operatorname{Homeo}_{+}(\mathbb{R})\) and so there is an injective homomorphism \(i\colon\phi(G)\hookrightarrow\operatorname{Homeo}_{+}(\mathbb{R})\). The composition \(i\circ\phi\) then gives a non-trivial homomorphism of \(G\) into \(\operatorname{Homeo}_{+}(\mathbb{R})\). The same argument shows that (4) \(\implies\) (1), where we use Proposition 2.6 instead of Navas' theorem. The implication (1) \(\implies\) (2) is also well-known: Let \[\phi\colon G\to\operatorname{Homeo}_{+}(\mathbb{R})\] be a homomorphism with a non-trivial image. The image \(\phi(G)\) is countable and left-orderable. Therefore, by the dynamical realization construction (Theorem 2.2), there is an injective homomorphism \[\psi\colon\phi(G)\to\operatorname{Homeo}_{+}(\mathbb{R})\] such that * each element in the image of \(\psi\) fixes the point \(0\in\mathbb{R}\); and * each element in the image of \(\psi\) has non-trivial germ at \(0\). See Remark 2.3. It follows that \(\psi\) induces a non-trivial homomorphism \(\overline{\psi}\colon\phi(G)\to\operatorname{Homeo}_{+}(\mathbb{R},0)\). Then \(\overline{\psi}\circ\phi\) is a non-trivial homomorphism from \(G\) into \(\operatorname{Homeo}_{+}(\mathbb{R},0)\). This completes the equivalence of items (1)-(4). ## 3. Deformations of foliations and their holonomies We begin by defining the notion of an etale groupoid following the exposition of Haefliger [1]. For further examples see Bridson and Haefliger [1, Chapter III.\(\mathcal{G}\)]. ### Etale groupoids A _groupoid_\((\mathcal{G},T)\) is a small category with a set of objects \(T\) and morphisms \(\mathcal{G}\) such that all elements of \(\mathcal{G}\) are invertible. There are two projections: the _source projection_\(s\colon\mathcal{G}\to T\) and the _target projection_\(t\colon\mathcal{G}\to T\). The composition \(gg^{\prime}\) of two elements \(g\) and \(g^{\prime}\) of \(\mathcal{G}\) is defined if \(s(g)=t(g^{\prime})\). A _topological groupoid_\((\mathcal{G},T)\) is a groupoid such that \(\mathcal{G}\) and \(T\) are topological spaces, with the following properties: * composition and taking inverses are continuous; and * the inclusion \(x\mapsto 1_{x}\) is a homeomorphism \(T\to\mathcal{G}\) onto its image, where \(1_{x}\) is the identity morphism at \(x\). An _etale groupoid_ is a topological groupoid \((\mathcal{G},T)\) such that the source and target projections are etale maps, i.e., local homeomorphisms.
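To make the last definition concrete, here is the standard example of an action groupoid (our addition, for illustration; it does not appear in the source):

```latex
% Standard illustration: the action groupoid of a discrete group.
% Let a discrete group $G$ act on a topological space $T$ by homeomorphisms; set
\[
  \mathcal{G} = G \times T, \qquad s(g,x) = x, \qquad t(g,x) = g\cdot x,
\]
\[
  (h,\, g\cdot x)\,(g,x) = (hg,\, x), \qquad (g,x)^{-1} = (g^{-1},\, g\cdot x).
\]
% Since $G$ is discrete, $s$ and $t$ restrict to homeomorphisms on each sheet
% $\{g\}\times T$, hence they are local homeomorphisms and $(\mathcal{G},T)$ is etale.
```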
Given an etale groupoid \((\mathcal{G},T)\) and an open cover \(\mathcal{U}=\{U_{i}\}_{i\in I}\) of \(T\), the _localization of \((\mathcal{G},T)\) over \(\mathcal{U}\)_ is an etale groupoid \((\mathcal{G}_{\mathcal{U}},T_{\mathcal{U}})\) defined as follows: * \(T_{\mathcal{U}}\) is the disjoint union of the open sets \(U_{i}\); * the elements of \(\mathcal{G}_{\mathcal{U}}\) are the triples \((i,g,j)\) with \(s(g)\in U_{j}\) and \(t(g)\in U_{i}\); * the source and target projections map \((i,g,j)\) to \((j,s(g))\) and \((i,t(g))\) respectively; and * the composition \((i,g,j)(j,h,k)\) is defined as \((i,gh,k)\). Let \(\Gamma\) and \(\Gamma^{\prime}\) be two etale groupoids whose spaces of objects (identified with the units inside the spaces of morphisms) are \(T\) and \(T^{\prime}\) respectively. A _homomorphism_ from \(\Gamma\) to \(\Gamma^{\prime}\) is a continuous functor that in particular induces a local homeomorphism between the morphism spaces. But the notion of a homomorphism is restrictive, since we want to work with etale groupoids up to equivalence. **Definition 3.1** (Equivalence of etale groupoids).: Two etale groupoids \((\mathcal{G},T)\) and \((\mathcal{G}^{\prime},T^{\prime})\) are _equivalent_ if there is an open cover \(\mathcal{U}\) of \(T\) and an open cover \(\mathcal{U}^{\prime}\) of \(T^{\prime}\) such that the localizations \((\mathcal{G}_{\mathcal{U}},T_{\mathcal{U}})\) and \((\mathcal{G}^{\prime}_{\mathcal{U}^{\prime}},T^{\prime}_{\mathcal{U}^{\prime}})\) are isomorphic. In particular, as we shall see, we want to consider the fundamental groupoid of a foliation, whose definition depends on the choice of a complete transversal, but different choices give equivalent groupoids. So we consider the following notion of _morphisms_ between etale groupoids. **Definition 3.2** (Morphism between etale groupoids).: Let \(\Gamma\) and \(\Gamma^{\prime}\) be two etale groupoids whose spaces of units are \(T\) and \(T^{\prime}\) respectively. Let \(\mathcal{U}\) and \(\mathcal{V}\) be two open covers of \(T\), and let \(\phi_{\mathcal{U}}\colon\Gamma_{\mathcal{U}}\to\Gamma^{\prime}\) and \(\phi_{\mathcal{V}}\colon\Gamma_{\mathcal{V}}\to\Gamma^{\prime}\) be two homomorphisms. Denote by \(\mathcal{U}\coprod\mathcal{V}\) the open cover of \(T\) that is the disjoint union of the open covers \(\mathcal{U}\) and \(\mathcal{V}\). We say \(\phi_{\mathcal{U}}\) and \(\phi_{\mathcal{V}}\) represent the same _morphism_\(\phi\colon\Gamma\to\Gamma^{\prime}\) if there exists a homomorphism \(\phi_{\mathcal{U}\coprod\mathcal{V}}\colon\Gamma_{\mathcal{U}\coprod\mathcal{V}}\to\Gamma^{\prime}\) extending \(\phi_{\mathcal{U}}\) and \(\phi_{\mathcal{V}}\). ### Fundamental groupoid of a foliation Bonatti and Haefliger ([1]) developed a theory for studying germs of deformations of a foliation and germs of deformations of its holonomy. See [1] for a comprehensive exposition and various applications. Let \(\mathcal{F}\) be a \(k\)-dimensional \(C^{1,0}\) foliation of a compact \(n\)-dimensional manifold \(M\) and \(T\mathcal{F}\) be the tangent bundle to \(\mathcal{F}\). A _transversal to the foliation_\(\mathcal{F}\) is a (possibly disconnected) manifold \(T\) of dimension \(n-k\) and an immersion \(\tau\colon T\to M\) such that at each point \(x\in T\) we have \[T_{\tau(x)}(M)=\tau_{*}(T_{x}(T))\oplus T_{\tau(x)}(\mathcal{F}).\] By abuse of notation, we refer to the transversal by \(T\). A transversal \(T\) is _complete_ if it intersects all leaves of \(\mathcal{F}\).
**Definition 3.3** (Fundamental groupoid).: Let \(\mathcal{F}\) be a foliation of a manifold \(M\), and \(\tau\colon T\to M\) be a complete transversal. The _fundamental groupoid of \(\mathcal{F}\) for the transversal \(T\)_, denoted by \(\Pi_{\mathcal{F}}(T)\), is an etale groupoid defined as follows: * As a groupoid, its objects are the points in \(T\). A morphism from a point \(p\in T\) to a point \(q\in T\) is the homotopy class of a path \(\gamma\) in a leaf of \(\mathcal{F}\) starting at \(\tau(p)\) and ending at \(\tau(q)\), where all the paths during the homotopy lie in the same leaf of \(\mathcal{F}\). * An open basis for the topology on the set of morphisms is obtained as follows: Let \(h\) be a homeomorphism from an open subset \(U\) of \(T\) to an open subset \(V\) of \(T\) for which there exists a continuous map \(C\colon U\times[0,1]\to M\) such that, for each \(u\in U\), \(C_{u}\colon t\to C(u,t)\) is a path contained in a leaf of \(\mathcal{F}\), starting at \(\tau(u)\) and terminating at \(\tau(h(u))\). Then the collection of homotopy classes in the leaves of \(\mathcal{F}\) of the paths \(C_{u}\), \(u\in U\), is such an open set. Note that the fundamental groupoid of a foliation \(\mathcal{F}\) depends on the choice of a complete transversal \(T\). However, for two complete transversals \(T\) and \(T^{\prime}\) the fundamental groupoids \(\Pi_{\mathcal{F}}(T)\) and \(\Pi_{\mathcal{F}}(T^{\prime})\) are equivalent in the sense of Definition 3.1. There is a natural map that associates to each path \(\gamma\) tangent to \(\mathcal{F}\) its holonomy germ, and it induces a homomorphism from \(\Pi_{\mathcal{F}}(T)\) to the groupoid \(\underline{\operatorname{Homeo}}(T)\) of germs of local homeomorphisms of \(T\) \[\operatorname{H}\colon\Pi_{\mathcal{F}}(T)\to\underline{\operatorname{Homeo}}(T).\] The etale groupoid \(\underline{\operatorname{Homeo}}(T)\) has \(T\) as its space of objects, and the morphisms between two points \(x\) and \(y\) in \(T\) are given by the set of germs at \(x\) of local homeomorphisms sending \(x\) to \(y\). The union of these sets as \(x\) and \(y\) vary is the morphism space, which carries the so-called sheaf topology. An open neighborhood of a germ \(f\) sending \(x\) to \(y\) in the morphism space is described as follows. Let \(F\) be a local homeomorphism of \(T\) from an open set \(U\) containing \(x\) to an open set \(V\) containing \(y\) such that its germ at \(x\) is \(f\). The germs of \(F\) at points in \(U\) give an open neighborhood of \(f\) in the morphism space of \(\underline{\operatorname{Homeo}}(T)\). ### Epstein topology on the space of foliations We want to define the holonomy map for deformations of foliations as an etale map out of the fundamental groupoid. To do so, we shall first recall the Epstein topology on the space of foliations. For a codimension-\(m\) foliation \(\mathcal{F}\) of an \(n\)-dimensional manifold \(M\), we choose a _neighborhood scheme_\(S=(I,\{\phi_{i}\},\{K_{i}\})\) where for each \(i\) in the index set \(I\), the map \(\phi_{i}\colon U_{i}\to\mathbb{R}^{n}\) is a \(C^{r}\)-diffeomorphism from an open subset \(U_{i}\) of \(M\) to an open set in \(\mathbb{R}^{n}\), such that the image of each leaf in \(U_{i}\) is parallel to \(\mathbb{R}^{n-m}\times\{\text{point}\}\), and \(K_{i}\subset U_{i}\) gives a locally finite family of compact sets in \(M\) such that \(\phi_{i}(K_{i})\) is a closed \(n\)-cube with sides parallel to the axes of \(\mathbb{R}^{n}\), and the interiors \(\{\text{Int}(K_{i})\}\) cover \(M\).
Each pair \((\phi\colon U\to\mathbb{R}^{n},K)\) with the same properties is called a _distinguished chart_ for \(\mathcal{F}\). Given a neighborhood scheme \(S\), an index \(i\in I\), and a positive real number \(\delta\), we define the neighborhood \(N(i,\delta)\) of \(\mathcal{F}\) in the \(C^{r}\)-topology to be the set of those foliations \(\mathcal{F}^{\prime}\) for which there is a distinguished chart \((\phi^{\prime},K)\) for \(\mathcal{F}^{\prime}\) such that \(K_{i}\subset K\) and \(|\phi^{\prime}\circ\phi_{i}^{-1}-\text{Id}|_{r}\leq\delta\) on \(\phi_{i}(K_{i})\), where \(|\cdot|_{r}\) is the \(C^{r}\)-norm on functions on \(\mathbb{R}^{n}\). Then a basis for Epstein's \(C^{r}\) topology on the space of foliations is given by finite intersections of such neighborhoods. Let \(\operatorname{Fol}_{m}^{r}(M)\) be the space of codimension-\(m\)\(C^{r}\) foliations on \(M\) endowed with the Epstein \(C^{r}\) topology, as defined in the introduction. Epstein ([10]) proved that this topology on the space of foliations satisfies the following two axioms: The first axiom states that the group of \(C^{r}\) homeomorphisms of \(M\) acts continuously on \(\operatorname{Fol}_{m}^{r}(M)\). Intuitively, the second axiom states that every foliation \(\mathcal{F}^{\prime}\) sufficiently close to \(\mathcal{F}\) has holonomy defined and sufficiently close to that of \(\mathcal{F}\). More precisely, let \(D(r)\) be the open ball of radius \(r\) in \(\mathbb{R}^{m}\) centered at the origin. Let \(h\colon I\times D(1)\to M\) be a \(C^{r}\) map such that \(h|\{t\}\times D(1)\) is an embedding transverse to \(\mathcal{F}\) for each \(t\in I\), and such that for each \(x\in D(1)\), \(h(I\times\{x\})\) lies in a single leaf of \(\mathcal{F}\). We require that if \(\mathcal{F}^{\prime}\) is sufficiently close to \(\mathcal{F}\), there is a \(C^{r}\) map \(k\colon I\times D(\frac{1}{2})\to M\) such that * \(k|0\times D(\frac{1}{2})=h|0\times D(\frac{1}{2})\); * for each \(x\in D(\frac{1}{2})\), \(k|I\times\{x\}\) lies on a single leaf of \(\mathcal{F}^{\prime}\); * \(k|t\times D(\frac{1}{2})\subset h|t\times D(\frac{3}{4})\), and \(k|t\times D(\frac{1}{2})\) is an embedding for each \(t\); * for each \(t\), \(\mathcal{F}^{\prime}\) is transverse to \(h|t\times D(\frac{3}{4})\); * \(k\) is \(C^{r}\)-near to \(h|I\times D(\frac{1}{2})\). We only need the second axiom in this article. Schweitzer ([11, Lemma 1.1 and Proposition 4.1]) gave a simplified proof of a version of the second axiom for the \(C^{1}\) case; it is straightforward to see that his proof works for the \(C^{0}\) case as well. ### Deformation of a foliation Here we recollect part of the main theorem of Bonatti-Haefliger ([1]) for the deformations of \(C^{r}\)-foliations for \(r>0\), which also holds in the \(C^{0}\)-case. Let \((S,0)\) be a locally compact topological space with a base point \(0\in S\). We think of \(S\) as the parameter space for deforming either the foliation or its holonomy; here the initial foliation \(\mathcal{F}\) corresponds to the base point \(0\in S\). Two useful examples to keep in mind are those of \(S\) being \(\{0\}\cup\{\frac{1}{n}|n\in\mathbb{N}\}\subset[0,1]\) with the subspace topology and the base point \(0\), or \(S\) being the interval \((-1,1)\) with the base point \(0\). We want to consider the _germ of a foliation \(\mathcal{F}^{S}\) of \(M\times S\) around \(M\times\{0\}\)_. We shall first recall some definitions to make sense of foliations on \(M\times S\) when \(S\) is only a topological space.
Let \(\operatorname{Homeo}^{S}(T)\) be the pseudogroup of local homeomorphisms of \(T\times S\) of the form \((h_{s}(x),s)\), where \(h_{s}\) is a local homeomorphism of \(T\) varying continuously with \(s\). Denote by \(\operatorname{\widetilde{Homeo}}^{(S,0)}(T)\) (respectively \(\operatorname{\widetilde{Homeo}}^{S}(T)\)) the groupoid of germs of elements of \(\operatorname{Homeo}^{S}(T)\) at points of \(T\times\{0\}\) (respectively \(T\times S\)). There is a natural projection \[\pi\colon\operatorname{\widetilde{Homeo}}^{(S,0)}(T)\to\operatorname{\widetilde{Homeo}}(T).\] A _foliation \(\mathcal{F}^{S}\) parametrized by \(S\)_ on an open set \(U\) in \(M\times S\) is given by the following cocycle data: * Let \(\{U_{i}\}_{i\in I}\) be an open cover of \(U\) in \(M\times S\), and let \(U_{i}^{s}\) be the intersection \(U_{i}\cap(M\times\{s\})\). * For each \(i\), we have a map \(f_{i}\) from \(U_{i}\) to an open set in \(\mathbb{R}^{n}\times S\), of the form \(f_{i}(x,s)=(f_{i}^{s}(x),s)\), where \(f_{i}^{s}\) is a submersion from \(U_{i}^{s}\) into \(\mathbb{R}^{n}\) which varies continuously with \(s\). * For each \(i\) and \(j\), we have a continuous map \(g_{ij}\colon U_{i}\cap U_{j}\to\operatorname{\widetilde{Homeo}}^{S}(\mathbb{R}^{n})\) such that for all \((x,s)\in U_{i}\cap U_{j}\), the maps \(g_{ij}(x,s)\circ f_{j}\) and \(f_{i}\) have the same germ at \((x,s)\). Two such cocycles \((\mathcal{U},f_{i},g_{ij}),(\mathcal{V},h_{i},k_{ij})\) define the same foliation \(\mathcal{F}^{S}\) on \(U\) if one can simultaneously extend these two cocycles to a cocycle for the open covering \(\mathcal{U}\coprod\mathcal{V}\) of \(U\). Note that the foliation \(\mathcal{F}^{S}\) defined in this way on \(U\) gives a foliation \(\mathcal{F}^{s}\) on \(U^{s}=U\cap(M\times\{s\})\) which varies continuously with \(s\). **Definition 3.4** (Germ of deformation of a foliation).: Let \(\mathcal{F}\) be a foliation of a compact manifold \(M\). A _local deformation parametrized by \((S,0)\) of \(\mathcal{F}\)_ is a foliation \(\mathcal{F}^{S}\) parametrized by \(S\) of an open neighborhood \(U\) of \(M\times\{0\}\) in \(M\times S\) such that \(\mathcal{F}^{0}\) is the foliation \(\mathcal{F}\) on \(M\times\{0\}\). Two such local deformations of \(\mathcal{F}\) have the same _germ_ if they agree on a neighborhood of \(M\times\{0\}\) in \(M\times S\). A _germ of deformation parametrized by \((S,0)\) of \(\mathcal{F}\)_ is the germ of a local deformation parametrized by \((S,0)\) of \(\mathcal{F}\). Two germs of foliations \(\mathcal{F}^{S}_{1}\) and \(\mathcal{F}^{S}_{2}\) are defined to be _equivalent_ if there exist open neighborhoods \(U_{1}\) and \(U_{2}\) of \(M\times\{0\}\) in \(M\times S\), and a homeomorphism \(\phi\colon U_{1}\to U_{2}\) that lies in \(\operatorname{Homeo}^{S}(M)\), such that the restriction of \(\phi\) to \(M\times\{0\}\) is the identity and \(\phi\) maps \(\mathcal{F}^{S}_{1}|_{U_{1}}\) to \(\mathcal{F}^{S}_{2}|_{U_{2}}\). Bonatti and Haefliger give an algebraic analogue of the germ of a deformation of a foliation by defining the notion of a deformation of holonomy. **Definition 3.5** (Germ of deformation of holonomy for the transversal \(T\)).: Let \(\mathcal{F}\) be a foliation and \(T\) be a complete transversal for \(\mathcal{F}\).
We define a _germ of deformation (parametrized by \((S,0)\)) of the holonomy of \(\mathcal{F}\) for the transversal \(T\)_ to be a homomorphism \(\operatorname{H}^{S}\) from \(\Pi_{\mathcal{F}}(T)\) into \(\underline{\mathrm{Homeo}}^{(S,0)}(T)\) that lifts the holonomy homomorphism \(\operatorname{H}\) through the natural projection \(\pi\), i.e. that makes the corresponding triangle commutative: \[\pi\circ\operatorname{H}^{S}=\operatorname{H}.\] **Definition 3.6** (Germ of deformation of holonomy).: Define a _germ of deformation (parametrized by \((S,0)\)) of the holonomy of \(\mathcal{F}\)_ as a morphism from \(\Pi_{\mathcal{F}}(T)\) to \(\underline{\mathrm{Homeo}}^{(S,0)}(T)\). If \((\mathcal{F}^{s})_{s\in S}\) is a deformation of \(\mathcal{F}\) parametrized by \(S\), then the family \((\mathcal{F}^{s})_{s\in S}\) can be seen as a foliation \(\mathcal{F}^{S}\) parametrized by \(S\) of \(M\times S\), of the same dimension as that of \(\mathcal{F}\), such that each \(M\times\{s\}\) is saturated by leaves. In this case \(T\times S\) is a transversal for the foliation \(\mathcal{F}^{S}\) of \(M\times S\) in a neighborhood of \(T\times\{0\}\). It follows from Epstein's second axiom that there is a homomorphism \[\mathrm{H}^{S}\colon\Pi_{\mathcal{F}}(T)\to\underline{\mathrm{Homeo}}^{(S,0)}(T)\] that to each path \(c\) tangent to \(\mathcal{F}\) and with endpoints on \(T\) assigns the germ of the holonomy of \(\mathcal{F}^{S}\) along \(c\times\{0\}\) for the transversal \(T\times S\). In other words, a germ of deformation of a foliation defines a germ of deformation of its holonomy. In the \(C^{r}\)-category for \(r>0\), one can similarly define the germ of deformations of a \(C^{r}\) foliation \(\mathcal{F}\) and the germ of deformation of the holonomy of \(\mathcal{F}\) parametrized by \((S,0)\). Similarly, one can define the map \(\mathrm{H}^{S}\) in this category, associating to a germ of deformation of \(\mathcal{F}\) a germ of deformation of the holonomy of \(\mathcal{F}\). A fundamental result of Bonatti and Haefliger ([1]) states that in the \(C^{r}\)-category for \(r>0\), conversely, one can also associate to a germ of deformation of the holonomy of a \(C^{r}\) foliation \(\mathcal{F}\) a germ of deformation of \(\mathcal{F}\), and the following correspondence holds. **Theorem 3.7** (Bonatti-Haefliger).: _Let \(\mathcal{F}\) be a \(C^{r}\) foliation (\(r\geq 1\)) of a compact smooth manifold \(M\), and \((S,0)\) be a pointed locally compact topological space. The natural map \(\mathrm{H}^{S}\), as above, defines a bijection between the set of equivalence classes of germs of deformations parametrized by \((S,0)\) of the foliation \(\mathcal{F}\) and the set of germs of deformations parametrized by \((S,0)\) of the holonomy map \(\mathrm{H}\)._ Bonatti and Haefliger prove surjectivity by utilizing the notion of microfoliation and the openness of the transversality condition for \(C^{r}\)-foliations when \(r\geq 1\). Given that topological transversality is not an open condition, it would be interesting to see under what condition the \(C^{0}\)-case of Bonatti-Haefliger's theorem holds. For \(C^{1,0}\) foliations, we show that Bonatti-Haefliger's map is an injection. This will be used in the proof of Theorem 1.1. **Theorem 3.8**.: _Let \(\mathcal{F}\) be a \(C^{1,0}\) foliation of a compact smooth manifold \(M\), and \((S,0)\) be a pointed locally compact topological space.
The natural map \(\mathrm{H}^{S}\), as above, defines an injective map between the set of equivalence classes of germs of deformations parametrized by \((S,0)\) of the foliation \(\mathcal{F}\) and the set of germs of deformations parametrized by \((S,0)\) of the holonomy map \(\mathrm{H}\)._ Proof.: The proof of injectivity in Bonatti [1] is given for \(C^{r}\) foliations where \(r\geq 1\), but as we will see below, it works with some modifications for \(C^{1,0}\) foliations as well. Choose a complete transversal \(T\) such that \(T\) is an embedded submanifold with a trivial normal bundle. For example, \(T\) can be a union of disjoint \(k\)-dimensional transverse disks, where \(k\) is the codimension of \(\mathcal{F}\). _Step 1_: Let \(\mathcal{F}_{1}^{S}\) and \(\mathcal{F}_{2}^{S}\) be two germs of deformations of \(\mathcal{F}\). Denote by \(\mathrm{H}_{1}^{S}\) and \(\mathrm{H}_{2}^{S}\) the corresponding germs of deformations of holonomy. We must prove that if \(\mathrm{H}_{1}^{S}\) is equivalent (as a morphism between etale groupoids) to \(\mathrm{H}_{2}^{S}\), then the germs \(\mathcal{F}_{1}^{S}\) and \(\mathcal{F}_{2}^{S}\) of deformations of \(\mathcal{F}\) are equivalent. First, we show that one can assume that \(\mathrm{H}_{1}^{S}\) and \(\mathrm{H}_{2}^{S}\) are equal (rather than equivalent). For this, we show that there is a deformation \(\mathcal{F}_{3}^{S}\) whose germ is equivalent to that of \(\mathcal{F}_{2}^{S}\) and such that it induces the same germ of deformation of holonomy as the one induced by \(\mathcal{F}_{1}^{S}\). Let \(T\coprod T:=\{1\}\times T\cup\{2\}\times T\) be the disjoint union of two copies of \(T\). Then \(T\coprod T\) is a complete transversal for \(\mathcal{F}\). For \(i=1,2\), let \(\varphi_{i}\) be the inclusion \[\varphi_{i}\colon\Pi_{T}(\mathcal{F})\hookrightarrow\Pi_{T\coprod T}(\mathcal{F})\] induced by identifying \(T\) with \(\{i\}\times T\). Since \(\mathrm{H}_{1}^{S}\) and \(\mathrm{H}_{2}^{S}\) are equivalent morphisms (Definition 3.2), the following holds: there is a homomorphism \[\mathrm{H}_{3}^{S}\colon\Pi_{T\coprod T}(\mathcal{F})\to\underline{\mathrm{Homeo}}^{(S,0)}(T)\] such that \(\mathrm{H}_{3}^{S}\circ\varphi_{i}=\mathrm{H}_{i}^{S}\) for \(i=1,2\); that is, \(\mathrm{H}_{3}^{S}\) makes the corresponding diagram commutative. Let \(\tau\colon T\to M\) be the embedding of the transversal. For each \(x\in T\) denote by \(x_{1,2}\) the constant path in \(\Pi_{T\coprod T}(\mathcal{F})\) with source \((1,x)\) and target \((2,x)\). Let \(x_{2,1}=(x_{1,2})^{-1}\). For each \(\gamma\in\Pi_{\mathcal{F}}(T)\) with source \(x\in T\) and target \(y\in T\) we have \[\varphi_{2}(\gamma)=x_{2,1}\cdot\varphi_{1}(\gamma)\cdot y_{1,2},\] where concatenation of paths is written from left to right. Therefore for every \(s\in S\) close to \(0\) we have \[\mathrm{H}_{2}^{s}(\gamma) =\mathrm{H}_{3}^{s}(\varphi_{2}(\gamma))=\mathrm{H}_{3}^{s}(x_{2,1}\cdot\varphi_{1}(\gamma)\cdot y_{1,2})\] \[=\mathrm{H}_{3}^{s}(y_{1,2})\circ\mathrm{H}_{3}^{s}(\varphi_{1}(\gamma))\circ\mathrm{H}_{3}^{s}(x_{2,1})\] \[=\mathrm{H}_{3}^{s}(y_{1,2})\circ\mathrm{H}_{1}^{s}(\gamma)\circ\mathrm{H}_{3}^{s}(x_{2,1}).\] Note that the map \(x\to x_{1,2}\) gives a continuous embedding of \(T\) into \(\Pi_{T\coprod T}(\mathcal{F})\), and \(\mathrm{H}_{3}^{S}(x_{1,2})\) is a germ of a homeomorphism in \(\mathrm{Homeo}^{(S,0)}(T)\) at \((x,0)\in T\times S\) that fixes \((x,0)\).
Since \(\mathrm{H}_{3}^{S}\) is an etale map between etale groupoids, there exists an open neighborhood \(U_{x}\) in \(T\times S\) around each \((x,0)\in T\times\{0\}\) and a local homeomorphism \(\psi_{U_{x}}\) in \(\mathrm{Homeo}^{(S,0)}(T)\) such that its germ at every \(z\in(T\times\{0\})\cap U_{x}\) is the germ \(\mathrm{H}_{3}^{S}(z_{1,2})\). In particular, the restriction of \(\psi_{U_{x}}\) to \((T\times\{0\})\cap U_{x}\) is the identity. Given that \(T\) is compact, finitely many such neighborhoods \(U_{x}\) cover \(T\times\{0\}\), and so we can choose a small neighborhood \(U\) of \(T\times\{0\}\) in \(T\times S\) and a homeomorphism \(\psi\) of \(U\) into an open subset of \(T\times S\) such that for each \(x\in T\) the germ of \(\psi\) at \((x,0)\) is equal to \(\mathrm{H}_{3}^{S}(x_{1,2})\). Then for each \(\gamma\in\Pi_{\mathcal{F}}(T)\) we have \[\mathrm{H}_{2}^{S}(\gamma)=\psi\circ\mathrm{H}_{1}^{S}(\gamma)\circ\psi^{-1}.\] _Claim_: There exists a neighborhood \(\mathcal{V}\) of \(M\times\{0\}\) in \(M\times S\) and a homeomorphism \(\Psi\) of \(\mathcal{V}\) into an open subset of \(M\times S\) such that the restriction of \(\Psi\) to \(M\times\{0\}\) is the identity and such that there exists a neighborhood \(V\subset U\) of \(T\times\{0\}\) in \(T\times S\) where \(\Psi(V)\subset T\times S\) and the restriction of \(\Psi\) to \(V\) coincides with \(\psi\). _Proof of Claim_: This is where the proof differs from that of [1]. By the Edwards-Kirby theorem, the group \(\mathrm{Homeo}(T)\) is locally contractible. Moreover, the point whose neighborhood is being contracted can be left fixed during the contraction. See [1, Corollary 1.1] and the remark following it. Let \(\mathcal{U}\) be a small neighborhood of the identity \(\mathrm{Id}\) in \(\mathrm{Homeo}(T)\) and \(G\colon\mathcal{U}\times[0,1]\to\mathcal{U}\) be a contraction such that \[G(\phi,0)=\phi,\qquad G(\phi,1)=\mathrm{Id}\qquad\forall\phi\in\mathcal{U},\] \[G(\mathrm{Id},t)=\mathrm{Id}\qquad\forall t\in[0,1].\] Since \(T\) is embedded, it has a closed tubular neighborhood \(\mathcal{N}\). By our assumption on \(T\), we can identify \(\mathcal{N}\) with the trivial bundle \(T\times\mathbb{D}^{k}\) over \(T\), where \(\mathbb{D}^{k}\) is the unit disk in \(\mathbb{R}^{k}\). For each \(s\) close to \(0\) in \(S\), let \(\psi_{s}\in\mathrm{Homeo}(T)\) be the restriction of \(\psi\) to \(T\times\{s\}\). We extend \(\psi_{s}\) to a homeomorphism \(\Psi_{s}\) of \(M\) as follows. Define the restriction of \(\Psi_{s}\) to \(M-\mathcal{N}\) to be the identity map. Denote the Euclidean norm of a point \(z\in\mathbb{D}^{k}\) by \(|z|\). For each \((x,z)\in T\times\mathbb{D}^{k}\) define \[\Psi_{s}(x,z)=(G(\psi_{s},|z|)(x),z).\] Then we have \[|z|=1\implies\Psi_{s}(x,z)=(x,z),\] \[z=0\implies\Psi_{s}(x,0)=(\psi_{s}(x),0).\] Therefore, we can define the restriction of \(\Psi\) to \(M\times\{s\}\) to be equal to \(\Psi_{s}\). This proves the _Claim_. Let \(\mathcal{F}_{3}^{S}=\Psi(\mathcal{F}_{2}^{S})\). This is a deformation of \(\mathcal{F}\) with \(\mathcal{F}_{3}^{0}=\mathcal{F}\), since the restriction of \(\Psi\) to \(M\times\{0\}\) is the identity. The deformation of holonomy of \(\mathcal{F}\) for the transversal \(T\) induced by \(\mathcal{F}_{3}^{S}\) is exactly \(\mathrm{H}_{1}^{S}\).
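As an aside, the extension formula \(\Psi_{s}(x,z)=(G(\psi_{s},|z|)(x),z)\) used in the _Claim_ can be visualized in the simplest case. The sketch below is our own illustration, under two simplifying assumptions: \(T=S^{1}\), and the naive straight-line contraction stands in for the Edwards-Kirby contraction \(G\) (legitimate for maps \(C^{0}\)-close to the identity, where convex combinations of monotone lifts remain monotone).

```python
import numpy as np

# Illustration only: extend a circle homeomorphism psi, C^0-close to the identity,
# over the annulus S^1 x [0,1] so that it equals psi on the core circle (z = 0)
# and the identity on the boundary (z = 1), via Psi(x, z) = (G(psi, |z|)(x), z)
# with the straight-line contraction G(phi, t) = (1 - t)*phi + t*Id.
def extend_to_annulus(psi, x, z):
    """psi: a lift R -> R of a circle map close to the identity, psi(x+1) = psi(x)+1.
    (x, z): x mod 1 is the circle coordinate, z in [0, 1] the collar coordinate."""
    t = abs(z)
    return (((1 - t) * psi(x) + t * x) % 1.0, z)

psi = lambda x: x + 0.05 * np.sin(2 * np.pi * x)   # a small perturbation of the identity
print(extend_to_annulus(psi, 0.3, 0.0))   # on the core: the map acts as psi
print(extend_to_annulus(psi, 0.3, 1.0))   # on the boundary: the identity
```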
Then \(\mathcal{F}_{3}^{S}\) is equivalent to the germ \(\mathcal{F}_{2}^{S}\) of deformation of \(\mathcal{F}\), with the homeomorphism \(\Psi\) inducing the equivalence, and the germ of deformation of holonomy of \(\mathcal{F}\) induced by \(\mathcal{F}_{3}^{S}\) is equal to that of \(\mathcal{F}_{1}^{S}\). This completes the construction of \(\mathcal{F}_{3}^{S}\). Therefore we can assume that \[\mathrm{H}_{1}^{S}=\mathrm{H}_{2}^{S}\colon\Pi_{T}(\mathcal{F})\to\mathrm{Homeo}^{(S,0)}(T).\] _Step 2_: Next we define a field of disks transverse to \(\mathcal{F}\). Let \(q\colon N\to M\) be the normal bundle of the foliation \(\mathcal{F}\), and \(\sigma\colon M\to N\) be the zero section. Consider a neighborhood \(U\) of the zero section \(\sigma(M)\) and a submersion \(\phi\colon U\to M\) coinciding with \(q\) on \(M\) (that is, \(\phi\circ\sigma=\mathrm{id}_{M}\)) and such that each fiber \(q^{-1}(x)\cap U\) is an embedded transversal to the foliation \(\mathcal{F}\). For example, fixing a Riemannian metric on \(M\), the map \(\phi\) can be defined via the exponential map. By choosing the neighborhood \(U\) small enough, the fibers \(D_{x}=q^{-1}(x)\cap U\) are \(k\)-dimensional disks transverse to \(\mathcal{F}\), where \(k\) is the codimension of the foliation \(\mathcal{F}\). The family \(\{D_{x}\}_{x\in M}\) is a _field of disks transverse to \(\mathcal{F}\)_. We may assume that for each \(x\in T\), the disk \(D_{x}\) is included in \(T\). For \(s\) close to \(0\), the foliations \(\mathcal{F}_{1}^{s}\) and \(\mathcal{F}_{2}^{s}\) are transverse to the field of disks \(\{D_{x}\}_{x\in M}\). _Step 3_: Choose a Riemannian metric on \(M\). By compactness of \(M\) and local compactness of \(S\), there exist \(l>0\) and a neighborhood \(S_{0}\) of \(0\) in \(S\) such that for every \(s\in S_{0}\) the following two properties hold: 1. For each \(x\in M\) there exists a path \(\gamma_{x}^{s}\) tangent to the foliation \(\mathcal{F}_{1}^{s}\) such that \(\gamma_{x}^{s}(0)\in T\) and \(\gamma_{x}^{s}(1)=x\), and the length \(\ell(\gamma_{x}^{s})\) is at most \(l\). 2. Each path \(\gamma\) tangent to \(\mathcal{F}_{1}^{s}\) and of length \(\ell(\gamma)\leq 2l\) projects to a path \(\overline{\gamma}\) tangent to \(\mathcal{F}_{2}^{s}\) such that \(\overline{\gamma}(0)=\gamma(0)\) and \(\overline{\gamma}(t)\in D_{\gamma(t)}\) for every \(t\in[0,1]\). Here we use Epstein's second axiom. _Claim_: There exists a neighborhood \(S_{1}\subset S_{0}\) of \(0\) in \(S\) such that for each \(s\in S_{1}\) the following holds: Let \(\gamma_{1}\) and \(\gamma_{2}\) be two paths tangent to \(\mathcal{F}_{1}^{s}\) such that \(\gamma_{1}(0)\) and \(\gamma_{2}(0)\) lie on \(T\), and \(\gamma_{1}(1)=\gamma_{2}(1)\), and such that the lengths \(\ell(\gamma_{1})\) and \(\ell(\gamma_{2})\) are at most \(l\). Let \(\overline{\gamma}_{1}\) and \(\overline{\gamma}_{2}\) be the projections of \(\gamma_{1}\) and \(\gamma_{2}\) to \(\mathcal{F}_{2}^{s}\) along the field of disks \(\{D_{x}\}_{x\in M}\). Then we have \(\overline{\gamma}_{1}(1)=\overline{\gamma}_{2}(1)\). _Proof of Claim_: Assume the contrary, and suppose that there exists a sequence of \(s_{i}\in S\) converging to \(0\) such that for each \(i\) there are paths \(\gamma_{1}^{i}\) and \(\gamma_{2}^{i}\) tangent to \(\mathcal{F}_{1}^{s_{i}}\), satisfying the hypothesis of the claim, and such that \(\overline{\gamma}_{1}^{i}(1)\neq\overline{\gamma}_{2}^{i}(1)\).
Let \(\sigma_{i}=\gamma_{1}^{i}(\gamma_{2}^{i})^{-1}\), where concatenation is from left to right. Then \(\sigma_{i}\) is a path tangent to \(\mathcal{F}_{1}^{s_{i}}\) and of length at most \(2l\). Since \(s_{i}\in S_{0}\), we can project \(\sigma_{i}\) to a path \(\overline{\sigma}_{i}\) tangent to \(\mathcal{F}_{2}^{s_{i}}\) and satisfying \(\overline{\sigma}_{i}(0)=\sigma_{i}(0)\) and \(\overline{\sigma}_{i}(1)\in D_{\sigma_{i}(1)}\subset T\). We first argue that \(\overline{\sigma}_{i}(1)\neq\sigma_{i}(1)\): otherwise we would have \(\overline{\gamma}_{1}^{i}(1)=\overline{\gamma}_{2}^{i}(1)\) (and \(\overline{\sigma}_{i}=\overline{\gamma}_{1}^{i}(\overline{\gamma}_{2}^{i})^{-1}\)), which is not the case by hypothesis. Hence we have established that \(\overline{\sigma}_{i}(1)\neq\sigma_{i}(1)\). Since the \(\sigma_{i}\) have lengths bounded by \(2l\) and have endpoints on the compact transversal \(T\), and \(S\) is locally compact, after passing to a subsequence we may assume that the \(\sigma_{i}\) converge to a path \(\sigma\) tangent to \(\mathcal{F}_{1}^{0}=\mathcal{F}\), with endpoints on \(T\), and of length \(\ell(\sigma)\leq 2l\). We show that \(\mathrm{H}_{1}^{S}(\sigma)\neq\mathrm{H}_{2}^{S}(\sigma)\) to arrive at a contradiction. This is because for any realizations \(h_{1}\) and \(h_{2}\) of the germs \(\mathrm{H}_{1}^{S}\) and \(\mathrm{H}_{2}^{S}\), and for \(i\) large enough, we have \[h_{1}(\sigma_{i}(0))=\sigma_{i}(1)\neq\overline{\sigma}_{i}(1)=h_{2}(\sigma_{i}(0)).\] The contradiction \(\mathrm{H}_{1}^{S}\neq\mathrm{H}_{2}^{S}\) proves the _Claim_. _Step 4_: For each \(x\in M\) and each \(s\in S_{1}\), there exists a path \(\gamma_{x}^{s}\) tangent to \(\mathcal{F}_{1}^{s}\), starting on \(T\) and terminating at \(x\), and with length \(\ell(\gamma_{x}^{s})\leq l\). By the above _Claim_, the endpoint of the projection \(\overline{\gamma}_{x}^{s}\) of \(\gamma_{x}^{s}\) to \(\mathcal{F}_{2}^{s}\) does not depend on the choice of \(\gamma_{x}^{s}\). Define \(\psi^{s}(x)=\overline{\gamma}_{x}^{s}(1)\). We show that for \(s\) close to \(0\), the map \(\psi^{s}\) is a homeomorphism of \(M\). Note that \(\psi^{s}\) is a local homeomorphism: if \(\psi^{s}(x)\) is defined via the path \(\gamma_{x}^{s}\) tangent to \(\mathcal{F}_{1}^{s}\), then we can use paths tangent to \(\mathcal{F}_{1}^{s}\) and almost parallel to \(\gamma_{x}^{s}\) to define \(\psi^{s}(y)\) for \(y\) close to \(x\). Since \(\psi^{s}\) is continuous and \(M\) is compact, the image of \(\psi^{s}\) is compact and hence a closed subset of \(M\). On the other hand, the image of \(\psi^{s}\) is open, since \(\psi^{s}\) is a local homeomorphism. Therefore, the image of \(\psi^{s}\) is all of \(M\), as \(M\) is connected. Now, \(\psi^{s}\) being a local homeomorphism and also surjective, it is a covering map. It is enough to argue that the degree of \(\psi^{s}\) is equal to one. Note that the space of continuous maps \(C(M,M)\) is a Banach manifold, and so it is locally contractible. Since \(\psi^{0}\) is the identity map, \(\psi^{s}\) lies in a small neighborhood of \(\psi^{0}\), and the degree is invariant under homotopy, it follows that the degree of \(\psi^{s}\) is equal to one. This shows that for \(s\) close to \(0\), the map \(\psi^{s}\) is a homeomorphism of \(M\). Moreover, \(\psi^{s}\) varies continuously with \(s\) (by Epstein's second axiom), the map \(\psi^{0}\) is the identity, and \(\psi^{s}(\mathcal{F}_{1}^{s})=\mathcal{F}_{2}^{s}\).
Therefore, the germs of \(\mathcal{F}_{1}^{S}\) and \(\mathcal{F}_{2}^{S}\) are equivalent. ## 4. Proof of Theorem 1.1 Proof of Theorem 1.1.: Let \(\mathcal{F}\) be a codimension-one transversely oriented \(C^{1,0}\) foliation of a compact manifold \(M\), and let \(L\) be a compact leaf of \(\mathcal{F}\) such that \(\pi_{1}(L)\) is amenable and \(H^{1}(L;\mathbb{R})=0\). Here we have suppressed the base point from \(\pi_{1}(L)\). By Witte-Morris and Navas, we know that \(L\) is globally stable, and so \((M,\mathcal{F})\) is either the product foliation \(L\times[0,1]\) or the foliation induced by the fibers of a fibration over \(S^{1}\) with fiber \(L\). Let \(T\) be a complete transversal for \(\mathcal{F}\) which is either 1. \(\{\operatorname{point}\}\times[0,1]\) if \((M,\mathcal{F})\) is the product foliation \(L\times[0,1]\); or 2. a circle that intersects every leaf once if \((M,\mathcal{F})\) is a fibration over \(S^{1}\) with fiber \(L\). We show that if \(\mathcal{F}_{n}\) is a sequence of transversely oriented codimension-one \(C^{1,0}\) foliations that \(C^{0}\)-approximate \(\mathcal{F}\), then \(\mathcal{F}_{n}\) is topologically equivalent to \(\mathcal{F}\) for large \(n\). Let \(S\) be the topological space \(\{0\}\cup\{\frac{1}{n}|n\in\mathbb{N}\}\subset\mathbb{R}\) with the subspace topology. Let \(\mathcal{F}^{S}\) be the foliation of \(M\times S\) such that the restriction of \(\mathcal{F}^{S}\) to \(M\times\{\frac{1}{n}\}\) is \(\mathcal{F}_{n}\), and the restriction of \(\mathcal{F}^{S}\) to \(M\times\{0\}\) is \(\mathcal{F}\). Then \(\mathcal{F}^{S}\) defines a germ of deformation of \(\mathcal{F}\). Denote by \(\Pi_{T}(\mathcal{F})\) the fundamental groupoid of \(\mathcal{F}\) for the transversal \(T\). Let \(\operatorname{Homeo}^{(S,0)}_{+}(T)\) be the pseudogroup of local homeomorphisms of \(T\times S\) of the form \((h_{s}(x),s)\) where \(h_{s}\) is a local orientation-preserving homeomorphism of \(T\) varying continuously with \(s\). Denote by \(\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(T)\) the etale groupoid of germs of elements of \(\operatorname{Homeo}^{(S,0)}_{+}(T)\) at points of \(T\times\{0\}\). Let \[\operatorname{H}^{S}\colon\Pi_{T}(\mathcal{F})\to\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(T)\] be the germ of deformation of the holonomy induced by the germ of deformation \(\mathcal{F}^{S}\) of the foliation \(\mathcal{F}\). By Theorem 3.8, it is enough to show that \(\operatorname{H}^{S}\) is equivalent (as a morphism of etale groupoids) to the germ of deformation of holonomy induced by the germ of the trivial (i.e. constant) deformation \(\mathcal{F}^{S}_{c}\) of \(\mathcal{F}\) \[\operatorname{H}^{S}_{c}\colon\Pi_{T}(\mathcal{F})\to\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(T),\] where the restriction of \(\mathcal{F}^{S}_{c}\) to each \(M\times\{s\}\) is \(\mathcal{F}\). Note that by 1)-2) above, for every \(t\in T\) there is a copy of the fundamental group \(\pi_{1}(L)\) in \(\Pi_{T}(\mathcal{F})\), obtained by restricting to those paths that start and end at \(t\) and remain inside a leaf; denote this copy of \(\pi_{1}(L)\) by \(\pi_{1}(L\times t)\). Since \(\operatorname{H}^{S}\) is a homomorphism of etale groupoids, and \(\pi_{1}(L\times t)\) is a group, the image of \(\pi_{1}(L\times t)\) under \(\operatorname{H}^{S}\) is a group. Moreover, this image \(\operatorname{H}^{S}(\pi_{1}(L\times t))\) is an amenable group, since it is a quotient of an amenable group.
On the other hand, we have \(\pi\circ\operatorname{H}^{S}=\operatorname{H}\), where \(\operatorname{H}\) is the holonomy homomorphism for \(\mathcal{F}\) and \(\pi\) is the natural projection from \(\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(T)\) to \(\widetilde{\operatorname{Homeo}}_{+}(T)\); that is, the corresponding diagram of homomorphisms is commutative. Therefore, the image \(\operatorname{H}^{S}(\pi_{1}(L\times t))\) lies in the subgroup \(G_{t}\) of \(\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(T)\) consisting of germs of homeomorphisms of \(T\times S\) at the point \((t,0)\). Note that \(G_{t}\) is isomorphic to the group \(\widetilde{\operatorname{Homeo}}^{(S,0)}_{+}(\mathbb{R},0)\), and hence \(G_{t}\) is left-orderable. It follows that \(\operatorname{H}^{S}(\pi_{1}(L\times t))\) is a left-orderable group, and by Witte-Morris' theorem it must be the trivial group: otherwise, being finitely generated, amenable and left-orderable, it would admit a surjection onto \(\mathbb{Z}\), contradicting the fact that it is a quotient of \(\pi_{1}(L)\) with \(H^{1}(L;\mathbb{R})=0\). This implies that \(\operatorname{H}^{S}(\pi_{1}(L\times t))\) consists only of the germ of the identity homeomorphism of \(T\times S\) at the point \((t,0)\). Since this holds for every \(t\in T\), it follows that the homomorphism \(\operatorname{H}^{S}\) is equal to (and hence also equivalent to) \(\operatorname{H}^{S}_{c}\). This completes the proof. ## 5. Questions A countable group \(G\) is left-orderable if and only if it is a subgroup of \(\operatorname{Homeo}_{+}(\mathbb{R})\). Therefore, Navas' theorem shows that every countable subgroup of \(\widetilde{\operatorname{Homeo}}_{+}(\mathbb{R},0)\) is also a subgroup of \(\operatorname{Homeo}_{+}(\mathbb{R})\). Andres Navas has asked the following: **Question 5.1** (Navas).: _Is every countable subgroup of \(\widetilde{\operatorname{Homeo}}_{+}(\mathbb{R}^{2},0)\) also a subgroup of \(\operatorname{Homeo}_{+}(\mathbb{R}^{2},0)\)?_ More generally one can ask the following: **Question 5.2**.: _Does any of the equivalences in Corollary 2.7 hold for \(\mathbb{R}^{n}\) for \(n>1\)?_
2304.13491
Neutrino Oscillations by a Manifestly Coherent Mechanism and Massless vs. Massive Neutrinos
The neutrino oscillations in vacuum are derived in a manifestly coherent scheme. The mechanism is operative in a quantum field theoretical framework, justifying nevertheless a formal analogy with quantum mechanical two- (or more) level systems and their oscillatory behaviour. Both the flavour states and the massive states are eigenstates of certain Hamiltonians which, in special conditions, can be argued to share the same Hilbert space. In this scheme, flavour neutrinos are massless and play the role of asymptotic states for any interactions, including the weak interactions, while massive neutrinos are effective propagation states. The vacuum is interpreted as a medium, where the flavour neutrinos undergo coherent forward scatterings which modify their energy and mix their flavour. The treatment of matter conversion and MSW effect fits in naturally; the extension to other neutral particle oscillations, like $K_0-\bar K_0$, is straightforward. The scheme is eclectic insofar as it combines seamlessly quantum field theory and quantum mechanics.
Anca Tureanu
2023-04-26T12:29:58Z
http://arxiv.org/abs/2304.13491v1
###### Abstract The neutrino oscillations in vacuum are derived in a manifestly coherent scheme. The mechanism is operative in a quantum field theoretical framework, justifying nevertheless a formal analogy with quantum mechanical two- (or more) level systems and their oscillatory behaviour. Both the flavour states and the massive states are eigenstates of certain Hamiltonians which, in special conditions, can be argued to share the same Hilbert space. In this scheme, flavour neutrinos are massless and play the role of asymptotic states for any interactions, including the weak interactions, while massive neutrinos are effective propagation states. The vacuum is interpreted as a medium, where the flavour neutrinos undergo coherent forward scatterings which modify their energy and mix their flavour. The treatment of matter conversion and MSW effect fits in naturally; the extension to other neutral particle oscillations, like \(K_{0}-\bar{K}_{0}\), is straightforward. The scheme is eclectic insofar as it combines seamlessly quantum field theory and quantum mechanics. **Neutrino Oscillations** **by a Manifestly Coherent Mechanism** **and Massless vs. Massive Neutrinos** **Anca Tureanu** \({}^{a)}\)_Department of Physics, University of Helsinki,_ _P.O.Box 64, FI-00014 University of Helsinki, Finland_ \({}^{b)}\)_Helsinki Institute of Physics,_ _P.O.Box 64, FI-00014 University of Helsinki, Finland_ ## 1 Introduction The theory of neutrino oscillations in vacuum is still arousing controversies, yet the gist of it is generally agreed upon [1, 2, 3, 4, 5, 6, 7, 8]. It can be summarized by saying that neutrinos are produced in weak interactions as coherent superpositions of spinor states of different masses. The free propagation in vacuum of the superposition of states is controlled by the mass difference squared, for ultrarelativistic neutrinos, leading to the oscillation of the flavour quantum number. As a result, there exists a certain probability to detect a neutrino of a different flavour compared to the one that was produced. The standard framework of neutrino oscillations is based on Pontecorvo's extension of the state mixing and oscillation paradigm from the \(K_{0}-\bar{K}_{0}\) system [9, 10] to neutrinos [11, 12, 13, 14] (see also [15]). For the interpretation of vacuum neutrino oscillation experiments, the following two assumptions are necessary: 1. _The production and detection of neutrinos by charged current weak interactions take place with strict conservation of the flavour quantum number._ This assumption reflects the fact that neutrinos by themselves are never observed, and their flavour is inferred from the type of charged lepton that accompanies them in the interaction. 2. _The flavour oscillation happens only during the free propagation of neutrinos, due to the flavour mixing terms in the Lagrangian._ Those terms induce the flavour neutrino states to be coherent superpositions of different mass states. These conditions are purely theoretical constructions - it is impossible to experimentally confirm the strict separation of flavour-conserving weak interactions from flavour-violating propagation in vacuum. One can state that, at the present level of experimental precision, weak interactions conserve flavour, since "zero-distance conversion" of neutrinos has not been observed1. This, however, does not preclude flavour violation in weak interactions.
Footnote 1: Zero-distance conversion of neutrinos means that neutrinos produced in conjunction with electrons, for example (beta decay), have a probability to interact with muons, and this probability is independent of the distance from the production point. The process is usually discussed in the context of mixing with heavy sterile neutrinos. Nevertheless, such conversions have not been observed. Below are summarized the main points of the traditional approach to neutrino oscillations. It is generally believed that the neutrino interactions are described by the Standard Model Lagrangian, with additional mass terms, which also mix the flavour neutrino fields. In this paper we shall consider, for simplicity, a two-neutrino model with Dirac mass terms. The generalization to any number of flavours and the inclusion of Majorana terms are straightforward. With these assumptions, the Lagrangian of the two neutrino fields \(\nu_{\ell}(x)\), with \(\ell=e,\mu\), reads: \[{\cal L}={\cal L}_{0}+{\cal L}_{mass}+{\cal L}_{CC}+{\cal L}_{NC}, \tag{1.1}\] where \({\cal L}_{0}\) contains the kinetic terms: \[{\cal L}_{0}=\sum_{\ell=e,\mu}\left[\bar{\nu}_{\ell L}(x)i\partial\!\!\!/\nu_{\ell L}(x)+\bar{\nu}_{\ell R}(x)i\partial\!\!\!/\nu_{\ell R}(x)\right], \tag{1.2}\] \({\cal L}_{mass}\) contains the mass and mixing terms2: Footnote 2: Throughout this paper we work with the convention that all the mixing in the lepton sector is manifest in neutrino mixing. \[{\cal L}_{mass}=-\left(\begin{array}{cc}\bar{\nu}_{eL}(x)&\bar{\nu}_{\mu L}(x)\end{array}\right)\left(\begin{array}{cc}m_{ee}&m_{e\mu}\\ m_{e\mu}&m_{\mu\mu}\end{array}\right)\left(\begin{array}{c}\nu_{eR}(x)\\ \nu_{\mu R}(x)\end{array}\right)+h.c., \tag{1.3}\] \({\cal L}_{CC}\) describes the charged current interactions: \[{\cal L}_{CC}=-\frac{g}{\sqrt{2}}\left[\bar{\nu}_{eL}(x)\gamma_{\mu}e_{L}(x)+\bar{\nu}_{\mu L}(x)\gamma_{\mu}\mu_{L}(x)\right]W^{\mu}(x)+h.c., \tag{1.4}\] and \({\cal L}_{NC}\) describes the neutral current interactions. Although the right-chiral fields are sterile for weak interactions, we still denote them by a flavour index, to keep a certain order in the formulas. Written in terms of flavour field operators as above, the Lagrangian is invariant under the global \(U(1)\) flavour symmetry, with the exception of the terms proportional to the mixing parameter \(m_{e\mu}\). The bilinear terms in (1.3) mix the two _flavour field operators_ and the Lagrangian is diagonalized by a change of variables, using the orthogonal transformation3: Footnote 3: We consider \(m_{ee}m_{\mu\mu}>m_{e\mu}^{2}\) in order to obtain positive mass eigenvalues in (1.7) with a simple rotation like (1.5). If this condition is not fulfilled, one tweaks the rotation matrix a bit, transforming it into a unitary matrix, and then applies the Autonne–Takagi procedure, which necessarily leads to real and positive masses. The procedure is well known and can be found in any neutrino physics book. Here we prefer not to clutter the formulas unnecessarily. \[\left(\begin{array}{c}\nu_{e}(x)\\ \nu_{\mu}(x)\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}\nu_{1}(x)\\ \nu_{2}(x)\end{array}\right), \tag{1.5}\] with \[\tan 2\theta=\frac{2m_{e\mu}}{m_{\mu\mu}-m_{ee}} \tag{1.6}\] and \[m_{1,2} = \frac{1}{2}\left\{(m_{ee}+m_{\mu\mu})\mp\sqrt{(m_{ee}-m_{\mu\mu})^{2}+4m_{e\mu}^{2}}\right\},\ \ \ \ m_{ee}m_{\mu\mu}>m_{e\mu}^{2}.
\tag{1.7}\] In terms of the new field variables \(\nu_{i}(x),\ i=1,2\), the Lagrangian terms read: \[{\cal L}_{0}+{\cal L}_{mass} = \bar{\nu}_{1}(i\not{\partial}-m_{1})\nu_{1}+\bar{\nu}_{2}(i\not{\partial}-m_{2})\nu_{2} \tag{1.8}\] and \[{\cal L}_{CC} = -\frac{g}{\sqrt{2}}\Big{[}\cos\theta\,\bar{\nu}_{1L}(x)\gamma_{\mu}e_{L}(x)+\sin\theta\,\bar{\nu}_{2L}(x)\gamma_{\mu}e_{L}(x) \tag{1.9}\] \[- \sin\theta\,\bar{\nu}_{1L}(x)\gamma_{\mu}\mu_{L}(x)+\cos\theta\,\bar{\nu}_{2L}(x)\gamma_{\mu}\mu_{L}(x)\Big{]}W^{\mu}+h.c.\] In this form, the effect of the mixing terms is dispersed throughout the whole Lagrangian, and the flavour \(U(1)\) symmetry is manifestly broken in the weak interaction terms, in contrast to the expression (1.4). In order to fulfill the first requirement above (i.e. flavour conservation in the weak interactions of leptons), it is customary to consider that the weak interactions are described by the Lagrangian (1.4) and not by (1.9). On the other hand, the asymptotic states are considered to be the massive neutrino states of masses \(m_{1,2}\), and not the flavour states associated to \(\nu_{e}(x),\nu_{\mu}(x)\). In short, the weak interactions should produce and annihilate massive states in certain prescribed/coherent combinations, thus violating the principles of QFT (more about this in Sect. 2)4. Finding ways out of this predicament is a constant preoccupation in neutrino physics up to these days, and it still sparks controversies (for a limited selection of references, see [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]). Footnote 4: In Appendix C it is shown that this alteration of the basic rules of quantum field theory actually does not help justify the conservation of flavour in the processes involving neutrinos, not even at high energies (see also [16]). To describe the vacuum oscillations, one veers abruptly from quantum field theory to the quantum mechanics of two-level systems. The procedure hinges on the _ad hoc_ assumption that the flavour states produced by weak interactions, without being quanta of the flavour field operators \(\nu_{\ell}(x)\), are nevertheless coherent superpositions of massive Fock states. In Sect. 2 we explain why this assumption collides with (and succumbs to) the basic principles of quantum field theory. In short, for a mixing of two neutrino flavours, the standard approach is to _postulate_ the existence of the flavour states \(|\nu_{e}\rangle\) and \(|\nu_{\mu}\rangle\), defined by: \[\left(\begin{array}{c}|\nu_{e}\rangle\\ |\nu_{\mu}\rangle\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\nu_{1}\rangle\\ |\nu_{2}\rangle\end{array}\right), \tag{1.10}\] where \(|\nu_{1}\rangle\) and \(|\nu_{2}\rangle\) represent the massive neutrino Fock states, with the masses \(m_{1}\) and \(m_{2}\), while \(\theta\) is the mixing angle (1.6). The coherence of the superposition is presumed without justification (though this fact is glossed over and hardly ever explicitly acknowledged in the literature) and it is encoded in the fact that the mixing matrix is strictly the rotation matrix in (1.10) and no random phase is allowed. Although in the standard approach the massive states are characterized by the same momentum5, this is not implied by formula (1.10). Curiously, formula (1.10) is believed to express a well-defined superposition of _undefined_ massive states. Footnote 5: This assumption has been amply debated in the literature.
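As a numerical cross-check of the diagonalization (1.5)-(1.7), the following minimal sketch (our own; the matrix entries are arbitrary illustrative values satisfying \(m_{ee}m_{\mu\mu}>m_{e\mu}^{2}\)) verifies that the rotation by the angle of (1.6) diagonalizes the mass matrix of (1.3), with eigenvalues given by (1.7):

```python
import numpy as np

# Illustrative check of Eqs. (1.5)-(1.7) for the symmetric 2x2 mass matrix.
m_ee, m_mm, m_em = 0.05, 0.09, 0.03          # arbitrary values, m_ee*m_mm > m_em**2
M = np.array([[m_ee, m_em],
              [m_em, m_mm]])

theta = 0.5 * np.arctan2(2 * m_em, m_mm - m_ee)      # Eq. (1.6)
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])      # rotation of Eq. (1.5)

# nu_flavour = R nu_mass, so R^T M R must be diag(m_1, m_2):
print(np.round(R.T @ M @ R, 12))
disc = np.sqrt((m_ee - m_mm)**2 + 4 * m_em**2)
print((m_ee + m_mm - disc) / 2, (m_ee + m_mm + disc) / 2)   # Eq. (1.7): m_1, m_2
```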
It is also standard to postulate that the massive neutrinos satisfy the non-relativistic Schrodinger equations: \[i\frac{\partial}{\partial t}\left(\begin{array}{c}|\nu_{1}({\bf p})\rangle\\ |\nu_{2}({\bf p})\rangle\end{array}\right)=\left(\begin{array}{cc}E_{1}&0\\ 0&E_{2}\end{array}\right)\left(\begin{array}{c}|\nu_{1}({\bf p})\rangle\\ |\nu_{2}({\bf p})\rangle\end{array}\right), \tag{1.11}\] with \[E_{i}=\sqrt{{\bf p}^{2}+m_{i}^{2}}=|{\bf p}|\left(1+\frac{m_{i}^{2}}{2|{\bf p}|^{2}}+{\cal O}\left(\frac{m_{i}^{4}}{|{\bf p}|^{4}}\right)\right),\quad i=1,2. \tag{1.12}\] The standard oscillation probability, in the approximation that neutrinos are ultrarelativistic, is \[P_{\nu_{e}\rightarrow\nu_{\mu}}=\ \ \sin^{2}2\theta\sin^{2}\left(\tfrac{\Delta m^{2}}{4E}L\right),\quad\Delta m^{2}=m_{2}^{2}-m_{1}^{2}, \tag{1.13}\] where \(E\) is the energy of the neutrinos in the beam and \(L\) is the distance between the neutrino production and detection points [2, 3, 4, 5, 6, 7, 8], taking into account that in the limit where the speed of the neutrinos is almost the speed of light, we can approximate the time of flight by the distance traveled (\(t\sim L\)). In this paper, we shall show that, in certain conditions, formula (1.10) can be derived for states of identical momenta, when \(m_{1},m_{2}\ll{\rm p}\). In Sect. 2 we argue why this formula cannot be postulated in general and present its conceptual drawbacks. In Sect. 3 we present the derivation of formula (1.10) in a quantum field theoretical scheme which justifies a formal quantum mechanical treatment of the oscillations. In the same section we discuss the oscillations and conversion in matter. In Sect. 4 we evaluate the magnitude of the flavour violation in the weak interactions of neutrinos and show that it is below the present experimental capabilities of detection, and in Sect. 5 we discuss comparatively the neutral kaon oscillations and how the present scheme fits into the description of mixing proposed in the original papers of Gell-Mann, Pais and Piccioni [9, 10]. ## 2 Superposition of quantum states in standard approach to oscillations The standard approach, with all its variations, relies on the assumption that the asymptotic/free neutrino states are the Fock states of the massive spinor fields \(\nu_{1}(x),\nu_{2}(x)\). Nevertheless, for some reason that is never explained, those particles do not interact individually with the charged leptons according to the Lagrangian (1.9). Instead, some _coherent_ linear combinations of those massive states do. This is the essence of the problem: particle states of different masses cannot be coherently created or annihilated in quantum field theory. The transition amplitudes are summed together and interfere if and only if all external particle states are the same [30, 31, 32, 33]. Technically, in QFT, the particle states \(|\nu_{1}\rangle\) and \(|\nu_{2}\rangle\) belong to distinct and orthogonal Hilbert spaces, therefore any combination thereof can only be an incoherent mixture. This is directly related to Haag's theorem [34]. For more detailed considerations of this aspect in the context of neutrino mixing, see [20, 22]. Consequently, it is impossible to defend the claim that weak interactions produce coherent states like \(c_{1}|\nu_{1}\rangle+c_{2}|\nu_{2}\rangle\), with \(|c_{1}|^{2}+|c_{2}|^{2}=1\).
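Before turning to these difficulties, it is worth noting that the standard formula (1.13) quoted above is easy to evaluate; the following minimal sketch (our own, with arbitrary illustrative parameters and the usual unit conversion \(\Delta m^{2}[\mathrm{eV}^{2}]\,L[\mathrm{km}]/(4E[\mathrm{GeV}])\approx 1.27\,\Delta m^{2}L/E\)) also makes the absence of zero-distance conversion explicit:

```python
import numpy as np

# Sketch of the two-flavour vacuum oscillation probability, Eq. (1.13).
def p_osc(L_km, E_GeV, dm2_eV2, theta):
    return np.sin(2 * theta)**2 * np.sin(1.27 * dm2_eV2 * L_km / E_GeV)**2

theta, dm2 = 0.59, 7.5e-5          # illustrative, solar-like parameters
for L in (0.0, 100.0, 15000.0):    # baselines in km, at E = 1 GeV
    print(L, p_osc(L, 1.0, dm2, theta))
# L = 0 gives P = 0: no "zero-distance conversion" in the standard formula.
```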
There are several other difficulties with the postulated formula (1.10), even if we choose to ignore the glaring fact that such states cannot technically be created in weak interactions: * In order to work with the logic of a two-level system, the two sets of states, namely \((|\nu_{e}\rangle,|\nu_{\mu}\rangle)\) and \((|\nu_{1}\rangle,|\nu_{2}\rangle)\), have to be sets of eigenstates of two different Hamiltonians (see Appendix A). Moreover, the states have to be unequivocally specified (for example, by their momentum or energy). While it is clear from (1.11) that \((|\nu_{1}\rangle,|\nu_{2}\rangle)\) are Hamiltonian eigenstates, the flavour set \((|\nu_{e}\rangle,|\nu_{\mu}\rangle)\) does not fulfill this requirement. * The superposition on the r.h.s. involves states of particles of different masses, and in quantum mechanics as well as quantum field theory, particles of different masses represent different physical systems. The principle of superposition in quantum mechanics is valid for states of the same system, provided that no superselection rules forbid the coherent superposition. Recall that the principle of superposition is a consequence of the basic fact that a linear combination of solutions of a differential equation (in this case, Schrodinger's equation) is also a solution of the same equation. But the states \(|\nu_{1}\rangle\) and \(|\nu_{2}\rangle\) satisfy different differential equations, therefore the superposition argument fails. * In technical terms, if two particle states belong to different Hilbert spaces, then one can produce only probabilistic mixtures of those states, described by density matrices and in which the relative phase is arbitrary. Quantum superpositions with fixed relative coefficients are not allowed [35]. Yet only quantum superpositions can exhibit interference (and oscillations), while probabilistic mixtures cannot (see also [23]). Let us recall that coherent superpositions of states are pure states, which correspond to vectors in the Hilbert space. Consequently, by the logic of coherent superposition, the flavour neutrino states should be vectors in a Hilbert space - but such a Hilbert space does not exist. In short, formula (1.10) with massive Fock states on the r.h.s. cannot be justified in either quantum field theory or quantum mechanics, although it is postulated to be valid in both. This is hardly any wonder, since this formula is simply a translation, in terms of states, of the rigorous formula (1.5) for field operators - a procedure which is not meaningful in quantum field theory. ## 3 The coherent scheme for neutrino oscillations We shall show that, in certain conditions, _formula (1.10) can be derived with the required coherence, by changing the interpretation of flavour and massive states_. By those conditions, we ensure that both flavour and massive neutrinos can be treated as quantum mechanical states belonging to the same Hilbert space, and well defined by association with specific Hamiltonians. The assumptions are the following: 1. The asymptotic states and the only physical states are massless neutrino states carrying flavour number.
The Lagrangian is expressed in terms of the flavour fields (including also right-handed sterile ones), and the terms which are usually called mass terms, namely \[-\left(\begin{array}{cc}\bar{\nu}_{eL}(x)&\bar{\nu}_{\mu L}(x)\end{array}\right)\left(\begin{array}{cc}m_{ee}&m_{e\mu}\\ m_{e\mu}&m_{\mu\mu}\end{array}\right)\left(\begin{array}{c}\nu_{eR}(x)\\ \nu_{\mu R}(x)\end{array}\right)+h.c.,\] are treated as bilinear interaction terms. This assumption is in line with the general principle that all the masses of fundamental particles are due to some interaction and spontaneous gauge symmetry breaking, like the Brout-Englert-Higgs mechanism. In the case of Dirac masses, \(m_{\ell\ell^{\prime}}=vy_{\ell\ell^{\prime}}/\sqrt{2}\), where \(v\) is the Higgs field vacuum expectation value (vev) and \(y_{\ell\ell^{\prime}}\) stands for the matrix of Yukawa couplings that mix the neutrinos among themselves. The massive states are only vacuum propagation states, and they do not interact weakly or in any other manner.
2. The massless electron neutrino and the muon neutrino of equal momenta are interpreted as degenerate states of a two-level system. The massive states of the same momentum are the eigenstates of the two-level system after the application of the perturbation.
3. In quantum mechanics, the Hilbert space of states of a given system is unique (Stone-von Neumann theorem), irrespective of the possible interactions that can be turned on or off. This cannot be reproduced rigorously with particles of different masses, which are born from orthogonal vacua. However, in the ultrarelativistic limit, the effects of the different vacua are negligible (see Appendix B). Consequently, we shall consider that massless and massive states share the same Hilbert space and apply the formalism of a quantum mechanical two-level system. As we shall see later, the massless states represent particles (flavour neutrinos), while the massive states do not.
4. The number of particles is constant throughout the propagation and the particles' momentum remains constant. Even if the states switch from one flavour to the other, the change is viewed as a consequence of the interaction with a potential. This is ensured at the quantum field theoretical level by the coherent scattering.

### Potential associated with the bilinear mass terms

According to the programme outlined above, the weak interaction asymptotic states are massless flavour neutrinos. Once produced, they propagate in the "medium" created by the Higgs vev, which can be interpreted as an interaction with a constant background scalar field \(v\) (see Fig. 1). The potential associated with the bilinear mass term changes the energy of the propagating massless particles. We consider the simple Lagrangian: \[{\cal L} = {\cal L}_{0}+{\cal L}_{int}, \tag{3.1}\] where \[{\cal L}_{0} = \bar{\psi}_{L}(x)i\gamma^{\mu}\partial_{\mu}\psi_{L}(x)+\bar{\psi}_{R}(x)i\gamma^{\mu}\partial_{\mu}\psi_{R}(x)\] \[{\cal L}_{int} = -m\left(\bar{\psi}_{L}(x)\psi_{R}(x)+\bar{\psi}_{R}(x)\psi_{L}(x)\right).
\tag{3.2}\] Thus, we regard the "mass term" \({\cal L}_{int}\) as an interaction term of the Weyl spinors \(\psi_{L}(x)\) and \(\psi_{R}(x)\), which have the free-field expansions: \[\psi_{L}({\bf x},0)=\int\frac{d^{3}p}{(2\pi)^{3/2}\sqrt{2{\rm p}}}\left(a_{\downarrow}({\bf p})u_{\downarrow}({\bf p})e^{i{\bf p}\cdot{\bf x}}+b_{\uparrow}^{\dagger}({\bf p})v_{\uparrow}({\bf p})e^{-i{\bf p}\cdot{\bf x}}\right),\] \[\psi_{R}({\bf x},0)=\int\frac{d^{3}p}{(2\pi)^{3/2}\sqrt{2{\rm p}}}\left(a_{\uparrow}({\bf p})u_{\uparrow}({\bf p})e^{i{\bf p}\cdot{\bf x}}+b_{\downarrow}^{\dagger}({\bf p})v_{\downarrow}({\bf p})e^{-i{\bf p}\cdot{\bf x}}\right). \tag{3.3}\] The corresponding interaction Hamiltonian is: \[{\cal H}_{int}(x)=m\left(\bar{\psi}_{L}(x)\psi_{R}(x)+\bar{\psi}_{R}(x)\psi_{L}(x)\right) \tag{3.4}\] and the S-matrix reads: \[S=\exp\left[iT\int d^{4}x{\cal H}_{int}(x)\right]. \tag{3.5}\] It is well known that one can convert the massless propagator into a massive propagator by summing the geometric series \[\frac{i}{\not{p}}+\frac{i}{\not{p}}(-im)\frac{i}{\not{p}}+\frac{i}{\not{p}}(-im)\frac{i}{\not{p}}(-im)\frac{i}{\not{p}}+\ldots=\frac{i}{\not{p}-m}, \tag{3.6}\] obtained by treating the mass term as an interaction term, according to the separation in formula (3.1). This inspires us to use a similar procedure for on-shell massless particles, as explained below. We consider the lowest order nontrivial process depicted in Fig. 1. Its Feynman amplitude is: \[i{\cal M} = -im^{2}\bar{u}_{\downarrow}({\bf p})P_{R}\frac{\not{p}}{p^{2}+i\epsilon}u_{\downarrow}({\bf p})=-i\frac{m^{2}}{p^{2}+i\epsilon}\sum_{\lambda}\bar{u}_{\lambda}({\bf p})P_{R}\not{p}P_{L}u_{\lambda}({\bf p}) \tag{3.7}\] \[= -i\frac{m^{2}}{p^{2}+i\epsilon}{\rm Tr}\,\left[\sum_{\lambda}u_{\lambda}({\bf p})\bar{u}_{\lambda}({\bf p})P_{R}\not{p}\right]=-i\frac{m^{2}}{p^{2}+i\epsilon}{\rm Tr}\,\left[\not{p}P_{R}\not{p}\right]=-2im^{2}.\] The corresponding \(T\)-matrix element, where \(S=1+iT\), is: \[\langle p|iT|p^{\prime}\rangle=V(2\pi)\delta(E-E^{\prime})\frac{1}{2VE}i{\cal M}, \tag{3.8}\] where \(V\) is the normalization volume. Using the Born approximation formula from non-relativistic quantum mechanics (see, for example, [32]), we can extract from this matrix element the potential energy \(W\) of the corresponding interaction: \[\langle p|iT|p^{\prime}\rangle=-(2\pi)\delta(E-E^{\prime})iW. \tag{3.9}\] Thus, we obtain: \[W=\frac{m^{2}}{E},\ \ \ \ E={\rm p}. \tag{3.10}\] We interpret this as the potential energy that the massless fermion of momentum \({\bf p}\) experiences during the propagation through the vacuum, which is seen as a constant, homogeneous medium. This potential has to be added to the kinetic energy \(E={\rm p}\) to obtain the total energy during propagation, \[{\rm p}+\frac{m^{2}}{E},\ \ \ \ E={\rm p}. \tag{3.11}\] We note the discrepancy between (3.11) and the expansion of the energy of a relativistic particle of mass \(m\), namely \({\rm p}+\frac{m^{2}}{2E}\). This is due to the fact that the calculation above is done on the vacuum of massless particles, as explained in Appendix B. A genuine massive relativistic particle is not perfectly equivalent to a massless particle moving in a potential. Nevertheless, the factor 2 is irrelevant: all that matters is the dependence of the potential energy on the mass parameters (seen as "coupling constants") and the momentum of the particle. We shall return to this later in Sect. 3.2.
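The size of this factor-2 discrepancy is easy to exhibit numerically; a minimal Python sketch with illustrative values of \(m\) and \(\mathrm{p}\):

```python
import numpy as np

m, p = 0.1, 10.0                  # illustrative mass and momentum, m << p

E_potential = p + m**2 / p        # massless particle plus potential, eq. (3.11)
E_exact     = np.sqrt(p**2 + m**2)
E_expansion = p + m**2 / (2 * p)  # leading term of the relativistic expansion

print(E_potential - p)            # 1.0e-3
print(E_exact - p)                # ~0.5e-3, essentially E_expansion - p
print((E_potential - p) / (E_exact - p))   # ~2: the factor discussed above
```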
Figure 1: The interaction of the Weyl fields with the constant scalar field \(v\) in (a) is equivalent to the mass insertions in (b).

### Ultrarelativistic neutrinos as two-level oscillating system

Flavour neutrinos as massless states of the unperturbed Hamiltonian: The left-helicity flavour neutrinos are produced in weak interactions as massless fermionic states. They are eigenstates of the free Hamiltonian \[H_{0}=\left(\begin{array}{cc}\mbox{p}&0\\ 0&\mbox{p}\end{array}\right), \tag{3.12}\] satisfying the free Schrödinger equation: \[i\partial_{t}\left(\begin{array}{c}|\nu_{e}(\mbox{p})\rangle\\ |\nu_{\mu}(\mbox{p})\rangle\end{array}\right)=\left(\begin{array}{cc}\mbox{p}&0\\ 0&\mbox{p}\end{array}\right)\left(\begin{array}{c}|\nu_{e}(\mbox{p})\rangle\\ |\nu_{\mu}(\mbox{p})\rangle\end{array}\right). \tag{3.13}\] This is the fundamental state of a degenerate two-level system. We do not indicate the helicity of the states explicitly; the convention is as in the Standard Model, namely that all massless neutrinos are left-handed and all massless antineutrinos are right-handed.

Bilinear mass terms as perturbation of the free Hamiltonian: Once created as massless flavour states, the electron and muon neutrinos enter _suddenly_ a "medium" with which they interact, similarly to photons entering a dielectric. The difference is that the medium for neutrinos is a homogeneous and constant scalar field \(v\), which is normally called the vacuum expectation value of the Higgs field. The massless flavour neutrinos are coherently scattered in this medium, which does not get disturbed by their passage. The Lagrangian responsible for this part of the scheme is: \[{\cal L}_{0}+{\cal L}_{mass} = \sum_{\ell=e,\mu}\left[\bar{\nu}_{\ell L}(x)i\partial\!\!\!/\nu_{\ell L}(x)+\bar{\nu}_{\ell R}(x)i\partial\!\!\!/\nu_{\ell R}(x)\right] \tag{3.14}\] \[- \left(\begin{array}{cc}\bar{\nu}_{eL}(x)&\bar{\nu}_{\mu L}(x)\end{array}\right)\left(\begin{array}{cc}m_{ee}&m_{e\mu}\\ m_{e\mu}&m_{\mu\mu}\end{array}\right)\left(\begin{array}{c}\nu_{eR}(x)\\ \nu_{\mu R}(x)\end{array}\right)+h.c.\] We consider that the interaction with the vacuum produces a perturbation, in the form of a potential energy that mixes the two degenerate states and also shifts the diagonal matrix elements of the Hamiltonian. The quantum mechanical interpretation as a potential is obtained using the Born approximation, which relates the transition amplitude, calculated in quantum field theory, to quantum mechanical potential scattering (see Sect. 3.1). The Feynman amplitudes for the relevant processes described by the Feynman diagrams in Fig. 2 are \[i{\cal M}_{\nu_{e}\rightarrow\nu_{e}} = -i(m_{ee}^{2}+m_{e\mu}^{2})\bar{u}_{\downarrow}(\mbox{p})P_{R}\frac{p\!\!\!/}{p^{2}+i\epsilon}u_{\downarrow}(\mbox{p})=-2i(m_{ee}^{2}+m_{e\mu}^{2}),\] \[i{\cal M}_{\nu_{e}\rightarrow\nu_{\mu}} = i{\cal M}_{\nu_{\mu}\rightarrow\nu_{e}}=-im_{e\mu}(m_{ee}+m_{\mu\mu})\bar{u}_{\downarrow}(\mbox{p})P_{R}\frac{p\!\!\!/}{p^{2}+i\epsilon}u_{\downarrow}(\mbox{p})=-2im_{e\mu}(m_{ee}+m_{\mu\mu}),\] \[i{\cal M}_{\nu_{\mu}\rightarrow\nu_{\mu}} = -i(m_{\mu\mu}^{2}+m_{e\mu}^{2})\bar{u}_{\downarrow}(\mbox{p})P_{R}\frac{p\!\!\!/}{p^{2}+i\epsilon}u_{\downarrow}(\mbox{p})=-2i(m_{\mu\mu}^{2}+m_{e\mu}^{2}).
\tag{3.15}\] The corresponding potentials are collected in the interaction Hamiltonian: \[H_{int}=\left(\begin{array}{cc}W_{11}&W_{12}\\ W_{21}&W_{22}\end{array}\right)=\frac{1}{\mbox{p}}\left(\begin{array}{cc}m_{ee}^{2}+m_{e\mu}^{2}&m_{e\mu}(m_{ee}+m_{\mu\mu})\\ m_{e\mu}(m_{ee}+m_{\mu\mu})&m_{\mu\mu}^{2}+m_{e\mu}^{2}\end{array}\right). \tag{3.16}\] Thus, the total Hamiltonian of the massless flavour neutrinos in the physical vacuum becomes: \[H=H_{0}+H_{int}=\left(\begin{array}{cc}\mathrm{p}&0\\ 0&\mathrm{p}\end{array}\right)+\frac{1}{\mathrm{p}}\left(\begin{array}{cc}m_{ee}^{2}+m_{e\mu}^{2}&m_{e\mu}(m_{ee}+m_{\mu\mu})\\ m_{e\mu}(m_{ee}+m_{\mu\mu})&m_{\mu\mu}^{2}+m_{e\mu}^{2}\end{array}\right), \tag{3.17}\] leading to the perturbed Schrödinger equations: \[i\partial_{t}\left(\begin{array}{c}|\nu_{e}(\mathbf{p})\rangle\\ |\nu_{\mu}(\mathbf{p})\rangle\end{array}\right)=\frac{1}{\mathrm{p}}\left(\begin{array}{cc}\mathrm{p}^{2}+(m_{ee}^{2}+m_{e\mu}^{2})&m_{e\mu}(m_{ee}+m_{\mu\mu})\\ m_{e\mu}(m_{ee}+m_{\mu\mu})&\mathrm{p}^{2}+(m_{\mu\mu}^{2}+m_{e\mu}^{2})\end{array}\right)\left(\begin{array}{c}|\nu_{e}(\mathbf{p})\rangle\\ |\nu_{\mu}(\mathbf{p})\rangle\end{array}\right). \tag{3.18}\]

Massive states as eigenstates of the perturbed Hamiltonian: We assume that a left-helicity electron neutrino \(|\nu_{e}(\mathbf{p})\rangle\) is produced by weak interactions as a massless eigenstate of \(H_{0}\). Once produced, it enters the "medium" in which it interacts through the bilinear coupling. This is equivalent to a _sudden switching on_ of the bilinear interaction. Since the process is diabatic, the eigenstate \(|\nu_{e}(\mathbf{p})\rangle\) of \(H_{0}\) starts to evolve according to the total Hamiltonian \(H\), as a coherent superposition of the eigenstates of \(H\). We diagonalize \(H\) given by formula (3.17) using the transformation: \[UHU^{T}=\left(\begin{array}{cc}E_{1}&0\\ 0&E_{2}\end{array}\right),\hskip 14.226378ptU=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right), \tag{3.19}\] finding the eigenvalues: \[E_{1,2}=\mathrm{p}+\frac{1}{2\mathrm{p}}\left[m_{ee}^{2}+m_{\mu\mu}^{2}+m_{e\mu}(m_{ee}+m_{\mu\mu})\right]\mp\frac{1}{2\mathrm{p}}\sqrt{(m_{ee}^{2}-m_{\mu\mu}^{2})^{2}+4m_{e\mu}^{2}(m_{ee}+m_{\mu\mu})^{2}} \tag{3.20}\] and the mixing matrix: \[U=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right),\qquad\tan 2\theta=\frac{2m_{e\mu}}{m_{\mu\mu}-m_{ee}}. \tag{3.21}\]

Figure 2: Feynman diagrams for the bilinear interactions (3.14) in the lowest order of perturbation.

We denote \[m_{1,2}^{2}=\frac{1}{2}\left\{\left[m_{ee}^{2}+m_{\mu\mu}^{2}+m_{e\mu}(m_{ee}+m_{\mu\mu})\right]\mp\sqrt{(m_{ee}^{2}-m_{\mu\mu}^{2})^{2}+4m_{e\mu}^{2}(m_{ee}+m_{\mu\mu})^{2}}\right\}, \tag{3.22}\] which gives \[m_{1,2}=\frac{\sqrt{2}}{2}\left\{(m_{ee}+m_{\mu\mu})\mp\sqrt{(m_{ee}-m_{\mu\mu})^{2}+4m_{e\mu}^{2}}\right\}. \tag{3.23}\] According to (A.3), the eigenstates of \(H_{0}\) are related to the eigenstates of \(H\) by \[\left(\begin{array}{c}|\nu_{e}({\bf p})\rangle\\ |\nu_{\mu}({\bf p})\rangle\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\nu_{1}({\bf p})\rangle\\ |\nu_{2}({\bf p})\rangle\end{array}\right). \tag{3.24}\] Up to the constant \(\sqrt{2}\), the formulas above coincide with the standard formulas (1.6)-(1.7).
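As a numerical sanity check on this diagonalization, the following Python sketch builds the Hamiltonian (3.17) for illustrative mass parameters and compares its spectrum and mixing angle with the closed-form two-level expressions; note that the orientation of \(\theta\) (i.e. whether \(UHU^{T}\) or \(U^{T}HU\) comes out diagonal) depends on sign conventions.

```python
import numpy as np

p = 10.0                                   # |p|, with m_ij << p (illustrative)
m_ee, m_mm, m_em = 0.3, 0.7, 0.2           # illustrative mass parameters

# Interaction Hamiltonian (3.16) and total Hamiltonian (3.17)
W = np.array([[m_ee**2 + m_em**2,  m_em*(m_ee + m_mm)],
              [m_em*(m_ee + m_mm), m_mm**2 + m_em**2]]) / p
H = p * np.eye(2) + W

# Numerical spectrum vs. the generic closed two-level form (cf. (A.5))
E_num = np.linalg.eigvalsh(H)
E_closed = p + (W[0, 0] + W[1, 1]) / 2 \
           + np.array([-1, 1]) * 0.5 * np.hypot(W[0, 0] - W[1, 1], 2 * W[0, 1])
print(E_num, E_closed)                     # the two should agree

# Mixing angle (3.21): tan(2 theta) = 2 m_em / (m_mm - m_ee)
theta = 0.5 * np.arctan2(2 * m_em, m_mm - m_ee)
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(np.round(U.T @ H @ U, 12))           # diagonal up to rounding, with this
                                           # sign convention for theta
```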
The extra factor simply scales the parameters introduced in the Lagrangian, which are free parameters anyway, and therefore a priori unknown. For the analysis of experimental data, the rescaling of the mass parameters in the Lagrangian is irrelevant. From the point of view of the _quantum mechanical formalism_,

* the transformation (3.24) is a genuine change of basis in the Hilbert space of the two-level system, consequently it is _implicitly coherent_, as well as unitary;
* both sets of states, \(|\nu_{e}({\bf p})\rangle,|\nu_{\mu}({\bf p})\rangle\) and \(|\nu_{1}({\bf p})\rangle,|\nu_{2}({\bf p})\rangle\), are well-defined eigenstates of quantum mechanical Hamiltonians.

From the point of view of _quantum field theory_,

* the flavour states \(|\nu_{e}({\bf p})\rangle\) and \(|\nu_{\mu}({\bf p})\rangle\) are massless Fock states and represent the asymptotic/free states of any process involving neutrinos;
* the states \(|\nu_{1}({\bf p})\rangle\) and \(|\nu_{2}({\bf p})\rangle\) are "propagation states" and not Fock states: they simply encode the information about the probability to find the initially produced flavour state or the other flavour state in the beam after a certain time, due to the mixing interaction.

### Oscillations and conversion in matter

The coherent scheme described above incorporates naturally the propagation of neutrinos in matter. Assuming that matter is formed of first-generation particles, the coherent scattering of electron neutrinos on electrons by charged current weak interactions will be the dominant additional process compared to the vacuum propagation. This will add to the Hamiltonian (3.17), as shown by Wolfenstein [36], an extra potential \(\sqrt{2}G_{F}n_{e}(x)\), where \(n_{e}(x)\) is the density of electrons; equal contributions \(V_{NC}\) from the coherent scattering of electron and muon neutrinos on neutrons have to be added appropriately as well, though their effect cancels in the oscillatory process.
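A minimal numerical sketch of this modification, assuming (as stated above) that the Wolfenstein charged-current potential is simply added to the \(\nu_{e}\nu_{e}\) entry of the vacuum Hamiltonian (3.17); the flavour-universal neutral-current contribution is dropped, since it cancels in the oscillations, and all parameter values are illustrative:

```python
import numpy as np

p = 10.0
m_ee, m_mm, m_em = 0.3, 0.7, 0.2           # illustrative mass parameters

W = np.array([[m_ee**2 + m_em**2,  m_em*(m_ee + m_mm)],
              [m_em*(m_ee + m_mm), m_mm**2 + m_em**2]]) / p
H_vac = p * np.eye(2) + W

def matter_angle(V_cc):
    """Mixing angle theta_m of H_matter = H_vac + diag(V_cc, 0)."""
    Hm = H_vac + np.diag([V_cc, 0.0])
    return 0.5 * np.arctan2(2 * Hm[0, 1], Hm[1, 1] - Hm[0, 0])

# Sweep the charged-current potential (i.e. the electron density):
for V in (0.0, 0.02, 0.04, 0.08):          # illustrative values
    print(V, matter_angle(V))              # theta_m passes through pi/4
                                           # at the resonance V = 0.04 here
```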
The Lagrangian of interaction (omitting the neutral current part) contains: \[{\cal L}_{int}^{matter}=-\left(\begin{array}{cc}\bar{\nu}_{e}(x)&\bar{\nu}_{\mu}(x)\end{array}\right)\left(\begin{array}{cc}m_{ee}&m_{e\mu}\\ m_{e\mu}&m_{\mu\mu}\end{array}\right)\left(\begin{array}{c}\nu_{e}(x)\\ \nu_{\mu}(x)\end{array}\right)-\left[\frac{g}{\sqrt{2}}\bar{\nu}_{eL}(x)\gamma_{\mu}e_{L}(x)W^{\mu}+h.c.\right] \tag{3.25}\] and the coherent scattering processes are correspondingly:

[Equations (3.26)-(3.29), containing the diagrammatic representation of the coherent scattering processes in matter and the resulting matter Hamiltonian \(H_{matter}\), are garbled beyond recovery in the source.]

In our scheme, the only relevant bases in matter are the set of eigenstates of \(H_{0}\) and the set of eigenstates of \(H_{matter}\). The vacuum mass eigenstates play absolutely no role in the matter propagation, except for the obvious fact that they are themselves instantaneous adiabatic states for \(n_{e}(x)=0\). This is in contrast with the standard treatment, where the interaction of the flavour neutrinos with the plasma is calculated using mass eigenstates in vacuum (see, for example, [36] or [4]), implying that at any time during the propagation in matter there are _three bases_ which are relevant, which is clearly one too many.
The present description justifies the assertion that the flavour neutrinos are always "the same", whether they are in vacuum or in matter, i.e. \[|\nu_{e}({\bf p})\rangle = \cos\theta|\nu_{1}({\bf p})\rangle+\sin\theta|\nu_{2}({\bf p})\rangle, \tag{3.30}\] \[|\nu_{e}({\bf p})\rangle = \cos\theta_{m}|\nu_{1m}({\bf p})\rangle+\sin\theta_{m}|\nu_{2m}({\bf p})\rangle, \tag{3.31}\] where \(|\nu_{1m}({\bf p})\rangle,|\nu_{2m}({\bf p})\rangle\) is the basis in which \(H_{matter}\) is diagonal and \(\theta_{m}\) is the corresponding mixing angle. Thus, massless flavour neutrinos are the only interacting entities. When an electron neutrino is born in the core of the Sun, for example, it is always emitted as a massless neutrino, which enters suddenly a medium composed of the background constant scalar \(v\) and a plasma, in which it interacts according to the Lagrangian (3.25). As a result, it becomes subject to a series of coherent scattering microprocesses depicted in (3.26), which slow down its passage and also mix the massless electron neutrino and muon neutrino states. Due to the sudden turning on of the interaction, the electron neutrino state starts to propagate as a coherent superposition of the adiabatic states \(|\nu_{1m}({\bf p})\rangle,|\nu_{2m}({\bf p})\rangle\), with the coefficients depending on the density of electrons and the energy of the neutrino at the site of production. The propagation may be adiabatic, as in the MSW effect [37], or not. Although the vacuum eigenstates do not feature at all in the present formalism for matter propagation, the expression (3.29) and all the calculations in terms of the set of vacuum parameters \((m_{1},m_{2},\theta)\) remain useful for the comparison of data between vacuum and matter experiments/observations. The rest can be found in any textbook.

## 4 Neutrino flavour nonconservation in weak interactions

We now turn to the first requirement stated at the beginning, namely that the weak interactions of flavour neutrinos take place with flavour conservation. This requirement appears to be a more serious hurdle. It is avowed time and again in books and articles as a fact, but to our knowledge it has never been quantitatively proven (see also Appendix C). Actually, quantum field theory leads us to expect the opposite, since the flavour violation in the quadratic part of the Lagrangian is likely to show up in any process, once quantum corrections are taken into account. For definiteness, let us consider the process of pion decay to antimuons. At tree level, we expect it to take place as described by Fig. 3a, according to the charged current Lagrangian (1.4). However, once we perform the external neutrino leg "renormalization", the situation changes drastically: since mixing processes such as those in Fig. 2c are allowed, the pion decay to antimuons can lead to the production of either electron neutrinos or muon neutrinos, as in Figs. 3b and 3c. As a result, the inference of the number of a certain type of produced (or detected) flavour neutrinos by counting the charged leptons may be inaccurate, if the effect is substantial. The same flavour violation would happen in neutral current interactions. According to Fig. 4, in calculating the amplitude corresponding to Fig. 3c in the second order in masses, we have to use for the external neutrino line \[\frac{1}{E^{2}}m_{e\mu}\left(m_{\mu\mu}+m_{ee}\right)u(\mathbf{p}),\ \ \ \ |\mathbf{p}|=E.
\tag{4.1}\] When we square the amplitude, the probability of the pion decay with neutrino flavour violation in Fig. 3c turns out to be of the order \(m^{4}/E^{4}\). For ultrarelativistic neutrinos, this effect, which should manifest itself among other things as a "zero-length flavour conversion", is too small to be measured at the present level of experimental accuracy. We note that a similar external leg dressing appears also for neutrinos which preserve their flavour, as in Fig. 3b. The effects of the dressing are significant when the neutrino energies are very low, \(E\sim m\). For this reason, in the previous sections we ignored the flavour violation in the weak interactions of high-energy neutrinos and adopted the view of a strictly sequential order of interactions. Nevertheless, the analysis of oscillations can be refined by adding the dressing as sketched above, though we expect relatively tiny contributions.

Figure 4: External neutrino line "renormalization" at the lowest order in perturbation.

In Appendix C the flavour violation in the beta decay is calculated using the standard approach to neutrino oscillations, as described in Sect. 1. The result is that, when the basic rules of quantum field theory are used correctly, even in the ultrarelativistic neutrino limit the flavour violation effect is sizeable and measurable at present as zero-distance flavour conversion.

## 5 Musing on the \(K_{0}-\bar{K}_{0}\) system

Neutrino oscillations were modeled originally following the theory of \(K_{0}-\bar{K}_{0}\) mixing and oscillations, proposed by Gell-Mann and Pais [9] and Pais and Piccioni [10]. The problems of coherence and "competition" between different interactions were clearly significant in these papers, though not much space is devoted to them. We can still glean the understanding of these issues from the interpretation of Gell-Mann, Pais and Piccioni. The bilinear part of the effective Lagrangian that describes the neutral kaon system is: \[{\cal L} = \partial^{\mu}\Phi^{\dagger}(x)\partial_{\mu}\Phi(x)-\frac{1}{2}\left(\begin{array}{cc}\Phi^{\dagger}(x)&\Phi(x)\end{array}\right)\left(\begin{array}{cc}M^{2}&2\epsilon^{2}\\ 2\epsilon^{2}&M^{2}\end{array}\right)\left(\begin{array}{c}\Phi(x)\\ \Phi^{\dagger}(x)\end{array}\right). \tag{5.1}\] The off-diagonal mass parameter \(\epsilon\) effectively expresses the weak interactions depicted in Fig. 5, which mix the states \(|K_{0}\rangle\) and \(|\bar{K}_{0}\rangle\), created by strong interactions.

Figure 5: \(K_{0}-\bar{K}_{0}\) mixing by weak interactions leads to the effective off-diagonal mass terms in (5.1).

Gell-Mann and Pais make a point in [9] that, if one has a free complex massive scalar field \(\phi(x)\) of mass \(M\), that field can be written in terms of its real and imaginary part fields as: \[\phi(x)=\frac{1}{\sqrt{2}}(\phi_{1}(x)+\phi_{2}(x)), \tag{5.2}\] with \(\phi_{1}\) even and \(\phi_{2}\) odd under \({\cal C}\)-conjugation. The quanta of the real fields \(\phi_{1}\) and \(\phi_{2}\) will be called \(|K_{1}\rangle\) and \(|K_{2}\rangle\), and the quanta of the complex field \(\phi\) will be called \(|K_{0}\rangle\) and \(|\bar{K}_{0}\rangle\). Then, we have for the corresponding states with the same quantum numbers (same momenta and, if the field is not scalar, same spin polarization): \[|K_{1}\rangle=\frac{1}{\sqrt{2}}(|K_{0}\rangle+|\bar{K}_{0}\rangle).
\tag{5.3}\] In other words, the creation of a \(|K_{1}\rangle\) quantum "corresponds physically to the creation, with equal probability and with prescribed relative phase", of either a \(|K_{0}\rangle\) or a \(|\bar{K}_{0}\rangle\). This statement is not new, nor surprising. What is noteworthy is the acribia of Gell-Mann and Pais in making the statement, emphasizing that coherence ("prescribed relative phase") occurs when all the quanta \(|K_{0}\rangle,|\bar{K}_{0}\rangle,|K_{1}\rangle,|K_{2}\rangle\) have _the same mass_. Later on in the same paper, they still rely on the formula (5.3), though the quanta \(|K_{1}\rangle,|K_{2}\rangle\) are acknowledged to have different masses, "though the mass difference is surely tiny". This does not settle the question of how coherence is preserved when the two masses are different, but it definitely suggests that, in their opinion, the tininess of the mass difference with respect to the mass of either particle plays a role in this game. Soon after, in Ref. [10], Pais and Piccioni proposed the experiment that bears their name to confirm the fact that the \(|K_{0}\rangle\)-meson is described by mixed states. In the same paper, they write for the first time the \(K_{0}-\bar{K}_{0}\) oscillation formula. In the introduction of the paper, they return to the inverse of the formula (5.3), \[|K_{0}\rangle=\frac{1}{\sqrt{2}}(|K_{1}\rangle+i|K_{2}\rangle), \tag{5.4}\] scrupulously mentioning again that the superposition is "well-defined" (the equivalent of "prescribed relative phase" from [9]), that all the particles are in the same momentum (and, if necessary, spin polarization) state, and adding the footnote: "This implies that we ignore the influence of the weak decay interactions on the production" of the \(|K_{0}\rangle\). In other words, the \(|K_{0}\rangle\) state is produced strictly by strong interactions and the mixing terms proportional to \(\epsilon^{2}\) in (5.1) are neglected, such that the masses of all states in (5.4) are the same. The footnote has still another logical consequence: if the weak interactions are neglected at the production of \(K_{0}\), they must come into play later on, during the propagation stage, leading to the particles \(K_{1}\) and \(K_{2}\) acquiring the tiny mass difference that engenders the \(K_{0}-\bar{K}_{0}\) oscillation. This is suggestive of a sequential treatment of the interactions, supported by the fact that the strangeness-violation effect in the production of \(K_{0}\) is truly negligible. Remark also that the conception of Gell-Mann, Pais and Piccioni was that the states \(|K_{0}\rangle,|\bar{K}_{0}\rangle\) have definite mass and the formulas (5.3) and (5.4) are written for well-defined momentum states. All these aspects of the original papers have survived to this day in the description of the neutral kaon systems, for which definite masses are assigned to \(K_{0}\), \(K_{S}\) and \(K_{L}\) (the latter two being, barring CP violation effects, the same as \(K_{1}\) and \(K_{2}\), respectively). In the manifestly coherent scheme presented in Sect. 3 for neutrinos, the (nonrelativistic) degenerate states \(|K_{0}\rangle,|\bar{K}_{0}\rangle\) are described by the Hamiltonian: \[H_{0}^{K}=\left(\begin{array}{cc}M&0\\ 0&M\end{array}\right). \tag{5.5}\] In quantum field theory, they are the eigenstates of the free Hamiltonian \[{\cal H}_{0}=\partial^{\mu}\Phi^{\dagger}(x)\partial_{\mu}\Phi(x)-M^{2}\Phi^{\dagger}(x)\Phi(x).
\tag{5.6}\] The weak interaction sets in _after production_ and mixes the two states due to the interaction Hamiltonian: \[{\cal H}^{K}_{int}=\epsilon^{2}\left(\Phi^{\dagger}(x)\Phi^{\dagger}(x)+\Phi(x)\Phi(x)\right), \tag{5.7}\] such that \[\langle\bar{K}_{0}({\bf p})|{\cal H}^{K}_{int}|K_{0}({\bf p})\rangle\neq 0. \tag{5.8}\] The Feynman amplitude for the \(|K_{0}({\bf p})\rangle\rightarrow|\bar{K}_{0}({\bf p})\rangle\) transition, in the lowest order, is: \[i{\cal M}_{K_{0}\rightarrow\bar{K}_{0}}=i\epsilon^{2}. \tag{5.9}\] By the Born approximation, we find the corresponding potential to be: \[V_{K_{0}\rightarrow\bar{K}_{0}}=\frac{\epsilon^{2}}{2E},\quad E=M,\quad\epsilon^{2}/M^{2}\ll 1. \tag{5.10}\] The Hamiltonian during propagation, omitting the absorptive part, is altered to \[H^{K}=\left(\begin{array}{cc}M&\frac{\epsilon^{2}}{2M}\\ \frac{\epsilon^{2}}{2M}&M\end{array}\right), \tag{5.11}\] with eigenvalues \(M\mp\frac{\epsilon^{2}}{2M}\simeq\sqrt{M^{2}\mp\epsilon^{2}}\). When the \(K_{0}\) particle is nonrelativistic and the condition \(\epsilon^{2}/M^{2}\ll 1\) is fulfilled, the eigenstates of \(H_{0}\) and those of \(H\) can be confidently assumed to belong to the same Hilbert space (see Appendix B). The propagation eigenstates correspond to the weak-decay states \(|K_{1}\rangle=|K_{S}\rangle\) and \(|K_{2}\rangle=|K_{L}\rangle\) (CP violation ignored). The mixing is maximal and the coherence of the superposition is guaranteed by the two-level system formalism: \[|K_{0}({\bf p})\rangle = \frac{1}{\sqrt{2}}\left(|K_{1}({\bf p})\rangle-|K_{2}({\bf p})\rangle\right),\] \[|\bar{K}_{0}({\bf p})\rangle = \frac{1}{\sqrt{2}}\left(|K_{1}({\bf p})\rangle+|K_{2}({\bf p})\rangle\right). \tag{5.12}\] This scheme explains why the "prescribed relative phase" in (5.12) survives also when \(|K_{1}\rangle\) and \(|K_{2}\rangle\) have different masses, as long as the mass difference is very tiny compared to the mass of \(K_{0}\).

## 6 Outlook

We have proposed a scheme, combining quantum field theory and formal aspects of quantum mechanics, in which neutrino oscillations are formulated in analogy with quantum mechanical two- (or more-) level systems. How we separate the Lagrangian into the free part and the interaction part is a matter of choice (see [30, 38]). The temptation is to diagonalize quickly whatever can be diagonalized. This is not always the most natural course of action. In the case of neutrinos, where the oscillation paradigm requires us to assume that flavour states are produced and detected, the sensible approach is to keep those states as the asymptotic/free ones. Consequently, the scheme gives a definite identity to the flavour states as massless Fock states. They have the leading role as asymptotic states in all processes involving neutrinos, be they weak interactions or mass-generating interactions. The latter set in suddenly after the high-energy flavour neutrinos are produced by weak interactions, and they create a "resistant" medium in which neutrinos propagate with flavour violation. The flavour states and the propagation (massive) states form two bases in the Hilbert space of the two-level system, as eigenstates of the unperturbed Hamiltonian \(H_{0}\) and of the perturbed one, \(H\). Nevertheless, from the quantum field theory point of view, only the massless states are Fock states. The coherence arises implicitly, technically following from the genuine change of basis between flavour and propagation states.
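As a numerical aside, the kaon Hamiltonian (5.11) of the previous section provides one more quick illustration of this two-level logic; a minimal Python sketch with purely illustrative values of \(M\) and \(\epsilon\):

```python
import numpy as np

M, eps = 0.5, 0.01                  # illustrative values, eps**2/M**2 << 1

HK = np.array([[M, eps**2 / (2 * M)],
               [eps**2 / (2 * M), M]])
E = np.linalg.eigvalsh(HK)          # M -+ eps**2/(2M), cf. (5.11)

print(E)
print(np.sqrt([M**2 - eps**2, M**2 + eps**2]))   # agrees to higher order

# Maximal mixing: eigenvectors are (|K0> -+ |K0bar>)/sqrt(2), cf. (5.12),
# up to overall signs chosen by the numerical routine.
print(np.linalg.eigh(HK)[1])
```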
In this framework, vacuum oscillations and matter conversion of neutrinos both arise due to coherent scatterings of flavour neutrinos in media which mix the flavours, as shown in (3.26). The "sameness" of flavour neutrinos in vacuum and in matter is mathematically substantiated by their identity as massless eigenstates of the Hamiltonian \(H_{0}\). This treatment of neutrinos highlights the interactions which keep the flavours locked together during propagation and the transfer between flavours (see Fig. 2). Intuitively, the microscopic description of neutrino oscillations in vacuum is of a massless particle with a distance-dependent probability of being in one flavour state or another, determined by the "competition" between the flavour-preserving and flavour-violating interactions depicted in Fig. 2. The massive states are purely theoretical tools to determine, macroscopically, the flavour composition probability at a certain point after production. They do not feature in the calculations of neutrino production or detection probabilities. The mass parameters are coupling constants rather than kinematic parameters. The intuitive picture leads automatically to the conclusion that _any flavour neutrino produced in vacuum oscillates in perpetuity_, or at least until it is destroyed by weak interactions or other non-standard interactions. There is no chance that the propagation states \(|\nu_{1}\rangle\) and \(|\nu_{2}\rangle\) ever lose coherence, because they do not represent particle states and do not carry energy. Still, because neutrinos can travel undisturbed over huge spans of space, it is interesting to determine the velocity of propagation for an oscillating neutrino. Even if "in between two scatterings" the neutrinos propagate at the speed of light, the interaction itself is bound to slow them down8. Footnote 8: The situation is similar to photon propagation in a dielectric: in between two scatterings, the photons propagate with the speed of light; however, the change in phase between the incoming and scattered waves induces a perceived change in wavelength, therefore in speed. Even if there is no decoherence when we speak about individual neutrinos, the propagation over long distances leads to averaging effects of the oscillations. We never know precisely where a given flavour neutrino was emitted by weak interaction, therefore the distance \(L\) between production and detection sites is always affected by our ignorance. The finite energy resolution of the detector also requires averaging. As a result, in most of the experimental setups, the measurable probability of conversion is effectively constant and depends only on the elements of the mixing matrix (see, for example, [4]). Here we have demonstrated the scheme on the simplest possible model, namely the two-flavour mixing with Dirac mass terms. Increasing the number of flavours or using Majorana mass terms (type II seesaw) should be straightforward. It will be interesting to study the case of mixing with sterile neutrinos which come with very heavy mass parameters (type I or III seesaw). Since this mechanism involves only massless neutrinos with definite helicity, it automatically follows that right-helicity neutrino states or left-helicity antineutrino states can never be produced or observed in any frame.
In contrast, in the standard approach using massive asymptotic states, there should be a sizeable number of laboratory-frame left-helicity nonrelativistic antineutrinos produced, for example, together with the electrons in the tail of the beta decay. We expect the neutrinos with energies around and below \(m_{i},i=1,2\) given by (3.23) to be strongly influenced by the "dressing" discussed in Sect. 4. Consequently, though a rigorous analysis in the present framework is still necessary, it is likely that the absolute values of the mass parameters are still experimentally accessible through, for example, the KATRIN experiment. This scheme also accommodates the neutrinoless double beta decay [39] and the \(2\nu\)-mediated forces [40] as signaling the Majorana nature of the bilinear terms in the description of neutrinos, since the neutrino propagators will contain mass insertions. To summarize, we have presented a novel scheme for neutrino interactions and oscillations, in which the massless flavour neutrino states play the key role. The mechanism interpolates smoothly between high-energy neutrino phenomena and low-energy neutrino behaviour. The main achievement reported in this paper is the reconciliation of flavour neutrino production and detection with the coherence required for the description of oscillations in vacuum.

### Acknowledgments

I am grateful to Gabriela Barenboim, Jose Bernabeu, Masud Chaichian, Jose Gracia-Bondia, Kazuo Fujikawa, Markku Oksanen and Adam Schwimmer for inspiring discussions, insightful comments and constructive suggestions.

## Appendix A Oscillation of states and coherence in two-level systems

The prototype for the description of neutrino oscillations is the quantum mechanical two-level system. This is a system with two stationary states that get mixed by the sudden switching on of a time-independent interaction. The system is initially described by the Hamiltonian \(H_{0}\) with the orthonormal basis states \(|\psi_{a}\rangle\) and \(|\psi_{b}\rangle\), with eigenvalues \(E_{a}\) and \(E_{b}\): \[H_{0}\left(\begin{array}{c}|\psi_{a}\rangle\\ |\psi_{b}\rangle\end{array}\right)=\left(\begin{array}{cc}E_{a}&0\\ 0&E_{b}\end{array}\right)\left(\begin{array}{c}|\psi_{a}\rangle\\ |\psi_{b}\rangle\end{array}\right).\] (A.1) We assume that one can turn on an interaction, such that the system is described by the perturbed Hamiltonian \[H=H_{0}+H_{int},\] with a new basis of stationary states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\), i.e. \[H|\psi_{i}\rangle=E_{i}|\psi_{i}\rangle,\ \ i=1,2.\] In the general case, the perturbation shifts the diagonal elements of \(H_{0}\) and it also mixes the states, i.e. \[H_{int}=\left(\begin{array}{cc}W_{11}&W_{12}\\ W_{21}&W_{22}\end{array}\right),\ \ \ \ \ W_{12}=W_{21}^{*}.\] (A.2) If \(H_{int}\) is real, then \(W_{12}=W_{21}=W\). According to the Stone-von Neumann theorem, all the representations of the canonical algebra for a given quantum mechanical system are equivalent, implying a _unitary change of basis_ between the eigenstates of \(H\) and the eigenstates of \(H_{0}\): \[\left(\begin{array}{c}|\psi_{a}\rangle\\ |\psi_{b}\rangle\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\psi_{1}\rangle\\ |\psi_{2}\rangle\end{array}\right).\] (A.3) In other words, the states \(|\psi_{a}\rangle\) and \(|\psi_{b}\rangle\) are coherent superpositions of the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\).
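This change of basis is easy to verify numerically; the following minimal Python sketch uses illustrative values of the levels and of the perturbation (the overall signs of the eigenvectors, and hence of the coefficients, are convention-dependent):

```python
import numpy as np

Ea, Eb = 1.0, 1.4                       # unperturbed levels (illustrative)
W11, W22, W = 0.05, 0.02, 0.10          # real perturbation, W12 = W21 = W

H = np.array([[Ea + W11, W],
              [W, Eb + W22]])
E, V = np.linalg.eigh(H)                # columns of V are |psi_1>, |psi_2>

# |psi_a> = (1, 0): its components in the perturbed eigenbasis are the
# mixing coefficients (cos(theta), sin(theta)) of (A.3), up to signs.
coeffs = V.T @ np.array([1.0, 0.0])
theta = 0.5 * np.arctan2(2 * W, (Eb + W22) - (Ea + W11))   # cf. (A.4); the
# orientation of theta depends on the ordering/sign conventions used.
print(coeffs, np.cos(theta), np.sin(theta))
```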
The same rotation matrix has to diagonalize the Hamiltonian \(H\) by a similarity transformation: \[UHU^{\dagger}=\left(\begin{array}{cc}E_{1}&0\\ 0&E_{2}\end{array}\right),\ \ \ \ U=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right),\ \ \ \ \tan 2\theta=\frac{2W}{E_{a}-E_{b}+W_{22}-W_{11}}.\] (A.4) The eigenvalues of \(H\) are \[E_{1,2}=\frac{1}{2}(E_{a}+E_{b}+W_{11}+W_{22})\mp\frac{1}{2}\sqrt{(E_{a}-E_{b}+W_{11}-W_{22})^{2}+4W^{2}}.\] (A.5) In the treatment of two-level systems, it is customary to write the Hamiltonian \(H=H_{0}+H_{int}\) as \[H=\frac{1}{2}\left(\begin{array}{cc}-\Delta&\Omega\\ \Omega&\Delta\end{array}\right)+E_{0}{\bf 1}_{2\times 2},\] (A.6) where \[E_{0}=\frac{1}{2}(E_{a}+E_{b}+W_{11}+W_{22}).\] (A.7) The variable \(\Delta=E_{a}-E_{b}+W_{11}-W_{22}\) is called the _detuning_ and \(\Omega=2W\) is called the _Rabi frequency_ of the system, by analogy with two-level atoms interacting with an electromagnetic field which couples the two levels. The mixing angle is then given by \[\tan 2\theta=\frac{\Omega}{\Delta},\] (A.8) which shows that resonance is obtained for zero detuning. Both \(\Delta\) and \(\Omega\) may depend on time, in which case the transitions between levels can be adiabatic or not. The eigenvalues (A.5) of the Hamiltonian \(H\) then read: \[E_{1,2}=E_{0}\mp\frac{1}{2}\sqrt{\Delta^{2}+\Omega^{2}}.\] (A.9) We consider a time-independent perturbation. The system is prepared in the stationary state \(|\psi_{a}\rangle\) and it evolves with \(H_{0}\). At \(t=t_{0}\), the interaction is turned on _suddenly (diabatically)_, such that the system does not transition slowly into a stationary state of \(H\), but remains for an instant in the state \(|\psi_{a}\rangle=|\psi(t_{0})\rangle\). Since the initial state \(|\psi_{a}\rangle\) is a _coherent superposition_ of the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\), after evolving with the Hamiltonian \(H\) for a period of time \(\Delta t\), the system will be in the state \[|\psi(t)\rangle=\cos\theta\,e^{-iE_{1}\Delta t}|\psi_{1}\rangle+\sin\theta\,e^{-iE_{2}\Delta t}|\psi_{2}\rangle.\] (A.10) At the time \(t=t_{0}+\Delta t\) we suddenly remove the interaction and determine the state of the system, which can be either of the \(H_{0}\) eigenstates, \(|\psi_{a}\rangle\) or \(|\psi_{b}\rangle\). The probability that the system has transitioned into the state \(|\psi_{b}\rangle\) is: \[{\cal P}_{|\psi_{a}\rangle\rightarrow|\psi_{b}\rangle}=|\langle\psi_{b}|e^{-iH\Delta t}|\psi_{a}\rangle|^{2}\sim\sin^{2}\left(\frac{\Delta E}{2}\Delta t\right),~{}~{}~{}~{}\Delta E=E_{2}-E_{1}=\sqrt{\Delta^{2}+\Omega^{2}}.\] (A.11) The question is whether the above simple picture can be applied to particle oscillations. Let us note the following facts regarding the two-level system: i) the states of the two bases are well defined as stationary states of either \(H_{0}\) or \(H\); ii) the coherent superposition of states (leading to interference and finally to oscillation) is achieved by _turning the interaction on/off suddenly_.

## Appendix B Particles of different masses and their Hilbert spaces

### Spinors

Consider \(a_{\lambda}({\bf p})\) and \(b_{\lambda}({\bf p})\) the annihilation operators for free spinor particles and antiparticles of mass \(M\) and helicity \(\lambda\), operating on a vacuum \(|0\rangle\), and \(A_{\lambda}({\bf p})\) and \(B_{\lambda}({\bf p})\) the corresponding operators for massive spinor particles of mass \(M+m\), operating on the vacuum \(|\Phi\rangle\).
All the operators satisfy canonical anti-commutation relations. The two sets of operators are connected by the Bogoliubov transformations: \[A_{\lambda}({\bf p}) = \alpha_{\rm p}a_{\lambda}({\bf p})+\beta_{\rm p}b_{\lambda}^{\dagger}(-{\bf p}),\] \[B_{\lambda}({\bf p}) = \alpha_{\rm p}b_{\lambda}({\bf p})-\beta_{\rm p}a_{\lambda}^{\dagger}(-{\bf p}),\] (B.1) where \[\alpha_{\rm p}=\sqrt{\frac{\Omega_{\rm p}+\omega_{\rm p}}{2\Omega_{\rm p}}},~{}~{}\beta_{\rm p}={\rm sgn}\,\lambda\,\sqrt{\frac{\Omega_{\rm p}-\omega_{\rm p}}{2\Omega_{\rm p}}},~{}~{}~{}|\alpha_{\rm p}|^{2}+|\beta_{\rm p}|^{2}=1,\] (B.2) and \[\omega_{\rm p}=\sqrt{{\rm p}^{2}+M^{2}},\quad\Omega_{\rm p}=\sqrt{{\rm p}^{2}+(M+m)^{2}}.\] (B.3) The vacua \(|0\rangle\) and \(|\Phi\rangle\) are orthogonal in the infinite momentum and infinite volume limit, \[\langle 0|\Phi\rangle=0,\] therefore the Fock spaces built on either of these two vacua are orthogonal as well. In other words, a particle state of a certain mass cannot be written as a linear superposition of particle states of different mass(es). If we take \(M=0\), then \[\alpha_{\rm p}=\sqrt{\frac{1}{2}\left(1+\frac{{\rm p}}{\Omega_{\rm p}}\right)},\ \ \beta_{\rm p}={\rm sgn}\,\lambda\,\sqrt{\frac{1}{2}\left(1-\frac{{\rm p}}{\Omega_{\rm p}}\right)}.\] (B.4) Now, \(A_{\lambda}^{\dagger}({\bf p})|\Phi\rangle\) represents a one-particle state with mass \(m\) and \(\alpha_{\rm p}a_{\lambda}^{\dagger}({\bf p})|0\rangle\) represents a massless one-particle state. They are rigorously orthogonal, so we cannot really compare them. But our purpose is to view the massive particle as a massless one, with the mass contribution to its energy given by a potential. The closest we can get to this interpretation in quantum field theory is to compare the states \(A_{\lambda}^{\dagger}({\bf p})|0\rangle\) and \(a_{\lambda}^{\dagger}({\bf p})|0\rangle\). Although \(A_{\lambda}^{\dagger}({\bf p})|0\rangle\) is genuinely massless, it still carries the memory of the free massive Hamiltonian \(H\). Applying the Hermitian conjugate of (B.1) to the massless vacuum \(|0\rangle\), we obtain: \[A_{\lambda}^{\dagger}({\bf p})|0\rangle=\alpha_{\rm p}a_{\lambda}^{\dagger}({\bf p})|0\rangle.\] (B.5) In the limit \(m/{\rm p}\ll 1\), we have \[A_{\lambda}^{\dagger}({\bf p})|0\rangle=\left(1-\frac{1}{8}\frac{m^{2}}{{\rm p}^{2}}\right)a_{\lambda}^{\dagger}({\bf p})|0\rangle,\] (B.6) therefore the two states coincide up to the order \(m/{\rm p}\). The difference of the order \({\cal O}(m^{2}/{\rm p}^{2})\) indicates that the energy of the massless state in a potential, \(A_{\lambda}^{\dagger}({\bf p})|0\rangle\), will be different from the energy of the bare massless state \(a_{\lambda}^{\dagger}({\bf p})|0\rangle\). In Sect. 3.1 we confirm that the energy difference is of this order by using a Feynman diagram approach followed by the Born approximation.

### Scalars

We consider \(a({\bf p})\) and \(b({\bf p})\) the canonical annihilation operators for free scalar particles and antiparticles of square mass \(M^{2}\), operating on a vacuum \(|0\rangle\), and \(A({\bf p})\) and \(B({\bf p})\) the corresponding operators for scalar particles of square mass \(M^{2}+\epsilon^{2}\), operating on the vacuum \(|\Phi\rangle\).
The two sets of operators are connected by the Bogoliubov transformations: \[A({\bf p}) = \alpha_{\rm p}a({\bf p})+\beta_{\rm p}b^{\dagger}(-{\bf p}),\] \[B({\bf p}) = \alpha_{\rm p}b({\bf p})-\beta_{\rm p}a^{\dagger}(-{\bf p}),\] (B.7) where \[\alpha_{\rm p}=\frac{\Omega_{\rm p}+\omega_{\rm p}}{2\sqrt{\omega_{\rm p}\Omega_{\rm p}}},\ \ \ \ \beta_{\rm p}=\frac{\Omega_{\rm p}-\omega_{\rm p}}{2\sqrt{\omega_{\rm p}\Omega_{\rm p}}},\ \ \ \ \ \ |\alpha_{\rm p}|^{2}-|\beta_{\rm p}|^{2}=1,\] (B.8) and \[\omega_{\rm p}=\sqrt{{\rm p}^{2}+M^{2}},\ \ \ \Omega_{\rm p}=\sqrt{{\rm p}^{2}+(M^{2}+\epsilon^{2})}.\] (B.9) As before, we apply the operatorial equation (B.7) to the vacuum of the particles with mass \(M\): \[A^{\dagger}({\bf p})|0\rangle=\alpha_{\rm p}a^{\dagger}({\bf p})|0\rangle.\] (B.10) In the limit \({\rm p}/M\ll 1,\ \epsilon^{2}/M^{2}\ll 1\), \[A^{\dagger}({\bf p})|0\rangle=\left(1-\frac{1}{16}\frac{\epsilon^{4}}{M^{4}}\right)a^{\dagger}({\bf p})|0\rangle.\] (B.11)

## Appendix C Comments on the flavour violation in the weak interactions of neutrinos

Here we show that, in the standard approach to oscillations, in which the flavour states are defined exclusively through the superposition (1.10), the flavour violation in weak interactions is actually quite significant, even for ultrarelativistic neutrinos. Consider the production of antineutrinos by beta decay: \[n\to p+\bar{\nu}_{\ell}+e^{-}.\] The matrix element is given by: \[\langle\bar{\nu}_{\ell},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{e}(x)|0\rangle C_{\alpha}(n,p,\ldots),\] (C.1) where \(C_{\alpha}(n,p,...)\) represents the contributions of the neutron and proton (depending on their fixed momenta and energies, as well as on the parameters of the leptons), including the coupling constant. To assess the flavour violation, we have to consider \(|\nu_{\ell}\rangle=|\nu_{\mu}\rangle\). The spirit of the traditional approach is that the mass eigenstates are produced and annihilated in the weak processes, therefore one has to use (1.5) and (1.10) to re-express the amplitude: \[\langle\bar{\nu}_{\mu},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{e}(x)|0\rangle C_{\alpha}(n,p,\ldots)\] \[=-\sin\theta\cos\theta\,C_{\alpha}(n,p,m_{1})\langle\bar{\nu}_{1},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{1}(x)|0\rangle\] \[+\sin\theta\cos\theta\,C_{\alpha}(n,p,m_{2})\langle\bar{\nu}_{2},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{2}(x)|0\rangle.\] (C.2) We do not use momentum identifiers for the states because the conservation of energy and momentum would make them cumbersome. In any case, it is assumed that the initial state is well defined kinematically; the momenta of the final states are a common source of confusion. This formula is customarily used to justify that flavour violation is practically nil in weak interactions. The argument is based on taking the ultrarelativistic limit in the amplitude formula (C.2), in which case the factors depending on the masses of the antineutrinos become vanishingly small and (C.2) becomes zero because of the opposite signs of the two terms. However, this argument is misleading. Formula (C.2) expresses the sum of two amplitudes in which the final particles differ.
According to the rules of quantum field theory, _the two amplitudes do not interfere (their relative phase is irrelevant) and, in order to calculate the transition probability, we have to square each amplitude individually and add the results_, namely: \[{\cal P}_{n\to p+\bar{\nu}_{\mu}+e^{-}} = \frac{1}{4}\sin^{2}2\theta\,|C_{\alpha}(n,p,m_{1})|^{2}|\langle\bar{\nu}_{1},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{1}(x)|0\rangle|^{2}\] (C.3) \[+ \frac{1}{4}\sin^{2}2\theta\,|C_{\alpha}(n,p,m_{2})|^{2}|\langle\bar{\nu}_{2},e^{-}|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{2}(x)|0\rangle|^{2}.\] The ultrarelativistic limit has to be taken in the probability formula (C.3). Neglecting all the powers of \(m_{i}/{\rm p},\ i=1,2\), where \({\rm p}\) is the momentum of the antineutrino, we obtain: \[{\cal P}_{n\to p+\bar{\nu}_{\mu}+e^{-}} = \frac{1}{2}\sin^{2}2\theta\,{\cal P}^{SM}_{n\to p+\bar{\nu}_{e}+e^{-}},\] (C.4) where \({\cal P}^{SM}_{n\to p+\bar{\nu}_{e}+e^{-}}\) is the Standard Model probability of the flavour-conserving beta decay with the same kinematic configuration. Thus we find a sizeable probability of ultrarelativistic muon antineutrino production in the beta decay, and this probability does not depend on the ratio between the mass and the momentum of the neutrinos. It should consequently give a measurable "zero-length conversion" probability, which has actually never been observed experimentally. The situation is actually even more implausible because, if we calculate in the same manner as above the probability of the beta decay with electron antineutrinos in the ultrarelativistic limit, we find the expected result: \[{\cal P}_{n\to p+\bar{\nu}_{e}+e^{-}} = {\cal P}^{SM}_{n\to p+\bar{\nu}_{e}+e^{-}}.\] (C.5) Putting together (C.4) and (C.5) we come to the conclusion that, by taking into account the mixing, the probability of neutron decay into proton, electron and antineutrino _increases_ compared to the same probability calculated in the Standard Model (with zero neutrino masses). The bizarre result is due to the fact that the apparent orthogonality of the states \(|\nu_{e}\rangle\) and \(|\nu_{\mu}\rangle\) in (1.10) is irrelevant in the decay probability computation: the flavour neutrino state could have been \(|\nu_{\mu}\rangle=e^{i\delta}\sin\theta|\nu_{1}\rangle+\cos\theta|\nu_{2}\rangle\), with arbitrary real \(\delta\), and the result would have been the same. This reflects the fact that coherence between states with different masses is an impossibility in quantum field theory. Assuming that we take the mass eigenstates as asymptotic states, how should one calculate the beta decay probability in a sensible manner? The answer is straightforward: by using the Lagrangian (1.9) and the principles of quantum field theory (see also [41, 42], where the _incoherent decay_ of pions into different massive neutrino states is acknowledged and exploited for experimental signatures).
According to the Lagrangian (1.9), two independent processes contribute to the beta decay: \[n\to p+\bar{\nu}_{1}+e^{-}\] \[n\to p+\bar{\nu}_{2}+e^{-}.\] (C.6) With the previous notations, their transition amplitudes are: \[\langle\bar{\nu}_{1},e^{-}({\bf k})|\bar{e}(x)\gamma^{\alpha}(1-\gamma _{5})\nu_{e}(x)|0\rangle C_{\alpha}(n,p,\ldots)\] \[=\cos\theta\,C_{\alpha}(n,p,m_{1})\langle\bar{\nu}_{1},e^{-}({ \bf k})|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{1}(x)|0\rangle\] \[\langle\bar{\nu}_{2},e^{-}({\bf k})|\bar{e}(x)\gamma^{\alpha}(1- \gamma_{5})\nu_{e}(x)|0\rangle C_{\alpha}(n,p,\ldots)\] \[=\sin\theta\,C_{\alpha}(n,p,m_{2})\langle\bar{\nu}_{2},e^{-}({\bf k })|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{2}(x)|0\rangle,\] (C.7) and the probabilities: \[{\cal P}_{n\to p+\bar{\nu}_{1}+e^{-}} = \cos^{2}\theta\,|C_{\alpha}(n,p,m_{1})|^{2}|\langle\bar{\nu}_{1}, e^{-}({\bf k})|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{1}(x)|0\rangle|^{2}\] \[{\cal P}_{n\to p+\bar{\nu}_{2}+e^{-}} = \sin^{2}\theta\,|C_{\alpha}(n,p,m_{2})|^{2}|\langle\bar{\nu}_{2}, e^{-}({\bf k})|\bar{e}(x)\gamma^{\alpha}(1-\gamma_{5})\nu_{2}(x)|0\rangle|^{2}.\] (C.8) In the ultrarelativistic antineutrino limit, neglecting all the powers of \(m_{i}/{\rm p}\), we obtain: \[{\cal P}_{n\to p+antineutrinos+e^{-}} = {\cal P}_{n\to p+\bar{\nu}_{e}+e^{-}}^{SM},\] (C.9) which is a sound result. Nevertheless, in this calculation flavour neutrino states do not enter in any way.
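For completeness, the trigonometric bookkeeping behind (C.3)-(C.4) can be checked symbolically; a minimal sketch:

```python
import sympy as sp

th = sp.symbols('theta', real=True)

# Each of the two incoherent amplitudes in (C.3) carries a coefficient
# sin(theta)*cos(theta); squaring and adding gives the factor in (C.4):
total = 2 * (sp.sin(th) * sp.cos(th))**2
print(sp.simplify(total - sp.Rational(1, 2) * sp.sin(2 * th)**2))  # -> 0
```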
2308.02649
On $p$-refined Friedberg-Jacquet integrals and the classical symplectic locus in the $\mathrm{GL}_{2n}$ eigenvariety
Friedberg--Jacquet proved that if $\pi$ is a cuspidal automorphic representation of $\mathrm{GL}_{2n}(\mathbb{A})$, $\pi$ is a functorial transfer from $\mathrm{GSpin}_{2n+1}$ if and only if a global zeta integral $Z_H$ over $H = \mathrm{GL}_n \times \mathrm{GL}_n$ is non-vanishing on $\pi$. We conjecture a $p$-refined analogue: that any $P$-parahoric $p$-refinement $\tilde\pi^P$ is a functorial transfer from $\mathrm{GSpin}_{2n+1}$ if and only if a $P$-twisted version of $Z_H$ is non-vanishing on the $\tilde\pi^P$-eigenspace in $\pi$. This twisted $Z_H$ appears in all constructions of $p$-adic $L$-functions via Shalika models. We prove various results towards the conjecture by connecting it to the study of classical symplectic families in the $\mathrm{GL}_{2n}$ eigenvariety. If $\pi$ is spherical at $p$, there are $(2n)!$ attached $p$-refinements to Iwahori level; we conjecture the dimensions of such families through each of these, prove the upper bound unconditionally, and (by constructing the families) prove the lower bound under a non-critical slope assumption. For example, for $\mathrm{GL}_4$, we conjecture that (modulo 1-dimensional trivial variation) 8 refinements vary in 2-dimensional symplectic families, 8 in 1-dimensional symplectic families, and prove 8 do not vary in any symplectic family.
Daniel Barrera Salazar, Andrew Graham, Chris Williams
2023-08-04T18:11:00Z
http://arxiv.org/abs/2308.02649v1
On \(p\)-refined Friedberg-Jacquet integrals and the classical symplectic locus in the \(\mathrm{GL}_{2n}\) eigenvariety ###### Abstract Friedberg-Jacquet proved that if \(\pi\) is a cuspidal automorphic representation of \(\mathrm{GL}_{2n}(\mathbf{A})\), \(\pi\) is a functorial transfer from \(\mathrm{GSpin}_{2n+1}\) if and only if a global zeta integral \(Z_{H}\) over \(H=\mathrm{GL}_{n}\times\mathrm{GL}_{n}\) is non-vanishing on \(\pi\). We conjecture a \(p\)-refined analogue: that any \(P\)-parahoric \(p\)-refinement \(\tilde{\pi}^{P}\) is a functorial transfer from \(\mathrm{GSpin}_{2n+1}\) if and only if a \(P\)-twisted version of \(Z_{H}\) is non-vanishing on the \(\tilde{\pi}^{P}\)-eigenspace in \(\pi\). This twisted \(Z_{H}\) appears in all constructions of \(p\)-adic \(L\)-functions via Shalika models. We prove various results towards the conjecture by connecting it to the study of classical symplectic families in the \(\mathrm{GL}_{2n}\) eigenvariety. If \(\pi\) is spherical at \(p\), there are \((2n)!\) attached \(p\)-refinements to Iwahori level; we conjecture the dimensions of such families through each of these, prove the upper bound unconditionally, and (by constructing the families) prove the lower bound under a non-critical slope assumption. For example, for \(\mathrm{GL}_{4}\), we conjecture that (modulo 1-dimensional trivial variation) 8 refinements vary in 2-dimensional symplectic families, 8 in 1-dimensional symplectic families, and prove 8 do not vary in any symplectic family. ###### Contents * 1 Introduction * I \(P\)-spin refinements * 2 Structure theory and parahoric \(p\)-refinements * 3 \(P\)-spin refinements * II Dimensions of Symplectic Components * 4 The symplectic locus in the eigenvariety * 5 Weight obstructions to symplectic families * 6 Existence of \(P\)-spin families * 7 Explicit examples for \(\mathrm{GL}_{4}\) * III \(p\)-refined Friedberg-Jacquet Integrals * 8 \(p\)-refined Friedberg-Jacquet integrals: Statements The period integrals in (1) appear in the Gan-Gross-Prasad conjectures and are closely connected to the relative Langlands program. The families in (2) have been centrally important in number theory and arithmetic geometry for decades, essential to breakthroughs in the Langlands program (through modularity theorems, constructions of Galois representations, recent instances of Langlands functoriality, and proofs of local-global compatibility) and Iwasawa theory (in work on the Birch-Swinnerton-Dyer, Bloch-Kato and Iwasawa main conjectures). We consider these questions when \(G=\mathrm{GL}_{2n}\) and \(H=\mathrm{GL}_{n}\times\mathrm{GL}_{n}\). In particular: * We prove upper bounds on the dimensions of symplectic classical families for \(G\). This shows that any systematic congruences between Hecke eigensystems in symplectic representations are induced (by Langlands functoriality) from congruences for \(\mathrm{GSpin}_{2n+1}\), providing evidence for a folklore expectation on classical families in eigenvarieties. * We show that non-vanishing of twisted period integrals for an eigenform for \(G\) implies existence of a symplectic family of 'large' dimension through the corresponding eigensystem in the eigenvariety. We use our results on symplectic families to prove this forces the eigensystem to arise from Langlands functoriality from \(\mathrm{GSpin}_{2n+1}\). 
### Classical families A system \(\alpha\) of Hecke eigenvalues for \(G\) is _classical (cuspidal)_ if it appears in a (cuspidal) automorphic representation \(\pi\) of \(G(\mathbf{A})\). A _classical (cuspidal) family_ is any subspace of the eigenvariety in which the classical (cuspidal) points are Zariski-dense. A fundamental question is: **Question 1**.: _In how many dimensions does \(\alpha\) vary in a classical cuspidal family?_ In other words: let \(\lambda\) be the weight of \(\alpha\). In which weight directions can one deform \(\lambda\) and always find classical cuspidal eigensystems \(\alpha_{m}\equiv\alpha\,(\mathrm{mod}\,p^{m})\) for arbitrarily large \(m\)? A folklore expectation, described below, says every non-trivial classical family for \(\mathrm{GL}_{N}\) arises from some form of self-duality. A family that is essentially self-dual on \(\mathrm{GL}_{N}\) is either _orthogonal_ or _symplectic_ (containing a Zariski-dense set of orthogonal or symplectic points). In this paper, we consider Question 1 for symplectic families of \(\mathrm{GL}_{N}(\mathbf{A})\). This forces \(N\) to be even (see [1]), so let \(G=\mathrm{GL}_{2n}\), and let \(\alpha\) be attached to a regular algebraic cuspidal automorphic representation (RACAR) \(\pi\) of \(\mathrm{GL}_{2n}(\mathbf{A})\) that admits a Shalika model (equivalent to \(\pi\) being symplectic). We assume that \(\pi_{p}\) is unramified, and the Satake parameter of \(\pi_{p}\) is regular, in which case there are \((2n)!\) possible \(p\)-refinements \(\tilde{\pi}=(\pi,\alpha)\) of \(\pi\). Here a \(p\)_-refinement_ is a Hecke eigensystem \(\alpha\) appearing in the Iwahori-invariants of \(\pi_{p}\). In this paper, we define a stratification on the \((2n)!\) \(p\)-refinements \(\alpha\) in terms of parabolic subgroups of \(\mathrm{GL}_{2n}\), and we predict (in Conjecture 4.12) that the dimension of any symplectic family through a given \(\alpha\) depends on its position in the stratification. We prove: * the upper bound on the dimension unconditionally; * and the lower bound when \(\alpha\) has non-critical slope. We also give theoretical justification for the lower bound in general. We find that (modulo trivial variation, coming from twists by the norm) there can exist such symplectic families of exact dimension \(d\) for any \(d=0,1,...,n\). This seems striking given that every component of the eigenvariety through any such \(\alpha\) conjecturally has dimension \(n\); so there should be classical families sitting inside 'generically non-classical' components of the eigenvariety. **Example**.: For \(\mathrm{GL}_{4}\), there are \(24\) \(p\)-refinements \(\tilde{\pi}\). By [1, Thm. 1.1.5], every irreducible cuspidal component of the \(\mathrm{GL}_{4}\)-eigenvariety is \(2\)-dimensional (modulo trivial variation). Then: * \(8\) of the \(\tilde{\pi}\) are essentially self-dual, and should vary in \(2\)-dimensional symplectic families, each of which is then an irreducible component of the eigenvariety. * for \(8\) of them, we prove they do not vary in _any_ symplectic family. In any component through these points in the eigenvariety, the classical points should be discrete. * \(8\) of them should vary in a \(1\)-dimensional symplectic family, sitting in a \(2\)-dimensional component of the eigenvariety, which should be generically non-classical. In §7 we give explicit examples of \((\pi,\alpha)\) in each of these cases, showing that 'generically non-symplectic but with a positive-dimensional symplectic locus' cases do indeed occur. 
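As an independent sanity check of this 8/8/8 split, one can enumerate the 24 refinements directly. The sketch below is ours, not part of the paper's arguments: it uses the combinatorial \(r\)-spin criterion made precise in §3, under which a refinement corresponds to a permutation \(\sigma\in\mathrm{S}_{4}\) and the predicted non-trivial symplectic variation through it has dimension \(\#\{1\leqslant r\leqslant n:\sigma\text{ is }r\text{-spin}\}\).

```python
from itertools import permutations
from collections import Counter

def r_spin(sigma, r, n):
    # r-spin criterion (cf. Section 3): {sigma(1),...,sigma(r)} and
    # {sigma(2n+1-r),...,sigma(2n)} pair off into pairs summing to 2n+1.
    first = {sigma[i] for i in range(r)}
    last = {sigma[i] for i in range(2 * n - r, 2 * n)}
    return all(2 * n + 1 - a in last for a in first)

n = 2  # GL_4: Iwahori p-refinements of a regular p-spherical pi <-> S_4
dims = Counter(
    sum(1 for r in range(1, n + 1) if r_spin(sigma, r, n))
    for sigma in permutations(range(1, 2 * n + 1))
)
print(dims)  # Counter({2: 8, 1: 8, 0: 8})
```

Running this confirms that 8 permutations admit 2 dimensions of (non-trivial) symplectic variation, 8 admit 1 dimension, and 8 admit none.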
### Previous work on classical families To put our results into context, we summarise the previous work on Question 1, which broadly falls into two cases: (I) \(G(\mathbf{R})\) admits discrete series (including e.g. if \(G\) forms part of a Shimura datum); (II) \(G(\mathbf{R})\) does _not_ admit discrete series. In case (I), Question 1 is fairly well-understood: Urban [20] has shown that a (non-critical) cohomological cuspidal \(\alpha\) _always_ varies 'maximally', in all possible weight directions. This generalises the theory of Hida/Coleman families for modular forms (\(G=\operatorname{GL}_{2}\)). However, many fundamental cases - e.g. \(\operatorname{GL}_{n}\) for \(n\geqslant 3\), and \(\operatorname{GL}_{2}\) over non-totally-real fields - are case (II), where our understanding of Question 1 is extremely poor. Ash-Pollack-Stevens [1] and Calegari-Mazur [13] considered the cases of \(\operatorname{GL}_{3}\) and \(\operatorname{Res}_{F/\mathbf{Q}}\operatorname{GL}_{2}\) respectively, for \(F\) an imaginary quadratic field, and conjectured that: (\(\dagger\)) _For \(G=\operatorname{GL}_{3}\) or \(\operatorname{Res}_{F/\mathbf{Q}}\operatorname{GL}_{2}\), \(\alpha\) varies in a positive-dimensional classical family if and only if \(\alpha\) is essentially self-dual._ In [16], Xiang has studied one direction of (\(\dagger\)) more generally, proving that if \(\alpha\) is essentially self-dual on \(\operatorname{GL}_{n}\) (that is, both \(\pi\) _and_ \(\alpha\) are essentially self-dual) then \(\alpha\) varies in a classical family in all 'self-dual/pure' directions in weight space. Since every RACAR, hence every \(\alpha\), has pure weight, this variation is 'maximal' in the strongest possible sense. One goal of this paper is to find analogues of (\(\dagger\)) in higher-dimensional settings, where the picture is more subtle. Even when \(\pi\) itself is essentially self-dual, it admits non-essentially-self-dual refinements \(\alpha\), and we show that some of these can be varied in positive-dimensional classical families of smaller dimension. ### Philosophy on classical families Case (I) groups \(G\) yield many classical families. A folklore expectation predicts this accounts for _all_ classical families, in the sense that every classical family is a \(p\)-adic Langlands transfer of a case (I) family. For example, conjecturally: * For \(\operatorname{GL}_{3}\), all classical families are twists of symmetric square families for \(\operatorname{GL}_{2}\); * For \(\operatorname{Res}_{F/\mathbf{Q}}\operatorname{GL}_{2}\), all classical families are twists of base-change families for \(\operatorname{GL}_{2}\), or CM transfers of families for \(\operatorname{Res}_{F^{\prime}/F}\operatorname{GL}_{1}\), for \(F^{\prime}/F\) quadratic. Before we describe our results precisely, let us explain why they fit strongly into this philosophy. We hesitantly suggest they provide further evidence towards it. Any RACAR \(\pi\) of \(\operatorname{GL}_{2n}(\mathbf{A})\) that admits a Shalika model is essentially self-dual, and a Langlands transfer of some RACAR \(\Pi\) for \(\operatorname{GSpin}_{2n+1}(\mathbf{A})\). Note that \(\mathcal{G}:=\operatorname{GSpin}_{2n+1}\) is a case (I) group. There are \(2^{n}n!\) Iwahori \(p\)-refinements of \(\Pi\). By Urban's case (I) theorem, each of these varies in a maximal family over weight space (of dimension \(n\), modulo trivial variation). 
In the style of Chenevier, each of these families should admit a transfer to \(\operatorname{GL}_{2n}\) interpolating Langlands functoriality on classical points. These \(n\)-dimensional classical \(\operatorname{GL}_{2n}\)-families were constructed and studied in [1], and fall into the case studied by Xiang, corresponding exactly to the essentially self-dual eigensystems in \(\pi\). This only accounts, however, for \(2^{n}n!\) of the \((2n)!\) possible \(p\)-refinements of \(\pi\); even for \(\operatorname{GL}_{6}\) this is only 48 out of 720. To look for classical families through the other refinements, we consider _parabolic_ families for \(\mathcal{G}\), as constructed and studied, for example, in [1, 2]. For any standard parabolic \(\mathcal{P}\subset\mathcal{G}\), one can study \(\mathcal{P}\)-parahoric refinements of \(\Pi\). We show that for every refinement \(\alpha\) of \(\pi\), there exists a unique smallest parabolic \(\mathcal{P}\subset\mathcal{G}\) such that \(\alpha\) 'is a functorial transfer of a \(\mathcal{P}\)-refinement \(\alpha^{\mathcal{G},\mathcal{P}}\) of \(\Pi\)'. Under a natural correspondence, \(\mathcal{P}\) corresponds to a unique 'spin' parabolic \(P\subset G\), and we call \(\alpha\) an _optimally \(P\)-spin refinement_. If \(B\) is the corresponding Borel, the optimally \(B\)-spin refinements are exactly the \(2^{n}n!\) essentially self-dual ones studied in [1, 16]. All of this is defined in §3, where we give Weyl group, Hecke algebra, and combinatorial definitions of being \(P\)-spin, proving they are all equivalent. Let \(\alpha\) be an optimally \(P\)-spin refinement with associated spin eigensystem \(\alpha^{\mathcal{G},\mathcal{P}}\). Under a non-criticality assumption, [10] shows \(\alpha^{\mathcal{G},\mathcal{P}}\) varies in a family in the \(\mathcal{P}\)-parabolic \(\mathcal{G}\)-eigenvariety over a smaller-dimensional weight space. Again, conceptually, this family should admit a transfer to the (Iwahoric) \(G\)-eigenvariety interpolating Langlands functoriality on classical points. This would produce a classical symplectic family in the \(G\)-eigenvariety through \(\alpha\), of some smaller dimension depending on \(\mathcal{P}\) (hence \(P\)). It is not clear how one should construct these transfer maps in general. There is a natural map of (abstract) Hecke algebras \(\jmath^{\vee}:\mathcal{H}^{G}\to\mathcal{H}^{\mathcal{G}}\) at Iwahoric level (see (11)), which should induce a map \[[\text{Iwahoric-$\mathcal{G}$-eigenvariety}]\longrightarrow[\text{Iwahoric-$G$-eigenvariety}].\] However, one needs detailed automorphic information about classical points in the \(\mathcal{G}\)-eigenvariety to control this, and in any case this recovers families already known to exist by [1, 16]. At parahoric level the situation is worse: a transfer map \[[\mathcal{P}\text{-parahoric-$\mathcal{G}$-eigenvariety}]\longrightarrow[\text{Iwahoric-$G$-eigenvariety}]\] should be induced from a map \(\jmath^{\vee}_{\mathcal{P}}:\mathcal{H}^{G}\to\mathcal{H}^{\mathcal{G},\mathcal{P}}\) on abstract Hecke algebras, but now there is no natural map, as \(\mathcal{H}^{\mathcal{G},\mathcal{P}}\subsetneq\mathcal{H}^{\mathcal{G}}\), but \(\jmath^{\vee}\) is surjective\({}^{1}\). To construct even a candidate \(\jmath^{\vee}_{\mathcal{P}}\), it seems necessary to _presuppose_ the existence of the family for \(G\) one wants to construct. As such, we do not pursue this approach to families in this paper. 
Footnote 1: On the Galois representation side, this should correspond to the problem of consistently choosing triangulations of the \((\varphi,\Gamma)\)-modules attached to every classical point in a paraboline family. ### Our results on symplectic families To a spin parabolic \(P\), in Definition 3.11 we associate a subset \(X_{P}\subset\{1,...,n\}\). Here \(X_{B}=\{1,...,n\}\) and \(X_{G}=\varnothing\). Let \(\pi\) be a symplectic RACAR, and let \(\tilde{\pi}\) be an optimally \(P\)-spin refinement. We prove: **Theorem A**.: 1. _Any symplectic family_ \(\mathscr{C}\) _through_ \(\tilde{\pi}\) _has dimension at most_ \(\#X_{P}+1\)_._ 2. _When_ \(\alpha\) _has non-critical slope and regular weight, there exists a unique symplectic family through_ \(\tilde{\pi}\)_, of dimension exactly_ \(\#X_{P}+1\)_._ (Here we include, as in the main text, the \(1\)-dimensional trivial variation). In particular, if \(\tilde{\pi}\) is optimally \(G\)-spin, then \(\tilde{\pi}\) is 'symplectic-rigid', varying in _no_ non-trivial symplectic family. There are, for example, \(8\) such refinements in the \(\mathrm{GL}_{4}\) case. Part (i) is Theorem 4.9, which actually says more: that the weight support of such a family must lie in a \(P\)-parahoric weight space, which has dimension \(\#X_{P}+1\). To prove this, we show first that every classical point in \(\mathscr{C}\) is also optimally \(P\)-spin, and then obtain obstructions to the existence of optimally \(P\)-spin families varying outside the \(P\)-parabolic weight space. Part (ii) is Theorem 4.10. We show further that this unique component is étale over its image in weight space. To construct these families, we use a 'refinement-switching' argument to move between points on the \(\mathrm{GL}_{2n}\)-eigenvariety attached to a single \(\pi\). The proof highlights interdependencies between the symplectic families through the different \(p\)-refinements, with implications for a hypothetical 'infinite fern' construction for \(\mathrm{GL}_{2n}\) (see Remark 6.13). We remark how Theorem A fits into the philosophy above. Writing \(\mathscr{E}\) for the \(\mathrm{GL}_{2n}\) eigenvariety of some fixed level, we expect there to be infinitely many closed embeddings \(\{\iota_{i}:\mathscr{C}_{i}\hookrightarrow\mathscr{E}:i\in I\}\), where the \(\mathscr{C}_{i}\) are classical families in parabolic \(\mathrm{GSpin}_{2n+1}\) eigenvarieties. Each \(\mathscr{C}_{i}\) is flat over the relevant parabolic weight space, and cannot be varied in higher dimension at the level of \(\mathrm{GSpin}_{2n+1}\) eigensystems. However, \(\mathscr{E}\) varies over a higher-dimensional weight space, and in general \(\iota_{i}(\mathscr{C}_{i})\) will sit properly inside some larger irreducible component of \(\mathscr{E}\). Theorem A says that this irreducible component cannot have any further symplectic variation; that is, the subspaces \(\iota_{i}(\mathscr{C}_{i})\) of \(\mathscr{E}\) cannot be assembled together into any classical family of higher dimension. In other words, all classical symplectic variation, and systematic congruences, should be accounted for by families in (parabolic) \(\mathrm{GSpin}_{2n+1}\) eigenvarieties. This is predicted by our guiding philosophy on classical families in the eigenvariety, suggesting our results provide some further evidence for it. 
Indeed, motivated by the above theorem and the guiding philosophy, we conjecture: **Conjecture B**.: _Every symplectic family through \(\tilde{\pi}\) is the transfer of a classical parabolic family for \(\operatorname{GSpin}_{2n+1}\) and has dimension \(\#X_{P}+1\)._ In §7, we give explicit examples for \(\operatorname{GL}_{4}\) illustrating Theorem A and Conjecture B. ### Non-vanishing of twisted period integrals We give an application to the study of non-vanishing of period integrals. Let \(\pi\) be a RACAR of \(G(\mathbf{A})\), and let \(H=\operatorname{GL}_{n}\times\operatorname{GL}_{n}\subset G\). If \(\chi\) is an algebraic Hecke character and \(\varphi\in\pi\), then in (8.1) we define an attached global period integral for \(H\subset G\), denoted \(Z_{H}(\varphi,\chi,s)\). The same kind of period integral appears in the GGP conjectures, and is related to the relative Langlands program. A result of Friedberg-Jacquet [10] says that for any \(s\in\mathbf{C}\), the following are equivalent: 1. There exists \(\varphi\in\pi\) such that \(Z_{H}(\varphi,\chi,s+1/2)\neq 0\); 2. \(\pi\) is a functorial transfer of some \(\Pi\) on \(\operatorname{GSpin}_{2n+1}(\mathbf{A})\), and \(L(\pi\times\chi,s+1/2)\neq 0\). This is related to the relative Langlands program [10]; \(G/H\) is a spherical variety, and \(Z_{H}\) is an \(H\)-period integral (that appears, for example, in the GGP conjectures in related settings). This phenomenon is also explained in great generality in [11, p.174]. We propose a \(p\)-refined analogue of this. Let \(P\subsetneq G\) be a proper spin parabolic, let \(\beta\geqslant 1\), and let \(ut_{P}^{\beta}\in G(\mathbf{Q}_{p})\) be the element defined in Notation 8.2. Here \(u\) is a representative for the open orbit of the action of \(B\) on \(G/H\) and \(t_{P}\) defines the Hecke operator at \(P\). Let \(\tilde{\pi}\) be a \(P\)-parahoric \(p\)-refinement of \(\pi\). **Conjecture C**.: _Suppose \(\chi\) has finite order and conductor \(p^{\beta}>1\). For any \(s\in\mathbf{C}\), the following are equivalent:_ 1. _There exists an eigenvector_ \(\varphi\in\tilde{\pi}^{P}\) _such that_ \(Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s+1/2)\neq 0\)_._ 2. _All of the following hold:_ * \(\tilde{\pi}^{P}\) _is a functorial transfer of some_ \(\mathcal{P}\)_-refined_ \(\tilde{\Pi}^{\mathcal{P}}\) _on_ \(\operatorname{GSpin}_{2n+1}(\mathbf{A})\)_,_ * \(L(\pi\times\chi,s+1/2)\neq 0\)_,_ * \(P\) _is contained in the_ \((n,n)\)_-parabolic (in the sense of Notation 2.6)._ The quantity \(Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s+1/2)\) appears in constructions of \(p\)-adic \(L\)-functions via Shalika models [12, BDW, BDG\({}^{+}\), Wil]. Conjecture C highlights a close relationship between the \(P\)-spin conditions defined in this paper, and settings where we can expect to construct non-zero \(p\)-adic \(L\)-functions via Shalika models. In this light, the requirement in (2) that \(P\) is contained in the \((n,n)\)-parabolic \(Q\) is natural; the Panchishkin condition [10] predicts that to be able to attach a \(p\)-adic \(L\)-function to \(\tilde{\pi}^{P}\), one requires \(P\subset Q\). As evidence towards this conjecture, we use Theorem A to prove: **Theorem D**.: 1. _(2)_ \(\Rightarrow\) _(1) holds in Conjecture C._ 2. _Suppose_ \(\pi\) _has regular weight and there is a non-critical slope further refinement_ \(\tilde{\pi}\) _of_ \(\tilde{\pi}^{P}\) _to Iwahori level. 
Then (1)_ \(\Rightarrow\) _(2) holds in Conjecture C._ In particular, the conjecture holds in full for a large class of \(\tilde{\pi}^{P}\). We actually show (ii) (and deduce the full conjecture) under weaker assumptions on \(\tilde{\pi}^{P}\), which we cautiously imagine could hold for _all_ \(\tilde{\pi}^{P}\); see Theorem 8.6 and Remarks 8.7. We remark here that whilst Conjecture C is stated globally, when combined with Friedberg-Jacquet's original result it can be reduced to a local statement at \(p\) (given in Conjecture 8.10). Our proof of Theorem D(i) is purely local: given (2), we directly exhibit an eigenvector satisfying (1) using methods developed in [12]. To prove (ii), we also deploy global methods, using ideas from [12, BDG\({}^{+}\)] to show that if (1) holds, then we can construct a symplectic family through \(\tilde{\pi}\) over the \(P\)-parahoric weight space. By (the stronger form of) Theorem A(i), this forces \(\tilde{\pi}^{P}\) to be \(P\)-spin, hence \(\tilde{\pi}^{P}\) is a functorial transfer. We expect that this relationship between non-vanishing of twisted period integrals attached to a \(p\)-refinement, and the refinement being a functorial transfer, should be true much more generally. In future work with Lee, we hope to treat the case of twisted Flicker-Rallis integrals for \(\operatorname{GL}_{n}\) over a CM field, showing non-vanishing implies transfer from a unitary group. Acknowledgements. We thank our co-authors on [BDG\({}^{+}\)], Mladen Dimitrov and Andrei Jorza, for many stimulating discussions whilst preparing our earlier joint paper. We also thank Valentin Hernandez for interesting discussions on Zariski-density of crystalline points and the infinite fern. AG was supported by ERC-2018-COG-818856-HiCoShiVa. CW was supported by EPSRC Postdoctoral Fellowship EP/T001615/1. DBS was supported by FONDECYT 11201025. ### Set-up and notation Let \(n\geqslant 1\) and let \(G:=\mathrm{GL}_{2n}\). We write \(B=B_{2n}\) for the Borel subgroup of upper triangular matrices, \(\overline{B}=\overline{B}_{2n}\) for the opposite Borel of lower triangular matrices and \(T=T_{2n}\) for the maximal split torus of diagonal matrices. Let \(\delta_{B}\) be the standard modulus character. If \(\pi\) is a regular algebraic symplectic cuspidal automorphic representation (RASCAR), then \(\pi\) is essentially self-dual (i.e. there exists a Hecke character \(\eta\) such that \(\pi^{\vee}\cong\pi\otimes\eta^{-1}\)). Moreover, by [AS06, FJ93], \(\pi\) is a functorial transfer of a RACAR \(\Pi\) on \(\mathrm{GSpin}_{2n+1}(\mathbf{A})\) and admits an \((\eta,\psi)\)-Shalika model, where \(\psi\) is the standard additive character of \(\mathbf{Q}\backslash\mathbf{A}\). ## Part I \(P\)-spin refinements ### 2 Structure theory and parahoric \(p\)-refinements #### Root systems and spin parabolics Our study of 'spin' refinements is rooted in the structure theory of \(\mathrm{GL}_{2n}\) and \(\mathrm{GSpin}_{2n+1}\). We recall the following from [BDG\({}^{+}\), §6]. The spaces of algebraic characters/cocharacters of the torus \(T\subset G=\mathrm{GL}_{2n}\) are \[X=\mathbf{Z}e_{1}\oplus\mathbf{Z}e_{2}\oplus\cdots\oplus\mathbf{Z}e_{2n},\quad X^{\vee}=\mathbf{Z}e_{1}^{*}\oplus\mathbf{Z}e_{2}^{*}\oplus\cdots\oplus\mathbf{Z}e_{2n}^{*}.\] The root system for \(G\) is \(A_{2n-1}\), with roots \(R=\{\pm(e_{i}-e_{j}):1\leqslant i<j\leqslant 2n\}\), positive roots \(\{e_{i}-e_{j}:i<j\}\), and simple roots \(\Delta_{G}=\{a_{i}:=e_{i}-e_{i+1}:i=1,...,2n-1\}\). 
The Weyl group \(\mathcal{W}_{G}=\mathrm{S}_{2n}\) acts by permuting the \(e_{i}\). We set this up so that \(\sigma\in\mathcal{W}_{G}\) sends \(e_{i}\) to \(e_{\sigma^{-1}(i)}\), hence \(\sigma\) acts on a character \(\mu=(\mu_{1},...,\mu_{2n})\in X\) as \(\mu^{\sigma}=(\mu_{\sigma(1)},...,\mu_{\sigma(2n)})\). Let \(X_{0}\subset X\) be the space of _pure characters_\(X_{0}=\{\lambda\in X:\exists\mathsf{w}(\lambda)\in\mathbf{Z}\text{ such that }\lambda_{i}+\lambda_{2n-i+1}=\mathsf{w}(\lambda)\ \forall 1 \leqslant i\leqslant n\}\), and let \[\mathcal{W}_{G}^{0}:=\{\sigma\in\mathcal{W}_{G}:\sigma(X_{0})\subset X_{0} \}\subset\mathcal{W}_{G}. \tag{2.1}\] There is a splitting \(\mathcal{W}_{G}^{0}=\{\pm 1\}^{n}\rtimes\mathrm{S}_{n}\), where: * for \(1\leqslant i\leqslant n\), \(\sigma\in\mathrm{S}_{n}\) sends \(e_{i}\) to \(e_{\sigma^{-1}(i)}\), and \(e_{2n+1-i}\) to \(e_{2n+1-\sigma^{-1}(i)}\); * and the \(i\)th copy of \(\{\pm 1\}\) acts by swapping \(e_{i}\leftrightarrow e_{2n+1-i}\). Identifying \(i\leftrightarrow e_{i}\), we view \(\mathcal{W}_{G}^{0}\) as a subgroup of \(\mathrm{S}_{2n}\), and have the following easy fact: **Lemma 2.1**.: _If \(\sigma\in\mathcal{W}_{G}^{0}\), then \(\sigma(i)+\sigma(2n+1-i)=2n+1\) for all \(1\leqslant i\leqslant n\)._ Now fix a standard upper Borel subgroup \(\mathcal{B}\) and maximal split torus \(\mathcal{T}\) in \(\mathcal{G}=\mathrm{GSpin}_{2n+1}\). This has rank \(n+1\)[Asg02, Thm. 2.7]. We use calligraphic letters to denote objects for \(\mathrm{GSpin}\), whilst keeping other notational conventions as before. **Proposition 2.2**.: _The root system for \(\mathcal{G}\) is \((\mathcal{X},\mathcal{R},\mathcal{X}^{\vee},\mathcal{R}^{\vee})\), where_ \[\mathcal{X}=\mathbf{Z}f_{0}\oplus\mathbf{Z}f_{1}\oplus\cdots\oplus\mathbf{Z}f_ {n},\quad\mathcal{X}^{\vee}=\mathbf{Z}f_{0}^{*}\oplus\mathbf{Z}f_{1}^{*}\oplus \cdots\oplus\mathbf{Z}f_{n}^{*},\] _with roots \(\mathcal{R}=\{\pm f_{i}\pm f_{j}:1\leqslant i<j\leqslant n\}\cup\{f_{i}:1 \leqslant i\leqslant n\}\), simple roots_ \[\Delta_{\mathcal{G}}=\{b_{i}:=f_{i}-f_{i+1}:i=1,...,n-1\}\cup\{b_{n}:=f_{n}\},\] _and positive roots \(\{f_{i}:1\leqslant i\leqslant n\}\cup\{f_{i}\pm f_{j}:1\leqslant i<j\leqslant n\}\). The Weyl group \(\mathcal{W}_{\mathcal{G}}\) is isomorphic to \(\{\pm 1\}^{n}\rtimes\mathrm{S}_{n}\), generated by permutations \(\sigma\in\mathrm{S}_{n}\) and sign changes \(\mathrm{sgn}_{i}\), which act on roots and coroots respectively as (for \(j\neq i\))_ \[\sigma f_{0}=f_{0},\ \ \sigma f_{i}=f_{\sigma^{-1}(i)},\ \ \ \mathrm{sgn}_{i}f_{0}=f_{0}+f_{i},\ \ \ \mathrm{sgn}_{i}(f_{i})=-f_{i},\ \ \ \mathrm{sgn}_{j}(f_{i})=f_{i}, \tag{2.2}\] \[\sigma f_{0}^{*}=f_{0}^{*},\ \ \sigma f_{i}^{*}=f_{\sigma^{-1}(i)}^{*},\ \ \ \mathrm{sgn}_{i}f_{0}^{*}=f_{0}^{*},\ \ \ \mathrm{sgn}_{i}(f_{i}^{*})=f_{0}^{*}-f_{i}^{*},\ \ \ \mathrm{sgn}_{j}(f_{i}^{*})=f_{i}^{*}.\] Proof.: The first part is [1, Prop. 2.4], and the second [1, Lem. 13.2.2]. Write \(\langle-,-\rangle_{G}\) (resp. \(\langle-,-\rangle_{\mathcal{G}}\)) for the natural pairing on \(X\times X^{\vee}\) (resp. \(\mathcal{X}\times\mathcal{X}^{\vee}\)). There is a natural injective map \({\jmath}:\mathcal{X}\hookrightarrow X\) given by \[f_{i}\longmapsto e_{i}-e_{2n-i+1}\text{ for }1\leqslant i\leqslant n,\qquad f_{0 }\longmapsto e_{n+1}+\cdots+e_{2n},\] with \(X_{0}={\jmath}(\mathcal{X})\) by [1, Prop. 6.5]. 
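Before continuing, here is a quick sanity check of Lemma 2.1 (a minimal sketch of ours, not part of the paper). For \(n=2\) it enumerates the permutations of \(\mathrm{S}_{4}\) satisfying the displayed pairing condition; since Lemma 2.1 shows \(\mathcal{W}_{G}^{0}\) is contained in this set, the count \(2^{n}n!=\#\mathcal{W}_{G}^{0}\) confirms that here the condition exactly characterises \(\mathcal{W}_{G}^{0}\) inside \(\mathrm{S}_{2n}\).

```python
from itertools import permutations
from math import factorial

n = 2
N = 2 * n  # work inside S_4

def preserves_pairing(sigma):
    # Condition of Lemma 2.1: sigma(i) + sigma(2n+1-i) = 2n+1 for 1 <= i <= n
    return all(sigma[i - 1] + sigma[N - i] == N + 1 for i in range(1, n + 1))

W0 = [s for s in permutations(range(1, N + 1)) if preserves_pairing(s)]
assert len(W0) == 2**n * factorial(n)  # |W_G^0| = |{+-1}^n x| S_n| = 8
```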
If \(\rho_{G}\) and \(\rho_{\mathcal{G}}\) are half the sum of the positive roots for \(G\) and \(\mathcal{G}\) respectively, a simple check shows \({\jmath}(\rho_{\mathcal{G}})=\rho_{G}\). We also have: **Proposition 2.3** ([1, Proposition 6.6]).: _There is a map \(\mathcal{W}_{\mathcal{G}}\to\mathcal{W}_{G}\) of Weyl groups, also denoted \({\jmath}\), such that:_ 1. \({\jmath}\) _induces an isomorphism_ \(\mathcal{W}_{\mathcal{G}}\cong\mathcal{W}_{G}^{0}\subset\mathcal{W}_{G}\)_;_ 2. _for all_ \(\sigma\in\mathcal{W}_{\mathcal{G}}\) _and_ \(\mu\in\mathcal{X}\)_, we have_ \({\jmath}(\mu^{\sigma})={\jmath}(\mu)^{{\jmath}(\sigma)}\)_._ Dually, define also a map \({\jmath}^{\vee}:X^{\vee}\to\mathcal{X}^{\vee}\) by sending \(\nu\in X^{\vee}\) to \[{\jmath}^{\vee}(\nu)\coloneqq\sum_{i=0}^{n}\left\langle{\jmath}(f_{i}),\nu\right\rangle_{G}\cdot f_{i}^{*}.\] Then for all \(\mu\in\mathcal{X}\) and \(\nu\in X^{\vee}\), we have \[\langle\mu,{\jmath}^{\vee}(\nu)\rangle_{\mathcal{G}}=\langle{\jmath}(\mu),\nu\rangle_{G} \tag{2.3}\] by construction. Also let \({\jmath}^{\vee}:\mathcal{W}_{G}^{0}\to\mathcal{W}_{\mathcal{G}}\) denote the inverse to \({\jmath}:\mathcal{W}_{\mathcal{G}}\cong\mathcal{W}_{G}^{0}\). **Proposition 2.4** ([1, Proposition 6.7]).: _For all \(\nu\in X^{\vee}\) and \(\sigma\in\mathcal{W}_{G}^{0}\), we have_ \[{\jmath}^{\vee}(\nu^{\sigma})={\jmath}^{\vee}(\nu)^{{\jmath}^{\vee}(\sigma)}.\] We take a brief general intermission. For any quasi-split reductive group \(\mathrm{G}\) with a fixed choice of Borel pair \((\mathrm{B},\mathrm{T})\), there is a well-known inclusion-preserving correspondence between standard parabolic subgroups \(\mathrm{P}\) of \(\mathrm{G}\) and subsets \(\Delta_{\mathrm{P}}\) of the set \(\Delta\) of simple roots (see e.g. [1, §2.3]). Here \(\mathrm{B}\) corresponds to the empty set, and any proper maximal standard parabolic corresponds to \(\Delta\backslash\{a\}\) for some simple root \(a\in\Delta\). Further, for any such \(\mathrm{P}\) we have a Levi subgroup \(L_{\mathrm{P}}\), with Weyl group \(\mathcal{W}_{L_{\mathrm{P}}}\), which is naturally a subgroup of \(\mathcal{W}_{\mathrm{G}}\) (namely, the subgroup that preserves the \(\mathbf{Z}\)-span of \(\Delta_{\mathrm{P}}\)). Returning to our specific set-up, note that \({\jmath}\) acts on simple roots by sending \[b_{1}\mapsto a_{1}+a_{2n-1},\ \ b_{2}\mapsto a_{2}+a_{2n-2},\ \ \ldots,\ \ b_{n-1}\mapsto a_{n-1}+a_{n+1},\ \ b_{n}\mapsto a_{n}.\] **Definition 2.5**.: Let \(P\subset G=\mathrm{GL}_{2n}\) be a standard parabolic, corresponding to a subset \(\Delta_{P}\subset\Delta_{G}\). We say \(P\) is a _spin parabolic_ if, for any \(i\), \(a_{i}\in\Delta_{P}\) implies \(a_{2n-i}\in\Delta_{P}\); that is, \(\Delta_{P}\) is a union of some of the sets \[A_{1}\coloneqq\{a_{1},a_{2n-1}\},\ \ A_{2}\coloneqq\{a_{2},a_{2n-2}\},\ldots,\ \ A_{n-1}\coloneqq\{a_{n-1},a_{n+1}\},\ \ A_{n}\coloneqq\{a_{n}\}.\] If \(P\) is a spin parabolic, then there is a corresponding parabolic \(\mathcal{P}\subset\mathcal{G}\), defined by \[b_{i}\in\Delta_{\mathcal{P}}\iff A_{i}\subset\Delta_{P}.\] Under this correspondence the Borel subgroups \(B\subset G\) and \(\mathcal{B}\subset\mathcal{G}\) are identified. **Notation 2.6**.: We call the parabolic \(P\) with Levi \(\mathrm{GL}_{n_{1}}\times\cdots\times\mathrm{GL}_{n_{r}}\) the \((n_{1},...,n_{r})\)-parabolic. Note that \(P\) is a spin parabolic if and only if \((n_{1},...,n_{r})\) is symmetric around the middle (so the (1,4,1)-parabolic is spin, but the (1,3,2)-parabolic is not). 
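Unwinding the definitions, (2.3) just says that \({\jmath}^{\vee}\) is the transpose of \({\jmath}\) with respect to the dual bases. The following minimal numerical sketch (our illustration, with hypothetical variable names; not from the paper) makes this concrete for \(n=2\):

```python
import numpy as np

n = 2
# Matrix of j: row i holds the image of f_i in the basis e_1, ..., e_{2n}.
J = np.zeros((n + 1, 2 * n), dtype=int)
J[0, n:] = 1                 # f_0 |-> e_{n+1} + ... + e_{2n}
for i in range(1, n + 1):
    J[i, i - 1] = 1          # f_i |-> e_i - e_{2n-i+1}
    J[i, 2 * n - i] = -1

rng = np.random.default_rng(0)
mu = rng.integers(-5, 6, n + 1)   # coordinates of mu in the basis (f_i)
nu = rng.integers(-5, 6, 2 * n)   # coordinates of nu in the basis (e_i^*)
# (2.3): <mu, j^vee(nu)>_GSpin = <j(mu), nu>_GL, since j^vee(nu) = J @ nu.
assert mu @ (J @ nu) == (mu @ J) @ nu
```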
### Parahoric \(p\)-refinements for \(G\) Let \(\pi\) be a \(p\)-spherical RASCAR of \(\mathrm{GL}_{2n}(\mathbf{A})\). We can write \(\pi_{p}=\mathrm{Ind}_{B}^{G}\,\theta\) as an unramified principal series representation (via normalised induction, i.e. the induction of \(\delta_{B}^{1/2}\theta\)). The choice of \(\theta=(\theta_{1},...,\theta_{2n})\) is not unique: we may replace \(\theta\) with \(\theta^{\sigma}\) for any \(\sigma\in\mathcal{W}_{G}=\mathrm{S}_{2n}\) in the Weyl group of \(G\). Here \(\theta_{i}^{\sigma}=\theta_{\sigma(i)}\). **Definition 2.7**.: We say \(\theta\) is _spin_ if \[\theta_{1}\theta_{2n}=\theta_{2}\theta_{2n-1}=\cdots=\theta_{n}\theta_{n+1}=\eta_{p}. \tag{2.4}\] Since \(\pi_{p}\) admits an \((\eta_{p},\psi_{p})\)-Shalika model, using [1] and [2] we may (and will) choose \(\theta\) to be spin. This is the 'Asgari-Shahidi' convention on \(\theta\) described in [1, §6.1]. This is still not unique: we could replace \(\theta\) with \(\theta^{\sigma}\), for any \(\sigma\in\mathcal{W}_{G}^{0}\subset\mathcal{W}_{G}\). Note this is _different_ from how we chose \(\theta\) in [2], where we assumed \(\theta_{i}\theta_{n+i}=\eta_{p}\). The two choices are exchanged by \(\tau=\left(\begin{smallmatrix}1&\\ &w_{n}\end{smallmatrix}\right)\in\mathcal{W}_{G}\) (see §6.1 and Remark 6.12 _op. cit._). Now let \(B\subset P\subset\mathrm{GL}_{2n}\) be a standard parabolic, with associated parahoric subgroup \(J_{P}\coloneqq\{g\in\mathrm{GL}_{2n}(\mathbf{Z}_{p}):g\,(\mathrm{mod}\,p)\in P(\mathbf{F}_{p})\}\). Note \(J_{B}=\mathrm{Iw}_{G}\). Fix an isomorphism \(i_{p}:\mathbf{C}\to\overline{\mathbf{Q}}_{p}\). **Definition 2.8**.: * For \(1\leqslant r\leqslant 2n\), let \(t_{p,r}=\left(\begin{smallmatrix}pI_{r}&\\ &I_{2n-r}\end{smallmatrix}\right)=(e_{1}^{*}+\cdots+e_{r}^{*})(p)\in T(\mathbf{Q}_{p})\). Let \(U_{p,r}=[J_{P}t_{p,r}J_{P}]\) be the associated double coset operator. * Let \(\mathcal{H}_{p}^{P}\coloneqq\mathbf{Q}_{p}[U_{p,r},U_{p,2n}:1\leqslant r\leqslant 2n-1,\ a_{r}\not\in\Delta_{P}]\) be the Hecke algebra at \(p\). * A _\(P\)-parahoric \(p\)-refinement_ of \(\pi\), or \(P\)_-refinement_ for short, is a system \(\alpha^{P}:\mathcal{H}_{p}^{P}\to\overline{\mathbf{Q}}_{p}\) of Hecke eigenvalues such that \(i_{p}^{-1}\circ\alpha^{P}\) appears in \(\pi_{p}^{J_{P}}\). As the \(U_{p,r}\)-eigenvalues on \(\pi_{p}^{J_{P}}\) are algebraic, this does not depend on \(i_{p}\). We denote this as \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\). * If \(P=B\), then we write '\(p\)-refinement' or 'Iwahori \(p\)-refinement' instead of '\(B\)-parahoric \(p\)-refinement'. We drop the superscript \(B\), writing \(\mathcal{H}_{p}\coloneqq\mathcal{H}_{p}^{B}\), \(\alpha\coloneqq\alpha^{B}\), \(\tilde{\pi}\coloneqq\tilde{\pi}^{B}\), etc. **Remarks 2.9**.: * Note \(\mathcal{H}_{p}=\mathbf{Q}_{p}[U_{p,1},...,U_{p,2n}]\) is everything. If \(Q\) is the \((n,n)\)-parabolic, then \(\mathcal{H}_{p}^{Q}=\mathbf{Q}_{p}[U_{p,n},U_{p,2n}]\) is the Hecke algebra from [2]. * If \(P^{\prime}\subset P\) are two parabolics, then we have a natural inclusion \(\mathcal{H}_{p}^{P}\subset\mathcal{H}_{p}^{P^{\prime}}\), so \(\mathcal{H}_{p}^{P}\subset\mathcal{H}_{p}\) for all \(P\). Via [1, Cor. 3.16] (see also Proposition 2.10 and (3.2) below), any (Iwahori) \(p\)-refinement \(\tilde{\pi}=(\pi,\alpha)\) restricts to a unique \(P\)-parahoric \(p\)-refinement \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\), with \(\alpha^{P}\coloneqq\alpha|_{\mathcal{H}_{p}^{P}}\). Here \(\tilde{\pi}\) is a further refinement of (or extends) \(\tilde{\pi}^{P}\). 
* We wrote \(U_{p,r}\) independent of the parabolic \(P\); this abuse of notation is justified by (ii). The following describes the possible \(p\)-refinements in terms of the Weyl group \(\mathcal{W}_{G}=\mathrm{S}_{2n}\). **Proposition 2.10**.: _Suppose the Satake parameter of \(\pi_{p}=\mathrm{Ind}_{B}^{G}\,\theta\) is regular semisimple._ * _There is a bijection (that depends on_ \(\theta\)_)_ \[\Psi_{\theta}:\{\text{Iwahori $p$-refinements of $\pi$}\}\longrightarrow\mathcal{W}_{G},\] (2.5) _such that if_ \(\tilde{\pi}=(\pi,\alpha)\) _is a_ \(p\)_-refinement with_ \(\Psi_{\theta}(\tilde{\pi})=\sigma\)_, then for each_ \(r\) _we have_ \[\alpha(U_{p,r})=\delta_{B}^{-1/2}\theta^{\sigma}(t_{p,r})=\prod_{j=1}^{r}p^{-\frac{2n-2j+1}{2}}\theta_{\sigma(j)}(p)\neq 0.\] (2.6) * _If_ \(P\) _is a standard parabolic with Levi subgroup_ \(L_{P}\)_, there is a bijection_ \[\Psi_{\theta}^{P}:\{\text{P-refinements of $\pi$}\}\longrightarrow\mathcal{W}_{G}/\mathcal{W}_{L_{P}},\] _such that if_ \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) _is a_ \(P\)_-refinement with_ \(\Psi_{\theta}^{P}(\tilde{\pi}^{P})=[\sigma]\) _for_ \(\sigma\in\mathcal{W}_{G}\)_, then_ \(\alpha^{P}(U_{p,r})\) _is given by (_2.6_) whenever_ \(U_{p,r}\in\mathcal{H}_{p}^{P}\)_._ * _If_ \(\tilde{\pi}^{P}\) _is a_ \(P\)_-refinement, then the possible extensions to Iwahori level are exactly the_ \(p\)_-refinements_ \(\tilde{\pi}\) _with_ \(\Psi_{\theta}(\tilde{\pi})=\Psi_{\theta}^{P}(\tilde{\pi}^{P})\,(\mathrm{mod}\,\mathcal{W}_{L_{P}})\)_._ Proof.: (i) is [11, Lem. 4.8.4]. (ii) is [OST, Cor. 3.16]. (iii) is immediate. **Remark 2.11**.: For any \(\nu\in\mathcal{W}_{G}\) and any \(p\)-refinement \(\tilde{\pi}\), we have \(\Psi_{\theta^{\nu}}(\tilde{\pi})=\nu\Psi_{\theta}(\tilde{\pi})\). In [BDG\({}^{+}\)] we wrote \(\theta\) for what would be \(\theta^{\tau}\) here, where \(\tau=\operatorname{diag}(1,w_{n})\) and \(w_{n}\) is the longest Weyl element for \(\operatorname{GL}_{n}\). Thus our bijection \(\Psi_{\theta}\) is denoted \(\Delta_{\theta^{\tau}}\) there. 
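To unpack (2.6) in the smallest case in scope (a direct specialisation, recorded here for convenience): for \(\mathrm{GL}_{4}\) (so \(n=2\)) the exponents \(-\frac{2n-2j+1}{2}\) for \(j=1,...,4\) are \(-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\), so a \(p\)-refinement with \(\Psi_{\theta}(\tilde{\pi})=\sigma\) satisfies \[\alpha(U_{p,1})=p^{-3/2}\theta_{\sigma(1)}(p),\qquad\alpha(U_{p,2})=p^{-2}\theta_{\sigma(1)}\theta_{\sigma(2)}(p),\qquad\alpha(U_{p,4})=(\theta_{1}\theta_{2}\theta_{3}\theta_{4})(p);\] the last value is independent of \(\sigma\), as it must be, since \(t_{p,4}=pI_{4}\) is central and \(U_{p,4}\) acts through the central character of \(\pi_{p}\).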
The \(U_{p,r}\)-eigenvalues will not, in general, vary \(p\)-adic analytically. For \(p\)-adic interpolation, we must instead use normalised analogues \(U_{p,r}^{\circ}\) of \(U_{p,r}\). **Definition 2.12**.: If \(\lambda\) is the weight of \(\pi\), we define \[U_{p,r}^{\circ}=\lambda(t_{p,r})U_{p,r}=p^{\lambda_{1}+\cdots+\lambda_{r}}U_{p,r}\in\mathcal{H}_{p}.\] Let \(\pi\) be a RASCAR of weight \(\lambda\), and \(P\) a spin parabolic. **Definition 2.13**.: Let \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) be a \(P\)-refinement of \(\pi\). We say \(\tilde{\pi}^{P}\) has _non-\(P\)-critical slope_ if \[v_{p}(\alpha^{P}(U_{p,r}^{\circ}))<\lambda_{r}-\lambda_{r+1}+1\qquad\text{for all $1\leqslant r\leqslant 2n-1$ with $a_{r}\not\in\Delta_{P}$.}\] (Note that \(\lambda_{r}-\lambda_{r+1}=\lambda_{2n-r}-\lambda_{2n-r+1}\) by purity, so the bounds for \(U_{p,r}^{\circ}\) and \(U_{p,2n-r}^{\circ}\) agree). We say a \(p\)-refinement \(\tilde{\pi}\) has non-\(P\)-critical slope if its associated \(P\)-refinement \(\tilde{\pi}^{P}\) does. We say \(\tilde{\pi}\) has non-critical slope if it has non-\(B\)-critical slope. ## 3 \(P\)-spin refinements Let \(P\subset G=\operatorname{GL}_{2n}\) be a spin parabolic. We now generalise [BDG\({}^{+}\), §6] to an arbitrary such \(P\). Let \(\pi\) be a RASCAR of \(\operatorname{GL}_{2n}(\mathbf{A})\) that is spherical and regular at \(p\), with \(\pi_{p}=\operatorname{Ind}_{B}^{G}\theta\), recalling we have fixed a spin \(\theta\) satisfying \(\theta_{1}\theta_{2n}=\cdots=\theta_{n}\theta_{n+1}=\eta_{p}\) (2.4). Recall \(\Psi_{\theta}\) from (2.5). **Definition 3.1**.: * We say an Iwahori \(p\)-refinement \(\tilde{\pi}=(\pi,\alpha)\) is a \(P\)_-spin refinement_ if \[\Psi_{\theta}(\tilde{\pi})\in\mathcal{W}_{G}^{0}\cdot\mathcal{W}_{L_{P}}\subset\mathcal{W}_{G}.\] * We say a \(P\)-refinement \(\tilde{\pi}^{P}\) is \(P\)-spin if \[\Psi_{\theta}^{P}(\tilde{\pi}^{P})\in\operatorname{Im}\Bigl{(}\mathcal{W}_{G}^{0}\to\mathcal{W}_{G}\to\mathcal{W}_{G}/\mathcal{W}_{L_{P}}\Bigr{)}\subset\mathcal{W}_{G}/\mathcal{W}_{L_{P}}.\] **Lemma 3.2**.: _A \(P\)-refinement \(\tilde{\pi}^{P}\) is \(P\)-spin if and only if all of its extensions to Iwahori \(p\)-refinements are \(P\)-spin._ Proof.: Immediate from the definitions and Proposition 2.10(iii). **Remarks 3.3**.: * The cases of \(B\)-spin and \(Q\)-spin refinements, for \(Q\) the \((n,n)\)-parabolic, were defined in [BDG\({}^{+}\), Lem. 6.12, Rem. 6.14]. * Since any two choices of spin \(\theta\) differ by an element of \(\mathcal{W}_{G}^{0}\), this definition is independent of such a choice of \(\theta\) by Remark 2.11. ### \(P\)-spin refinements via Hecke algebras. Recall objects for \(\mathcal{G}=\operatorname{GSpin}_{2n+1}\) (e.g. Borel \(\mathcal{B}\), parabolics \(\mathcal{P}\)) are written as calligraphic versions of objects for \(G=\operatorname{GL}_{2n}\) (e.g. \(B,P\)). As \(\pi\) is symplectic, it is the functorial transfer of a RACAR \(\Pi\) of \(\mathcal{G}(\mathbf{A})\). Moreover \(\Pi_{p}=\operatorname{Ind}_{\mathcal{B}}^{\mathcal{G}}\theta_{\mathcal{G}}\) is an unramified principal series for \(\mathcal{G}(\mathbf{Q}_{p})\), for \(\theta_{\mathcal{G}}\) an unramified character of \(\mathcal{T}\) satisfying \(\jmath(\theta_{\mathcal{G}})=\theta\) (by [1, p.177(i)] and [1, Prop. 5.1]). Our primary motivation for \(P\)-spin refinements is that they interact well with this functoriality, as we will show in Proposition 3.7. #### 3.1.1 Parahoric refinements for \(\operatorname{GSpin}_{2n+1}\). 
**Definition 3.4**.: Let \(\mathcal{B}\subset\mathcal{P}\subset\mathcal{G}\) be a parabolic, with parahoric subgroup \(\mathcal{J}_{\mathcal{P}}\subset\mathcal{G}(\mathbf{Z}_{p})\). * For \(1\leqslant r\leqslant n\), let \(\mathcal{U}_{p,r}:=[\mathcal{J}_{\mathcal{P}}\cdot\jmath^{\vee}(t_{p,r})\cdot\mathcal{J}_{\mathcal{P}}]\), where \(\jmath^{\vee}(t_{p,r})=(f_{1}^{*}+\cdots+f_{r}^{*})(p)\). Let \(\mathcal{V}_{p}:=[\mathcal{J}_{\mathcal{P}}\cdot f_{0}^{*}(p)\cdot\mathcal{J}_{\mathcal{P}}]\), which acts on \(\Pi_{p}^{\mathcal{J}_{\mathcal{P}}}\) via the central action of \(p\in\mathbf{Q}_{p}\). * Define a Hecke algebra \(\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}}:=\mathbf{Q}_{p}[\mathcal{U}_{p,r},\mathcal{V}_{p}:b_{r}\notin\Delta_{\mathcal{P}}]\). * A \(\mathcal{P}\)_-parahoric \(p\)-refinement_ \(\tilde{\Pi}^{\mathcal{P}}=(\Pi,\alpha^{\mathcal{G},\mathcal{P}})\) _of_ \(\Pi\) is an eigensystem \(\alpha^{\mathcal{G},\mathcal{P}}:\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}}\rightarrow\overline{\mathbf{Q}}_{p}\) appearing in \(\Pi_{p}^{\mathcal{J}_{\mathcal{P}}}\). We sometimes write \(\mathcal{P}\)-refinement for short. #### 3.1.2 Functoriality for parahoric refinements. Let \(P\subset G\) be a spin parabolic, with associated \(\mathcal{P}\subset\mathcal{G}\). Note that \(a_{r}\not\in\Delta_{P}\iff b_{r}\not\in\Delta_{\mathcal{P}}\), so \[\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}}=\mathbf{Q}_{p}[\mathcal{U}_{p,r},\mathcal{V}_{p}:a_{r}\not\in\Delta_{P}].\] We now relate \(P\)- and \(\mathcal{P}\)-refinements. The map \(\jmath^{\vee}:X^{\vee}\rightarrow\mathcal{X}^{\vee}\) induces a map \[\jmath^{\vee}:\mathcal{H}_{p}^{P}\longrightarrow\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}} \tag{3.1}\] (cf. [BDG\({}^{+}\), §6.4]). If \(1\leqslant r\leqslant n\) with \(a_{r}\not\in\Delta_{P}\), then \(\jmath^{\vee}\) sends \[U_{p,r}\longmapsto\mathcal{U}_{p,r},\qquad U_{p,2n-r}\longmapsto\mathcal{U}_{p,r}\mathcal{V}_{p}^{n-r},\qquad U_{p,2n}\longmapsto\mathcal{V}_{p}^{n}.\] For \(1\leqslant r\leqslant 2n\), consider the characteristic polynomials \[\mathcal{F}_{\mathcal{G},r}(T):=\det\big{(}T-\jmath^{\vee}(U_{p,r})\,|\,\Pi_{p}^{\mathcal{J}_{\mathcal{P}}}\big{)},\qquad F_{G,r}(T):=\det\big{(}T-U_{p,r}\,|\,\pi_{p}^{J_{P}}\big{)}.\] **Lemma 3.5**.: _Let \(1\leqslant r\leqslant 2n\). If \(U_{p,r}\in\mathcal{H}_{p}^{P}\), then \(\mathcal{F}_{\mathcal{G},r}(T)\) divides \(F_{G,r}(T)\)._ Proof.: Let \(\nu_{p,r}:=e_{1}^{*}+\cdots+e_{r}^{*}\in X^{\vee}\). By [OST, Cor. 3.16], we may write \[F_{G,r}(T)=\prod_{[\sigma]\in\mathcal{W}_{G}/\mathcal{W}_{L_{P}}}\Big{(}T-p^{\langle\rho_{G},\nu_{p,r}\rangle_{G}}p^{\langle\theta^{\sigma},\nu_{p,r}\rangle_{G}}\Big{)} \tag{3.2}\] where we identify \(\theta^{\sigma}(t_{p,r})=\theta^{\sigma}(\nu_{p,r}(p))=p^{\langle\theta^{\sigma},\nu_{p,r}\rangle_{G}}\) under the natural extension of \(\langle-,-\rangle_{G}\). For \(\mathcal{G}\), [OST, Cor. 3.16] again gives \[\mathcal{F}_{\mathcal{G},r}(T)=\prod_{\omega\in\mathcal{W}_{\mathcal{G}}/\mathcal{W}_{\mathcal{L}_{\mathcal{P}}}}\Big{(}T-p^{\langle\rho_{\mathcal{G}},\jmath^{\vee}(\nu_{p,r})\rangle_{\mathcal{G}}}p^{\langle\theta_{\mathcal{G}}^{\omega},\jmath^{\vee}(\nu_{p,r})\rangle_{\mathcal{G}}}\Big{)}=\prod_{[\sigma]\in\mathcal{W}_{G}^{0}/\mathcal{W}_{L_{P}}^{0}}\Big{(}T-p^{\langle\rho_{G},\nu_{p,r}\rangle_{G}}p^{\langle\theta^{\sigma},\nu_{p,r}\rangle_{G}}\Big{)},\] where we identify \(\sigma=\jmath(\omega)\), we write \(\mathcal{W}_{L_{P}}^{0}=\jmath(\mathcal{W}_{\mathcal{L}_{\mathcal{P}}})\), and we have used \(\jmath(\rho_{\mathcal{G}})=\rho_{G}\), Proposition 2.4, and (2.3). 
Now note that \(\mathcal{W}_{L_{P}}^{0}=\mathcal{W}_{L_{P}}\cap\mathcal{W}_{G}^{0}\), so that \(\mathcal{W}_{G}^{0}/\mathcal{W}_{L_{P}}^{0}\) is naturally a subset of \(\mathcal{W}_{G}/\mathcal{W}_{L_{P}}\). It follows immediately that \(\mathcal{F}_{\mathcal{G},r}\) divides \(F_{G,r}\). **Definition 3.6**.: Let \(P\) be a spin parabolic and \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) a \(P\)-refinement. We say \(\tilde{\pi}^{P}\) is _the functorial transfer of a \(\mathcal{P}\)-refinement \(\tilde{\Pi}^{\mathcal{P}}=(\Pi,\alpha^{\mathcal{G},\mathcal{P}})\) of \(\Pi\)_ if \(\alpha^{P}\) factors as \[\mathcal{H}_{p}^{P}\xrightarrow{\jmath^{\vee}}\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}}\xrightarrow{\alpha^{\mathcal{G},\mathcal{P}}}\overline{\mathbf{Q}}_{p}.\] **Proposition 3.7**.: _Let \(\tilde{\pi}^{P}\) be a \(P\)-refinement. Then \(\tilde{\pi}^{P}\) is \(P\)-spin if and only if \(\tilde{\pi}^{P}\) is the functorial transfer of some \(\tilde{\Pi}^{\mathcal{P}}\)._ Proof.: Let \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) with \(\Psi_{\theta}^{P}(\tilde{\pi})=[\sigma]\in\mathcal{W}_{G}/\mathcal{W}_{L_{P}}\). By the proof of the above lemma, and the fact that \[\alpha^{P}(U_{p,r})=\delta_{B}^{-1/2}\theta^{\sigma}(t_{p,r})=p^{\langle\rho_{G},\nu_{p,r}\rangle_{G}}p^{\langle\theta^{\sigma},\nu_{p,r}\rangle_{G}},\] we see that \(\alpha^{P}\) factors through \(\jmath^{\vee}\) if and only if \([\sigma]\) is in \(\mathcal{W}_{G}^{0}/\mathcal{W}_{L_{P}}^{0}\subset\mathcal{W}_{G}/\mathcal{W}_{L_{P}}\); that is, if and only if \(\tilde{\pi}^{P}\) is a \(P\)-spin refinement. ### Optimally \(P\)-spin refinements Above, we studied when a \(P\)-refinement was \(P\)-spin (for the same \(P\)). An Iwahori \(p\)-refinement \(\tilde{\pi}\), however, can be \(P\)-spin for many different \(P\)'s. **Definition 3.8**.: We say an Iwahori \(p\)-refinement \(\tilde{\pi}\) is _optimally \(P\)-spin_ if it is \(P\)-spin and there is no spin \(P^{\prime}\subsetneq P\) such that it is \(P^{\prime}\)-spin. **Corollary 3.9**.: _Let \(\tilde{\pi}=(\pi,\alpha)\) be an Iwahori \(p\)-refinement._ 1. _If_ \(P\) _and_ \(P^{\prime}\) _are spin parabolics and_ \(\tilde{\pi}\) _is_ \(P\)_-spin and_ \(P^{\prime}\)_-spin, then_ \(\tilde{\pi}\) _is_ \(P\cap P^{\prime}\)_-spin._ 2. \(\tilde{\pi}\) _is optimally_ \(P_{\tilde{\pi}}\)_-spin for precisely one spin parabolic_ \(B\subseteq P_{\tilde{\pi}}\subseteq G\)_._ Proof.: (i) By Proposition 3.7, the associated \(P\)-refinement \(\alpha^{P}\) and \(P^{\prime}\)-refinement \(\alpha^{P^{\prime}}\) both factor through spin Hecke algebras; that is, there are maps \[\alpha^{\mathcal{G},\mathcal{P}}:\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}}=\mathbf{Q}_{p}[\mathcal{U}_{p,r},\mathcal{V}_{p}:a_{r}\not\in\Delta_{P}]\rightarrow\overline{\mathbf{Q}}_{p},\qquad\alpha^{\mathcal{G},\mathcal{P}^{\prime}}:\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}^{\prime}}=\mathbf{Q}_{p}[\mathcal{U}_{p,r},\mathcal{V}_{p}:a_{r}\not\in\Delta_{P^{\prime}}]\rightarrow\overline{\mathbf{Q}}_{p}\] such that \(\alpha^{P}=\alpha^{\mathcal{G},\mathcal{P}}\circ\jmath^{\vee}\) and \(\alpha^{P^{\prime}}=\alpha^{\mathcal{G},\mathcal{P}^{\prime}}\circ\jmath^{\vee}\). Since \(\Delta_{P\cap P^{\prime}}=\Delta_{P}\cap\Delta_{P^{\prime}}\), the algebra \(\mathcal{H}_{p}^{P\cap P^{\prime}}\) is generated by \(\mathcal{H}_{p}^{P}\) and \(\mathcal{H}_{p}^{P^{\prime}}\), so \(\alpha^{P\cap P^{\prime}}=\alpha|_{\mathcal{H}_{p}^{P\cap P^{\prime}}}\) also factors through \(\jmath^{\vee}\), and \(\tilde{\pi}\) is \(P\cap P^{\prime}\)-spin by Proposition 3.7. (ii) By (i), the intersection \(P_{\tilde{\pi}}\) of all spin parabolics \(P\) for which \(\tilde{\pi}\) is \(P\)-spin is itself such a parabolic, and it is clearly the unique one for which \(\tilde{\pi}\) is optimally spin. **Definition 3.10**.: Let \(\tilde{\pi}\) be a \(p\)-refinement with \(\sigma=\Psi_{\theta}(\tilde{\pi})\). For \(1\leqslant r\leqslant n\), we say \(\tilde{\pi}\) is _\(r\)-spin_ if the sets \(\{\sigma(1),...,\sigma(r)\}\) and \(\{\sigma(2n+1-r),...,\sigma(2n)\}\) can be paired off into pairs summing to \(2n+1\). For example, for \(\mathrm{GL}_{6}\) the refinement \(\tilde{\pi}\sim\{216345\}\) (writing \(\sigma\) in one-line notation) is 1-spin, as \(\sigma(1)+\sigma(6)=2+5=7\); it is not 2-spin, as \(\{2,1\}\) and \(\{4,5\}\) cannot be so paired; and it is not 3-spin, as \(\{2,1,6\}\) and \(\{3,4,5\}\) cannot be so paired. **Definition 3.11**.: For a spin parabolic \(P\), let \(X_{P}:=\{1\leqslant r\leqslant n:a_{r}\not\in\Delta_{P}\}\subset\{1,...,n\}\); thus \(X_{B}=\{1,...,n\}\), \(X_{G}=\varnothing\), and \(P\leftrightarrow X_{P}\) is inclusion-reversing. **Proposition 3.12**.: _Let \(P\) be a spin parabolic and \(\tilde{\pi}\) a \(p\)-refinement. Then_ \[\tilde{\pi}\text{ is $P$-spin}\iff\tilde{\pi}\text{ is $r$-spin for all }r\in X_{P}. \tag{3.4}\] _In particular, \(\tilde{\pi}\) is optimally \(P\)-spin if and only if \(X_{P}=\{1\leqslant r\leqslant n:\tilde{\pi}\text{ is $r$-spin}\}\)._ **Example.** Recall \(P_{\tilde{\pi}}\) is the unique spin parabolic such that \(\tilde{\pi}\) is optimally \(P_{\tilde{\pi}}\)-spin. The example \(\tilde{\pi}\sim\{216345\}\) above is 1-spin but not 2- or 3-spin, so \(X_{P_{\tilde{\pi}}}=\{1\}\), hence \(\Delta_{P_{\tilde{\pi}}}=\{a_{2},a_{3},a_{4}\}\), i.e. \(P_{\tilde{\pi}}\) is the (1,4,1)-parabolic. Similarly, if \(\tilde{\pi}^{\prime}\) is 1- and 3-spin but not 2-spin, then \(\Delta_{P_{\tilde{\pi}^{\prime}}}=\{a_{2},a_{4}\}\), so \(P_{\tilde{\pi}^{\prime}}\) is the (1,2,2,1)-parabolic. 
Proof.: For \(1\leqslant r\leqslant n\), let \(P_{r}\) be the \((r,2n-2r,r)\)-parabolic. Note that \[P=\bigcap_{\begin{subarray}{c}r\in\{1,...,n\}\\ a_{r}\not\in\Delta_{P}\end{subarray}}P_{r},\qquad\text{thus}\qquad X_{P}=\bigcup_{\begin{subarray}{c}r\in\{1,...,n\}\\ a_{r}\not\in\Delta_{P}\end{subarray}}X_{P_{r}},\] so by Corollary 3.9(i), it suffices to show that \[\tilde{\pi}\text{ is }P_{r}\text{-spin}\iff\text{$\tilde{\pi}$ is }r\text{-spin}. \tag{3.5}\] First suppose \(\tilde{\pi}\) is \(P_{r}\)-spin, so we can write \(\Psi_{\theta}(\tilde{\pi})=\zeta\sigma\), with \(\zeta\in\mathcal{W}_{G}^{0}\) and \(\sigma\in\mathcal{W}_{L_{P_{r}}}\). Note \(\sigma\in\mathcal{W}_{L_{P_{r}}}=\mathrm{S}_{r}\times\mathrm{S}_{2n-2r}\times\mathrm{S}_{r}\) preserves \(\{1,...,r\}\) and \(\{2n+1-r,...,2n\}\), hence \(\sigma\) is \(r\)-spin. By Lemma 2.1, as \(\zeta\in\mathcal{W}_{G}^{0}\), \(\sigma(i)+\sigma(j)=2n+1\) if and only if \(\zeta\sigma(i)+\zeta\sigma(j)=2n+1\), i.e. \[(\sigma\text{ is }r\text{-spin})\iff(\zeta\sigma\text{ is }r\text{-spin}). \tag{3.6}\] It follows that \(\zeta\sigma\), hence \(\tilde{\pi}\), is \(r\)-spin, giving \(\Rightarrow\) in (3.5). Conversely, suppose \(\tilde{\pi}\) is \(r\)-spin, and let \(\sigma=\Psi_{\theta}(\tilde{\pi})\in\mathcal{W}_{G}\). **Claim 3.13**.: _Without loss of generality we may assume \(\sigma\) preserves \(\{1,...,r\}\)._ _Proof of claim:_ We may renormalise \(\theta\) by elements of \(\mathcal{W}_{G}^{0}\), as this preserves both being \(P_{r}\)-spin (Remarks 3.3) and \(r\)-spin (by Remark 2.11 and (3.6)). We do so repeatedly. First, without loss of generality we may take \[\{\sigma(1),...,\sigma(r)\}\subset\{1,...,n\}. \tag{3.7}\] Indeed, if \(\sigma(i)>n\) for some \(1\leqslant i\leqslant r\), then there exists \(2n+1-r\leqslant j\leqslant 2n\) such that \(\sigma(i)+\sigma(j)=2n+1\), so that \(\sigma(j)\leqslant n\); and we may exchange \(\sigma(i)\) and \(\sigma(j)\) by the transposition \((\sigma(i),\sigma(j))\in\mathcal{W}_{G}^{0}\). Given (3.7), after acting by an element of \(\mathrm{S}_{n}\subset\mathcal{W}_{G}^{0}\), we may assume \(\{\sigma(1),...,\sigma(r)\}=\{1,...,r\}\), proving the claim. As \(\tilde{\pi}\) is \(r\)-spin, if \(\sigma\) preserves \(\{1,...,r\}\), it must also preserve \(\{2n+1-r,...,2n\}\). This means \(\sigma\in\mathrm{S}_{r}\times\mathrm{S}_{2n-2r}\times\mathrm{S}_{r}=\mathcal{W}_{L_{P_{r}}}\), so \(\sigma\) (hence \(\tilde{\pi}\)) is \(P_{r}\)-spin, giving \(\Leftarrow\) in (3.5), and hence (3.4). The last statement is immediate as \(P\leftrightarrow X_{P}\) is inclusion-reversing. ### The function \(\gamma_{\tilde{\pi}}\) Finally, we introduce one more combinatorial description of being \(P\)-spin, which will be useful when we study symplectic families. **Definition 3.14**.: Let \(\tilde{\pi}\) be a \(p\)-refinement and \(\sigma=\Psi_{\theta}(\tilde{\pi})\). Define an injective map \[\gamma_{\tilde{\pi}}:\{1,...,n\}\hookrightarrow\{1,...,2n\}\] by setting \(\gamma_{\tilde{\pi}}(i)\) to be the unique integer such that \[\sigma(i)+\sigma(2n+1-\gamma_{\tilde{\pi}}(i))=2n+1.\] **Lemma 3.15**.: _The map \(\gamma_{\tilde{\pi}}\) is independent of the choice of \(\theta\) satisfying \(\theta_{i}\theta_{2n+1-i}=\eta_{p}\)._ Proof.: If \(\theta^{\prime}\) is another such choice, there exists \(\nu\in\mathcal{W}_{G}^{0}\) such that \(\theta^{\prime}=\theta^{\nu}\). Remark 2.11 says \(\Psi_{\theta^{\prime}}(\tilde{\pi})=\nu\Psi_{\theta}(\tilde{\pi})=\nu\sigma\). 
By Lemma 2.1, \(\gamma_{\tilde{\pi}}\) is unchanged if we replace \(\sigma\) with \(\nu\sigma\). **Lemma 3.16**.: _Let \(\tilde{\pi}\) be a \(p\)-refinement. For \(1\leqslant r\leqslant n\), we have_ \[\tilde{\pi}\text{ is }r\text{-spin}\iff\gamma_{\tilde{\pi}}\text{ sends }\{1,...,r\}\text{ to itself}.\] Proof.: We know \(\gamma_{\tilde{\pi}}\) preserves \(\{1,...,r\}\) if and only if \(2n+1-r\leqslant 2n+1-\gamma_{\tilde{\pi}}(i)\leqslant 2n\) for all \(1\leqslant i\leqslant r\). By definition of \(\gamma_{\tilde{\pi}}\), this is if and only if the sets \(\{\sigma(1),...,\sigma(r)\}\) and \(\{\sigma(2n+1-r),...,\sigma(2n)\}\) can be paired off into pairs summing to \(2n+1\). But this is the definition of \(r\)-spin. **Proposition 3.17**.: _Let \(P\) be a spin parabolic, let \(\tilde{\pi}\) be a \(p\)-refinement, and \(\gamma_{\tilde{\pi}}:\{1,...,n\}\hookrightarrow\{1,...,2n\}\) the function from Definition 3.14. Then_ \[\tilde{\pi}\text{ is $P$-spin}\ \ \Longleftrightarrow\ \ \gamma_{\tilde{\pi}}\text{ preserves $\{1,...,r\}$ whenever $r\in X_{P}$.}\] _Additionally, \(\tilde{\pi}\) is optimally \(P\)-spin if moreover \(\gamma_{\tilde{\pi}}\) does not preserve \(\{1,...,r\}\) for any \(r\not\in X_{P}\)._ Proof.: Both statements follow by combining Proposition 3.12 with Lemma 3.16. 
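To make Definition 3.14 and Lemma 3.16 concrete, here is a minimal self-contained sketch (ours, not from the paper; note that \(\sigma(i)+\sigma(2n+1-\gamma_{\tilde{\pi}}(i))=2n+1\) unravels to \(\gamma_{\tilde{\pi}}(i)=2n+1-\sigma^{-1}(2n+1-\sigma(i))\)), applied to the running example \(\tilde{\pi}\sim\{216345\}\):

```python
def gamma(sigma, n):
    # Definition 3.14: gamma(i) is the unique integer such that
    # sigma(i) + sigma(2n+1 - gamma(i)) = 2n+1.
    inv = {v: j + 1 for j, v in enumerate(sigma)}  # sigma^{-1} (1-indexed)
    return {i: 2 * n + 1 - inv[2 * n + 1 - sigma[i - 1]]
            for i in range(1, n + 1)}

sigma, n = (2, 1, 6, 3, 4, 5), 3  # the example {216345} in S_6
g = gamma(sigma, n)
print(g)  # {1: 1, 2: 4, 3: 5}
# Lemma 3.16: the refinement is r-spin iff gamma preserves {1,...,r}:
print([r for r in range(1, n + 1)
       if all(g[i] <= r for i in range(1, r + 1))])  # [1]
```

The output confirms the Example above: \(\gamma_{\tilde{\pi}}\) preserves \(\{1\}\) but not \(\{1,2\}\) or \(\{1,2,3\}\), so \(\tilde{\pi}\) is 1-spin only, i.e. optimally \((1,4,1)\)-spin.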
The full conjecture - which Urban styles as a 'non-abelian Leopoldt conjecture' - has been proved for \(\mathrm{GL}_{N}\) with \(N\leqslant 4\), but is wide open beyond this. ### The classical and symplectic loci **Definition 4.3**.: The _classical cuspidal locus_\(\mathscr{L}_{K}^{G}\subset\mathscr{E}_{K}^{G}\) is the Zariski closure of the classical cuspidal points in \(\mathscr{E}_{K}^{G}\). Let \(\mathscr{W}_{0}\subset\mathscr{W}\) be the \((n+1)\)-dimensional _pure weight space_, the Zariski-closure of the set \(X_{0}\) of all pure algebraic weights (that is, dominant weights \(\lambda=(\lambda_{1},...,\lambda_{2n})\) such that \(\lambda_{1}+\lambda_{2n}=\lambda_{2}+\lambda_{2n-1}=\cdots=\lambda_{n}+\lambda_{n+1}=\mathsf{w}(\lambda)\) for some \(\mathsf{w}(\lambda)\in\mathbf{Z}\)). By [1, Lem. 4.9] any classical cuspidal point \(x\) has weight \(w(x)\in\mathscr{W}_{0}\), so: **Proposition 4.4**.: _We have \(w(\mathscr{L}_{K}^{G})\subset\mathscr{W}_{0}\)._ Through any point \(x\in\mathscr{L}_{K}^{G}\), there is a 'trivial' \(1\)-dimensional family, corresponding to twists by the norm. (In the introduction, for more conceptual statements, we removed this trivial variation; but here, for cleaner comparisons to other works, we leave it in). **Definition 4.5**.: Let \(x\in\mathscr{L}_{K}^{G}\) be a classical cuspidal point. * An irreducible neighbourhood of \(\mathscr{L}_{K}^{G}\) through \(x\) is _trivial_ if it is exactly \(1\)-dimensional, given by twists by the norm and varying over the weight family \(\{w(x)+(\kappa,...,\kappa)\}\). * A _classical family_ through \(x\) is a non-trivial irreducible neighbourhood \(\mathscr{C}\subset\mathscr{L}_{K}^{G}\) of \(x\). * We say a point/eigensystem \(x\in\mathscr{L}_{K}^{G}\) is _arithmetically rigid_ if it cannot be varied in a classical family (i.e. it varies only in a trivial family). Little is known, or even precisely conjectured, about the classical cuspidal locus. However, there is a folklore expectation that _all_ classical families should come from discrete series, in the sense described in SS1.3. In particular, all such families should 'come from self-duality'. Given the above expectation, it is natural to study RACARs \(\pi\) of \(G(\mathbf{A})\) that are essentially self-dual. Such RACARs are either orthogonal or symplectic. We focus on the latter. **Definition 4.6**.: Define the _symplectic locus_\(\mathscr{S}_{K}^{G}\subset\mathscr{L}_{K}^{G}\subset\mathscr{E}_{K}^{G}\) to be the Zariski closure of all classical cuspidal points \(x\) such that \(\pi_{x}\) is symplectic. A _symplectic family through \(x\)_ is a non-trivial irreducible neighbourhood of \(x\) in \(\mathscr{S}_{K}^{G}\). Our main result (Theorem A of the introduction) gives upper/lower bounds for the dimensions of symplectic families. We state this in the stronger form we prove in SS4.4. ### Parabolic weight spaces To state the more precise version of Theorem A that we actually prove, we must introduce parabolic weight spaces. Recall that if \(P\subset G\) is a parabolic, then the \(P\)_-parabolic weight space_ is the subspace \(\mathscr{W}^{P}\subset\mathscr{W}\) of characters that extend to characters of \(L_{P}\). If \(\lambda_{\pi}\in\mathscr{W}\) is any fixed weight, we denote its coset \[\mathscr{W}_{\lambda_{\pi}}^{P}:=\lambda_{\pi}+\mathscr{W}^{P}\subset\mathscr{W},\] and call it the \(P\)_-parabolic weight space through \(\lambda_{\pi}\)_. These notions are defined in general, and in detail, in [10, SS3.1]. 
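For example, take \(G=\mathrm{GL}_{4}\) and \(P=Q\) the \((2,2)\)-parabolic, so \(L_{Q}=\mathrm{GL}_{2}\times\mathrm{GL}_{2}\). A character \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\) of \(T\) extends to \(L_{Q}\) exactly when it factors through \(\det\times\det\), i.e. when \(\lambda_{1}=\lambda_{2}\) and \(\lambda_{3}=\lambda_{4}\); so \[\mathscr{W}^{Q}=\{(a,a,b,b)\}\] is \(2\)-dimensional, and \(\mathscr{W}^{Q}_{\lambda_{\pi}}\) is its translate through \(\lambda_{\pi}\), the locus of weights where \(\lambda_{1}-\lambda_{2}\) and \(\lambda_{3}-\lambda_{4}\) keep the values they take at \(\lambda_{\pi}\). This is the smallest case of Lemma 4.7 below. 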
We also define the pure subspaces \(\mathscr{W}_{0}^{P}\) and \(\mathscr{W}_{0,\lambda_{\pi}}^{P}\) to be the intersections of \(\mathscr{W}^{P}\) and \(\mathscr{W}_{\lambda_{\pi}}^{P}\) with \(\mathscr{W}_{0}\). We now compute their dimensions. **Lemma 4.7**.: _If \(\lambda_{\pi}=(\lambda_{\pi,1},...,\lambda_{\pi,2n})\) and \(\lambda=(\lambda_{1},...,\lambda_{2n})\) are two weights, then \(\lambda\in\mathscr{W}_{\lambda_{\pi}}^{P}\) if and only if_ \[\lambda_{i}-\lambda_{i+1}=\lambda_{\pi,i}-\lambda_{\pi,i+1}\qquad\forall i\text{ such that }a_{i}\in\Delta_{P}. \tag{4.1}\] Proof.: We have \(\lambda\in\mathscr{W}_{\lambda_{\pi}}^{P}\) if and only if \(\lambda-\lambda_{\pi}=:\mu=(\mu_{1},...,\mu_{2n})=(\lambda_{1}-\lambda_{\pi,1},...,\lambda_{2n}-\lambda_{\pi,2n})\) factors through \(L_{P}\). If \(L_{P}=\operatorname{GL}_{m_{1}}\times\cdots\times\operatorname{GL}_{m_{r}}\), then this happens if and only if \(\mu\) factors through \(\operatorname{det}_{1}\times\cdots\times\operatorname{det}_{r}\). This is equivalent to \(\mu_{1}=\cdots=\mu_{m_{1}}\),..., \(\mu_{2n-m_{r}+1}=\cdots=\mu_{2n}\) (i.e. the \(\mu_{i}\)'s are constant in each Levi factor); or in other words, that \(\lambda_{i}-\lambda_{\pi,i}=\mu_{i}=\mu_{i+1}=\lambda_{i+1}-\lambda_{\pi,i+1}\) for all \(i\) with \(a_{i}\in\Delta_{P}\). Rearranging gives (4.1). In particular, \(\lambda_{i}-\lambda_{i+1}\) can vary in a \(P\)-parabolic weight family if and only if \(a_{i}\not\in\Delta_{P}\). For example, in a \(B\)-parabolic weight family weights can vary in all directions (since \(\Delta_{B}=\varnothing\)). If \(Q\) is the \((n,n)\)-parabolic, then \(\Delta_{Q}=\Delta_{G}\backslash\{a_{n}\}\), so in a \(Q\)-parabolic family \(\lambda_{1}-\lambda_{2}\),..., \(\lambda_{n-1}-\lambda_{n}\) are fixed, \(\lambda_{n}-\lambda_{n+1}\) can vary, and \(\lambda_{n+1}-\lambda_{n+2}\),..., \(\lambda_{2n-1}-\lambda_{2n}\) are fixed, so we get the \(2\)-dimensional variation of [10]. **Lemma 4.8**.: _For any spin parabolic \(P\) and \(\lambda_{\pi}\in X_{0}\subset\mathscr{W}_{0}\), we have \(\dim(\mathscr{W}_{0,\lambda_{\pi}}^{P})=\#X_{P}+1\)._ Proof.: By Lemma 4.7, each \(\lambda_{i}-\lambda_{i+1}\) is constant in \(\mathscr{W}_{\lambda_{\pi}}^{P}\) if and only if \(a_{i}\in\Delta_{P}\), and each such condition decreases the dimension by \(1\); so \[\dim(\mathscr{W}_{\lambda_{\pi}}^{P})=2n-\#\Delta_{P}=\#\{1\leqslant i\leqslant 2n-1:a_{i}\not\in\Delta_{P}\}+1.\] If \(\lambda\in\mathscr{W}_{0,\lambda_{\pi}}^{P}\) and \(1\leqslant i\leqslant n-1\), we must have \(\lambda_{i}+\lambda_{2n+1-i}=\lambda_{i+1}+\lambda_{2n-i}\), whence \(\lambda_{i}-\lambda_{i+1}=\lambda_{2n-i}-\lambda_{2n+1-i}\). (If \(i=n\), this still holds; but then it is vacuous). Thus \(\dim(\mathscr{W}_{0,\lambda_{\pi}}^{P})=\#\{1\leqslant i\leqslant n:a_{i}\not\in\Delta_{P}\}+1=\#X_{P}+1\), as required. ### Main results/conjecture: the dimension of symplectic families We now precisely state the stronger forms of Theorem A that we actually prove. Let \(\pi\) be a RASCAR of weight \(\lambda_{\pi}\) that is spherical and regular at \(p\), and let \(\tilde{\pi}\) be an optimally \(P_{\tilde{\pi}}\)-spin \(p\)-refinement. 
In SS5, we will show the following 'upper bound': **Theorem 4.9**.: _Any symplectic family \(\mathscr{C}\subset\mathscr{S}_{K}^{G}\) through \(\tilde{\pi}\) is supported over the \(P_{\tilde{\pi}}\)-parabolic pure weight space, i.e._ \[w(\mathscr{C})\subset\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}}.\] _In particular, \(\dim(\mathscr{C})\leqslant\#X_{P_{\tilde{\pi}}}+1\)._ Note we make no non-criticality assumption here. The second statement is Theorem A(i); this follows immediately from the first statement, as \(w\) is a locally finite map and \(\dim(\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}})=\#X_{P_{\tilde{\pi}}}+1\) by Lemma 4.8. Our second main result, a stronger form of Theorem A(ii), is a 'lower bound'. Away from \(p\), let \(K_{1}(\pi)^{p}\subset G(\mathbf{A}_{f}^{(p)})\) be the Whittaker new level from [11] (see e.g. [1]). Let \(K_{1}(\tilde{\pi})=K_{1}(\pi)^{p}\mathrm{Iw}_{G}\). In SS6, we prove: **Theorem 4.10**.: _Suppose that \(\tilde{\pi}\) has non-critical slope and \(\lambda_{\pi}\) is regular. Then there is a unique symplectic family through \(\tilde{\pi}\) in \(\mathscr{E}_{K_{1}(\tilde{\pi})}^{G}\). This family has dimension exactly \(\#X_{P_{\tilde{\pi}}}+1\), and is étale over \(\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}}\) at \(\tilde{\pi}\)._ **Remark 4.11**.: Our guiding expectation is that any classical cuspidal family for \(G\) should be a transfer of a discrete series family. Which discrete series families, then, should give rise to the families of Theorem 4.10? Since \(\tilde{\pi}\) is an optimally \(P_{\tilde{\pi}}\)-spin \(p\)-refinement, by Proposition 3.7, the associated \(P_{\tilde{\pi}}\)-refinement \(\tilde{\pi}^{P_{\tilde{\pi}}}\) is a functorial transfer of a \(\mathcal{P}_{\tilde{\pi}}\)-refinement \(\tilde{\Pi}^{\mathcal{P}_{\tilde{\pi}}}\) for \(\mathrm{GSpin}_{2n+1}\). Then \(\tilde{\Pi}^{\mathcal{P}_{\tilde{\pi}}}\) should vary in a 'spin family' \(\mathscr{C}^{\mathcal{G}}\) over an \((\#X_{P_{\tilde{\pi}}}+1)\)-dimensional \(\mathcal{P}_{\tilde{\pi}}\)-parabolic weight space \(\mathscr{W}_{\mathcal{G},\lambda_{\Pi}}^{\mathcal{P}_{\tilde{\pi}}}\) for \(\mathcal{G}\) (see e.g. [1, Cor. 5.16]). The map \(\jmath\) from SS2.1 isomorphically identifies \(\mathscr{W}_{\mathcal{G},\lambda_{\Pi}}^{\mathcal{P}_{\tilde{\pi}}}\) and \(\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}}\), and under Langlands functoriality, we expect that the family of Theorem 4.10 is exactly a transfer to \(G\) of the expected spin family \(\mathscr{C}^{\mathcal{G}}\). If we _suppose_ the existence of this \(p\)-adic functoriality map, then Theorem 4.9 implies that the image of \(\mathscr{C}^{\mathcal{G}}\) in the Iwahori-level \(\mathrm{GL}_{2n}\)-eigenvariety is itself an irreducible component of the symplectic locus (that is, it is not a proper subspace of some larger irreducible component). Remark 4.11, and the philosophy above, suggest the following. **Conjecture 4.12**.: _Let \(\tilde{\pi}\) be a \(p\)-refined RASCAR of \(\mathrm{GL}_{2n}\). Every symplectic family through \(\tilde{\pi}\) is the transfer of a classical parabolic family for \(\mathrm{GSpin}_{2n+1}\), varies over \(\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}}\), and has dimension \(\#X_{P_{\tilde{\pi}}}+1\)._ ### The dimension of classical families We have predicted the dimension of _symplectic_ families through symplectic \(\tilde{\pi}\). It is desirable to describe more generally the _classical_ families. If the following is true, then these questions are equivalent. **Expectation 4.13**.: _Every classical family through a \(p\)-refined RASCAR is symplectic. 
In particular, Conjecture 4.12 describes all classical families through RASCARs._ We do not state this as a formal conjecture; without further evidence, we do not feel confident to rule out 'strange' behaviour in higher dimension, where it is harder to classify all the possible lifts from discrete series. For example, we do not rule out classical cuspidal families through \(\tilde{\pi}\) that are lifts from discrete series but not themselves essentially self-dual. If we restrict to _essentially self-dual_ families - that is, where the essentially self-dual points are Zariski-dense - then we are on safer ground. Any such family should be symplectic or orthogonal. The symplectic/orthogonal loci should never intersect at classical cohomological points, meaning every classical essentially self-dual family through a \(p\)-refined RASCAR should be symplectic. In the case of \(\mathrm{GL}_{4}\), we expect every classical family to be essentially self-dual, motivating: **Conjecture 4.14**.: _Let \(\tilde{\pi}\) be a \(p\)-refined RASCAR of \(\mathrm{GL}_{4}\). Every classical family through \(\tilde{\pi}\) is the transfer of a classical family on \(\mathrm{GSp}_{4}\), which varies over a \(P_{\tilde{\pi}}\)-parabolic weight space and has dimension \(\#X_{P_{\tilde{\pi}}}+1\)._ This could be considered a (symplectic) \(\mathrm{GL}_{4}\) analogue of [12] (for Bianchi modular forms) and [1] (for \(\mathrm{GL}_{3}\)). It seems at least as difficult. ## 5 Weight obstructions to symplectic families Let \(\pi\) be a RASCAR of weight \(\lambda_{\pi}\) that is spherical and regular at \(p\), and \(\tilde{\pi}\) an optimally \(P_{\tilde{\pi}}\)-spin \(p\)-refinement. In this section, we prove Theorem 4.9. In particular, let \(\mathscr{C}\) be any classical symplectic family through \(\tilde{\pi}\). We show that \(\mathscr{C}\) varies only over \(\mathscr{W}_{0,\lambda_{\pi}}^{P_{\tilde{\pi}}}\), so has dimension at most \(\#X_{P_{\tilde{\pi}}}+1\). Recall \(\pi_{p}=\mathrm{Ind}_{B}^{G}\,\theta\), with \(\theta_{i}\theta_{2n+1-i}=\eta_{p}\) (2.4). This fixes a bijection \(\Psi_{\theta}:\{p\text{-refinements}\}\xrightarrow{\ \sim\ }\mathcal{W}_{G}\). ### Identities between Hecke eigenvalues. Given a \(p\)-refinement \(\tilde{\pi}=(\pi,\alpha)\), we have so far given several criteria for it being \(P\)-spin. The most natural, in terms of transfer from \(\mathrm{GSpin}_{2n+1}\), is conceptually useful but is hard to check. To study the \(P\)-spin condition in \(p\)-adic families, we would prefer a characterisation purely in terms of eigenvalues that is intrinsic to \(\mathrm{GL}_{2n}\), with no reference to \(\mathrm{GSpin}_{2n+1}\). The following is an easy starting point: **Lemma 5.1**.: _If \(\tilde{\pi}=(\pi,\alpha)\) is \(r\)-spin, then_ \[\eta_{0}(p)^{n-r}\cdot\alpha(U_{p,r}^{\circ})=\alpha(U_{p,2n-r}^{\circ}). \tag{5.1}\] Proof.: By (3.5), \(\tilde{\pi}\) is \(P_{r}\)-spin for the \((r,2n-2r,r)\)-parabolic \(P_{r}\). Applying Proposition 3.7 to \(\tilde{\pi}^{P_{r}}\), we see \(\alpha^{P_{r}}\) factors through \(j^{\vee}:\mathcal{H}_{p}^{P_{r}}\to\mathcal{H}_{p}^{\mathcal{G},\mathcal{P}_{r}}\). Note \(j^{\vee}\) sends \(U_{p,r}\mapsto\mathcal{U}_{p,r}\) and \(U_{p,2n-r}\mapsto\mathcal{V}_{p}^{n-r}\mathcal{U}_{p,r}\), and that \(\mathcal{V}_{p}\) acts on \(\Pi\) via \(\eta_{p}(p)\); so this factorisation implies that \[\eta_{p}(p)^{n-r}\cdot\alpha(U_{p,r})=\alpha(U_{p,2n-r}).\] To get the claimed relation for the normalised \(U_{p,r}^{\circ}\)'s, recall \(U_{p,r}^{\circ}=p^{\lambda_{1}+\cdots+\lambda_{r}}U_{p,r}\). 
We have \(p^{\lambda_{1}+\cdots+\lambda_{2n-r}}=p^{\lambda_{1}+\cdots+\lambda_{r}}\cdot p^{(n-r)\mathsf{w}}\) by purity, and \(\eta_{p}(p)=\eta_{0}(p)p^{-\mathsf{w}}\); combining these gives (5.1). However, this statement is certainly not if-and-only-if in general. When \(r=n\), for example, the statement (5.1) is vacuous, so is satisfied by all \(\tilde{\pi}\). It is desirable to find analogous relations that _exactly_ characterise the \(r\)-spin (hence \(P\)-spin) refinements. For this, we will use the canonical function \(\gamma_{\tilde{\pi}}:\{1,...,n\}\hookrightarrow\{1,...,2n\}\) attached to \(\tilde{\pi}\), which - by Proposition 3.17 - exactly determines when \(\tilde{\pi}\) is \(P\)-spin. For any \(p\)-refinement \(\tilde{\pi}=(\pi,\alpha)\), by Proposition 2.10, \(\alpha(U_{p,r}^{\circ})\neq 0\) for all \(r\). We will repeatedly use the following simple observation. **Lemma 5.2**.: _Let \(\tilde{\pi}\) be a \(p\)-refinement and let \(\sigma=\Psi_{\theta}(\tilde{\pi})\). Then_ \[\theta_{\sigma(r)}(p) =p^{\frac{2n-2r+1}{2}}\cdot\frac{\alpha(U_{p,r})}{\alpha(U_{p,r-1})} \tag{5.2}\] \[=p^{\frac{2n-2r+1}{2}}\cdot p^{-\lambda_{r}}\cdot\frac{\alpha(U_{p,r}^{\circ})}{\alpha(U_{p,r-1}^{\circ})}.\] _Here, by convention, \(\alpha(U_{p,0})=\alpha(U_{p,0}^{\circ}):=1\)._ Proof.: The first equality follows from Proposition 2.10(i), which says for any \(r\), we have \(\alpha(U_{p,r}^{\circ})=\delta_{B}^{-1/2}(t_{p,r})\cdot p^{\lambda_{1}+\cdots+\lambda_{r}}\cdot\theta_{\sigma(1)}(p)\cdots\theta_{\sigma(r)}(p).\) The second equality follows as \(U_{p,r}^{\circ}=p^{\lambda_{1}+\cdots+\lambda_{r}}U_{p,r}\). Recall the map \(\gamma_{\tilde{\pi}}:\{1,...,n\}\hookrightarrow\{1,...,2n\}\) from Definition 3.14. Crucially, by definition of \(\gamma_{\tilde{\pi}}\), (2.4) tells us \(\theta_{\sigma(i)}\cdot\theta_{\sigma(2n+1-\gamma_{\tilde{\pi}}(i))}=\eta_{p}\). For ease of notation, let \(\alpha_{r}:=\alpha(U_{p,r})\). **Lemma 5.3**.: _For each \(1\leqslant s\leqslant n\), we have_ \[\alpha_{s}\cdot\prod_{i=1}^{s}p^{\frac{2\gamma_{\tilde{\pi}}(i)-2n-1}{2}}\frac{\alpha_{2n+1-\gamma_{\tilde{\pi}}(i)}}{\alpha_{2n-\gamma_{\tilde{\pi}}(i)}}=\delta_{B}^{-1/2}(t_{p,s})\cdot\eta_{p}(p)^{s}. \tag{5.3}\] _As \(\pi_{p}\) is regular, \(\gamma_{\tilde{\pi}}\) is the unique map \(\{1,...,n\}\hookrightarrow\{1,...,2n\}\) with this property._ Proof.: We know \(\alpha_{s}=\delta_{B}^{-1/2}(t_{p,s})\theta_{\sigma(1)}(p)\cdots\theta_{\sigma(s)}(p)\). By Lemma 5.2, the left-hand side is \[\delta_{B}^{-1/2}(t_{p,s})\theta_{\sigma(1)}(p)\cdots\theta_{\sigma(s)}(p)\cdot\prod_{i=1}^{s}\theta_{\sigma(2n+1-\gamma_{\tilde{\pi}}(i))}(p)=\delta_{B}^{-1/2}(t_{p,s})\prod_{i=1}^{s}\big[\theta_{\sigma(i)}\theta_{\sigma(2n+1-\gamma_{\tilde{\pi}}(i))}\big](p).\] We deduce (5.3) since \(\theta_{\sigma(i)}\theta_{\sigma(2n+1-\gamma_{\tilde{\pi}}(i))}=\eta_{p}\) for each \(i\). It remains to prove uniqueness. Suppose \(\gamma:\{1,...,n\}\hookrightarrow\{1,...,2n\}\) is another function such that (5.3) holds (with \(\gamma\) in place of \(\gamma_{\tilde{\pi}}\)) for \(1\leqslant s\leqslant n\). Regularity of \(\pi_{p}\) means all the \(\theta_{i}(p)\)'s are distinct. Dividing (5.3) for \(s\) by (5.3) for \(s-1\) gives \[\theta_{\sigma(s)}\cdot\theta_{\sigma(2n+1-\gamma(s))}(p)=\eta_{p}(p)=\theta_{\sigma(s)}\cdot\theta_{\sigma(2n+1-\gamma_{\tilde{\pi}}(s))}(p).\] Regularity implies \(\sigma(2n+1-\gamma(s))=\sigma(2n+1-\gamma_{\tilde{\pi}}(s))\), so \(\gamma(s)=\gamma_{\tilde{\pi}}(s)\), and \(\gamma=\gamma_{\tilde{\pi}}\). 
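To unwind (5.3) in the smallest case: take \(n=2\) and suppose \(\tilde{\pi}\) is \(1\)-spin, so \(\gamma_{\tilde{\pi}}(1)=1\). The \(s=1\) case of (5.3) reads \[p^{-3/2}\cdot\alpha_{1}\cdot\frac{\alpha_{4}}{\alpha_{3}}=\delta_{B}^{-1/2}(t_{p,1})\cdot\eta_{p}(p),\] which, via Lemma 5.2, is exactly the pairing \(\theta_{\sigma(1)}(p)\,\theta_{\sigma(4)}(p)=\eta_{p}(p)\) rewritten intrinsically in terms of the \(U_{p,r}\)-eigenvalues \(\alpha_{1},\alpha_{3},\alpha_{4}\). 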
**Proposition 5.4**.: _For each \(1\leqslant s\leqslant n\), we have_ \[\alpha(U_{p,s}^{\circ})\cdot\prod_{i=1}^{s}p^{\frac{2\gamma_{\tilde{\pi}}(i)-2n-1}{2}}\cdot p^{\lambda_{\gamma_{\tilde{\pi}}(i)}-\lambda_{i}}\cdot\frac{\alpha(U_{p,2n+1-\gamma_{\tilde{\pi}}(i)}^{\circ})}{\alpha(U_{p,2n-\gamma_{\tilde{\pi}}(i)}^{\circ})}=\delta_{B}^{-1/2}(t_{p,s})\cdot\eta_{0}(p)^{s}. \tag{5.4}\] _If \(\pi_{p}\) is regular, then \(\gamma_{\tilde{\pi}}\) is the unique map \(\{1,...,n\}\hookrightarrow\{1,...,2n\}\) with this property._ Proof.: The direct analogue of (5.3) with normalised eigenvalues is \[\alpha(U_{p,s}^{\circ})\cdot\prod_{i=1}^{s}p^{\frac{2\gamma_{\tilde{\pi}}(i)-2n-1}{2}}\cdot p^{-\lambda_{2n+1-\gamma_{\tilde{\pi}}(i)}}\cdot\frac{\alpha(U_{p,2n+1-\gamma_{\tilde{\pi}}(i)}^{\circ})}{\alpha(U_{p,2n-\gamma_{\tilde{\pi}}(i)}^{\circ})}=p^{\lambda_{1}+\cdots+\lambda_{s}}\cdot\delta_{B}^{-1/2}(t_{p,s})\cdot\eta_{p}(p)^{s}.\] To get the stated form, we use that \(\lambda_{\gamma_{\tilde{\pi}}(i)}+\lambda_{2n+1-\gamma_{\tilde{\pi}}(i)}=\mathsf{w}\) and \(\eta_{p}(p)=\eta_{0}(p)p^{-\mathsf{w}}\). ### Zariski-density of \(p\)-refined spherical points In our proofs of Theorems 4.9 and 4.10, we will require a Zariski-dense set of classical points with good properties. This is furnished by the following. Note we do _not_ require RASCARs here, only RACARs. **Proposition 5.5**.: _Let \(\mathscr{C}\subset\mathscr{L}_{K}^{G}\) be a classical family containing a classical point corresponding to a \(p\)-refined RACAR that is spherical and regular at \(p\). Then \(\mathscr{C}\) contains a Zariski-dense set of classical points corresponding to \(p\)-refined RACARs that are spherical and regular at \(p\)._ Proof.: Any classical point \(y\in\mathscr{C}\) corresponds to an eigensystem \(\alpha_{y}\) appearing in a RACAR \(\pi_{y}\) such that \(\pi_{y,p}\) is Iwahori-spherical (admits non-zero Iwahori-invariants). By [11, Prop. 2.6], any such \(\pi_{y,p}\) is a \(\mathrm{GL}_{2n}(\mathbf{Q}_{p})\)-submodule of an unramified principal series representation \(\mathrm{Ind}_{B}^{G}\,\theta_{y}\), for an unramified character \(\theta_{y}=(\theta_{y,1},...,\theta_{y,2n})\). First we prove that \(\mathrm{Ind}_{B}^{G}\,\theta_{y}\) is irreducible for a Zariski-dense set of \(y\in\mathscr{C}\), as then \(\pi_{y,p}=\mathrm{Ind}_{B}^{G}\,\theta_{y}\) is spherical. For convenience, drop the subscript \(y\). Let \(\sigma=\Psi_{\theta}(\tilde{\pi})\); without loss of generality, replace \(\theta\) with \(\theta^{\sigma}\) and assume \(\sigma=\mathrm{id}\). By [1, Thm. 4.2], \(\mathrm{Ind}_{B}^{G}\,\theta\) is reducible if and only if there exist \(r,s\) such that \(\theta_{r}=\theta_{s}|\cdot|\). As the \(\theta_{i}\) are unramified, this happens if and only if \(p\cdot\theta_{r}(p)=\theta_{s}(p)\). Using Lemma 5.2 with \(\sigma=1\), this is equivalent to \[p\cdot p^{s-r}\cdot p^{\lambda_{s}-\lambda_{r}}\cdot\alpha(U_{p,r}^{\circ})\cdot\alpha(U_{p,s-1}^{\circ})=\alpha(U_{p,s}^{\circ})\cdot\alpha(U_{p,r-1}^{\circ}). \tag{5.5}\] Since the \(\alpha(U_{p,i}^{\circ})\) are all analytic and non-zero on \(\mathscr{C}\), the locus \(\mathscr{C}_{r,s}\) in \(\mathscr{C}\) where (5.5) is satisfied is a Zariski-closed subspace (with weight support only over subsets where \(\lambda_{r}-\lambda_{s}\) is constant). However, by assumption \(\mathscr{C}\) contains a \(p\)-refined spherical point, so \(\mathscr{C}_{r,s}\neq\mathscr{C}\), whence \(\mathscr{C}_{r,s}\subset\mathscr{C}\) is a proper subspace of smaller dimension. 
Any classical point \(y\) where \(\operatorname{Ind}_{B}^{G}\theta_{y}\) is reducible must live in \(\bigcup_{r\neq s}\mathscr{C}_{r,s}\). Since there are only finitely many possible pairs \((r,s)\), this union is a proper Zariski-closed subset of \(\mathscr{C}\) of smaller dimension. It follows that \(\operatorname{Ind}_{B}^{G}\theta_{y}\) is irreducible for a Zariski-dense set of \(y\), and each of these \(y\) corresponds to a \(p\)-refined \(p\)-spherical RACAR. It remains to check that a Zariski-dense subset of these \(y\) are regular. If such a \(y\) is not regular, then there exist \(r\neq s\) such that \(\theta_{r}(p)=\theta_{s}(p)\). As above, this happens if and only if \[p^{s-r}\cdot p^{\lambda_{s}-\lambda_{r}}\cdot\alpha(U_{p,r}^{\circ})\cdot\alpha(U_{p,s-1}^{\circ})=\alpha(U_{p,s}^{\circ})\cdot\alpha(U_{p,r-1}^{\circ}),\] again cutting out a closed subspace in \(\mathscr{C}\). We conclude that there is a Zariski-dense set of \(p\)-regular points as before. **Remark 5.6**.: In any positive-dimensional component of \(\mathscr{C}_{r,s}\) we must have \(\lambda_{r}-\lambda_{s}\) constant. It follows that any family in which every classical point is ramified at \(p\) must vary over some parabolic weight space \(\mathscr{W}_{0,\lambda}^{P}\) for some non-minimal \(B\subsetneq P\subset G\). In particular, we recover that any classical family over the full pure weight space \(\mathscr{W}_{0}\) contains a Zariski-dense set of spherical points. ### Proof of Theorem 4.9 Let \(\tilde{\pi}\) be an optimally \(P\)-spin \(p\)-refined RASCAR such that \(\pi_{p}\) is spherical and regular, and let \(\mathscr{C}\) be a symplectic family through \(\tilde{\pi}\). To prove Theorem 4.9, we must show that \(w(\mathscr{C})\subset\mathscr{W}_{0,\lambda_{\pi}}^{P}\). Let \(\mathfrak{X}\) be the set of classical points in \(\mathscr{C}\) that correspond to \(p\)-refined RASCARs \(\tilde{\pi}_{y}\) such that \(\pi_{y,p}\) is spherical and regular. By Proposition 5.5, the set \(\mathfrak{X}\) is Zariski-dense in \(\mathscr{C}\). For each \(y\in\mathfrak{X}\), let \(\gamma_{y}:\{1,...,n\}\hookrightarrow\{1,...,2n\}\) be the function for \(\tilde{\pi}_{y}\) from Definition 3.14. **Lemma 5.7**.: _The function \(\gamma_{y}\) is constant as \(y\) varies in \(\mathfrak{X}\)._ Proof.: There are only finitely many functions \(\gamma:\{1,...,n\}\hookrightarrow\{1,...,2n\}\), so there must exist such a function \(\gamma\) and a Zariski-dense subset \(\mathfrak{Y}\subset\mathfrak{X}\subset\mathscr{C}\) such that \(\gamma_{z}=\gamma\) for all \(z\in\mathfrak{Y}\). By Proposition 5.4, at every \(y\) in \(\mathfrak{Y}\), the Hecke relations \[\alpha_{y}(U_{p,s}^{\circ})\cdot\prod_{i=1}^{s}p^{\frac{2\gamma(i)-2n-1}{2}}\cdot p^{\lambda_{y,\gamma(i)}-\lambda_{y,i}}\cdot\frac{\alpha_{y}(U_{p,2n+1-\gamma(i)}^{\circ})}{\alpha_{y}(U_{p,2n-\gamma(i)}^{\circ})}=\delta_{B}^{-1/2}(t_{p,s})\cdot\eta_{0}(p)^{s} \tag{5.6}\] are satisfied for all \(1\leqslant s\leqslant n\), where \(w(y)=\lambda_{y}\). Since \(U_{p,r}^{\circ}\) defines an analytic function on \(\mathscr{C}\), and these relations hold for the Zariski-dense \(\mathfrak{Y}\), they hold over all of \(\mathscr{C}\). In particular, they hold at every point \(y\in\mathfrak{X}\). Since the points in \(\mathfrak{X}\) are regular, the unicity statement in Proposition 5.4 says \(\gamma_{y}=\gamma\) for all \(y\in\mathfrak{X}\). 
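For instance, for \(\mathrm{GL}_{4}\) and \(\gamma(1)=2\) (the optimally \(Q\)-spin shape, cf. SS7), the \(s=1\) relation in (5.6) reads \[\alpha_{y}(U^{\circ}_{p,1})\cdot p^{-1/2}\cdot p^{\lambda_{y,2}-\lambda_{y,1}}\cdot\frac{\alpha_{y}(U^{\circ}_{p,3})}{\alpha_{y}(U^{\circ}_{p,2})}=\delta_{B}^{-1/2}(t_{p,1})\cdot\eta_{0}(p),\] an identity of analytic functions on \(\mathscr{C}\). The only weight-dependence here is the term \(p^{\lambda_{y,2}-\lambda_{y,1}}\), which is therefore forced to be constant along the family; this is the mechanism exploited in Lemma 5.9 below. 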
**Lemma 5.8**.: _Every point \(y\in\mathfrak{X}\) is optimally \(P\)-spin._ Proof.: Let \(y\in\mathfrak{X}\), and let \(P_{y}\) be the unique spin parabolic such that \(\tilde{\pi}_{y}\) is optimally \(P_{y}\)-spin. By Proposition 3.17, \(P_{y}\) is determined by the function \(\gamma_{y}\). By Lemma 5.7, the function \(\gamma_{y}\) is constant over \(\mathfrak{X}\); thus \(P_{y}\) is also constant over \(\mathfrak{X}\). But \(\mathfrak{X}\) contains \(\tilde{\pi}\), which by assumption is optimally \(P\)-spin. Thus \(P_{y}=P\) for all \(y\in\mathfrak{X}\). **Lemma 5.9**.: _For \(1\leqslant i\leqslant n\), if \(a_{i}\in\Delta_{P}\), then \(\lambda_{y,i}-\lambda_{y,i+1}\) is constant as \(y\) varies in \(\mathfrak{X}\)._ Proof.: Let \(\gamma\) be the function from the proof of Lemma 5.7. We showed that the relation (5.6) holds over all of \(\mathscr{C}\), and for all \(1\leqslant s\leqslant n\). As the \(\alpha_{y}(U_{p,r}^{\circ})\) vary analytically with \(y\), for this to be true for all \(s\), the term \(p^{\lambda_{y,\gamma(i)}-\lambda_{y,i}}\) must be constant for all \(1\leqslant i\leqslant n\). This forces \(\lambda_{y,\gamma(i)}-\lambda_{y,i}\) to be constant. Now, suppose \(a_{i}\in\Delta_{P}\). Then \(i\not\in X_{P}\). Now, since the points of \(\mathfrak{X}\) are optimally \(P\)-spin, by Proposition 3.17 we know that \(\gamma\) does not preserve \(\{1,...,i\}\). In particular, there exists some \(m\in\{1,...,i\}\) such that \(\gamma(m)>i\). Also, by dominance, we have \(\lambda_{y,m}\geqslant\lambda_{y,i}\geqslant\lambda_{y,i+1}\geqslant\lambda_{y,\gamma(m)}\). Thus if \(\lambda_{y,\gamma(m)}-\lambda_{y,m}\) is constant, as \(y\) varies over \(\mathfrak{X}\), then so is \(\lambda_{y,i}-\lambda_{y,i+1}\): indeed, \(\lambda_{y,i}-\lambda_{y,i+1}\) is then a non-negative integer bounded by the constant \(\lambda_{y,m}-\lambda_{y,\gamma(m)}\), so it takes some fixed value on a Zariski-dense subset, and being analytic it is constant. Finally we prove Theorem 4.9. If \(a_{i}\in\Delta_{P}\), then either: 1. \(1\leqslant i\leqslant n\). Lemma 5.9, and Zariski-density of \(\mathfrak{X}\), imply \(\lambda_{i}-\lambda_{i+1}\) is constant over \(w(\mathscr{C})\). 2. or \(n+1\leqslant i\leqslant 2n-1\); then \(1\leqslant 2n-i\leqslant n-1\). As \(P\) is a spin parabolic, \(a_{i}\in\Delta_{P}\) if and only if \(a_{2n-i}\in\Delta_{P}\), so by (1) \(\lambda_{2n-i}-\lambda_{2n+1-i}\) is constant. As \(w(\mathscr{C})\) is in the pure weight space, purity gives \(\lambda_{i}-\lambda_{i+1}=(\mathsf{w}-\lambda_{2n+1-i})-(\mathsf{w}-\lambda_{2n-i})=\lambda_{2n-i}-\lambda_{2n+1-i}\), so \(\lambda_{i}-\lambda_{i+1}\) is constant. By Lemma 4.7, this means that \(w(\mathscr{C})\subset\mathscr{W}_{0,\lambda_{\pi}}^{P}\), as claimed. ## 6 Existence of \(P\)-spin families We have obtained an upper bound on the dimension of symplectic families. We now prove Theorem 4.10, constructing families realising this bound through non-critical slope refinements. ### \(B\)-spin families Let \(\pi\) be a RASCAR of regular weight that is spherical and regular at \(p\). Let \(K_{1}(\tilde{\pi})\) be as before Theorem 4.10. In [BDW] and [BDG\({}^{+}\)] we proved: **Theorem 6.1**.: _Let \(\tilde{\pi}\) be a non-critical \(B\)-spin refinement. There is a unique family \(\mathscr{C}\) through \(\tilde{\pi}\) in \(\mathscr{E}_{K_{1}(\tilde{\pi})}^{G}\) that varies over the pure weight space \(\mathscr{W}_{0}\). Moreover \(\mathscr{C}\) is an \((n+1)\)-dimensional classical symplectic family, étale over \(\mathscr{W}_{0}\) at \(\tilde{\pi}\)._ Proof.: When \(K_{1}(\pi)=G(\widehat{\mathbf{Z}})\), this is exactly [BDG\({}^{+}\), Thm. 13.6]. One can treat general \(K_{1}(\pi)\) following exactly the strategy of [BDW, SS7.5,7.6]. 
**Lemma 6.2**.: _We may shrink \(\mathscr{C}\) so that every classical point \(y\in\mathscr{C}\) corresponds to a \(B\)-spin \(p\)-refined RASCAR \(\tilde{\pi}_{y}\) such that \(\pi_{y,p}=\operatorname{Ind}_{B}^{G}\theta_{y}\) is regular and spherical, with \(\Psi_{\theta_{y}}(\tilde{\pi}_{y})=\Psi_{\theta}(\tilde{\pi})\)._ In other words: each classical point is a \(p\)-refined \(p\)-spherical RASCAR, and all these refinements sit in the same position in the Weyl group. Proof.: By Proposition 5.5 and its proof, all the classical points corresponding to RACARs that are ramified at \(p\) live inside a proper closed subspace of the eigenvariety, and since \(\tilde{\pi}\) is not in this closed subspace, we can shrink the neighbourhood \(\mathscr{C}\) to avoid it completely. Then every classical \(y\) is unramified principal series at \(p\). In this \(\mathscr{C}\), every \(y\) is (optimally) \(B\)-spin by Lemmas 5.7 and 5.8; so \(\Psi_{\theta_{y}}(\tilde{\pi}_{y})\in\mathcal{W}_{G}^{0}\). By Remarks 2.11 and 3.3, we can thus conjugate \(\theta_{y}\) so that \(\Psi_{\theta_{y}}(\tilde{\pi}_{y})=\Psi_{\theta}(\tilde{\pi})\). ### Refinement-switching To produce \(P\)-spin families, we take the part of the \(B\)-spin family supported over the \(P\)-parabolic weight space, and systematically switch between refinements for each classical point in the family; for \(\mathrm{GL}_{4}\) this switching is made explicit in SS7. To enact this strategy, we need to be able to pass between optimally \(P\)-spin and optimally \(B\)-spin refinements, and to relate eigenvalues as we do so. Recall the notion of being \(r\)-spin from Definition 3.10, and \(X\)-spin from Definition 3.11. The following lemma shows that one can always 'improve' the spin-ness with a controlled transposition. **Lemma 6.3**.: _Suppose \(\tilde{\pi}\) is optimally \(X\)-spin, for \(X\subset\{1,...,n\}\)._ 1. _Let_ \(1\leqslant i\leqslant n-1\)_, and suppose: (a)_ \((i-1)\in X\) _or_ \(i=1\)_, and (b)_ \(i\notin X\)_. Let_ \[k:=\left\{\begin{array}{ll}2n-i&:i-1\text{ is maximal in }X,\\ \min\{i^{\prime}\in X:i^{\prime}>i-1\}&:\text{else}.\end{array}\right.\] _Then there exists_ \(i+1\leqslant j\leqslant k\) _such that the_ \(p\)_-refinement_ \(\tilde{\pi}^{\prime}\) _with_ \[\Psi_{\theta}(\tilde{\pi}^{\prime})=\Psi_{\theta}(\tilde{\pi})\cdot(i,j)\] _is_ \(X\cup\{i\}\)_-spin._ 2. _If_ \(\tilde{\pi}\) _is_ \((n-1)\)_-spin, then it is_ \(n\)_-spin (i.e. if_ \(n-1\in X\)_, then_ \(n\in X\)_)._ Proof.: (i) Let \(\sigma=\Psi_{\theta}(\tilde{\pi})\), and let \(j\) be the unique integer such that \(\sigma(j)+\sigma(2n+1-i)=2n+1\). **Step 1: Inequalities on \(j\)**. For any \(r\in X\), since \(\tilde{\pi}\) is \(r\)-spin, the sets \(\{\sigma(1),...,\sigma(r)\}\) and \(\{\sigma(2n+1-r),...,\sigma(2n)\}\) pair off so that the sum of each pair is \(2n+1\). In particular, \((\dagger)\)\(\sigma(j)\) _is in one of these two sets_ \(\iff\)\(\sigma(2n+1-i)\) _is in the other._ Then: * Apply \((\dagger)\) with \(r=i-1\). As \(\sigma(2n+1-i)\not\in\{\sigma(2n+2-i),...,\sigma(2n)\}\), we know \(\sigma(j)\notin\{\sigma(1),...,\sigma(i-1)\}\). So \(j\not\in\{1,...,i-1\}\), i.e. \(i\leqslant j\). * As \(\tilde{\pi}\) is \((i-1)\)-spin but _not_\(i\)-spin, \(\sigma(i)+\sigma(2n+1-i)\neq 2n+1\), so \(j\neq i\); hence \(i+1\leqslant j\). * As \(i\leqslant n-1\), we have \(\sigma(2n+1-i)\not\in\{\sigma(1),...,\sigma(i-1)\}\), so \(j\leqslant 2n+1-i\). But \(j\neq 2n+1-i\) clearly, so \(j\leqslant 2n-i\) (always). 
* If \(i-1\) is maximal in \(X\), then \(k=2n-i\) and we are done. Otherwise \(k\) is the smallest element of \(X\) exceeding \(i-1\); as \(i<k\) and \(\sigma\) is \(k\)-spin, we have \(\sigma(2n+1-i)\in\{\sigma(2n+1-k),...,\sigma(2n)\}\), so \((\dagger)\) implies \(j\leqslant k\). **Step 2: \(\tilde{\pi}^{\prime}\) is \(X\)-spin.** Now, let \(\zeta=(i,j)\). If \(r\in X\), then either we have \[r<i<j\leqslant 2n-r,\qquad\text{ or }\qquad\text{both }i,j\leqslant r.\] Either way, \(\zeta\) preserves \(\{1,...,r\}\) and \(\{2n+1-r,...,2n\}\). In particular, we have \[\{\sigma(1),...,\sigma(r)\} =\{\sigma\zeta(1),...,\sigma\zeta(r)\},\] \[\{\sigma(2n+1-r),...,\sigma(2n)\} =\{\sigma\zeta(2n+1-r),...,\sigma\zeta(2n)\},\] so \(\sigma\zeta\) is \(r\)-spin since \(\sigma\) is. Since this is true of all \(r\in X\), we conclude \(\sigma\zeta=\Psi_{\theta}(\tilde{\pi}^{\prime})\) is \(X\)-spin. **Step 3: \(\tilde{\pi}^{\prime}\) is \(X\cup\{i\}\)-spin.** By above, \(\sigma\zeta\) is \((i-1)\)-spin. Moreover, by construction \(\sigma\zeta(i)+\sigma\zeta(2n+1-i)=2n+1\), so additionally \(\sigma\zeta\) is \(i\)-spin. As it is \(X\)-spin and \(i\)-spin, \(\sigma\zeta=\Psi_{\theta}(\tilde{\pi}^{\prime})\) is \(X\cup\{i\}\)-spin, as claimed. (ii) If \(\tilde{\pi}\) is \((n-1)\)-spin, then by definition, for each \(r\leqslant n-1\), there is \(s\geqslant n+2\) such that \(\sigma(r)+\sigma(s)=2n+1\). This accounts for \(n-1\) of the \(n\) pairs with this property, and forces \(\sigma(n)+\sigma(n+1)=2n+1\) to be the \(n\)th and last. Thus \(\tilde{\pi}\) is also \(n\)-spin. We now relate the Hecke eigenvalues of \(\tilde{\pi}\) and \(\tilde{\pi}^{\prime}\) from the previous lemma. Recall that by Proposition 2.10, since \(\theta_{i}(p)\neq 0\) for all \(i\), \(\alpha\) is finite slope, i.e. \(\alpha(U_{p,i}^{\circ})\neq 0\) for all \(i\). **Lemma 6.4**.: _Let \(\tilde{\pi}=(\pi,\alpha)\) and \(\tilde{\pi}^{\prime}=(\pi,\alpha^{\prime})\) be two \(p\)-refinements, with_ \[\Psi_{\theta}(\tilde{\pi}^{\prime})=\Psi_{\theta}(\tilde{\pi})\cdot(i,j),\] _where \((i,j)\in\mathrm{S}_{2n}\) is a transposition with \(i<j\). Then for all \(r\),_ \[\alpha^{\prime}(U_{p,r}^{\circ})=\left\{\begin{array}{ll}p^{i-j}\,p^{\lambda_{i}-\lambda_{j}}\,\frac{\alpha(U_{p,j}^{\circ})}{\alpha(U_{p,j-1}^{\circ})}\cdot\frac{\alpha(U_{p,i-1}^{\circ})}{\alpha(U_{p,i}^{\circ})}\cdot\alpha(U_{p,r}^{\circ})&:i\leqslant r<j\\ \alpha(U_{p,r}^{\circ})&:\text{otherwise},\end{array}\right.\] _where \(\pi\) has weight \(\lambda=(\lambda_{1},...,\lambda_{2n})\) and we use the shorthand that "\(\alpha(U_{p,0}^{\circ})\)"\(:=1\)._ Proof.: Let \(\sigma=\Psi_{\theta}(\tilde{\pi})\). By Proposition 2.10 and the definition of \(U^{\circ}_{p,r}\), we have \[\alpha(U^{\circ}_{p,r})=\delta_{B}^{-1/2}(t_{p,r})\cdot p^{\lambda_{1}+\dots+\lambda_{r}}\cdot\theta_{\sigma(1)}(p)\cdots\theta_{\sigma(r)}(p).\] Now \(\alpha^{\prime}(U^{\circ}_{p,r})\) can be described in the same way, except with \(\sigma\) replaced by \(\sigma(i,j)\). When \(r<i\) or \(r\geqslant j\), this is identical to \(\alpha(U^{\circ}_{p,r})\); when \(i\leqslant r<j\), this means \(\theta_{\sigma(i)}(p)\) is replaced by \(\theta_{[\sigma(i,j)](i)}(p)=\theta_{\sigma(j)}(p)\) in the product. 
Via Lemma 5.2, in this case \[\alpha^{\prime}(U^{\circ}_{p,r}) =\alpha(U^{\circ}_{p,r})\cdot\theta_{\sigma(j)}(p)\cdot\theta_{\sigma(i)}(p)^{-1}\] \[=\alpha(U^{\circ}_{p,r})\cdot\left[p^{-\lambda_{j}}p^{(2n+1-2j)/2}\frac{\alpha(U^{\circ}_{p,j})}{\alpha(U^{\circ}_{p,j-1})}\right]\cdot\left[p^{-\lambda_{i}}p^{(2n+1-2i)/2}\frac{\alpha(U^{\circ}_{p,i})}{\alpha(U^{\circ}_{p,i-1})}\right]^{-1},\] which simplifies to the claimed expression. We will use Lemma 6.4 to define maps between families on the eigenvariety. This requires adding inverses to the Hecke algebra. **Definition 6.5**.: Let \(\mathcal{H}^{\mathrm{frac}}=\mathcal{H}^{\mathrm{frac}}_{p}\cdot\mathcal{H}^{p}\), where \[\mathcal{H}^{\mathrm{frac}}_{p}\coloneqq\mathbf{Q}_{p}[U^{\circ}_{p,r},(U^{\circ}_{p,r})^{-1}:1\leqslant r\leqslant 2n].\] Now fix \(K=K_{1}(\tilde{\pi})\) from before Theorem 4.10. Let \(\mathscr{E}=\mathscr{E}^{G}_{K}\) from Theorem 4.1, defined by the action of \(\mathcal{H}\) on overconvergent cohomology. Let also \(\mathscr{E}^{\prime}=\mathscr{E}^{\prime}_{K}\) be the eigenvariety defined by the same eigenvariety datum, but using instead the action of \(\mathcal{H}^{\mathrm{frac}}\) on the _finite-slope_ overconvergent cohomology. **Lemma 6.6**.: _We have \(\mathscr{E}=\mathscr{E}^{\prime}\)._ Proof.: Both eigenvarieties are defined by writing down local pieces \(\mathscr{E}_{\Omega,h}=\mathrm{Sp}(\mathbf{T}_{\Omega,h})\) and \(\mathscr{E}^{\prime}_{\Omega,h}=\mathrm{Sp}(\mathbf{T}^{\prime}_{\Omega,h})\), where \(\mathbf{T}_{\Omega,h}\) (resp. \(\mathbf{T}^{\prime}_{\Omega,h}\)) is the image of \(\mathcal{H}\otimes\mathcal{O}_{\Omega}\) (resp. \(\mathcal{H}^{\mathrm{frac}}\otimes\mathcal{O}_{\Omega}\)) in \(\mathrm{End}_{\mathcal{O}_{\Omega}}(\mathrm{H}^{\bullet}_{\mathrm{c}}(S_{K},\mathscr{D}_{\Omega})^{\leqslant h})\). As each \(U^{\circ}_{p,r}\) acts invertibly on the slope \(\leqslant h\) cohomology (see e.g. [20, SS2.3.1]), the image of \(U^{\circ}_{p,r}\) in \(\mathbf{T}_{\Omega,h}\) is invertible; and hence \(\mathbf{T}_{\Omega,h}=\mathbf{T}^{\prime}_{\Omega,h}\), so \(\mathscr{E}_{\Omega,h}=\mathscr{E}^{\prime}_{\Omega,h}\). Both \(\mathscr{E}\) and \(\mathscr{E}^{\prime}\) are defined by the same gluing of the same local pieces, so they are equal. **Definition 6.7**.: For \(\lambda=(\lambda_{1},...,\lambda_{2n})\in X^{*}(T)\), and \(i<j\), define a map \[\phi^{\lambda}_{ij}:\mathcal{H}\longrightarrow\mathcal{H}^{\mathrm{frac}}\] to be the identity map on all operators away from \(p\), and at \(p\) by \[\phi^{\lambda}_{ij}(U^{\circ}_{p,r})=\left\{\begin{array}{cc}p^{i-j}\,p^{\lambda_{i}-\lambda_{j}}\,\frac{U^{\circ}_{p,j}}{U^{\circ}_{p,j-1}}\cdot\frac{U^{\circ}_{p,i-1}}{U^{\circ}_{p,i}}\cdot U^{\circ}_{p,r}&:i\leqslant r<j\\ U^{\circ}_{p,r}&:\mathrm{otherwise}.\end{array}\right.\] **Lemma 6.8**.: _Let \(\pi\) have weight \(\lambda_{\pi}\), and let \(\tilde{\pi}=(\pi,\alpha)\) and \(\tilde{\pi}^{\prime}=(\pi,\alpha^{\prime})\) be \(p\)-refinements with_ \[\Psi_{\theta}(\tilde{\pi}^{\prime})=\Psi_{\theta}(\tilde{\pi})\cdot(i,j)\] _as elements of \(\mathcal{W}_{G}\). Then \(\alpha^{\prime}=\alpha\circ\phi^{\lambda_{\pi}}_{ij}\) and \(\alpha^{\prime}\circ\phi^{\lambda_{\pi}}_{ij}=\alpha\)._ Proof.: Note also \(\Psi_{\theta}(\tilde{\pi}^{\prime})\cdot(i,j)=\Psi_{\theta}(\tilde{\pi})\). Both statements are then direct from Lemma 6.4. ### From \(P\)-spin to \(B\)-spin Let \(\tilde{\pi}=(\pi,\alpha)\) be an optimally \(P\)-spin non-critical slope refinement. **Proposition 6.9**.: (i) 
_There exists an element_ \(\tau=(i_{1},j_{1})\cdots(i_{k},j_{k})\in\mathcal{W}_{G}\)_, where_ \(k\leqslant n-\#X_{P}\)_, and a_ \(B\)_-spin_ \(p\)_-refinement_ \(\tilde{\pi}^{\prime}=(\pi,\alpha^{\prime})\) _with_ \[\Psi_{\theta}(\tilde{\pi}^{\prime})=\Psi_{\theta}(\tilde{\pi})\cdot\tau.\] (ii) _The refinement_ \(\tilde{\pi}^{\prime}\) _from (i) has non-critical slope._ (iii) _We have_ \(\alpha^{\prime}\circ\phi^{\lambda_{\pi}}_{\tau}=\alpha\)_, where for any classical_ \(\lambda\) _we let_ \[\phi^{\lambda}_{\tau}:=\phi^{\lambda}_{i_{k},j_{k}}\circ\cdots\circ\phi^{\lambda}_{i_{1},j_{1}}:\mathcal{H}\longrightarrow\mathcal{H}^{\text{frac}}.\] (iv) _We have_ \(\phi^{\lambda}_{\tau}=\phi^{\lambda_{\pi}}_{\tau}\) _for any classical_ \(\lambda\in\mathscr{W}^{P}_{\lambda_{\pi}}\)_._ Proof.: (i) We iterate Lemma 6.3. Let \(X_{P}=\{I_{1},...,I_{\#X_{P}}\}\). Let \(1\leqslant i_{1}\leqslant n\) be minimal with \(i_{1}\not\in X_{P}\). Then there exists some \(r\) such that \(I_{r}<i_{1}<I_{r+1}\) (where \(I_{0}:=0\) and \(I_{\#X_{P}+1}:=2n-I_{\#X_{P}}\)). By Lemma 6.3, there exists \(I_{r}<i_{1}<j_{1}\leqslant I_{r+1}\) and an \((X_{P}\cup\{i_{1}\})\)-spin \(\tilde{\pi}^{(1)}\) satisfying \[\Psi_{\theta}(\tilde{\pi}^{(1)})=\Psi_{\theta}(\tilde{\pi})\cdot(i_{1},j_{1}).\] Iterating this process \(k\leqslant n-\#X_{P}\) times, we obtain a \(p\)-refinement \(\tilde{\pi}^{\prime}=\tilde{\pi}^{(k)}\) which is \(\{1,...,n\}\)-spin with \(\Psi_{\theta}(\tilde{\pi}^{\prime})=\Psi_{\theta}(\tilde{\pi})\cdot(i_{1},j_{1})\cdots(i_{k},j_{k})\). By Proposition 3.12, \(\tilde{\pi}^{\prime}\) is \(B\)-spin. (ii) From Definition 2.13, \(\tilde{\pi}^{\prime}\) has non-critical slope if \[v_{p}(\alpha^{\prime}(U^{\circ}_{p,i}))<\lambda_{i}-\lambda_{i+1}+1,\qquad 1\leqslant i\leqslant 2n-1. \tag{6.1}\] By assumption this is true for \(\alpha\). To see it for \(\alpha^{\prime}\): 1. If \(i\geqslant n\): by the proof of [Roc, Thm. 4] (more precisely, the sentence following the second displayed equation), as \(\tilde{\pi}\) has non-critical slope, it is \(n\)-spin. In particular, \(n\in X_{P}\). By construction this forces \(1\leqslant i_{r},j_{r}\leqslant n\) for all \(r\). By Proposition 2.10, we see \(\alpha^{\prime}(U^{\circ}_{p,i})=\alpha(U^{\circ}_{p,i})\). So \(\alpha^{\prime}(U^{\circ}_{p,i})\) is non-critical slope as \(\alpha(U^{\circ}_{p,i})\) is. 2. If \(i<n\): as \(\tilde{\pi}^{\prime}\) is \(i\)-spin, we have \(v_{p}(\alpha^{\prime}(U^{\circ}_{p,i}))=v_{p}(\alpha^{\prime}(U^{\circ}_{p,2n-i}))\) by Lemma 5.1. This is non-critical slope by (ii-1). (iii) This follows from iterating Lemma 6.8. (iv) By Lemma 4.7, we know \(\lambda_{i}-\lambda_{i+1}\) is constant in \(\mathscr{W}^{P}_{\lambda_{\pi}}\) whenever \(i\not\in X_{P}\). In the map \(\phi^{\lambda}_{i_{r},j_{r}}\), the only dependence on \(\lambda\) is in the term \[p^{\lambda_{i_{r}}-\lambda_{j_{r}}}=p^{\lambda_{i_{r}}-\lambda_{i_{r}+1}}\cdots p^{\lambda_{j_{r}-1}-\lambda_{j_{r}}}. \tag{6.2}\] By construction, we know that \(I_{s}<i_{r}<j_{r}\leqslant I_{s+1}\) fall between two adjacent elements of \(X_{P}\), so that \(i_{r},i_{r}+1,...,j_{r}-1\not\in X_{P}\). Thus all of the terms in the product (6.2) are constant as \(\lambda\) varies in \(\mathscr{W}^{P}_{\lambda_{\pi}}\). The result follows. ### From \(B\)-spin to \(P\)-spin Let \(\tilde{\pi}\) and \(\tilde{\pi}^{\prime}\) be as in Proposition 6.9. 
By Theorem 6.1 and Lemma 6.6, there exists a unique \((n+1)\)-dimensional symplectic family \(\mathscr{C}^{\prime}\subset\mathscr{E}^{\prime}=\mathscr{E}\) through \(\tilde{\pi}^{\prime}\). Assume \(\mathscr{C}^{\prime}\) is as in Lemma 6.2, and let \[\mathscr{C}^{\prime}_{P}:=\mathscr{C}^{\prime}\times_{\mathscr{W}_{0}}\mathscr{W}^{P}_{0,\lambda_{\pi}}\] be the \((\#X_{P}+1)\)-dimensional subspace varying only over \(\mathscr{W}^{P}_{0,\lambda_{\pi}}\). By Lemma 6.2, every classical point \(y^{\prime}\in\mathscr{C}^{\prime}_{P}\) corresponds to a \(p\)-refined \(\tilde{\pi}^{\prime}_{y}=(\pi_{y},\alpha^{\prime}_{y})\) with \(\pi_{y,p}=\operatorname{Ind}^{G}_{B}\theta_{y}\) spherical and regular. Let \(\tilde{\pi}_{y}=(\pi_{y},\alpha_{y})\) be the unique \(p\)-refinement with \[\Psi_{\theta_{y}}(\tilde{\pi}^{\prime}_{y})=\Psi_{\theta_{y}}(\tilde{\pi}_{y})\cdot\tau,\] for \(\tau\) as in Proposition 6.9. **Lemma 6.10**.: _The refinement \(\tilde{\pi}_{y}\) is optimally \(P\)-spin and we have_ \[\alpha^{\prime}_{y}\circ\phi^{\lambda_{\pi}}_{\tau}=\alpha_{y}. \tag{6.3}\] Proof.: By Lemma 6.2, we know \(\Psi_{\theta_{y}}(\tilde{\pi}^{\prime}_{y})=\Psi_{\theta}(\tilde{\pi}^{\prime})\). In particular, we have \[\Psi_{\theta_{y}}(\tilde{\pi}_{y})=\Psi_{\theta_{y}}(\tilde{\pi}^{\prime}_{y})\cdot\tau^{-1}=\Psi_{\theta}(\tilde{\pi}^{\prime})\cdot\tau^{-1}=\Psi_{\theta}(\tilde{\pi}),\] so that \(\tilde{\pi}_{y}\) is optimally \(P\)-spin. The identity (6.3) follows by iterating Lemma 6.8 as in Proposition 6.9(iii). Here we use (iv) of that result to see \(\phi^{\lambda_{y}}_{\tau}=\phi^{\lambda_{\pi}}_{\tau}\). **Lemma 6.11**.: _For a Zariski-dense set of classical \(y^{\prime}\in\mathscr{C}^{\prime}_{P}\), the \(p\)-refinement \(\tilde{\pi}_{y}\) is non-critical slope, and thus corresponds to a classical \(P\)-spin point \(y\in\mathscr{E}\)._ Proof.: Up to shrinking \(\mathscr{C}^{\prime}_{P}\), we may assume that the slope of each \(U^{\circ}_{p,i}\) is constant along \(\mathscr{C}^{\prime}_{P}\). As \(\phi^{\lambda_{\pi}}_{\tau}(U^{\circ}_{p,i})\) is a product of \(U^{\circ}_{p,r}\)'s and terms constant over \(\mathscr{W}^{P}_{0,\lambda_{\pi}}\), the slope of \(\alpha_{y}(U^{\circ}_{p,i})=\alpha^{\prime}_{y}\circ\phi^{\lambda_{\pi}}_{\tau}(U^{\circ}_{p,i})\) is constant, equal to \(v_{p}(\alpha(U^{\circ}_{p,i}))\), for all \(i\) and for any classical \(y^{\prime}\in\mathscr{C}^{\prime}_{P}\). By assumption \(\tilde{\pi}\) is non-critical slope (for \(\lambda_{\pi}\)). For a Zariski-dense set of classical weights \(\lambda_{y}\in w(\mathscr{C}^{\prime}_{P})\), the non-critical slope condition (6.1) for \(\lambda_{y}\) is strictly weaker than for \(\lambda_{\pi}\); so above all such weights, the points \(\tilde{\pi}_{y}\) are non-critical slope. ### Proof of Theorem 4.10 Let us take stock. We started with a non-critical slope \(P\)-spin refinement \(\tilde{\pi}\), and via an element \(\tau\) in the Weyl group, associated to it a non-critical slope \(B\)-spin refinement \(\tilde{\pi}^{\prime}\). This varies in a unique \((n+1)\)-dimensional family \(\mathscr{C}^{\prime}\subset\mathscr{E}^{\prime}=\mathscr{E}\). Applying \(\tau^{-1}\) to each \(p\)-refined classical point \(y^{\prime}\) in \(\mathscr{C}^{\prime}_{P}\) gives another \(P\)-spin point \(y\in\mathscr{E}\). We now show this association can be interpolated over \(\mathscr{W}^{P}_{0,\lambda_{\pi}}\). 
**Proposition 6.12**.: _There exists a finite map \(t:\mathscr{C}^{\prime}_{P}\to\mathscr{E}\) over \(\mathscr{W}^{P}_{0,\lambda_{\pi}}\) which interpolates the association \(y^{\prime}\mapsto y\). Thus there exists an \((\#X_{P}+1)\)-dimensional symplectic family through \(\tilde{\pi}\)._ Proof.: We use an interpolation idea that originally dates back to Chenevier [10]. The precise version we use is [12, Thm. 3.2.1], which says: suppose we have eigenvariety data \(\mathcal{D}_{1},\mathcal{D}_{2}\), using Hecke algebras \(\mathcal{H}_{1},\mathcal{H}_{2}\), giving eigenvarieties \(\mathscr{E}_{1},\mathscr{E}_{2}\). Suppose there is a map \(\phi:\mathcal{H}_{2}\to\mathcal{H}_{1}\) and a Zariski-dense set of points \(y_{1}\in\mathscr{E}_{1}\) with \(\alpha_{y_{1}}\circ\phi\) appearing as a point \(y_{2}\in\mathscr{E}_{2}\). Then there is a finite map \(\mathscr{E}_{1}\to\mathscr{E}_{2}\) interpolating the transfer \(y_{1}\mapsto y_{2}\). We need only explain why our situation fits this. Let \(\Omega_{P}\coloneqq w(\mathscr{C}^{\prime}_{P})\). The part of the eigenvariety \(\mathscr{E}\) over \(\Omega_{P}\) is constructed from an eigenvariety datum \[\mathcal{D}_{2}=(\Omega_{P},\mathscr{Z},\mathscr{M},\mathcal{H},\psi)\] in the sense of [12, Def. 3.1.1]. Also [12, Cor. 3.1.5] allows us to realise \(\mathscr{C}^{\prime}_{P}\) inside the eigenvariety attached to an eigenvariety datum \[\mathcal{D}_{1}=(\Omega_{P},\mathscr{Z}_{\mathscr{C}^{\prime}_{P}},\mathscr{M}^{\prime},\mathcal{H}^{\text{frac}},\psi),\] where we shrink the weight space to be \(P\)-parabolic, and the Fredholm hypersurface to isolate the component containing \(\mathscr{C}^{\prime}_{P}\). The map of Hecke algebras is \(\phi^{\lambda_{\pi}}_{\tau}:\mathcal{H}\to\mathcal{H}^{\text{frac}}.\) For a Zariski-dense set of classical \(y^{\prime}\in\mathscr{C}^{\prime}_{P}\), corresponding to eigensystems \(\alpha^{\prime}\), by Lemma 6.11 the eigensystem \(\alpha^{\prime}\circ\phi^{\lambda_{\pi}}_{\tau}\) appears in \(\mathscr{E}\), and we deduce existence of \(t\) by [12]. Now \(t(\mathscr{C}^{\prime}_{P})\subset\mathscr{E}\) is the required symplectic family through \(\tilde{\pi}\). We have now proved existence of an \((\#X_{P}+1)\)-dimensional symplectic family \(\mathscr{C}\) through any non-critical slope \(P\)-spin point in \(\mathscr{E}^{G}_{K_{1}(\tilde{\pi})}\). Theorem 4.10 claims that this family is unique and étale over its image in weight space, an affinoid \(\Omega_{P}\) in \(\mathscr{W}^{P}_{0,\lambda_{\pi}}\) (noting \(P=P_{\tilde{\pi}}\)). To complete the proof, the key observation is that at level \(K_{1}(\tilde{\pi})\), with appropriate signs, the \(\tilde{\pi}\)-isotypic part of the top-degree compactly-supported cohomology is \(1\)-dimensional (as in e.g. [1, Prop. 7.19]). Then as in Proposition 7.20 _op. cit._, there exists an ideal \(I\subset\mathcal{O}_{\Omega_{P}}\) such that we have a relation \[\mathcal{O}_{\mathscr{C},\tilde{\pi}}=\mathcal{O}_{\Omega_{P},\lambda_{\pi}}/I\] between the local rings. It suffices to prove \(I=0\). If it is not, then every component through \(\tilde{\pi}\) has dimension \(<\dim(\Omega_{P})=\#X_{P}+1\). But this contradicts the existence of \(\mathscr{C}\), so \(I=0\) and \(w:\mathscr{C}\to\Omega_{P}\) is étale at \(\tilde{\pi}\). This completes the proof of Theorem 4.10. **Remark 6.13**.: For \(\mathrm{GL}_{2}\), the _infinite fern_ (see [12]) is the image of the Coleman-Mazur eigenvariety in an unobstructed deformation space of residual Galois representations. 
If \(\pi\) is a \(p\)-spherical RACAR of \(\mathrm{GL}_{2}\), then there are two \(p\)-refinements \(\pi_{\alpha},\pi_{\beta}\), each varying in Coleman families; but both \(\pi_{\alpha},\pi_{\beta}\) have the same underlying Galois representation, so have the same image in the infinite fern, and the images of their families in the infinite fern cross at this point. The proof here suggests that, given a hypothetical 'infinite fern' \(\mathscr{I}\) for \(\mathrm{GL}_{2n}\), there would be a picture with higher-dimensional intersections. Consider e.g. \(\mathrm{GL}_{4}\); then the image of the \(\mathrm{GL}_{4}\)-eigenvariety in \(\mathscr{I}\) through \(\pi\) should comprise 24 surfaces (the Iwahori families), intersecting at 6 lines (the \(Q\)-parahoric families), which all intersect at a single point (corresponding to \(\pi\)). Our expectation is that 8 of the surfaces (through the \(B\)-spin refinements) comprise classical points, and these intersect at 4 lines (corresponding to 4 classical families at \(Q\)-parahoric level). A higher-dimensional 'infinite fern' for _polarised_ Galois representations of \(\mathrm{GL}_{n}\) over CM fields is the main focus of [HS]. ## 7 Explicit examples for \(\mathrm{GL}_{4}\) We now illustrate the theory concretely for \(\mathrm{GL}_{4}\), and give an explicit example. There are 4 spin parabolics in \(G\): \(B\), the (2,2)-parabolic \(Q\), the (1,2,1)-parabolic \(Q^{\prime}\), and \(G\) itself. Suppose \(\pi\) is a RASCAR of \(\mathrm{GL}_{4}\) with \(\pi_{p}\) spherical, the transfer of a RACAR \(\Pi\) on \(\mathrm{GSp}_{4}\), and let \(\mathcal{F}\in\Pi\) be a Siegel newform of level prime to \(p\). There are 6 \(Q\)-refinements of \(\pi_{p}\) (Hecke eigensystems in the \(Q\)-parahoric invariants of \(\pi_{p}\)), corresponding to elements of \(\mathcal{W}_{G}/\mathcal{W}_{L_{Q}}\). These are combinatorially represented by decomposing \(\{1,2,3,4\}\) into an ordered disjoint union \(A_{1}\sqcup A_{2}\), where \(\#A_{1}=\#A_{2}=2\) (cf. [13, SS3.3]). Exactly four of these are '\(Q\)-spin', factoring through Klingen refinements of \(\mathcal{F}\): \[\{1,2\}\sqcup\{3,4\},\quad\{1,3\}\sqcup\{2,4\},\quad\{2,4\}\sqcup\{1,3\},\quad\{3,4\}\sqcup\{1,2\}, \tag{7.1}\] whilst \(\{1,4\}\sqcup\{2,3\}\) and \(\{2,3\}\sqcup\{1,4\}\) do not factor. These four are the refinements satisfying the combinatorial criterion [13, Def. 3.5(ii)]. There are 24 Iwahori \(p\)-refinements, each lying above a unique \(Q\)-refinement. Each \(Q\)-refinement \(A_{1}\sqcup A_{2}\) has 4 further Iwahori refinements, corresponding to orderings on \(A_{1}\) and \(A_{2}\); e.g. above \(\{1,2\}\sqcup\{3,4\}\) are \(\{1234\}\), \(\{2134\}\), \(\{1243\}\), \(\{2143\}\). The table below lists all 24 Iwahori \(p\)-refinements \(\tilde{\pi}\), together with the smallest parabolic \(P\subset G\) such that \(\tilde{\pi}\) is \(P\)-spin (each row contains 8 refinements). \begin{tabular}{c|l} \(\tilde{\pi}\) **optimally:** & \(\Psi_{\theta}(\tilde{\pi})\) \\ \hline \(B\)-spin & \(\{1234\}\), \(\{1324\}\), \(\{2143\}\), \(\{2413\}\), \(\{3142\}\), \(\{3412\}\), \(\{4231\}\), \(\{4321\}\) \\ \(Q\)-spin & \(\{2134\}\), \(\{3124\}\), \(\{1243\}\), \(\{4213\}\), \(\{1342\}\), \(\{4312\}\), \(\{2431\}\), \(\{3421\}\) \\ \(G\)-spin & \(\{2314\}\), \(\{3214\}\), \(\{1423\}\), \(\{4123\}\), \(\{1432\}\), \(\{4132\}\), \(\{2341\}\), \(\{3241\}\) \\ \end{tabular} (Any \(Q^{\prime}\)-spin refinement is automatically a \(B\)-spin refinement by Lemma 6.3(ii).) 
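As a check on the table, take \(\tilde{\pi}\) with \(\sigma=\Psi_{\theta}(\tilde{\pi})=\{2134\}\), i.e. \(\sigma(1)=2\), \(\sigma(2)=1\), \(\sigma(3)=3\), \(\sigma(4)=4\). Then \(\sigma(1)+\sigma(4)=6\neq 5\), so \(\tilde{\pi}\) is not \(1\)-spin; but \(\{\sigma(1),\sigma(2)\}=\{1,2\}\) and \(\{\sigma(3),\sigma(4)\}=\{3,4\}\) pair off as \(1+4=2+3=5\), so \(\tilde{\pi}\) is \(2\)-spin. Equivalently, \[\gamma_{\tilde{\pi}}(1)=2,\qquad\gamma_{\tilde{\pi}}(2)=1:\] \(\gamma_{\tilde{\pi}}\) preserves \(\{1,2\}\) but not \(\{1\}\), so by Proposition 3.17 \(\tilde{\pi}\) is optimally \(Q\)-spin. Moreover Lemma 6.3 (with \(i=1\) and \(X=\{2\}\), so \(k=2\) and \(j=2\)) switches it to the \(B\)-spin refinement \(\{1234\}\), via \(\Psi_{\theta}(\tilde{\pi}^{\prime})=\sigma\cdot(1,2)\). 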
We conjecture that the dimension of the symplectic locus through the optimally \(B\)-spin, \(Q\)-spin and \(G\)-spin refinements is 3, 2 and 1 respectively; we have proved this for non-critical slope \(\tilde{\pi}\). **Example.** From the tables at www.smf.compositio.nl, there is a unique non-endoscopic Siegel modular form \(\mathcal{F}\) on \(\mathrm{GSp}_{4}\) of level 1 that transfers to a RASCAR \(\pi\) on \(\mathrm{GL}_{4}\) of weight \(\lambda=(12,1,-1,-12)\); and \(\pi\) is everywhere spherical. At \(p=11\), by examining the Newton polygon, one sees this \(\pi\) admits a parahoric-ordinary \(Q\)-refinement \(\tilde{\pi}^{Q}\), corresponding to an ordinary Klingen refinement of \(\mathcal{F}\). We can normalise \(\theta\) so that this \(Q\)-refinement is \(\{1,2\}\sqcup\{3,4\}\). The 4 Iwahori refinements above \(\tilde{\pi}^{Q}\) are \(\{1234\},\{2134\},\{1243\},\{2143\}\). For \(\lambda=(12,1,-1,-12)\), the non-critical slope bounds (6.1) are \(v_{p}(U_{p,1})<12\), \(v_{p}(U_{p,2})<3\), \(v_{p}(U_{p,3})<12\). We see: * \(\{1234\}\) is \(B\)-spin. Its \(U_{p,i}\)-eigenvalues have slopes \(v_{p}(U_{p,1})=v_{p}(U_{p,3})=11\) and \(v_{p}(U_{p,2})=0\). This is non-critical slope, varying in a unique 3-dimensional symplectic family. * \(\{2134\}\) is optimally \(Q\)-spin. The slopes are \(v_{p}(U_{p,1})=11\), \(v_{p}(U_{p,2})=0\), \(v_{p}(U_{p,3})=1\). This is non-critical slope, varying in a 2-dimensional symplectic family, inside a 3-dimensional component of the eigenvariety. Similarly \(\{1243\}\) and \(\{2143\}\) are non-critical slope, optimally \(Q\)-spin and \(B\)-spin respectively. ## Part III \(p\)-refined Friedberg-Jacquet Integrals In Part III, we focus on parahoric \(P\)-refinements \(\tilde{\pi}^{P}\). We give a conjectural classification of the \(P\)-spin \(P\)-refinements in terms of non-vanishing of twisted global period integrals, and prove various results towards this by using the results of Part II. Our conjecture generalises [BDG\({}^{+}\), Expectation 7.2], which we prove in some cases. ## 8 \(p\)-refined Friedberg-Jacquet integrals: Statements ### Statement of the conjecture Let \(\pi\) be a RACAR of \(G(\mathbf{A})\). For \(\varphi\in\pi\) and Hecke characters \(\chi,\eta\), let \[Z_{H}(\varphi,\chi,s)\coloneqq\int_{\mathbf{A}^{\times}H(\mathbf{Q})\setminus H(\mathbf{A})}\varphi\left[\begin{pmatrix}h_{1}\\ &h_{2}\end{pmatrix}\right]\chi|\cdot|^{s-\frac{1}{2}}\left(\frac{\det(h_{1})}{\det(h_{2})}\right)\eta^{-1}\big{(}\det(h_{2})\big{)}dh, \tag{8.1}\] where \(H=\mathrm{GL}_{n}\times\mathrm{GL}_{n}\). In [11, Prop. 2.2] (with [1]) Friedberg-Jacquet proved: **Theorem 8.1**.: _Let \(\pi\) be a RACAR of \(G(\mathbf{A})\). Let \(\chi,\eta\) be algebraic Hecke characters, with \(\chi\) finite order. Then for any \(s\in\mathbf{C}\), the following are equivalent:_ 1. _There exists_ \(\varphi\in\pi\) _such that_ \(Z_{H}(\varphi,\chi,s+1/2)\neq 0\)_._ 2. _All of the following hold:_ * \(\pi\) _is a functorial transfer of some_ \(\Pi\) _on_ \(\mathrm{GSpin}_{2n+1}(\mathbf{A})\) _with central character_ \(\eta\)_,_ * \(L(\pi\times\chi,s+1/2)\neq 0\)_._ For our '\(p\)-refined' version of Theorem 8.1, we need some notation. Let \(P\subset G\) be a spin parabolic. We define \(P\)-refinements of RACARs (not just RASCARs) to be exactly as in Definition 2.8. **Notation 8.2**.: * Let \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) be a \(P\)-refinement. We say \(\varphi\in\tilde{\pi}^{P}\) (resp. \(\varphi_{p}\in\tilde{\pi}^{P}_{p}\)) if \(\varphi\in\pi^{J_{P}}\) (resp. 
\(\varphi_{p}\in\pi^{J_{P}}_{p}\)) is an \(\alpha^{P}\)-eigenvector for \(\mathcal{H}^{P}_{p}\). * Let \(u=\left(\begin{smallmatrix}1&-w_{n}\\ 0&1\end{smallmatrix}\right)\in\mathrm{GL}_{2n}(\mathbf{Q}_{p})\), where \(w_{n}\) is the longest Weyl element in \(\mathrm{GL}_{n}(\mathbf{Q}_{p})\) (i.e. antidiagonal \(1\)s). If \(P\) is the \((m_{1},...,m_{r})\)-parabolic (see Notation 2.6), let \[t_{P}=\mathrm{diag}(p^{r-1}\mathrm{I}_{m_{1}},...,p\mathrm{I}_{m_{r-1}},\mathrm{I}_{m_{r}})\in T(\mathbf{Q}_{p}).\] For any \(\beta\geqslant 1\), we view \(ut_{P}^{\beta}\in G(\mathbf{Q}_{p})\subset G(\mathbf{A})\) in the obvious way. **Definition 8.3**.: Let \(\tilde{\pi}^{P}\) be a \(P\)-refined RACAR, and \(P\subset G\) a spin parabolic, with associated \(\mathcal{P}\subset\mathcal{G}\). We say \(\tilde{\pi}^{P}\) is a _functorial transfer of a \(\mathcal{P}\)-refined \(\tilde{\Pi}^{\mathcal{P}}\) on \(\mathrm{GSpin}_{2n+1}(\mathbf{A})\)_ if \(\pi\) is the functorial transfer of \(\Pi\), and \(\tilde{\pi}^{P}\) is the functorial transfer of \(\tilde{\Pi}^{\mathcal{P}}\) in the sense of Definition 3.6. **Conjecture 8.4**.: _Let \(P\subsetneq G\) be a proper spin parabolic, with associated \(\mathcal{P}\subset\mathrm{GSpin}_{2n+1}\). Let \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) be a \(P\)-refined RACAR of \(G(\mathbf{A})\). Let \(\chi,\eta\) be algebraic Hecke characters, with \(\chi\) finite order of conductor \(p^{\beta}>1\). For any \(s\in\mathbf{C}\), the following are equivalent:_ 1. _There exists_ \(\varphi\in\tilde{\pi}^{P}\) _such that_ \(Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s+1/2)\neq 0\)_._ 2. _All of the following hold:_ * \(\tilde{\pi}^{P}\) _is a functorial transfer of some_ \(\tilde{\Pi}^{\mathcal{P}}\) _on_ \(\mathrm{GSpin}_{2n+1}(\mathbf{A})\) _with central character_ \(\eta\)_,_ * \(L(\pi\times\chi,s+1/2)\neq 0\)_,_ * \(P\) _is contained in the_ \((n,n)\)_-parabolic._ We have stated this in a global form close to Theorem 8.1. We shall see a purely local-at-\(p\) reformulation in SS8.3. In particular, in condition (1), the important choice is of a local eigenvector \(\varphi_{p}\in\tilde{\pi}^{P}_{p}\), which is unique up to scalar. In this sense, locally either the integral vanishes or does not vanish on the entire \(p\)-refinement. ### Results towards the conjecture. In the remainder of the paper, we prove a number of results towards this conjecture. In particular, we prove: **Theorem 8.5**.: _Implication (2) \(\Rightarrow\) (1) holds in Conjecture 8.4._ We can also use Theorem 4.9 to prove results towards (1) \(\Rightarrow\) (2); as an example, as in Theorem D of the introduction, we show it when \(\tilde{\pi}^{P}\) has non-\(P\)-critical slope. To state our (stronger) precise result, we need some additional terminology. * Fix a prime-to-\(p\) level \(K^{p}\subset\mathrm{GL}_{2n}(\mathbf{A}_{f}^{(p)})\). For a parabolic \(P\), we let \(K_{P}:=K^{p}J_{P}\subset\mathrm{GL}_{2n}(\mathbf{A}_{f})\), where \(J_{P}\) is the \(P\)-parahoric subgroup. * For any open compact \(K\subset\mathrm{GL}_{2n}(\mathbf{A}_{f})\), let \(S_{K}\) denote the \(\mathrm{GL}_{2n}\)-locally symmetric space of level \(K\) (see [1, SS2.3]). * For any parabolic \(P\), let \(\mathcal{D}_{\lambda}^{P}\) be the module of weight \(\lambda\)\(P\)-parahoric distributions for \(G\), defined in [11, SS3.2]. We have \(\mathcal{D}_{\lambda}^{G}=V_{\lambda}^{\vee}\) is the dual of the algebraic induction, and \(\mathcal{D}_{\lambda}^{B}=\mathcal{D}_{\lambda}\) is the usual module of (Iwahori) locally analytic distributions. 
We have attached \(p\)-adic local systems \(\mathcal{Y}_{\lambda}/\mathscr{D}_{\lambda}^{P}\) on \(S_{K_{P}}\) (e.g. [1, SS2.3.2]). * The _top degree eigenvariety_ was constructed in [11, SS5], following [16]. It is built from modules \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega})\), where \(\Omega\subset\mathscr{W}\) is a weight affinoid and \(\mathscr{D}_{\Omega}\) is a local system of locally analytic distributions over \(\Omega\) (as in [1, Def. 3.11]; see [11, SS3.2]). Here \(t=n^{2}+n-1\) is the top degree for cuspidal cohomology. * We say \(\tilde{\pi}^{P}\)_appears in the top degree eigenvariety_ if there exists an Iwahori refinement \(\tilde{\pi}\) above \(\tilde{\pi}^{P}\), and a neighbourhood \(\Omega\subset\mathscr{W}_{0,\lambda_{\pi}}^{P}\) of \(\lambda_{\pi}\), such that the natural specialisation map (induced by \(r_{\lambda_{\pi}}\)), and then projection onto the \(\tilde{\pi}\)-eigenspace) is surjective. This implies the \(\tilde{\pi}\)-localisation in \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega})\) is non-zero, and hence there is a point in the top degree eigenvariety corresponding to \(\tilde{\pi}\). * We say \(\tilde{\pi}^{P}\) is \(P\)_-strongly-interior_ if the \(P\)-parahoric boundary overconvergent cohomology vanishes \(\mathrm{H}_{\mathrm{c}}^{\bullet}(S_{K_{P}},\mathscr{D}_{\lambda_{\pi}}^{P})_{ \tilde{\pi}^{P}}=0\) (see Def. 5.13_op. cit._). **Theorem 8.6**.: _Suppose \(\pi\) has regular weight, that \(\tilde{\pi}^{P}\) appears in the top degree eigenvariety, and that \(\tilde{\pi}^{P}\) is \(P\)-strongly-interior. Then (1) \(\Rightarrow\) (2) holds in Conjecture 8.4._ **Remarks 8.7**.: * We cautiously suggest the conditions on \(\tilde{\pi}^{P}\) should hold for all \(\tilde{\pi}^{P}\) (whence Conjecture 8.4 would hold in full). Unconditionally, \(\tilde{\pi}^{P}\) is \(P\)-strongly-interior if it is non-\(P\)-critical slope; see [11, Lem. 5.14]. It appears in the top-degree eigenvariety if there exists a non-\(B\)-critical Iwahori-refinement \(\tilde{\pi}\) above \(\tilde{\pi}^{P}\) (see [11, Def. 4.1] and [1, Prop. 7.8]). Hence Theorem 8.6 implies Theorem D(ii) from the introduction. * When \(P=B\), this proves [1, Expectation 7.2] for \(\tilde{\pi}\) satisfying the conditions of Theorem 8.6, thus for non-critical slope \(\tilde{\pi}\) of regular weight (see Remark 7.3_op. cit._). ### Local reinterpretation. Proofs of all of the above crucially use local zeta integrals. If (1) or (2) holds in Conjecture 8.4, then Theorem 8.1 implies \(\pi\) is symplectic, so that \(\pi\) and \(\pi_{v}\) admit Shalika models for all places \(v\). Via [19, Prop. 2.3], (8.1) can then be rewritten as a product of local integrals, the _Friedberg-Jacquet integrals_ \[\zeta_{v}(\varphi_{v},\chi_{v},s):=\int_{\mathrm{GL}_{n}(\mathbf{Q}_{v})} \mathcal{S}_{\tilde{\psi}_{v}}^{\eta_{v}}(\varphi_{v})\left[\begin{pmatrix}x &\\ &1\end{pmatrix}\right]\chi_{v}|\cdot|^{s-\frac{1}{2}}\Big{(}\det x\Big{)}dx.\] where \(\varphi_{v}\in\pi_{v}\) and \(\mathcal{S}_{\tilde{\psi}_{v}}^{\eta_{v}}\) is an intertwining of \(\pi_{v}\) into its Shalika model (see e.g. [1, SS2.6]). Let \(\ell\neq p\) be a finite prime, and \(\varphi_{\ell}\in\pi_{\ell}\). By [19, Prop. 
3.1], for each unramified quasi-character \(\chi_{\ell}:F_{\ell}^{\times}\to\mathbf{C}^{\times}\), there exists a holomorphic function \(r_{\ell}(\varphi_{\ell},\chi_{\ell},s)\) such that \(\zeta_{\ell}(\varphi_{\ell}^{\mathrm{FJ}},\chi_{\ell},s)=r_{\ell}(\varphi_{ \ell},\chi_{\ell},s)\cdot L(\pi_{\ell}\times\chi_{\ell},s)\). Moreover there exists \(\varphi_{\ell}^{\mathrm{FJ}}\in\pi_{\ell}\) such that \(r_{\ell}(\varphi_{\ell}^{\mathrm{FJ}},\chi_{\ell},s)=1\). If \(\pi_{\ell}\) is spherical, we may take \(\varphi_{\ell}^{\mathrm{FJ}}\) spherical [19, Prop. 3.2]. At infinity, by [18] there exists a vector \(\varphi_{\infty}^{\mathrm{FJ}}\in\pi_{\infty}\) such that \(\zeta_{\infty}(\varphi_{\infty},\chi_{\infty},s)\neq 0\). **Lemma 8.8**.: _The following are equivalent:_ 1. _There exists_ \(\varphi\in\tilde{\pi}^{P}\) _such that_ \(Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s+1/2)\neq 0\)_,_ 2. \(L(\pi\times\chi,s+1/2)\neq 0\)_,_ \(\pi\) _is symplectic, and there exists_ \(\varphi_{p}\in\tilde{\pi}^{P}_{p}\) _such that_ \(\zeta_{p}(ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},s+1/2)\neq 0\)_._ Proof.: Suppose (1) holds. Then \(\pi\) is symplectic by Theorem 8.1. Write \(\varphi=\otimes_{v}\varphi_{v}\). Decomposing \(Z_{H}\) into a product of local integrals gives \[0 \neq Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s+1/2)\] \[=\zeta_{\infty}(\varphi_{\infty},\chi_{\infty},s)\cdot r(\varphi^ {(p)},\chi^{(p)},s)\cdot L(\pi\times\chi,s+1/2)\cdot\zeta_{p}(ut_{P}^{\beta} \cdot\varphi_{p},\chi_{p},s+1/2),\] where \(r(\varphi^{(p)},\chi^{(p)},s)=\prod_{\ell\neq p}r_{\ell}(\varphi_{\ell},\chi _{\ell},s)\). This is a holomorphic function, as all but finitely many of the \(\varphi_{\ell}\) are spherical, meaning all but finitely many of the \(r_{\ell}(\varphi_{\ell},\chi_{\ell},s)\) are \(1\). Here we have also used that \(L(\pi_{p}\times\chi_{p},s+1/2)=1\), as \(\chi_{p}\) is ramified. We deduce the non-vanishing statements in \((1^{\prime})\). If conversely \((1^{\prime})\) holds, the same arguments show (1) holds with \(\varphi\mathrel{\mathop{:}}=\varphi_{p}\otimes(\otimes_{v\neq p}\varphi_{v}^{ \mathrm{F}_{q}})\). Proposition 3.7 also gives a local analogue of property (2). **Lemma 8.9**.: _Property (2) in Conjecture 8.4 is equivalent to:_ 1. \(L(\pi\times\chi,s+1/2)\neq 0\)_,_ \(\pi\) _is symplectic,_ \(\tilde{\pi}^{P}\) _is_ \(P\)_-spin, and_ \(P\) _is contained in the_ \((n,n)\)_-parabolic._ Conjecture 8.4 is then equivalent to the following local statement: **Conjecture 8.10**.: _If \(\pi\) is symplectic and \(L(\pi\times\chi,s+1/2)\neq 0\), the following are equivalent:_ 1. _There exists_ \(\varphi_{p}\in\tilde{\pi}^{P}_{p}\) _such that_ \(\zeta_{p}(ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},s+1/2)\neq 0\)_._ 2. \(P\) _is contained in the_ \((n,n)\)_-parabolic, and_ \(\tilde{\pi}^{P}\) _is_ \(P\)_-spin._ For the rest of the paper, we assume our prime-to-\(p\) level \(K^{p}\subset\mathrm{GL}_{2n}(\mathbf{A}_{f}^{(p)})\) fixes \(\otimes_{\ell\neq p}\varphi_{\ell}^{\mathrm{F}_{d}}\), which is possible by [1, Prop. 3.2]. ## 9 Proof of Theorem 8.5 In this section, we give the proof of Theorem 8.5 (that \((2)\Rightarrow(1)\) in Conjecture 8.4). More specifically, we prove \((2^{\prime})\Rightarrow(1^{\prime})\) in Conjecture 8.10, which implies the theorem by SS8.3. Our proof is constructive; given a \(P\)-spin \(\tilde{\pi}^{P}\), we describe explicitly an eigenvector with non-vanishing local zeta integral. If \(P=B\) or the \((n,n)\)-parabolic \(Q\), then Theorem 8.5 was proved in [1, Cor. 7.15] and [1, Prop. 
3.4, Lem. 3.6] respectively. Our proof for general \(P\) is closely modelled on the approach in [1, and we refer to specific places _op. cit._ for more detail. Recall \(\mathcal{S}_{\psi_{p}}^{\eta_{p}}\) is an intertwining of \(\pi_{p}\) into its Shalika model, and for any \(\varphi_{p}\in\pi_{p}\), we let \(W_{\varphi_{p}}\mathrel{\mathop{:}}=\mathcal{S}_{\psi_{p}}^{\eta_{p}}(\varphi_ {p})\). Then we: 1. Express \(\zeta_{p}(ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},s+1/2)\) as a non-zero multiple of a value of \(W_{\varphi_{p}}\); 2. Show if \(P\subset Q\) and \(\tilde{\pi}^{P}\) is \(P\)-spin, there exists \(\varphi_{p}\in\tilde{\pi}^{P}_{p}\) where this specific value of \(W_{\varphi_{p}}\) is non-zero. ### The local zeta integral **Proposition 9.1**.: _Let \(\varphi_{p}\in\pi_{p}^{\mathrm{Iw}_{\varnothing}}\), and let \(W_{\varphi_{p}}=\mathcal{S}_{\psi_{p}}^{\eta_{p}}(\varphi_{p})\). Let \(\chi_{p}\) be a character of conductor \(p^{\beta}>1\). Let \(t=(\begin{smallmatrix}i_{1}&z_{2}\end{smallmatrix})\in T(\mathbf{Q}_{p})\), and_ \[\nu_{\beta}(t)\mathrel{\mathop{:}}=p^{-\beta}z_{2}^{-1}w_{n}z_{1}.\] _Then for all \(s\),_ \[\Big{[}\zeta_{p}(ut\cdot\varphi_{p},\chi_{p},s)\neq 0\Big{]}\iff\Big{[}W_{ \varphi_{p}}\left(\begin{smallmatrix}\nu_{\beta}(t)&\\ &1\end{smallmatrix}\right)\neq 0\Big{]}. \tag{9.1}\] Proof.: Recalling \(u=\left(\begin{smallmatrix}1&-w_{n}\\ 0&1\end{smallmatrix}\right)\), we have \[\left(\begin{array}{cc}x&\\ &1\end{array}\right)ut=\left(\begin{array}{cc}z_{2}&\\ &z_{2}\end{array}\right)\left(\begin{array}{cc}1&-z_{2}^{-1}xw_{n}z_{2}\\ &1\end{array}\right)\left(\begin{array}{cc}z_{2}^{-1}xz_{1}&\\ &1\end{array}\right). \tag{9.2}\] Let \(\varphi_{p}\in\pi_{p}\) be any \(\mathrm{Iw}_{G}\)-invariant vector. By (9.2), as \(W_{\varphi_{p}}\) is in the Shalika model, we see \[\zeta_{p}(ut\cdot\varphi_{p},\chi_{p},s)=\eta(\det z_{2})\int_{\mathrm{GL}_{n} (\mathbb{Q}_{p})}\psi_{p}\Big{(}\operatorname{tr}(-z_{2}^{-1}xw_{n}z_{2}) \Big{)}W_{\varphi_{p}}\left(\begin{array}{cc}z_{2}^{-1}xz_{1}&\\ &1\end{array}\right)\chi_{p}|\cdot|^{s-\frac{1}{2}}\Big{(}\det x\Big{)}dx.\] Let \(y=-z_{1}^{-1}w_{n}xz_{1}\), and let \(\omega=-z_{2}^{-1}w_{n}z_{1}=-p^{\beta}\nu_{\beta}(t)\). As \(\operatorname{tr}(-z_{2}^{-1}xw_{n}z_{2})=\operatorname{tr}(y)\), changing variables and noting \(dx=dy\), we see \[\zeta_{p}(ut\cdot\varphi_{p},\chi_{p},s)=(\star)\cdot\mathcal{Q},\qquad( \star)\neq 0, \tag{9.3}\] where \[\mathcal{Q}:=\int_{\mathrm{GL}_{n}(\mathbb{Q}_{p})}\psi_{p}(\operatorname{tr} (y))I(\omega y)dy,\qquad I(y)=W_{\varphi_{p}}\begin{pmatrix}y&\\ &1\end{pmatrix}\chi_{p}|\cdot|^{s-\frac{1}{2}}(\det y).\] By (9.3), it suffices to prove \(\mathcal{Q}\neq 0\iff W_{\varphi_{p}}\left(\begin{smallmatrix}\nu_{\beta}(t)\\ &1\end{smallmatrix}\right)\neq 0\). We want to reduce the support of the integral \(\mathcal{Q}\). Let \(M=\mathrm{GL}_{n}(\mathbb{Q}_{p})\cap M_{n}(\mathbb{Z}_{p})\). By [BDG\({}^{+}\), Lem. 5.1], the support of \(I(\omega y)\) (hence \(\mathcal{Q}\)) is contained in \(\omega^{-1}M\). As in [BDG\({}^{+}\), Not. 
5.3], let \(A\) denote the set of all diagonal \(n\times n\)-matrices of the form \[\gamma=\mathrm{diag}(c_{11},\ldots,c_{nn}),\qquad c_{ii}\in\mathbf{Z}_{p}^{ \times}.\] Let \(B_{\beta}\) denote the additive group of all \(n\times n\)-matrices \(\delta\) with \[\delta_{i,j}=\left\{\begin{array}{cc}c_{i,j}&\text{ if }i<j\\ 0&\text{ if }i=j\\ p^{\beta}c_{i,j}&\text{ if }i>j\end{array}\right.,\qquad c_{ij}\in\mathbf{Z}_{p}.\] Let \(\alpha=\gamma+\delta\), with \(\gamma\in A\), \(\delta\in B_{\beta}\). Note \(\alpha\in\mathrm{Iw}_{n}(p^{\beta})\) is in the depth \(p^{\beta}\) Iwahori subgroup of \(\mathrm{GL}_{n}(\mathbf{Z}_{p})\). Let \(\varepsilon=\left(\begin{smallmatrix}\alpha^{-1}&1\\ &1\end{smallmatrix}\right)\in\mathrm{Iw}_{G}\). Then \(I(y\alpha^{-1})=\chi_{p}(\det\gamma)^{-1}I(y)\), so \[\mathcal{Q}=\int_{\mathrm{GL}_{n}(\mathbb{Q}_{p})}\psi_{p}\big{(} \operatorname{tr}(y)\big{)}I\big{(}\omega y\big{)}dy =\chi_{p}(\det\gamma)\int_{\omega^{-1}M}\psi_{p}\Big{(} \operatorname{tr}(y)\Big{)}I\Big{(}\omega y\alpha^{-1}\Big{)}dy\] \[=\chi_{p}(\det\gamma)\int_{\omega^{-1}M}\psi_{p}\Big{(} \operatorname{tr}(x\gamma)\Big{)}\psi_{p}\Big{(}\operatorname{tr}(x\delta) \Big{)}I\big{(}\omega x\Big{)}dx,\] where we make the change of variables \(x=y\alpha^{-1}\). If \(x\in\mathrm{GL}_{n}(\mathbf{Q}_{p})\) it is easy to see that \(\delta\mapsto\psi_{p}(\operatorname{tr}(x\delta))\) is the trivial function of \(B_{\beta}\) if and only if \[x_{i,j}\in\left\{\begin{array}{cc}p^{-\beta}\mathbb{Z}_{p}&\text{ if }i<j\\ \mathbb{Z}_{p}&\text{ if }i>j\end{array}\right.\] Denote this subset of \(\mathrm{GL}_{n}(\mathbb{Q}_{p})\) by \(M^{\prime}_{\beta}\). Averaging over \(B_{\beta}\), by character orthogonality (as in [BDG\({}^{+}\), Cor. 5.5]) we find \[\mathcal{Q}=\chi_{p}(\det\gamma)\int_{\omega^{-1}M\cap M^{\prime}_{\beta}} \psi_{p}(\operatorname{tr}(x\gamma))I(\omega x)dx.\] Now we average over \(A\). Consider the function \[\chi_{p}(\det\gamma)\psi_{p}(\operatorname{tr}(x\gamma))=\prod_{i=1}^{n}\chi_ {p}(c_{i,i})\psi_{p}(x_{i,i}c_{i,i})\] Exactly following [BDG\({}^{+}\), Lem. 5.6], we see that \[\Big{[}\int_{\gamma\in A}\chi_{p}(\det\gamma)\psi_{p}(\operatorname{tr}(x\gamma)) d^{\times}A\neq 0\Big{]}\iff\Big{[}x_{i,i}\in p^{-\beta}\mathbb{Z}_{p}^{ \times}\text{ for all }i\Big{]}, \tag{10}\] and when the left-hand side is non-zero, it has form \((\star^{\prime})\prod_{i=1}^{n}\chi_{p}(p^{\beta}x_{i,i})^{-1}\), with \((\star^{\prime})\neq 0\) a scalar depending only on \(\chi,p\) and \(\beta\). Let \(M_{\beta}^{\prime\prime}\subset M_{\beta}^{\prime}\) be the subset where the right-hand condition of (10) is satisfied. Note that \(M_{\beta}^{\prime\prime}=p^{-\beta}\operatorname{Iw}_{n}(p^{\beta})\). We then have \[\mathcal{Q}=(\star^{\prime\prime})\int_{\omega^{-1}M\cap M_{\beta}^{\prime \prime}}\prod_{i=1}^{n}\chi_{p}(p^{\beta}x_{i,i})^{-1}\cdot I(\omega x)dx, \qquad(\star^{\prime\prime})\neq 0.\] Write \(x^{\prime}=p^{\beta}x\) for \(x\in\omega^{-1}M\cap M_{\beta}^{\prime\prime}\). Then \(\chi_{p}(\det x^{\prime})=\prod_{i=1}^{n}\chi_{p}(p^{\beta}x_{i,i})\), as \(\chi_{p}\) has conductor \(p^{\beta}\). 
If \(\nu=\nu_{\beta}(t)=-p^{-\beta}\omega\), then we find \[\mathcal{Q} =(\star^{\prime\prime})\int_{\omega^{-1}M\cap\operatorname{Iw}_ {n}(p^{\beta})}\chi_{p}(\det p^{\beta}x)^{-1}I(-p^{\beta}x)dx=\int_{\nu^{-1}M \cap\operatorname{Iw}(p^{\beta})}\chi(\det x^{\prime})^{-1}I(-\nu x^{\prime})dx ^{\prime}\] \[=(\star^{\prime\prime})\int_{\nu^{-1}M\cap\operatorname{Iw}_{n}( p^{\beta})}\chi_{p}(\det x^{\prime})^{-1}W_{\varphi_{p}}\begin{pmatrix}-\nu x^{ \prime}&\\ &1\end{pmatrix}\chi_{p}|\cdot|^{s-\frac{1}{2}}(\det-\nu x^{\prime})dx^{\prime}\] \[=(\star^{\prime\prime})\chi_{p}|\cdot|^{s-\frac{1}{2}}(\det-\nu) \int_{\nu^{-1}M\cap\operatorname{Iw}_{n}(p^{\beta})}W_{\varphi_{p}}\begin{pmatrix} \nu&\\ &1\end{pmatrix}dx^{\prime}\] \[=(\star^{\prime\prime})\operatorname{Vol}(\nu^{-1}M\cap \operatorname{Iw}_{n}(p^{\beta}))\cdot W_{\varphi_{p}}\begin{pmatrix}\nu&\\ &1\end{pmatrix}, \tag{11}\] where \((\star^{\prime\prime\prime})\neq 0\) depends only on \(\chi\), \(t\), \(p\), and \(s\). In the penultimate equality we use Iwahori-invariance of \(W_{\varphi_{p}}\). We consider two cases: 1. If \(\nu\not\in M\), then \(W_{\varphi_{p}}(\,^{\nu}\,_{1}\,)=0\), thus \(\mathcal{Q}=0\). In particular, both sides of (10) are \(0\), so Proposition 9.1 holds. 2. If \(\nu\in M\), then \(\nu\operatorname{Iw}_{n}(p^{\beta})\) is a compact open subset of \(\operatorname{GL}_{n}(\mathbb{Q}_{p})\), and it is contained in \(M\). This means \(\operatorname{Iw}_{n}(p^{\beta})\subset\nu^{-1}M\), so the volume above is \(\operatorname{Vol}(\operatorname{Iw}_{n}(p^{\beta}))\) which is non-zero. Then Proposition 9.1 follows from (11). **Corollary 9.2**.: _If \(\zeta_{p}(ut\cdot\varphi_{p},\chi_{p},s_{0})\neq 0\) for some \(s_{0}\in\mathbf{C}\), then \(\zeta_{p}(ut\cdot\varphi_{p},\chi_{p},s)\neq 0\) for all \(s\in\mathbf{C}\)._ Proof.: Non-vanishing of \(W_{\varphi_{p}}\begin{pmatrix}\nu_{\beta}(\,^{\nu}\,_{1}\,)\\ &1\end{pmatrix}\) is independent of \(s\). **Corollary 9.3**.: _If \(P\) is a spin parabolic and \(P\) is not contained in the \((n,n)\)-parabolic, then for all \(\varphi_{p}\in\pi_{p}^{\operatorname{Iw}_{C}}\) and \(s\in\mathbf{C}\), we have_ \[\zeta_{p}(ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},s)=0.\] _In particular, \(Z_{H}(ut_{P}^{\beta}\cdot\varphi,\chi,s)=0\) for all \(\varphi=\otimes_{v}\varphi_{v}\in\pi^{\operatorname{Iw}_{G}}\)._ Proof.: We apply Proposition 9.1 with \(t=t_{P}^{\beta}\), which we write as \((\,^{z_{1}}\,_{z_{2}})\) as above. Suppose \(P\) has type \((n_{1},...,n_{k})\). As \(P\) is spin, \((n_{1},...,n_{k})\) is symmetric, whence \[t_{P}=p^{k-1}w_{2n}t_{P}^{-1}w_{2n}. \tag{12}\] Equation (12) implies that \(z_{2}=p^{\beta(k-1)}w_{n}z_{1}^{-1}w_{n}\). Thus, for \(\nu_{\beta}(t_{P}^{\beta})\) as above, we have \[\nu_{\beta}(t_{P}^{\beta})=p^{-\beta}z_{2}^{-1}w_{n}z_{1}=p^{-\beta k}w_{n}z_{1 }^{2}. \tag{13}\] Let \([k/2]\) be the floor of \(k/2\). Then \(p^{2\beta[k/2]}\) is the largest power of \(p\) which divides \(z_{1}^{2}\) (so that one remains in \(M_{n}(\mathbf{Z}_{p})\)). Hence \(\nu_{\beta}(t_{P}^{\beta})\in M_{n}(\mathbf{Z}_{p})\) if and only if \(k\) is even. As \(P\) is spin, this happens if and only if \(P\) is contained in the \((n,n)\)-parabolic. Since (by [BDG\({}^{+}\), Lem. 5.1]) the support of \(W_{\varphi_{p}}\begin{pmatrix}\,^{y}\,_{1}\,\end{pmatrix}\) is in \(M\subset M_{n}(\mathbf{Z}_{p})\), the first statement follows by Proposition 9.1. The second follows from the arguments of SS8.3. ### Non-vanishing for \(P\)-spin eigenvectors. 
Let \(\tilde{\pi}^{P}\) be a \(P\)-spin \(P\)-refinement. Suppose \(P\subset Q\), the \((n,n)\)-parabolic. We now construct \(\varphi_{p}\in\hat{\pi}^{P}_{p}\) such that \(W_{\varphi_{p}}\left(\begin{smallmatrix}\nu_{\beta}(t^{\beta}_{p})\\ &1\end{smallmatrix}\right)\neq 0\). #### 9.2.1 Explicit eigenvectors. We first give eigenvectors in principal series representations, generalising [BDG\({}^{+}\), SS7.1]. Throughout \(\pi_{p}=\operatorname{Ind}^{G}_{B}\theta\) is irreducible and regular, with \(\theta\) spin. We recap (but slightly modify) some notation from [BDG\({}^{+}\)]. Let \(\mathcal{W}_{n}\) be the Weyl group of \(\operatorname{GL}_{n}\). From now on we always view Weyl elements of \(\mathcal{W}_{G}\) (resp. \(\mathcal{W}_{n}\)) as elements of \(G(\mathbb{Z}_{p})\) (resp. \(\operatorname{GL}_{n}(\mathbb{Z}_{p})\)). Recall \(w_{n}\) is the longest element in \(\mathcal{W}_{n}\), and \(\tau=\left(\begin{smallmatrix}1&w_{n}\\ &w_{n}\end{smallmatrix}\right)\in\mathcal{W}_{G}\). * For any \(w,\nu\in\mathcal{W}_{G}\), let \(f^{\nu}_{w}\in\operatorname{Ind}^{G}_{B}\theta^{\nu}\) be the (unique) Iwahori-invariant function supported on \(B(\mathbb{Q}_{p})w\operatorname{Iw}_{G}\) with \(f^{\nu}_{w}(w)=p^{n(n-1)}\). * For \(\rho\in\mathcal{W}_{n}\), let \(w(\rho)=(\begin{smallmatrix}w_{n}\\ \rho\end{smallmatrix})\), and (noting the difference to [BDG\({}^{+}\), Def. 7.6]) let \[F^{\nu}_{\rho}=f^{\nu}_{w(\rho)}\in\operatorname{Ind}^{G}_{B}(\theta^{\nu}).\] The relevance of these vectors is captured by [BDG\({}^{+}\), Prop. 7.4], where we showed: **Proposition 9.4**.: _Let \(\tilde{\pi}_{\nu}=(\pi,\alpha_{\nu})\coloneqq\Psi_{\theta}^{-1}(\nu)\). Then \(f^{\nu}_{w_{2n}}=F^{\nu}_{w_{n}}\in\operatorname{Ind}^{G}_{B}\theta^{\nu}\) is an Iwahori-invariant \(\alpha_{\nu}\)-eigenvector._ We now define parahoric-level analogues. Recall \(\mathcal{W}_{L_{P}}\) is the Weyl group of the levi \(L_{P}\) of \(P\). For \(w\in\mathcal{W}_{G}\), let \([w]\in\mathcal{W}_{G}/\mathcal{W}_{L_{P}}\) denote the corresponding coset. Since \(P\subset Q\), it is a \((k_{1},\dots,k_{r},k_{r},\dots,k_{1})\)-parabolic for some \(k_{i}\) with \(k_{1}+\dots+k_{r}=n\). Let \(\mathcal{W}_{\mathbf{k}}\subset\mathcal{W}_{n}\) denote the Weyl group associated with the Levi of the \((k_{1},\dots,k_{r})\)-parabolic in \(\operatorname{GL}_{n}\). For \(\rho\in\mathcal{W}_{n}\), let \([\rho]^{\prime}\in\mathcal{W}_{n}/\mathcal{W}_{\mathbf{k}}\) denote the corresponding coset. * For \(w,\nu\in\mathcal{W}_{G}\), let \(h^{\nu}_{[w]}\in\operatorname{Ind}^{G}_{B}\theta^{\nu}\) denote the \(J_{P}\)-invariant function supported on \(B(\mathbb{Q}_{p})wJ_{P}\) normalised so that \(h^{\nu}_{[w]}(w)=p^{n(n-1)}\). Writing \(B(\mathbf{Q}_{p})wJ_{P}\) as a union of sets of the form \(B(\mathbf{Q}_{p})w^{\prime}\operatorname{Iw}_{G}\), we have \[h^{\nu}_{[w]}=\sum_{w^{\prime}\in\mathcal{W}_{G},\;[w^{\prime}]=[w]}f^{\nu}_{w^ {\prime}}.\] In particular, \(h^{\nu}_{[w]}=h^{\nu}_{[w^{\prime}]}\) if \([w]=[w^{\prime}]\). * For \(\rho\in\mathcal{W}_{n}\), we set \[H^{\nu}_{[\rho]^{\prime}}=h^{\nu}_{[w(\rho)]}.\] **Proposition 9.5**.: _Let \(\tilde{\pi}^{P}_{\nu}=(\pi,\alpha^{P}_{\nu})\coloneqq(\Psi_{\theta}^{P})^{-1}( [\nu])\). Then \(h^{\nu}_{[w_{2n}]}=H^{\nu}_{[w_{n}]^{\prime}}\in\operatorname{Ind}^{G}_{B} \theta^{\nu}\) is a \(J_{P}\)-invariant \(\alpha^{P}_{\nu}\)-eigenvector._ Proof.: Identical to [BDG\({}^{+}\), Prop. 7.4] or [DJR20, Lem. 3.6]. 
If \(\nu=1\), we drop the superscript \(\nu\), and simply write \(f_{w},F_{\rho},h_{[w]},H_{[\rho]^{\prime}}\). We return to our fixed \(P\)-spin \(P\)-refinement \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\). **Lemma 9.6**.: _We may choose a spin \(\theta\) so that \(\varphi_{p}\coloneqq H_{[w_{n}]^{\prime}}\in\hat{\pi}^{P}_{p}\) is an \(\alpha^{P}\)-eigenvector._ Proof.: By definition \(\Psi_{\theta}^{P}(\tilde{\pi}^{P})=[\sigma]\in\mathcal{W}_{G}/\mathcal{W}_{L_{ P}}\), for some \(\sigma\in\mathcal{W}_{G}^{0}\). After renormalising \(\theta\) by \(\sigma\) (as in Remarks 2.11 and 3.3) we may assume \(\sigma=1\); as \(\sigma\in\mathcal{W}_{G}^{0}\) such a \(\theta\) is still spin by Definition 2.7. The result follows from Proposition 9.5. #### 9.2.2 Intertwining maps. We now have an eigenvector \(H_{[w_{n}]^{\prime}}\in\operatorname{Ind}^{G}_{B}\theta\). To transfer this into the Shalika model \(\mathcal{S}^{\eta_{p}}_{\varphi_{p}}(\tilde{\pi}_{p})\), we must write down an explicit Shalika intertwining. If \(\Theta\) is an unramified character satisfying \(\Theta_{i}\Theta_{n+i}=\eta_{p}\) for all \(i\), Ash-Ginzburg [AG94, (1.3)] have constructed such an explicit \(\mathcal{S}:\operatorname{Ind}^{G}_{B}\Theta\to\mathcal{S}^{\eta_{p}}_{ \varphi_{p}}(\pi_{p})\), given by \[\mathcal{S}(f)(g)\coloneqq\int_{\operatorname{GL}_{n}(\mathbf{Z}_{p})}\int_{M_{ n}(\mathbf{Q}_{p})}f\left[\left(\begin{smallmatrix}1&1\\ 1\end{smallmatrix}\right)\left(\begin{smallmatrix}X\\ &1\end{smallmatrix}\right)\left(\begin{smallmatrix}k&\\ k\end{smallmatrix}\right)g\right]\psi^{-1}(\operatorname{tr}(X))\eta^{-1}( \det(k))dXdk. \tag{9.8}\] Here we encounter a problem: our choice of \(\theta\) does not satisfy the Ash-Ginzburg condition; rather, \(\theta^{r}\) does, where \(\tau=\operatorname{diag}(1,w_{n})\). We know \(\operatorname{Ind}_{B}^{G}\theta\) and \(\operatorname{Ind}_{B}^{G}\theta^{r}\) are isomorphic, but to use (9.8), we must compute what this isomorphism does to the eigenvector \(\varphi_{p}\) from Lemma 9.6. We do so by generalising \([\operatorname{BDG}^{+},\,\lx@sectionsign\ref{sec:2}]\), using work of Casselman. Let \(\nu=\left(\begin{smallmatrix}1&\\ &\nu^{\prime}\end{smallmatrix}\right)\in\mathcal{W}_{G}\) and \(s=\left(\begin{smallmatrix}1&\\ &\nu\end{smallmatrix}\right)\in\mathcal{W}_{G}\) be a simple reflection. Suppose that \(s\) corresponds to the simple transposition \((a,a+1)\) for \(a\geqslant n+1\). Set \(\theta(s)\coloneqq\theta_{a}(p)/\theta_{a+1}(p)\) and \[c_{s}(\theta^{\nu}):=\frac{1-p^{-1}\theta^{\nu}(s)}{1-\theta^{\nu}(s)}. \tag{9.9}\] Note \(c_{s}(\theta^{\nu})\) is well-defined as \(\theta^{\nu}\) is regular, and _always_ non-zero as \(\operatorname{Ind}_{B}^{G}\theta^{\nu}\) is irreducible. Let \(l\) denote the Bruhat length function on \(\mathcal{W}_{G}\). Then Casselman [10, Thm. 3.4] shows that there are intertwinings \(T_{s}^{\nu}\colon\operatorname{Ind}_{B}^{G}\theta^{\nu}\mapsto\operatorname{ Ind}_{B}^{G}\theta^{\nu s^{-1}}\) with the following property: \[T_{s}^{\nu}(f_{w}^{\nu})=\left\{\begin{array}{ll}p^{-1}f_{w}^{\nu s^{-1}}+( c_{s}(\theta^{\nu})-1)f_{w}^{\nu s^{-1}}&\text{ if }l(sw)>l(w)\\ f_{sw}^{\nu s^{-1}}+(c_{s}(\theta^{\nu})-p^{-1})f_{w}^{\nu s^{-1}}&\text{ if }l(sw)<l(w)\end{array}\right. \tag{9.10}\] The eigenvector \(H_{[w_{n}]^{\prime}}\) is a _sum_ of \(f_{w}^{\nu}\)'s as \(w\) ranges over a coset in \(\mathcal{W}_{G}/\mathcal{W}_{L_{p}}\). 
The following allows us to apply a case of (9.10) consistently to \(f_{w}^{\nu}\) for every \(w\) in a \(\mathcal{W}_{L_{p}}\)-coset. **Lemma 9.7**.: _Let \(s\in\mathcal{W}_{G}\) be a simple reflection, and let \(w\in\mathcal{W}_{G}\). Then exactly only one of the following possibilities can occur:_ 1. \(sw\mathcal{W}_{L_{p}}=w\mathcal{W}_{L_{p}}\)_, whence left multiplication by_ \(s\) _permutes_ \(w\mathcal{W}_{L_{p}}\)_;_ 2. \(sw\mathcal{W}_{L_{p}}\neq w\mathcal{W}_{L_{p}}\) _and_ \(l(sv)<l(v)\) _for all_ \(v\in w\mathcal{W}_{L_{p}}\)_;_ 3. \(sw\mathcal{W}_{L_{p}}\neq w\mathcal{W}_{L_{p}}\) _and_ \(l(sv)>l(v)\) _for all_ \(v\in w\mathcal{W}_{L_{p}}\)_._ Proof.: If \(sw\mathcal{W}_{L_{p}}=w\mathcal{W}_{L_{p}}\), (1) occurs; so suppose \(sw\mathcal{W}_{L_{p}}\neq w\mathcal{W}_{L_{p}}\). Let \(w_{\min}\) and \(v_{\min}\) be the unique minimal length representatives in \(w\mathcal{W}_{L_{p}}\) and \(sw\mathcal{W}_{L_{p}}\) respectively; properties of such elements are described in [10, SS1.10]. As \(s\) is simple, we must have \(l(sw_{\min})=l(w_{\min})\pm 1\); so we have two possibilities: **Possibility 1:**\(l(sw_{\min})=l(w_{\min})-1<l(w_{\min})\). As \(sw_{\min}\in sw\mathcal{W}_{L_{p}}\), there is a unique \(x\in\mathcal{W}_{L_{p}}\) such that \(sw_{\min}=v_{\min}\cdot x\). We have \(l(sw_{\min})=l(v_{\min})+l(x)\). As \(l(x)\geqslant 0\), we have \[l(v_{\min})\leqslant l(sw_{\min})<l(w_{\min}). \tag{9.11}\] On the other hand, we can write \(v_{\min}=sy\) for some \(y\in w\mathcal{W}_{L_{p}}\). Again, we either have \(l(v_{\min})=l(y)\pm 1\). We also have \(l(w_{\min})\leqslant l(y)\) by minimality of \(l(w_{\min})\). If \(l(v_{\min})=l(y)+1\), then \(l(w_{\min})\leqslant l(y)<l(v_{\min})\), contradicting (9.11). Hence \(l(v_{\min})=l(y)-1\). But then \[l(v_{\min})<l(w_{\min})\leqslant l(y)=l(v_{\min})+1.\] This can only happen if \(l(y)=l(w_{\min})=l(v_{\min})+1\). Therefore \(y=w_{\min}\) (by uniqueness of the minimal length representative), and \(v_{\min}=sw_{\min}\). Now take any \(v\in w\mathcal{W}_{L_{p}}\). There are unique \(X,Y\in\mathcal{W}_{L_{p}}\) such that \(v=w_{\min}X\) and \(sv=v_{\min}Y=sw_{\min}Y\). By uniqueness, we must have \(X=Y\). Finally, we now see that \[l(sv)=l(sw_{\min})+l(X)<l(w_{\min})+l(X)=l(v),\] so we are in case (2) of the Lemma. **Possibility 2:**\(l(sw_{\min})=l(w_{\min})+1>l(w_{\min})\). We break this up into three further cases: 1. If \(l(v_{\min})>l(w_{\min})\), then we must have \(l(v_{\min})\geqslant l(sw_{\min})\). Minimality of \(l(v_{\min})\) forces equality, hence \(v_{\min}=sw_{\min}\) by uniqueness of the minimal length representative. Then if \(v\in w\mathcal{W}_{L_{p}}\), as above we must have \(v=w_{\min}X\) and \(sv=sw_{\min}X\) for some (unique) \(X\in\mathcal{W}_{L_{P}}\). Hence for any \(v\in w\mathcal{W}_{L_{P}}\), we have \[l(sv)=l(sw_{\min})+l(X)>l(w_{\min})+l(X)=l(v),\] whence we are in case (3) of the lemma. 2. If \(l(v_{\min})=l(w_{\min})\), then let \(w_{\min}=s_{1}\cdots s_{r}\) and \(v_{\min}=s^{\prime}_{1}\cdots s^{\prime}_{r}\) be reduced word expressions for these elements. We can write \(sw_{\min}=v_{\min}\cdot t\) for some unique \(t\in\mathcal{W}_{L_{P}}\). Moreover, since \(l(v_{\min})+l(t)=l(sw_{\min})=l(w_{\min})+1\), we see that \(l(t)=1\) and hence \(t\) is simple. We must therefore have \(w_{\min}<sw_{\min}=v_{\min}t\) in the (strong) Bruhat order. Since \(s^{\prime}_{1}\cdots s^{\prime}_{r}t\) is a reduced word for \(v_{\min}t\), we find that \(s_{1}\cdots s_{r}\) occurs inside this word. 
If \[s_{1}\cdots s_{r}=s^{\prime}_{1}\cdots\hat{s}^{\prime}_{i}\cdots s^{\prime}_{ r}t\] where \(\hat{\cdot}\) denotes omission of the term, then we see that \(w_{\min}\in s^{\prime}_{1}\cdots\hat{s}^{\prime}_{i}\cdots s^{\prime}_{r} \mathcal{W}_{L_{P}}\), which contradicts the fact that \(w_{\min}\) is a minimal length representative. Hence we must have \(s_{1}\cdots s_{r}=s^{\prime}_{1}\cdots s^{\prime}_{r}\), hence \(w_{\min}=v_{\min}\). But this contradicts the assumption that \(sw\mathcal{W}_{L_{P}}\neq w\mathcal{W}_{L_{P}}\). So this case can never occur. 3. If \(l(v_{\min})<l(w_{\min})\), then write \(v_{\min}=sy\) for some \(y\in w\mathcal{W}_{L_{P}}\). Arguing as in Possibility 1, this would imply \(y=w_{\min}\), hence \(l(sw_{\min})<l(w_{\min})\), which is a contradiction to the premise of Possibility 2. Thus (c) also never occurs. Case (a) must thus occur, giving case (3) of the lemma, completing the proof. **Lemma 9.8**.: _There exists an intertwining_ \[M_{\tau}\colon\operatorname{Ind}_{B}^{G}\theta\to\operatorname{Ind}_{B}^{G}\theta^ {\tau}\] _such that_ \[M_{\tau}(H_{[w_{n}]^{\prime}})=H_{[1]^{\prime}}^{\tau}+\sum_{\begin{subarray}{ c}x\in\mathcal{W}_{n}/\mathcal{W}_{\mathbf{k}}\\ x\neq\left|1\right|^{\prime}\end{subarray}}c_{x}H_{x}^{\tau}\] _for some \(c_{x}\in\mathbb{C}\) (note the sum may be empty)._ Proof.: Let \(\rho\in\mathcal{W}_{n}\), and \(s=\left(\begin{smallmatrix}1&s^{\prime}\end{smallmatrix}\right)\in\mathcal{W} _{G}\) a simple reflection. We apply (9.10) in two cases: 1. Suppose \(sw(\rho)\mathcal{W}_{L_{P}}=w(\rho)\mathcal{W}_{L_{P}}\). Then by Lemma 9.7(1), there exist \(w_{1},\dots,w_{b}\in w(\rho)\mathcal{W}_{L_{p}}\) such that \[w(\rho)\mathcal{W}_{L_{p}}=\{w_{1},\dots,w_{b},sw_{1},\dots,sw_{b}\}\] with all the elements in the set distinct. Then we have \[T_{s}^{\nu}(f_{w_{i}}^{\nu}+f_{sw_{i}}^{\nu})=c_{s}(\theta^{\nu})(f_{w_{i}}^{ ws^{-1}}+f_{sw_{i}}^{ws^{-1}})\] hence \(T_{s}^{\nu}(H_{[\rho]^{\prime}}^{\nu})=c_{s}(\theta^{\nu})H_{[\rho]^{\prime}}^{ ws^{-1}}\). 2. Suppose \(sw(\rho)\mathcal{W}_{L_{P}}\neq w(\rho)\mathcal{W}_{L_{P}}\). Then by parts (2) and (3) in Lemma 9.7, we have \[T_{s}^{\nu}(H_{[\rho]^{\prime}}^{\nu})=\left\{\begin{array}{ll}p^{-1}H_{[ \rho]^{\prime}}^{ws^{-1}}+(c_{s}(\theta^{\nu})-1)H_{[\rho]^{\prime}}^{ws^{-1}} &\text{if }l(sw(\rho))>l(w(\rho))\\ H_{[\rho^{\prime}]^{\prime}}^{ws^{-1}}+(c_{s}(\theta^{\nu})-p^{-1})H_{[\rho]^{ \prime}}^{ws^{-1}}&\text{if }l(sw(\rho))<l(w(\rho)).\end{array}\right.\] Crucially the only terms that appear here are of the form \(H_{x}^{\nu s^{-1}}\) for \(x\in\mathcal{W}_{n}/\mathcal{W}_{\mathbf{k}}\). Now write \(w_{n}=s^{\prime}_{1}\cdots s^{\prime}_{e}\), so \(\tau=s^{-1}_{c}\cdots s^{-1}_{1}\) with \(s_{i}=\left(\begin{smallmatrix}1&1\\ s^{\prime}_{i}\end{smallmatrix}\right)\). We may assume that the factorisation of \(w_{n}\) is chosen such that \(s^{\prime}_{c}\cdots s^{\prime}_{b+1}\) is the minimal length representative of the coset \(w_{n}\mathcal{W}_{\mathbf{k}}\subset\mathcal{W}_{n}\) and \(s^{\prime}_{i}\) (\(i=1,\ldots,b\)) are simple reflections in \(\mathcal{W}_{\mathbf{k}}\), for some integer \(1\leqslant b\leqslant c\). 
Composing, we have \[M_{\tau}=T_{s_{1}}^{s_{-}^{-1}\cdots s_{2}^{-1}}\circ\cdots\circ T_{s_{c-1}}^{ s_{-}^{-1}}\circ T_{s_{c}}^{1}:\pi_{p}=\mathrm{Ind}_{B}^{G}\,\theta\longrightarrow \mathrm{Ind}_{B}^{G}(\theta^{\tau}).\] Iterating the formulae, we see \(M_{\tau}(H_{[w_{n}]^{\prime}})\) is a linear combination of \(H_{x}^{\tau}\)'s for \(x\in\mathcal{W}_{n}/\mathcal{W}_{\mathbf{k}}\). The coefficient of \(H_{[1]^{\prime}}^{\tau}\) is the product of \(\prod_{i=1}^{b}c_{s_{i}}(\theta^{s_{c}^{-1}\cdots s_{i+1}^{-1}})\) and a power of \(p\), and we saw after (9.9) that this product is non-zero. Therefore, we may renormalise \(M_{\tau}\) to make this coefficient equal to \(1\). #### 9.2.3 Non-vanishing. With set-up as above, choose a spin \(\theta\) so that \(H_{[w_{n}]^{\prime}}\in\mathrm{Ind}_{B}^{G}\,\theta\) is an eigenvector for \(\tilde{\pi}^{P}\). We now show that \(H_{[w_{n}]^{\prime}}\) does not vanish under the composition \[\mathrm{Ind}_{B}^{G}\,\theta\xrightarrow[\text{Lemma \ref{eq:H_[w_n]^{\prime}}}]{M_{ \tau}}\mathrm{Ind}_{B}^{G}\,\theta^{\tau}\xrightarrow[\text{\scriptsize$S$} \,\text{\scriptsize$\raisebox{-0.0pt}{\scalebox{1.0}{$\circ$}}$}]{\mathcal{S}} \mathcal{S}_{\psi_{p}}^{\eta_{p}}(\pi_{p})\xrightarrow[\text{\scriptsize$\raisebox {-0.0pt}{\scalebox{1.0}{$\circ$}}$}]{\left(\begin{smallmatrix}\nu_{\beta}( \tilde{\sigma}_{p}^{\beta})\\ 1\end{smallmatrix}\right)$}}\mathbf{C}.\] Write \(t_{p}^{\beta}=\mathrm{diag}(z_{1},z_{2})\) as before. Recalling \(P\) is the \((k_{1},...,k_{r},k_{r},...,k_{1})\)-parabolic, by (9.7) we have \(\nu_{\beta}(t_{P}^{\beta})=p^{-2\beta r}w_{n}z_{1}^{2}\), for \(r\) as _op. cit_. Note \(z:=p^{-\beta r}z_{1}\) has coefficients in \(\mathbb{Z}_{p}\) (as \(P\subset Q\); see the proof of Corollary 9.3). **Lemma 9.9**.: _(cf. [BDG\({}^{+}\), Prop. 7.9]). Let \(\delta\in\mathcal{W}_{n}\). We have_ \[\left(\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&X\\ 1&1\end{smallmatrix}\right)\left(\begin{smallmatrix}k&k\\ k\end{smallmatrix}\right)\left(\begin{smallmatrix}w_{n}z^{2}&1\\ 1\end{smallmatrix}\right)\in B(\mathbb{Q}_{p})\left(\begin{smallmatrix}w_{n}&w_{ n}\\ \end{smallmatrix}\right)J_{P}\] _if and only if:_ * \([\delta w_{n}]^{\prime}=[1]^{\prime}\)_,_ * \(k\in B_{n}(\mathbb{Z}_{p})w_{n}J_{\mathbf{k}^{\prime}}\)_, where_ \(J_{\mathbf{k}^{\prime}}\) _is the parahoric in_ \(\mathrm{GL}_{n}\) _of type_ \(\mathbf{k}^{\prime}=(k_{r},\ldots,k_{1})\)_,_ * _and_ \(k^{-1}X\in w_{n}z^{2}M_{n}(\mathbb{Z}_{p})\)_._ Proof.: The proof closely follows that of [BDG\({}^{+}\), Prop. 7.9], and we merely indicate the small differences here. The "if" direction is identical to _op. cit_. For the "only if" direction, we again start from (7.10) _op. cit._ (where now the matrix \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\) is in \(J_{P}\)). If we can show \([\delta w_{n}]^{\prime}=[1]^{\prime}\), then the remaining conditions follow as in (1)-(4) following (7.10) _op. cit_. If \(P=Q\), then \([\delta w_{n}]^{\prime}=[1]^{\prime}\) is always satisfied. Suppose then that \(P\neq Q\) (hence \(r>1\)), and that \([\delta w_{n}]^{\prime}\neq[1]^{\prime}\), i.e. \(\delta w_{n}\not\in\mathcal{W}_{\mathbf{k}}\). We have the following analogue of Claim 7.12 _op. cit._: let \(Y_{P}:=\{k_{1},k_{1}+k_{2},...,k_{1}+\cdots+k_{r-1}\}\). Then \(\mathcal{W}_{\mathbf{k}}=\cap_{m\in Y_{P}}\mathcal{W}_{(m,n-m)}\). 
Thus \(\delta w_{n}\not\in\mathcal{W}_{(m,n-m)}\) for some \(m\in Y_{P}\), whence \[B_{n}(\mathbb{Q}_{p})\delta w_{n}\overline{J}_{m}\cap B_{n}(\mathbb{Q}_{p}) \overline{J}_{m}=\varnothing \tag{9.12}\] where \(\overline{J}_{m}\) is the opposite parahoric in \(\mathrm{GL}_{n}(\mathbb{Z}_{p})\) of type \((m,n-m)\). Now factorise \(z^{2}=t_{p,m}\mu\). Via the same proof of the analogous statement in [BDG\({}^{+}\)], we can show \(kw_{n}\mu\in B_{n}(\mathbb{Q}_{p})\overline{J}_{m}\cap B_{n}(\mathbb{Q}_{p}) \delta w_{n}\overline{J}_{m}\), a contradiction to (9.12). We deduce \([\delta w_{n}]^{\prime}=[1]^{\prime}\), and hence the lemma. Recall \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\) is a \(P\)-spin \(P\)-refinement, with \(P\subset Q\). We finally obtain: **Proposition 9.10**.: _The element \(\mathcal{S}(M_{\tau}(H_{[w_{n}]^{\prime}}))\) is an \(\alpha^{P}\)-eigenvector in \(\mathcal{S}_{\psi_{p}}^{\eta_{p}}(\pi_{p})\), and_ \[\mathcal{S}(M_{\tau}(H_{[w_{n}]^{\prime}}))\left(\begin{smallmatrix}\nu_{\beta}( t_{p}^{\beta})\\ 1\end{smallmatrix}\right)\neq 0.\] Proof.: This is an \(\alpha^{P}\)-eigenvector by Lemma 9.6 and Hecke-equivariance of \(M_{\tau}\) and \(\mathcal{S}\). Non-vanishing follows exactly the same proof as [BDG\({}^{+}\), Prop. 7.12]. Precisely, we show that \[\mathcal{S}(M_{\tau}(H_{[w_{n}]^{\prime}}))\left(\begin{smallmatrix}\nu_{\beta}( t_{p}^{\beta})\\ 1\end{smallmatrix}\right)=\mathcal{S}(H_{[1]^{\prime}}^{\tau})\left(\begin{smallmatrix }\nu_{\beta}(t_{P}^{\beta})\\ 1\end{smallmatrix}\right)\neq 0.\] Here the first equality holds as Lemma 9.8 expresses \(M_{\tau}(H_{[w_{n}]^{\prime}})\) as a linear combination of \(H_{x}^{\tau}\)'s, and Lemma 9.9 shows that the the integrand of \(\mathcal{S}\) (in (9.8)) vanishes on each of these except \(H_{[1]^{\prime}}^{\tau}\). Non-vanishing is a direct calculation. ### Proof of Theorem 8.5 As in SS8.3, it suffices to prove \((2^{\prime})\Rightarrow(1^{\prime})\) in Conjecture 8.10; i.e. we must show there exists \(\varphi_{p}\in\tilde{\pi}^{P}\) such that \(\zeta_{p}(ut_{P}^{\beta}\cdot\varphi,\chi,s)\neq 0\). By Proposition 9.1, it suffices to prove \(W_{\varphi_{p}}\left(\begin{smallmatrix}\nu_{\beta}(t_{\beta}^{\prime})\\ 1\end{smallmatrix}\right)\neq 0\), where \(W_{\varphi_{p}}=\mathcal{S}_{\psi_{p}}^{\eta_{p}}(\varphi_{p})\) for some Shalika intertwining \(\mathcal{S}_{\psi_{p}}^{\eta_{p}}\). Since the \(\alpha^{P}\)-eigenspaces in \(\tilde{\pi}_{p}^{\mathrm{Iw}_{G}}\) and \(\mathcal{S}_{\psi_{p}}^{\eta_{p}}(\tilde{\pi}^{\mathrm{Iw}_{G}})\) are both \(1\)-dimensional, it suffices to exhibit _any_\(\alpha^{P}\)-eigenvector in the Shalika model with this non-vanishing property. Such an eigenvector is given by Proposition 9.10. ## 10 Proof of Theorem 8.6 Finally we use our study of the symplectic locus to prove a result towards the remaining implication \((1)\Rightarrow(2)\) in Conjecture 8.4. If the hypotheses of Theorem 8.6 are satisfied, this furnishes a 'good' choice of Iwahori refinement \(\tilde{\pi}\) above \(\tilde{\pi}^{P}\). Key to our proof is: **Proposition 10.1**.: _Suppose (1) of Conjecture 8.4 holds. There is an \((\#X_{P}+1)\)-dimensional symplectic family \(\mathscr{C}\) through \(\tilde{\pi}\) in the \(\mathrm{GL}_{2n}\)-eigenvariety \(\mathcal{E}_{K_{B}}^{G}\), varying over \(\mathscr{W}_{0,\lambda_{*}}^{P}\)._ _Proof of Theorem 8.6, given Proposition 10.1:_ Suppose (1) is satisfied in Conjecture 8.4. By Corollary 9.3, we see \(P\) must be contained in the \((n,n)\)-parabolic. 
Further, as in SS8.3, we deduce that \(L(\pi\times\chi,s+1/2)\neq 0\); that \(\pi\) is symplectic; and that to deduce (2) in Conjecture 8.4 it suffices to prove \(\tilde{\pi}\) (hence \(\tilde{\pi}^{P}\)) is \(P\)-spin. Let \(\Omega\coloneqq w(\mathscr{C})\), open of maximal dimension in \(\mathscr{W}_{0,\lambda_{*}}^{P}\). If \(\tilde{\pi}\) is not \(P\)-spin, then it is optimally \(P^{\prime}\)-spin for some spin parabolic \(P^{\prime}\not\subset P\). Then Theorem 4.9 shows that \(w(\mathscr{C})\subset\mathscr{W}_{0,\lambda_{*}}^{P^{\prime}}\), hence \[\Omega=w(\mathscr{C})\subset\mathscr{W}_{0,\lambda_{*}}^{P^{\prime}}\cap \Omega\subsetneq\Omega,\] a contradiction; so \(\tilde{\pi}\) is \(P\)-spin. The proof of Proposition 10.1 occupies the rest of this section. ### Big evaluation maps: \(p\)-adic interpolation of branching laws Our proof closely follows [BDG\({}^{+}\), Thm. 13.6], which treated the case \(P=B\); and [BDW, Thm. 7.6(a-c)], which treated the analogous result in the \((n,n)\)-parabolic eigenvariety. These works constructed _evaluation maps_ on overconvergent cohomology groups, over affioids \(\Omega\) in the weight space, valued in torsion-free \(\mathcal{O}_{\Omega}\)-modules. Non-vanishing of these maps puts strong constraints on the structure of the overconvergent cohomology, and was shown to produce symplectic families in the eigenvariety. We refer the reader to these works for any undefined notation. Let \(K=K^{p}K_{p}\subset G(\mathbf{A}_{f})\) be open compact, with \(K_{p}\subset J_{P}\) inside the \(P\)-parahoric subgroup. As in [BDW, SS2.10], choices at infinity fix for all \(K\) (non-canonical) embeddings \[\pi_{f}^{K}\longleftrightarrow\mathrm{H}_{\mathrm{c}}^{t}(S_{K},\mathcal{V} _{\lambda_{*}}^{\vee}(\overline{\mathbf{Q}}_{p}))_{\tilde{\pi}},\qquad \varphi\mapsto\phi_{\varphi}, \tag{10.1}\] where the subscript \(\tilde{\pi}\) denotes the \(\tilde{\pi}\)-eigenspace. For a dominant weight \(\lambda=(\lambda_{1},...,\lambda_{2n})\), let \[\mathrm{Crit}(\lambda)\coloneqq\{j\in\mathbf{Z}:-\lambda_{n+1}\geqslant j \geqslant-\lambda_{n}\}.\] In [BDG\({}^{+}\), SS4], to the data of \(\lambda,P,\chi,j\in\mathrm{Crit}(\lambda)\), and \(\eta=\eta_{0}|\cdot|^{\mathfrak{w}(\lambda)}\) with \(\eta_{0}\) finite order, we constructed parahoric evaluation maps \[\mathcal{E}_{\lambda_{*},P,\chi}^{j,\eta_{0}}:H_{\mathrm{c}}^{t}(S_{K}, \mathcal{V}_{\lambda}^{\vee}(\overline{\mathbf{Q}}_{p}))\longrightarrow \overline{\mathbf{Q}}_{p}. \tag{10.2}\] Let \(\varphi^{(p)}=\otimes_{\ell\neq p}\varphi_{\ell}^{\mathrm{F}}\)J, as in SS8.3. Then for any \(\varphi_{p}\in\pi_{p}\), by [BDG\({}^{+}\), Thm. 4.16] we have \[\mathcal{E}_{\lambda_{*},P,\chi}^{j,\eta_{0}}\left(\phi_{\varphi}\right)=A_{ \lambda_{*},P,\chi}^{j}\cdot L\Big{(}\pi\times\chi,j+\tfrac{1}{2}\Big{)}\cdot \zeta_{p}\Big{(}ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},j+\tfrac{1}{2}\Big{)}, \tag{10.3}\] where \(\varphi=\varphi^{(p)}\otimes\varphi_{p}\in\pi_{f}\) and \(A_{\lambda,P,\chi}^{j}\) is a non-zero scalar. In the rest of SS10.1 we will prove the following existence of a 'big evaluation map', interpolating (10.2) as \(\lambda\) varies over an \((\#X_{P}+1)\)-dimensional affinoid \(\Omega=\mathrm{Sp}(\mathcal{O}_{\Omega})\subset\mathscr{W}_{0,\lambda_{*}}^{P}\), which we henceforth fix. **Proposition 10.2**.: _Let \(\beta\geqslant 1\), \(\chi\) a Dirichlet character of conductor \(p^{\beta}\), \(\eta_{0}\) a Dirichlet character, and \(j\in\operatorname{Crit}(\lambda_{\pi})\). 
Then for any classical \(\lambda\in\Omega\), we have \(j_{\lambda}:=j-\mathsf{w}(\lambda-\lambda_{\pi})/2\in\operatorname{Crit}(\lambda)\), and there exists an \(\mathcal{O}_{\Omega}\)-module map \(\mathcal{E}^{j,\eta_{0}}_{\Omega,P_{\chi}}:\mathrm{H}^{t}_{\mathrm{c}}(S_{K}, \mathcal{Q}^{P}_{\Omega})\to\mathcal{O}_{\Omega}\) such that for all classical \(\lambda\in\Omega\), we have a commutative diagram_ (10.4) #### 10.1.1 Recap of classical evaluation maps Let \(\iota:H\to G\) be the map \((h_{1},h_{2})\mapsto\left(\begin{smallmatrix}h_{1}&\\ &h_{2}\end{smallmatrix}\right)\). The classical evaluation maps \(\mathcal{E}^{j,\eta_{0}}_{\lambda,P_{\chi}}\) were constructed as the composition of: **Construction 10.3**.: 1. _[label=()]_ 2. _Pivitalise_ \(\iota^{*}\mathscr{N}^{\vee}_{\lambda}\) _on each connected component and integrate over fundamental classes,_ 3. _Pass to scalars via a branching law for the critical integer_ \(j\)_,_ 4. _Take the sum over connected components, weighted by_ \(\chi\) _and_ \(\eta_{0}\)_._ When \(P=Q\) (resp. \(P=B\)), the construction of (10.4) was done in [BDW, SS5-6] (resp. [BDG\({}^{+}\), SS11-12]). In that construction, we replaced the coefficients \(\mathscr{N}^{\vee}_{\lambda}\) in Construction 10.3 with \(\mathcal{Q}_{\Omega}\). Of the four steps, the compatability of steps (1) and (2) for \(\mathcal{Q}_{\Omega}\) and \(\mathscr{N}^{\vee}_{\lambda}\) is easy via [BDG\({}^{+}\), Lemma 4.8], particularly Lemma 4.8. Step (4) is the same in both cases. This leaves (3), which we handle by an interpolation of branching laws. #### 10.1.2 Explicit branching laws For integers \(j_{1},j_{2}\), let \(V^{H}_{(l_{1},j_{2})}\) denote the \(1\)-dimensional \(H(\mathbf{Z}_{p})\)-representation given by the character \(\mathrm{det}^{j_{1}}_{1}\cdot\mathrm{det}^{j_{2}}_{2}\). Then we have [GR14, Prop. 6.3.1], [BDW, Lem. 5.2] \[j\in\operatorname{Crit}(\lambda)\quad\Longleftrightarrow\quad\dim \operatorname{Hom}_{H(\mathbf{Z}_{p})}\bigl{(}V^{\vee}_{\lambda},V^{H}_{(j,- \mathsf{w}(\lambda)-j)}\bigr{)}=1.\] Via step (3) of Construction 10.3, the map \(\mathcal{E}^{j,\eta_{0}}_{\lambda,P_{\chi}}\) depends on a choice of generator \(\kappa_{\lambda,j}\) in this space, or dually, an element \(v_{\lambda,j}\in V^{H}_{(-j,\mathsf{w}(\lambda)+j)}\subset V_{\lambda}|_{H( \mathbf{Z}_{p})}\). For \(p\)-adic interpolation, we need to choose such generators compatibly in \(\lambda\). It is expedient to recall how we handled the Borel case in [BDG\({}^{+}\), SS11.1]; there we described explicit choices as follows. Define weights \[\alpha_{1}=(1,0,...,0,-1), \alpha_{2}=(1,1,0,...,0,-1,-1),\ \...,\ \ \alpha_{n-1}=(1,...,1,0,0,-1,...,-1), \tag{10.5}\] \[\alpha_{0}=(1,...,1,1,...,1),\ \ \ \alpha_{n}=(1,...,1,0,...,0),\] a \(\mathbf{Z}\)-basis for the pure algebraic weights. Note that if \(\lambda\) is a dominant algebraic weight then we can write uniquely \[\lambda=\lambda_{\pi}+\sum_{i=0}^{n}\mu_{i}\alpha_{i},\qquad\mu_{i}\in \mathbf{Z}_{\geqslant 0},\] so that \(\mathsf{w}(\lambda)=\mathsf{w}(\lambda_{\pi})+2\mu_{0}\). Note also that \(j\in\operatorname{Crit}(\lambda_{\pi})\) implies \(j-\mu_{0}=j-\mathsf{w}(\lambda-\lambda_{\pi})/2\in\operatorname{Crit}(\lambda)\), yielding the condition in Proposition 10.2. Via Notation 11.2\(op\). 
_cit._, for \(1\leqslant i\leqslant n-1\) let \(v_{(i)}\in V_{\alpha_{i}}(\mathbf{Q}_{p})\) such that \(H(\mathbf{Z}_{p})\) acts trivially, let \(v_{(n),j}\in V_{\alpha_{n}}(\mathbf{Q}_{p})\) be such that \(H(\mathbf{Z}_{p})\) acts as \(\mathrm{det}_{j}\) (for \(j=1,2\)), and fix a generator \(v_{(0)}\in V_{\alpha_{0}}(\mathbf{Q}_{p})\). In Proposition 11.3\(op\). _cit._ we showed \[v_{\lambda,j}:=[v_{(1)}^{\lambda_{1}-\lambda_{2}}]\cdot[v_{(2)}^{\lambda_{2}- \lambda_{3}}]\cdots[v_{(n-1)}^{\lambda_{n-1}-\lambda_{n}}]\cdot[v_{(n),1}^{- \lambda_{n+1}-j}]\cdot[v_{(n),2}^{\lambda_{n}+j}]\cdot[v_{(0)}^{\lambda_{n+1}}] \tag{10.6}\] generates \(V^{H}_{(-j,\mathsf{w}(\lambda)+j)}(\mathbf{Q}_{p})\subset V_{\lambda}( \mathbf{Q}_{p})|_{H(\mathbf{Z}_{p})}\). Dualising, we obtain a map \(\kappa_{\lambda,j}:V^{\vee}_{\lambda}\to V^{H}_{j,-\mathsf{w}(\lambda)-j}\) that was used in the construction of \(\mathcal{E}^{j,\eta_{0}}_{\lambda,B,\chi}\) (see [BDG\({}^{+}\), Rem. 4.14]). #### 10.1.3 \(p\)-adic interpolation We recap the main points of [BDG\({}^{+}\), SS11], and simplify them; in that paper, we also incorporated cyclotomic variation, but we shall not need this generality. For \(p\)-adic variation of (10.6) we want to replace the algebraic weight \(\lambda\) with a more general character \(\kappa\) of \(T(\mathbf{Z}_{p})\). In particular, we wish to make sense of \((\kappa_{i}-\kappa_{i+1})(v_{(i)})\). In Proposition 11.4 _op. cit._ we showed that if we define \[N^{\beta}(\mathbf{Z}_{p})\coloneqq N(p^{\beta}\mathbf{Z}_{p})\cdot u=\left\{n \in N(\mathbf{Z}_{p}):n\equiv\left(\begin{smallmatrix}1&w_{n}\\ 0&1_{n}\end{smallmatrix}\right)\;(\mathrm{mod}\,p^{\beta})\right\},\] then \[v_{(i)}[N^{\beta}(\mathbf{Z}_{p})]\subset 1+p^{\beta}\mathbf{Z}_{p}, \tag{10.7}\] and hence \((\kappa_{i}-\kappa_{i+1})(v_{(i)}\big{|}_{N^{\beta}(\mathbf{Z}_{p})})\) is well-defined. This, and (10.6), motivates the definition \[w_{\kappa,\lambda_{\pi}}:N^{\beta}(\mathbf{Z}_{p}) \longrightarrow R^{\times}, \tag{10.8}\] \[g \longmapsto v_{(0)}(g)^{\kappa_{n+1}}\cdot\left[\prod_{i=1}^{n-1}v_{(i)}(g) ^{\kappa_{i}-\kappa_{i+1}}\right]\cdot v_{(n),1}(g)^{-\lambda_{\pi,n+1}}\cdot v _{(n),2}(g)^{\lambda_{\pi,n}}.\] (In [BDG\({}^{+}\)], the last two terms used \(\kappa_{i}\) rather than \(\lambda_{\pi,i}\), because we also wanted cyclotomic variation. Here we fix these terms, which allows us to fix \(j\) and still obtain interpolation of \(v_{\lambda,j_{\lambda}}\) as \(\lambda\) varies; see (10.9) below). Now let \(\Omega\subset\mathscr{W}_{0}^{G}\), with universal character \(\kappa_{\Omega}\) on \(T(\mathbf{Z}_{p})\). For \(j\in\mathrm{Crit}(\lambda_{\pi})\), define a function \(v_{\Omega,j}:N(\mathbf{Z}_{p})\rightarrow\mathcal{O}_{\Omega}\) by \[v_{\Omega,j}(g)\coloneqq\left\{\begin{array}{cl}w_{\kappa_{\Omega},\lambda _{\pi}}(g)\cdot\left(\frac{v_{(n),2}(g)}{v_{(n),1}(g)}\right)^{j}&:g\in N^{ \beta}(\mathbf{Z}_{p}),\\ 0&:\mathrm{otherwise}.\end{array}\right.\] Now suppose \(\lambda\) is a classical weight, with \(\mathsf{w}(\lambda)=\mathsf{w}(\lambda_{\pi})+2\mu_{0}\). Recall \(j_{\lambda}=j-\mu_{0}\in\mathrm{Crit}(\lambda)\). We know \(\kappa_{\Omega}\,(\mathrm{mod}\,\mathfrak{m}_{\lambda})=\lambda\) as characters of \(T(\mathbf{Z}_{p})\), and one may formally verify that \[v_{\Omega,j}\,(\mathrm{mod}\,\mathfrak{m}_{\lambda})=v_{\lambda,j_{\lambda}} \big{|}_{N^{\beta}(\mathbf{Z}_{p})}. 
\tag{10.9}\] The function \(v_{\Omega,j}\) extends to a unique element of \(\mathcal{A}_{\Omega}\), and dualising, we get a '\(p\)-adic branching law' \(\kappa_{\Omega,j}:\mathcal{D}_{\Omega}\rightarrow\overline{\mathbf{Q}}_{p}\) that, after restriction to \(N^{\beta}(\mathbf{Z}_{p})\), formally interpolates the branching laws \(\kappa_{\lambda,j}\) as \(\lambda\) varies in \(\Omega\). In the construction of \(\mathcal{E}_{\Omega,B,\chi}^{j,\eta_{0}}\), by [BDG\({}^{+}\), Lem. 12.4] the result of steps (1) and (2) (in Construction 10.3, with \(\mathscr{D}_{\Omega}\) coefficients) was a distribution supported on \(t_{B}^{\beta}N(\mathbf{Z}_{p})t_{B}^{-\beta}u\subset N^{\beta}(\mathbf{Z}_{p})\); so we could use \(\kappa_{\Omega,j}\) to construct \(\mathcal{E}_{\Omega,P,\chi}^{j,\eta_{0}}\) (in Proposition 12.3). We switch to a general parabolic \(P\). Let \(\mathcal{D}_{\Omega}^{\beta,P}\subset\mathcal{D}_{\Omega}^{P}\) be the subset of distributions supported on \(N_{P}^{\beta}(\mathbf{Z}_{p})\coloneqq t_{B}^{\beta}N_{P}(\mathbf{Z}_{p})t_{B }^{-\beta}u\) (analogous to [BDG\({}^{+}\), Def. 11.11]). For a general parabolic \(P\), by the same proof as [BDG\({}^{+}\), Lem. 12.4], the output of steps (1) and (2) of Construction 10.3 lies in (a quotient of) \(\mathcal{D}_{\Omega}^{\beta,P}\). Since \(N_{P}^{\beta}(\mathbf{Z}_{p})\subset N^{\beta}(\mathbf{Z}_{p})\), we can define \(v_{\Omega,j}^{P}:N_{P}(\mathbf{Z}_{p})\rightarrow\overline{\mathbf{Q}}_{p}\) by \[v_{\Omega,j}^{P}(g)\coloneqq\left\{\begin{array}{cl}v_{\Omega,j}(g)&:g\in N _{P}^{\beta}(\mathbf{Z}_{p}),\\ 0&:\mathrm{otherwise}.\end{array}\right.\] The function \(v_{\Omega,j}^{P}\) extends uniquely via the induction property [BDW, Def. 3.11] to an element in \(\mathcal{A}_{\Omega}^{P}\), and hence dualises to a map \[\kappa_{\Omega,j}^{P}:\mathcal{D}_{\Omega}^{\beta,P}\longrightarrow\mathcal{O} _{\Omega},\qquad\mu\mapsto\mu(v_{\Omega,j}^{P}).\] Again, formally, \(\kappa_{\Omega,j}^{P}\) interpolates the branching laws \(\kappa_{\lambda,j_{\lambda}}\) after restriction to \(N_{P}^{\beta}(\mathbf{Z}_{p})\). _Proof of Proposition 10.2_: As in [BDG\({}^{+}\), Rems. 4.14, 12.7], define \(\mathcal{E}_{\Omega,P,\chi}^{j,\eta_{0}}\) as the composition \[\mathrm{H}_{\mathrm{c}}^{t}(S_{K},\mathscr{D}_{\Omega}^{P})\xrightarrow[ \oplus\mathrm{Ev}_{r_{B,\delta}^{P}}]{\oplus\mathrm{Ev}_{r_{B,\delta}^{P}}^{P}} \bigoplus_{[\delta]}(\mathcal{D}_{\Omega}^{\beta,P})_{\Gamma_{\beta,\delta}^{P}} \xrightarrow[\delta]{\otimes\ast\kappa_{\Omega,j}^{P}}\bigoplus_{[\delta]} \mathcal{O}_{\Omega}\xrightarrow[\delta]{\sum_{\lambda}\chi(\lambda)\Xi_{d}^{ \eta_{0}}}\mathcal{O}_{\Omega}, \tag{10.10}\] with \(\mathrm{Ev}_{P,\beta,\delta}^{D_{\Omega}^{P}}\) the map of Definition 4.7_op. cit._, which lands in \((\mathcal{D}_{\Omega}^{\beta,P})_{\Gamma_{\beta,\delta}^{P}}\) exactly as in Lemma 12.4; \(\kappa_{\mathrm{l},j}^{P}\) descends to the coinvariants as in the proof of Proposition 12.3; and \(\Xi_{q\neq 0}\) is defined in Remark 4.14, all _op. cit._, where any other undefined notation is explained. The three arrows in (10.10) correspond to (1-2), (3) and (4) in Construction 10.3 respectively. To deduce the claimed interpolation property in Proposition 10.2, observe that for any classical \(\lambda\in\Omega\), the diagram commutes. For the first square, this is [BDG\({}^{+}\), Lem. 4.8]; the second is identical to Proposition 11.12_op. cit_; and the third is clear from the definition. 
Since the bottom row here is exactly \(\mathcal{E}_{\lambda,P,\chi}^{j_{\lambda},\eta_{0}}\), this concludes the proof of Proposition 10.2 (hence of Theorem 8.6). ### Tracing from Iwahoric to parahoric level The above 'big evaluation' had the parabolic \(P\) baked into it; it used the parahoric classical evaluation map, and \(P\)-parahoric distributions in the overconvergent cohomology. As in [BDW], this is sufficient to study symplectic families through \(\tilde{\pi}^{P}\) in the \(P\)_-parabolic eigenvariety_, where we have analytic variation of some subset of the Hecke operators \(U_{p,r}\). However, our study of the symplectic locus crucially used analytic variation of _all_ the \(U_{p,r}\); in other words, it applies only to the Iwahori-level eigenvariety. We now port between the two. There is a natural trace map \(\mathrm{Tr}:\pi_{p}^{\mathrm{Iw}_{G}}\to\pi_{p}^{J_{P}}\), given by summing over translates by representatives of \(J_{P}/\mathrm{Iw}\). **Lemma 10.4**.: _If \(\tilde{\pi}=(\pi,\alpha)\) is an Iwahori refinement above the \(P\)-refinement \(\tilde{\pi}^{P}=(\pi,\alpha^{P})\), then \(\mathrm{Tr}\) induces an isomorphism \(\tilde{\pi}\rightsquigarrow\tilde{\pi}^{P}\)._ Proof.: As trace only acts at \(p\), it suffices to prove \(\mathrm{Tr}:\tilde{\pi}_{p}\rightsquigarrow\tilde{\pi}_{p}^{P}\). As the Satake parameter of \(\pi_{p}\) is assumed regular, both sides are complex lines; so we need only check the map is well-defined and non-zero. Let \(\sigma=\Psi_{\theta}(\tilde{\pi})\). We have \(\pi_{p}\cong\mathrm{Ind}_{B}^{G}(\theta^{\sigma})\), so it suffices to prove the result in \(\mathrm{Ind}_{B}^{G}\,\theta^{\sigma}\). Let \(f_{\sigma}\in\mathrm{Ind}_{B}^{G}\,\theta^{\sigma}\) be the (unique) Iwahori-invariant function supported on the big Bruhat cell \(B(\mathbf{Q}_{p})\cdot w_{2n}\cdot Iw_{G}\) with \(f_{\sigma}(w_{2n})=1\). By [BDG\({}^{+}\), Prop. 7.4], \(f_{\sigma}\) is an \(\alpha\)-eigenvector, hence yields a generator of \(\tilde{\pi}_{p}\). Under trace, this is mapped to a non-zero \(J_{P}\)-invariant vector supported on \(B(\mathbf{Q}_{p})\cdot w_{2n}\cdot J_{P}\). But by the same arguments, this is an \(\alpha^{P}\)-eigenvector, hence the map on refinements is well-defined and non-zero. Let \(K_{B}=K^{P}\mathrm{Iw}_{G}\) be an Iwahori-at-\(p\) level, and \(K_{P}=K^{p}J_{P}\) a parahoric-at-\(p\) level. We have natural trace maps from the cohomology of \(S_{K_{B}}\) to \(S_{K_{P}}\), which are functorial in maps between the coefficients. Finally, we have a natural map \(s_{P}:\mathcal{D}_{\Omega}\to\mathcal{D}_{\Omega}^{P}\)[BW21, Prop. 4.8], and \(r_{\lambda}:\mathcal{D}_{\Omega}\to V_{\lambda}^{\vee}\) factors through \(s_{P}\). Putting this all together with Proposition 10.2 and Lemma 10.4 yields: **Lemma 10.5**.: _For any classical \(\lambda\in\Omega\), here is a commutative diagram_ (10.11) ### Symplectic families in the parabolic eigenvariety Now suppose \((1^{\prime})\) of Conjecture 8.10 holds, giving \(\varphi_{p}\in\tilde{\pi}^{P}\) with \(\zeta_{p}(ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},s+1/2)\neq 0\) (for all \(s\), by Corollary 9.2). Let \(\tilde{\pi}\) be as given by the hypotheses of Theorem 8.6. Let \(\varphi_{p}^{\prime}\in\tilde{\pi}_{p}\) be a lift of \(\varphi_{p}\) under the trace map (via Lemma 10.4). Let \(\varphi=\otimes_{\ell\neq p}\varphi_{\ell}^{F}\otimes\varphi_{p}^{\prime}\in \tilde{\pi}\). 
By (10.1), attached to this is a cohomology class \(\phi_{\varphi}\in\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{Y}_{\lambda_{ \pi}}^{\vee})_{\sharp}\). By (10.3), we have \[\mathcal{E}_{\lambda_{\pi},P,\chi}^{j,\eta_{p}}\circ\mathrm{Tr}(\phi_{\varphi} )=A_{\lambda_{\pi},P,\chi}^{j}\cdot L\Big{(}\pi\times\chi,j+\tfrac{1}{2}\Big{)} \cdot\zeta_{p}\Big{(}ut_{P}^{\beta}\cdot\varphi_{p},\chi_{p},j+\tfrac{1}{2} \Big{)}\neq 0, \tag{10.12}\] where non-vanishing is by assumption \((1^{\prime})\). By hypothesis, the map \(r_{\tilde{\pi}}:\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega}) \to\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{Y}_{\lambda_{\pi}}^{\vee})_ {\sharp}\) is surjective for some neighbourhood \(\Omega\subset\mathscr{W}_{0,\lambda_{\pi}}^{P}\) of \(\lambda_{\pi}\). We summarise some consequences, described in detail in [1, SS7.2,7.3]: * By Hecke-equivariance of \(r_{\lambda_{\pi}}\), for \(h\gg 0\) the localisation of the slope \(\leqslant h\) subspace \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega})^{\leqslant h}\) at \(\tilde{\pi}\) is non-zero, giving a point \(x_{\tilde{\pi}}\) in the top-degree eigenvariety. Let \(\mathscr{C}^{\prime}\) be the connected component through \(x_{\tilde{\pi}}\). * Let \(\Phi\) be a lift of \(\phi_{\varphi}\), and \(\Phi_{\mathscr{C}^{\prime}}\) be its projection to the direct summand of \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega})^{\leqslant h}\) corresponding to \(\mathscr{C}^{\prime}\). Then \(\Phi_{\mathscr{C}^{\prime}}\in\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{ D}_{\Omega})\) with \(r_{\tilde{\pi}}(\Phi_{\mathscr{C}^{\prime}})=\phi_{\varphi}\). * Let \(\mathcal{E}_{\Omega}:=\mathcal{E}_{\Omega,P,\chi}^{j,\eta_{0}}\circ\mathrm{Tr} \circ s_{P}:\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{D}_{\Omega})\to \mathcal{O}_{\Omega}\), an \(\mathcal{O}_{\Omega}\)-module map. By (10.12) and Lemma 10.5, we have \(\mathcal{E}_{\Omega}(\Phi_{\mathscr{C}^{\prime}})\neq 0\,(\mathrm{mod}\,\mathfrak{m}_{ \lambda_{\pi}})\), so \(\mathcal{E}_{\Omega}(\Phi_{\mathscr{C}^{\prime}})\neq 0\). As \(\mathcal{O}_{\Omega}\) is torsion-free, we deduce that \(\mathrm{Ann}_{\mathcal{O}_{\Omega}}(\Phi_{\mathscr{C}^{\prime}})=0\). As in [1, Cor. 7.12] this forces existence of an irreducible component \(\mathscr{C}\subset\mathscr{C}^{\prime}\) of dimension \(\dim(\Omega)\). **Lemma 10.6**.: _We may take \(\mathscr{C}\) to be a classical family._ Proof.: Up to replacing \(\Omega\) with an open neighbourhood of \(\lambda_{\pi}\) of the same dimension, we may assume the rigid-analytic function \(\mathcal{E}_{\Omega}(\Phi_{\mathscr{C}^{\prime}})\in\mathcal{O}_{\Omega}\) is non-vanishing on \(\Omega\). At any classical weight \(\lambda\in\Omega\), combining non-vanishing of \(\mathcal{E}_{\Omega}(\Phi_{\mathscr{C}^{\prime}})\,(\mathrm{mod}\,\mathfrak{m} _{\lambda})\) with Lemma 10.5 implies \(\Phi_{\mathscr{C}^{\prime}}\) has non-zero image in \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{Y}_{\lambda}^{\vee})\). It must therefore have non-zero image after projection to at least one of the finite number of Hecke eigensystems that appear in \(\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{B}},\mathscr{Y}_{\lambda}^{\vee})\), so there is a classical point of \(\mathscr{C}^{\prime}\) of weight \(\lambda\). The result follows from Zariski-density of such \(\lambda\) in \(\Omega\), and the fact \(\mathscr{C}^{\prime}\) has finitely many irreducible components. 
**Lemma 10.7**.: _If \(\mathscr{C}\) as above is a classical family, it is a cuspidal symplectic family._ Proof.: We first exhibit a related family in the _parabolic_ eigenvariety. At parahoric level, \(r_{\tilde{\pi}}:\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{P}},\mathscr{D}_{\Omega}^{P})\to\mathrm{H}_{\mathrm{c}}^{t}(S_{K_{P}},\mathscr{V}_{\lambda_{\pi}}^{\vee})_{\sharp}\) is surjective by Lemma 10.5, as the left-hand map surjects onto the \(\tilde{\pi}\)-eigenspace (by assumption) and the bottom map is an isomorphism on the \(\tilde{\pi}\)-eigenspaces (by Lemma 10.4). In particular, we can apply all of the above arguments (with \(\mathcal{E}_{\Omega,P,\chi}^{j,\eta}\) replacing \(\mathcal{E}_{\Omega}\)) to exhibit a classical family \(\mathscr{C}^{P}\) in the \(P\)-parabolic eigenvariety. This family also varies over \(\Omega\), and by Lemma 10.5 there is a bijection between classical points of \(\mathscr{C}\) and \(\mathscr{C}^{P}\), where every classical point \(y\) of \(\mathscr{C}\) is a further Iwahori refinement of a \(P\)-refinement \(\tilde{\pi}_{y}^{P}\) appearing in \(\mathscr{C}^{P}\). To show \(\mathscr{C}\) is cuspidal, then, it suffices to prove \(\mathscr{C}^{P}\) is a cuspidal family. By assumption, \(\tilde{\pi}^{P}\) is \(P\)-strongly-interior and has regular weight. As in [1, Prop. 5.15], a Zariski-dense set of classical points in \(\mathscr{C}^{P}\) are also \(P\)-strongly-interior, have regular weight, and have non-\(P\)-critical slope. As in _op. cit._, this forces them to be cuspidal. Finally, Zariski-density of symplectic points follows exactly as in [1, §7.4]. This \(\mathscr{C}\) is the family required in Proposition 10.1, completing the proof of Theorem 8.6.
2303.17650
Comparing Abstractive Summaries Generated by ChatGPT to Real Summaries Through Blinded Reviewers and Text Classification Algorithms
Large Language Models (LLMs) have gathered significant attention due to their impressive performance on a variety of tasks. ChatGPT, developed by OpenAI, is a recent addition to the family of language models and is being called a disruptive technology by a few, owing to its human-like text-generation capabilities. Although many anecdotal examples across the internet have evaluated ChatGPT's strengths and weaknesses, only a few systematic research studies exist. To contribute to the body of systematic research on ChatGPT, we evaluate the performance of ChatGPT on Abstractive Summarization by means of automated metrics and blinded human reviewers. We also build automatic text classifiers to detect ChatGPT-generated summaries. We found that while text classification algorithms can distinguish between real and generated summaries, humans are unable to distinguish between real summaries and those produced by ChatGPT.
Mayank Soni, Vincent Wade
2023-03-30T18:28:33Z
http://arxiv.org/abs/2303.17650v3
# Evaluating and Detecting ChatGPT's Responses on Abstractive Summarization Mayank Soni ADAPT Centre, Trinity College Dublin [email protected] &Vincent Wade ADAPT Centre, Trinity College Dublin [email protected] ###### Abstract Large Language Models (LLMs) have gathered significant attention due to their impressive performance on a variety of tasks. ChatGPT, developed by OpenAI, is a recent addition to the family of language models and is being called a _disruptive technology_ by a few, owing to its human-like text-generation capabilities. Although many anecdotal examples across the internet have evaluated ChatGPT's strengths and weaknesses, only a few systematic research studies exist. To contribute to the body of systematic research on ChatGPT, we evaluate the performance of ChatGPT on _Abstractive Summarization_ by means of automated metrics and blinded human reviewers. We also build automatic text classifiers to detect ChatGPT-generated summaries. **We found that while text classification algorithms can distinguish between real and generated summaries, humans are unable to distinguish between real summaries and those produced by ChatGPT.** ## 1 Introduction ChatGPT is a recent addition to the family of large language models. Specifically, it is created by extending the work on InstructGPT [1] with a dialog-based user interface that is fine-tuned using Reinforcement Learning from Human Feedback (RLHF) [1]1. ChatGPT is estimated to have reached about \(100\) million active users [13] and is being widely used by businesses and customers to accomplish various textual tasks. ChatGPT is easy to use due to its dialog interface and large-scale training via RLHF. Part of its popularity also stems from hitherto unseen abilities, such as code generation. ChatGPT can interact with users in a conversation-style manner and can intelligently "answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests" [14]. Footnote 1: [https://beta.openai.com/docs/model-index-for-researchers](https://beta.openai.com/docs/model-index-for-researchers) However, despite its seemingly magical abilities, anecdotal reports on ChatGPT have suggested significant remaining challenges; for example, it fails at elementary mathematical tasks [15]. OpenAI has listed many limitations of ChatGPT on its website, and its CEO cautioned via a tweet that "Its a mistake to be relying on ChatGPT for anything important" [12]. Despite this, there are commercial web-based applications that run on ChatGPT in the background. ChatGPT has been found to hallucinate knowledge and frame answers confidently yet incorrectly [2, 10]. Despite its widespread usage, systematic studies evaluating ChatGPT are scarce. Towards this end, in this study, we evaluate whether summaries generated by ChatGPT can be distinguished from real summaries by humans and text-classification algorithms, and we evaluate the generated summaries through automated metrics. We first prepare a dataset of \(50\) summaries generated by ChatGPT, then analyse how ChatGPT summaries score across various automated metrics, then evaluate whether blinded human reviewers can distinguish between real and generated summaries, and finally build a text classifier to detect the two sources. The sections below discuss related work, data gathering, prompting and results.
## 2 Related work We briefly survey related work in the areas of _ChatGPT_ evaluation and list some important studies conducted in _Summarization_. **ChatGPT Evaluation.** Since the release of ChatGPT, many anecdotal evaluations of ChatGPT's abilities and weaknesses have been posted online, but systematic research on its strengths and weaknesses is still a small area. Jiao et al. (2023) evaluated ChatGPT's ability at _machine translation_ and found that "ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource or distant languages." Kung et al. (2022) evaluated ChatGPT's performance on the United States Medical Licensing Examination (USMLE) and found that ChatGPT "performed at or near the passing threshold for all three exams without any specialized training or reinforcement". Bang et al. (2023) is perhaps the most comprehensive evaluation of ChatGPT: its subsection on summarization shows that _interactivity_ (recursively generating summaries with modified instructions) improves the _ROUGE-1_ score. Gao et al. (2022)'s work comparing ChatGPT-generated scientific abstracts is similar to ours. In that investigation, the authors generated and compared \(10\) _abstracts_ in specific journal formats and evaluated the generated and original abstracts through an AI output detector, a plagiarism detector and blinded human reviewers. The study found that "ChatGPT writes believable scientific abstracts, though with completely generated data. These are original without any plagiarism detected but are often identifiable using an AI output detector and skeptical human reviewers." **Text Summarization.** Summarization is the task of shortening a large text to a smaller version while retaining the main information. There are two broad approaches to summarization: _Extractive_ and _Abstractive_. Extractive summaries contain _as is_ parts from the original text, while abstractive summaries can contain novel words and sentences not found in the original text, like human-written summaries. Neural sequence-to-sequence models can generate _abstractive_ summaries (meaning the summary generated is not limited to selecting and rearranging text from the original passage) and there is substantial literature on neural abstractive summarization (See et al., 2017; Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; Lewis et al., 2019). Recent approaches have utilised Large Language Models (LLMs) (Devlin et al., 2018; Yang et al., 2019; Raffel et al., 2019; Liu et al., 2019), with performance on summarization reported as standard. In this work, we focus on generating abstractive summaries from ChatGPT and comparing them to original summaries. We discuss the precise instructions and the dataset curated in sections 3 and 4 below. ## 3 Dataset We utilise \(50\) _articles_ from the CNN News/Daily News Dataset (Nallapati et al., 2016; Hermann et al., 2015). The articles were selected from version \(3.0.0\) of the test set. The CNN/Daily News Dataset contains \(287,226\) training pairs, \(13,368\) validation pairs and \(11,490\) test pairs. This dataset contains multi-sentence summaries ordered according to the events described in the source article. To prepare a dataset of generated summaries alongside the real summaries, we prompted the ChatGPT Feb 13 Version (OpenAI) with the \(50\) _articles_ from the CNN News/Daily News Dataset (Nallapati et al., 2016; Hermann et al., 2015)
with the following prompt: "Generate a short summary of the following paragraph in as less words as possible"; the details on why we used this prompt are in the paragraph below. We are also releasing the generated summaries2. Footnote 2: github.com/Mayanksoni20/ChatGPT_summaries **Prompting.** ChatGPT is a conversation-style, instruction-based large language model, and its output can depend on the prompt provided. This means two instructions with similar goals can produce different outputs. Keeping this in mind, our goal in selecting a prompt was to produce summaries as similar as possible to the real summaries. Towards this goal, we experimented with a few prompts and finally arrived at the prompt highlighted in bold in Table 1. Other prompts rendered summaries that were longer than the real summaries. ## 4 Results We present the results obtained from automated metrics, human evaluation and text detection in the paragraphs below. **Automated Metrics.** To begin the evaluation, we measure ChatGPT's performance on abstractive summarization by comparing original and generated summaries. We utilize the following metrics: ROUGE (Lin, 2004), reporting the F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L, and METEOR (Banerjee and Lavie, 2005), measuring the word overlap, bigram overlap, longest common subsequence and unigram overlap between the real and generated summaries. We obtain the scores using the HuggingFace evaluate library3. We report the results in Table 2. The ROUGE and METEOR scores obtained will be compared to other baseline models in future versions of this work. **Human Evaluation.** Next, we wanted to evaluate whether blinded reviewers were able to distinguish between generated and original summaries. The task of the reviewers was formulated as follows: we asked reviewers to first read the news article, then read the summary, and then click one of two radio buttons to guess whether the summary was generated by ChatGPT or written by a human. Figure 1 shows the design of the user interface, based on Google Forms. Two volunteers (native speakers of English) were given \(50\) random summaries each. The randomisation worked in such a way that the same volunteer did not receive both (generated and original) summaries of the same article. **We found that our reviewers were not able to distinguish between generated and human summaries**. The accuracy of the human reviewers was \(0.49\). Table 3 shows the confusion matrix of the results obtained. Reviewers also commented that they were certain they could not tell generated from original summaries, highlighting how similar the two were. **ChatGPT Summary Detection.** Being able to automatically distinguish ChatGPT-generated from human-produced text is important for detecting ChatGPT-generated content. In this next step, we wanted to evaluate whether a fine-tuned text classification model is able to distinguish between ChatGPT-generated and human-written summaries. We employ DistilBERT from HuggingFace and fine-tune it on our dataset. After fine-tuning for \(2\) epochs with a learning rate of \(0.0000002\), we achieve an accuracy of \(90\%\) in distinguishing between generated and original summaries. We also utilise sentence embeddings (Reimers and Gurevych, 2019), passing them through XGBoost (Chen and Guestrin, 2016), and obtain an accuracy of \(0.50\). Table 4 shows the results obtained using these algorithms. We leave the implementation of other algorithms for future work.
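To make the evaluation pipeline above concrete, here is a minimal sketch of the automated-metric computation using the HuggingFace `evaluate` library. It is an illustration rather than the authors' released code, and the two summary lists are hypothetical placeholders for the \(50\) pairs.

```python
# Minimal sketch of the automated metrics (ROUGE F1 and METEOR), assuming
# the HuggingFace `evaluate` library; the summary lists are placeholders.
import evaluate

real_summaries = ["A storm hit the coast on Monday."]        # references
generated_summaries = ["A coastal storm struck on Monday."]  # ChatGPT outputs

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

rouge_scores = rouge.compute(predictions=generated_summaries,
                             references=real_summaries)
meteor_score = meteor.compute(predictions=generated_summaries,
                              references=real_summaries)

# F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L, plus the METEOR score.
print(rouge_scores["rouge1"], rouge_scores["rouge2"],
      rouge_scores["rougeL"], meteor_score["meteor"])
```

The detection step can be sketched in the same spirit. The snippet below fine-tunes DistilBERT with the hyperparameters stated in the text (2 epochs, learning rate \(0.0000002\)); the two-example toy dataset and the output directory are assumptions for illustration only.

```python
# Sketch of the DistilBERT detector; the dataset wiring is illustrative.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

data = Dataset.from_dict({
    "text": ["a real summary ...", "a generated summary ..."],  # placeholders
    "label": [0, 1],  # 0 = human-written, 1 = ChatGPT-generated
})
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data = data.map(lambda b: tok(b["text"], truncation=True, padding="max_length"),
                batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="detector", num_train_epochs=2,
                         learning_rate=2e-7, per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=data).train()
```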
## 5 Limitations The study conducted here has a few limitations. First, the number of summaries that could be compared was limited to \(50\). Second, we have not compared the summaries produced by various prompts, and there may be other prompts that could be used to generate summaries. Third, we have not yet contrasted ChatGPT's summarization performance with that of other models and baselines. Fourth, both of our reviewers were native English speakers; however, it may be worthwhile to examine whether non-native speakers predict real and generated summaries differently. Fifth, the accuracy of automatic summary detection could be increased by using more sophisticated algorithms. ## 6 Discussion and Conclusion In this concise study, through comparing 50 real and generated summaries, we found that while text classification algorithms can distinguish between real and generated summaries, humans are unable to distinguish between real summaries and those produced by ChatGPT. Our reviewers were not certain whether a summary was produced by ChatGPT or a human, and they commented that they felt they were merely guessing the source of the summaries. We attribute this to the lack of any distinguishing feature between the two sources, which in turn we attribute to our careful selection of prompts to generate summaries as similar as possible to the original summaries. Additionally, we were able to identify ChatGPT-generated summaries with 90% accuracy. ## Acknowledgements This work was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224 and the Science Foundation Ireland ADAPT Centre under Grant No. 13/RC/2106.

| Algorithm | Accuracy | F1 Score |
| --- | --- | --- |
| SentTrans. + XGB | 0.50 | **0.60** |
| DistilBERT | **0.90** | 0.33 |

Table 4: Results of ChatGPT summary classification.

Figure 1: User interface of summaries shown to human reviewers.
2309.01069
Separable Hamiltonian Neural Networks
Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system under the learning bias of Hamilton's equations. A recent observation is that embedding a bias regarding the additive separability of the Hamiltonian reduces the regression complexity and improves regression performance. We propose separable HNNs that embed additive separability within HNNs using observational, learning, and inductive biases. We show that the proposed models are more effective than the HNN at regressing the Hamiltonian and the vector field. Consequently, the proposed models predict the dynamics and conserve the total energy of the Hamiltonian system more accurately.
Zi-Yu Khoo, Dawen Wu, Jonathan Sze Choong Low, Stéphane Bressan
2023-09-03T03:54:43Z
http://arxiv.org/abs/2309.01069v4
# Separable Hamiltonian Neural Networks ###### Abstract The modelling of dynamical systems from discrete observations is a challenge faced by modern scientific and engineering data systems. Hamiltonian systems are one such fundamental and ubiquitous class of dynamical systems. Hamiltonian neural networks are state-of-the-art models that regress, in an unsupervised manner, the Hamiltonian of a dynamical system from discrete observations of its vector field under the learning bias of Hamilton's equations. Yet Hamiltonian dynamics are often complicated, especially in higher dimensions where the state space of the Hamiltonian system is large relative to the number of samples. A recently discovered remedy to alleviate the complexity between state variables in the state space is to leverage the additive separability of the Hamiltonian system and embed that additive separability into the Hamiltonian neural network. Following the nomenclature of physics-informed machine learning, we propose three _separable Hamiltonian neural networks_. These models embed additive separability within Hamiltonian neural networks. The first model uses additive separability to quadratically scale the amount of data for training Hamiltonian neural networks. The second model embeds additive separability within the loss function of the Hamiltonian neural network. The third model embeds additive separability through the architecture of the Hamiltonian neural network using conjoined multilayer perceptrons. We empirically compare the three models against state-of-the-art Hamiltonian neural networks, and demonstrate that the separable Hamiltonian neural networks, which alleviate complexity between the state variables, are more effective at regressing the Hamiltonian and its vector field. Hamiltonian dynamical systems, physics-informed neural networks, physics properties, system symmetries, additive separability ## I Introduction Modelling dynamical systems is a core challenge for science and engineering. The movement of a pendulum, the wave function of a quantum-mechanical system, the movement of fluids around the wing of a plane, the weather patterns under climate change, and the populations forming an ecosystem are spatio-temporal behaviours of physical phenomena described by dynamical systems. Hamiltonian systems [1] are a class of dynamical systems governed by Hamilton's equations, which indicate conservation of the Hamiltonian value of the system. A recent advancement in the modelling of dynamical systems is Hamiltonian neural networks [2, 3], which are physics-informed neural networks with learning biases given by Hamilton's equations and their corollaries [4]. Hamiltonian neural networks are universal function approximators [5] capable of modelling non-linear multivariate functions [6]. They use a learning bias [4] based on physics information regarding Hamilton's equations [2, 3] to aid the neural network in converging towards solutions that adhere to physics laws [4]. Hamiltonian neural networks regress, in an unsupervised manner, the vector field and the Hamiltonian of a dynamical system from discrete observations of its state space or vector field under the learning bias of Hamilton's equations, and outperform their physics-uninformed counterparts in doing so [7]. However, Hamiltonian dynamics are often complicated and chaotic, especially in higher dimensional systems such as the Toda lattice and Hénon-Heiles systems. The state space of the Hamiltonian system is large relative to the number of samples.
A redeeming feature of these systems is their additive separability. As highlighted by Gruver et al. in the ICML 2022 spotlight paper, additive separability "_allowed the physics-informed neural network to avoid[...] artificial complexity from its coordinate system_" (the input variables) and improve its performance [8]. This motivates further informing Hamiltonian neural networks of additive separability to alleviate the complexity between state variables of Hamiltonian systems. The main technical contribution of this work is the embedding of additive separability into a Hamiltonian neural network to regress a Hamiltonian and vector field. We propose three Hamiltonian neural networks that independently embed additive separability using three modes of biases. We call this family of Hamiltonian neural networks _separable Hamiltonian neural networks_. The three separable Hamiltonian neural networks follow the nomenclature of Karniadakis et al. [4] and embed additive separability using three modes of biases: observational bias, learning bias, and inductive bias. The first model embeds an observational bias by training on newly generated data that embody separability. The second model embeds a learning bias through the loss function of the Hamiltonian neural network. The third model embeds an inductive bias through the architecture of the Hamiltonian neural network by means of conjoined multilayer perceptrons. We empirically evaluate the performance of the proposed models against a baseline Hamiltonian neural network on a variety of representative additively separable Hamiltonian systems. We compare their performances in regressing an additively separable Hamiltonian and a vector field. In this paper, Section II presents the necessary background on dynamical, Hamiltonian and separable Hamiltonian systems. Section III synthesises the related work and positions the main contribution. Section IV introduces the proposed separable Hamiltonian neural networks. Section V compares the proposed models against the baseline Hamiltonian neural network in regressing additively separable Hamiltonians and vector fields. Section VI concludes the paper. ## II Background ### _Dynamical Systems and Hamiltonian Systems_ Dynamical systems theory [9] studies the temporal dynamics, or time evolution, of dynamical systems. The phase or state space of a dynamical system is an \(\mathbb{R}^{2\times n}\) multidimensional space which represents all possible states of a dynamical system comprising the combination of the system's \(2\times n\) state variables [9], also called degrees of freedom, or parameters. Without loss of generality, we consider autonomous, time-independent systems. The dynamics of an autonomous system are captured by the vector field [9, 10] in \(\mathbb{R}^{2\times n}\), formed by the time derivatives of the state variables. A Hamiltonian system [1] is a dynamical system characterised by a smooth, real-valued scalar function of the state variables [10] called the Hamiltonian function or simply the Hamiltonian, \(H\in\mathbb{R}\). The Hamiltonian system is governed by Hamilton's equations [1], a system of \(2\times n\) differential equations given in Equation 1 that define the vector field \(F(x,y)=\left(\frac{\mathrm{d}x}{\mathrm{d}t},\frac{\mathrm{d}y}{\mathrm{d}t}\right)\) [1, 11]. A Hamiltonian with \(2\times n\) state variables has a dimension, or axis, of \(n\).
Conventionally, the set of variables can be evenly split into two sets called generalised variables, denoted \(\vec{x}\) for position and \(\vec{y}\) for momentum. \[\frac{\mathrm{d}\vec{x}}{\mathrm{d}t}=\frac{\partial H(\vec{x},\vec{y})}{\partial\vec{y}},\quad\frac{\mathrm{d}\vec{y}}{\mathrm{d}t}=-\frac{\partial H(\vec{x},\vec{y})}{\partial\vec{x}}. \tag{1}\] A classic example of a Hamiltonian system is the non-linear pendulum. Figure 1a shows the vector field of a non-linear pendulum in its two-dimensional phase space with state variables position (angle when oscillating in a plane) and momentum (mass multiplied by velocity). The vector field is formed by the time derivatives of the state variables. For a non-linear pendulum of unitary mass, the Hamiltonian is the sum of kinetic and potential energy. The heatmap in Figure 1a shows the value of the Hamiltonian in the phase space. ### _Separable Hamiltonian Systems_ A Hamiltonian system is separable if the Hamiltonian can be separated into additive terms, each dependent on either \(\vec{x}\) or \(\vec{y}\), where \(\vec{x}\) and \(\vec{y}\) are disjoint subsets of the state variables of a Hamiltonian system [12]. The separable Hamiltonian is defined as \(H(\vec{x},\vec{y})=T(\vec{x})+V(\vec{y})\) where \(T\) and \(V\) are arbitrary functions [12]. Furthermore, the mixed partial derivative of the separable Hamiltonian is zero following Equation 2. \[\frac{\partial^{2}H}{\partial x\partial y}=\frac{\partial}{\partial x}\left(\frac{\partial(T(\vec{x})+V(\vec{y}))}{\partial y}\right)=\frac{\partial}{\partial x}\left(\frac{\partial V(\vec{y})}{\partial y}\right)=0. \tag{2}\] ### _Examples of Separable Hamiltonian Systems_ For illustration and comparative empirical evaluation of the models presented, we consider eight Hamiltonian systems shown in Table I, of which five are classical and mechanical and three are abstract. These allow the models to demonstrate their performance in predicting a range of Hamiltonian dynamics comprising different functions, different values of \(n\), and well-behaved or chaotic dynamics, from observations.
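To make this background concrete, the following sketch writes down the pendulum Hamiltonian from this section and evaluates Hamilton's equations (Equation 1) and the separability condition (Equation 2) with automatic differentiation. It is an illustration only, not code from the paper; the unit constants (mass, length and gravity all equal to one) and the sample state are assumptions.

```python
# Illustrative sketch (not the paper's code): the non-linear pendulum
# Hamiltonian H(x, y) = y^2/2 + (1 - cos x), its vector field via
# Hamilton's equations, and the additive-separability check.
import torch

def hamiltonian(x, y):
    # Kinetic energy y^2/2 plus potential energy (1 - cos x),
    # with unit mass, length and gravity (assumed constants).
    return 0.5 * y ** 2 + (1.0 - torch.cos(x))

x = torch.tensor(0.3, requires_grad=True)  # position (angle)
y = torch.tensor(1.2, requires_grad=True)  # momentum

H = hamiltonian(x, y)
dH_dx, dH_dy = torch.autograd.grad(H, (x, y), create_graph=True)

# Hamilton's equations (Equation 1): dx/dt = dH/dy, dy/dt = -dH/dx.
dx_dt, dy_dt = dH_dy, -dH_dx

# Separability (Equation 2): d/dx (dH/dy) must vanish. Autograd returns
# None here because dH/dy carries no dependence on x at all, i.e. the
# mixed partial derivative is identically zero, as Equation 2 requires.
mixed = torch.autograd.grad(dH_dy, x, allow_unused=True)[0]
print(dx_dt.item(), dy_dt.item(), mixed)
```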
## III Related Work ### _Predicting Function Values and Vector Fields_ There are multiple statistical methods to regress correlated vector-valued functions [13, 14, 15, 16]. Hastie et al. [6] addressed the regressing of vector fields using multiple output regression and machine learning. State-of-the-art works regress the vector field of a dynamical system using neural networks that model ordinary [17] and partial [18] differential equations. For the regressing of Hamiltonian dynamical systems, Bertalan et al. [2] and Greydanus et al. [3] independently use physics-informed machine learning methods to regress the value of the Hamiltonian from multiple evenly-spaced samples along multiple Hamiltonian trajectories. Our work emulates theirs, embedding Hamilton's equations within the loss function of a neural network to regress the Hamiltonian [3] and using automatic differentiation of the regressed Hamiltonian to yield the regressed vector field [3]. However, our work uses instantaneous observations of the Hamiltonian vector field, which are sufficient to train the Hamiltonian neural network. Recent advancements in regressing Hamiltonian vector fields use neural ordinary differential equations [19, 20, 21, 22, 23] and leverage the symplectic property of the Hamiltonian. They use symplectic integration to regress the Hamiltonian vector field. Some further leverage Hamiltonian separability [19, 21] by using a leapfrog integrator. Others additionally require the Hamiltonian to be mechanical [22]. Works based on neural ordinary differential equations require trajectories of the Hamiltonian system as input, while our model only requires instantaneous observations of the Hamiltonian vector field. ### _Biases in Neural Networks_ Karniadakis et al. focus on three modes of biasing a regression model: observational bias, learning bias, and inductive bias [4]. Observational biases are introduced directly through data that embody the underlying physics, or through carefully crafted data augmentation procedures. With sufficient data to cover the input domain of a regression task, machine learning methods have demonstrated remarkable power in achieving accurate interpolation between the dots [4].

Fig. 1: (a) Non-linear pendulum vector field (black arrows) and Hamiltonian (heatmap); (b) random samples of the non-linear pendulum vector field.

Learning biases are soft constraints introduced by appropriate loss functions, constraints and inference algorithms that modulate the training phase of a machine learning model to explicitly favour convergence towards solutions that adhere to the underlying physics [4]. Inductive biases are prior assumptions incorporated by tailored interventions to a machine learning model architecture, so regressions are guaranteed to implicitly and strictly satisfy a set of given physical laws [4]. Hamiltonian neural networks leverage learning biases and use Hamilton's equations as soft constraints in the loss function of the neural network to favour convergence toward the Hamiltonian [2, 3]. Seminal work on embedding additive separability within neural networks utilised block diagonal matrices to regress elements of an additively separable finite element problem [24]. Recently, Zhong et al. [22] leveraged the additive separability of mechanical Hamiltonians as an inductive bias to design neural ordinary differential equations. Gruver et al. [8] empirically examined these neural networks and found that their improved generalization resulted from the bias of a second-order structure [8], which arose from the additive separability of the modelled mechanical Hamiltonian and "_allowed the physics-informed neural network to avoid[...] artificial complexity from its coordinate system_" (the input variables), improving its performance [8]. Our work also exploits the additive separability of the modelled Hamiltonian: we incorporate knowledge regarding the additive separability of the Hamiltonian function within the Hamiltonian neural network in the style of Karniadakis' physics-informed machine learning [4], so that regressions sought of the Hamiltonian function and vector field are guaranteed to implicitly or explicitly satisfy this separability. ## IV Methodology Four Hamiltonian neural networks are compared for the task of regressing the Hamiltonian and vector field of a Hamiltonian system. One, the baseline, is uninformed of the additive separability of the Hamiltonian system. Three proposed models are informed via observational, learning and inductive biases respectively. Subsections IV-A, IV-B, IV-C, and IV-D introduce the four models. Section V empirically compares the models on their abilities to perform the task.
### _The Baseline Hamiltonian Neural Network_ We adapt Hamiltonian neural networks (HNNs) [2, 3] (leftmost, Figure 2) for the task of regressing the Hamiltonian and vector field of a Hamiltonian system from random samples of the vector field. Hamiltonian neural networks inform a neural network that a system is Hamiltonian by embedding a Hamiltonian learning bias into the neural network. Equation 3 defines the loss function of the Hamiltonian neural network. \(f_{0}\) is an arbitrary pinning term that "pins" the regressed Hamiltonian to one among several solutions that differ by an additive constant, and reduces the search space for convergence. \(f_{1}\) and \(f_{2}\) are Hamilton's equations corresponding to Equation 1. \(f_{0}\), \(f_{1}\) and \(f_{2}\) introduce biases that favour the convergence of the Hamiltonian neural network toward the underlying physics of the regressed Hamiltonian. \(f_{*}\) defines the loss function of the neural network as a linear combination of equations \(f_{0}\) to \(f_{2}\). \(c_{k}\) is the coefficient of each \(f_{k}\). One can assume that \(c_{k}=1\), although additional knowledge of the system can be used to emphasise any \(f_{k}\) [2]. \[f_{0}=\left(\hat{H}(\vec{x}_{0},\vec{y}_{0})-H_{0}\right)^{2},\quad f_{1}=\left(\frac{\partial\hat{H}}{\partial\vec{y}}-\frac{\mathrm{d}\vec{x}}{\mathrm{d}t}\right)^{2},\] \[f_{2}=\left(\frac{\partial\hat{H}}{\partial\vec{x}}+\frac{\mathrm{d}\vec{y}}{\mathrm{d}t}\right)^{2},\quad f_{*}\Big(\vec{x},\vec{y},\frac{\mathrm{d}\vec{x}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}}{\mathrm{d}t};w\Big)=\sum_{k=0}^{2}c_{k}f_{k}. \tag{3}\] To perform the task of regressing the Hamiltonian and vector field of an \(n\)-dimensional Hamiltonian system, the Hamiltonian neural network uses instantaneous, random samples of the \(2\times n\) state variables and the \(2\times n\) vectors, like those seen in Figure 1b. The state variables are input to the model. The output of the model is the regression surrogate, \(\hat{H}\), which is an estimator of the Hamiltonian \(H\). The training is supervised through the loss function, which uses the \(2\times n\) vectors corresponding to the \(2\times n\) state variables that were input to the model. Via gradient descent, \(f_{*}\) is minimised. The derivative of the surrogate \(\hat{H}\) at all input state variables is the surrogate vector field. It is computed via automatic differentiation [25] of the Hamiltonian neural network. The separable Hamiltonian neural networks introduced in subsections IV-B, IV-C and IV-D adopt the Hamiltonian learning bias of the Hamiltonian neural network and further embed biases regarding additive separability. They perform the task of regressing the Hamiltonian and vector field in the same way.
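To illustrate how the learning bias of Equation 3 can be implemented, here is a minimal PyTorch sketch. It is a reconstruction for illustration, not the authors' released code: the network widths, the coefficients \(c_{k}=1\), the pinning point and all variable names are assumptions, and it is written for \(n=1\) (one position \(x\), one momentum \(y\)).

```python
# Minimal sketch of the Hamiltonian learning bias (Equation 3) for n = 1,
# with c_k = 1; hyperparameters and names are illustrative assumptions.
import torch
import torch.nn as nn

H_net = nn.Sequential(nn.Linear(2, 64), nn.Softplus(),
                      nn.Linear(64, 64), nn.Softplus(),
                      nn.Linear(64, 1))  # surrogate Hamiltonian H_hat(x, y)

def hnn_loss(xy, dxy_dt, pin_state, pin_value):
    xy = xy.requires_grad_(True)
    H_hat = H_net(xy).sum()
    grads = torch.autograd.grad(H_hat, xy, create_graph=True)[0]
    dH_dx, dH_dy = grads[:, 0], grads[:, 1]
    f1 = (dH_dy - dxy_dt[:, 0]).pow(2).mean()          # dx/dt =  dH/dy
    f2 = (dH_dx + dxy_dt[:, 1]).pow(2).mean()          # dy/dt = -dH/dx
    f0 = (H_net(pin_state) - pin_value).pow(2).mean()  # pinning term
    return f0 + f1 + f2

# One gradient step on a batch of (placeholder) sampled states and vectors.
xy, dxy_dt = torch.randn(80, 2), torch.randn(80, 2)
pin_state, pin_value = torch.zeros(1, 2), torch.zeros(1, 1)  # pin H(0,0) = 0
opt = torch.optim.Adam(H_net.parameters(), lr=1e-3)
opt.zero_grad()
loss = hnn_loss(xy, dxy_dt, pin_state, pin_value)
loss.backward()
opt.step()
```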
### _Embedding a Separability Observational Bias within a Hamiltonian Neural Network_ The baseline Hamiltonian neural network can be informed of additive separability by embedding an observational bias into the Hamiltonian neural network. Given data of the vector field comprising instantaneous state variables and vectors, additive separability is used to quadratically scale the amount of data. Training the Hamiltonian neural network on the new data embeds the observational bias and allows the model to regress a surrogate Hamiltonian that reflects the additive separability of the data. The model, with an embedded observational bias, is a separable Hamiltonian neural network. Given original data comprising samples of tuples \((\vec{x},\vec{y})\), new samples are generated. Additive separability means the generation comprises new combinations of the \(\vec{x}\) and \(\vec{y}\) from the original data. As the Hamiltonian is additively separable, the first derivatives of the new samples are dependent solely on \(\vec{x}\) or \(\vec{y}\), and can be inferred from the original samples at the respective values of \(\vec{x}\) or \(\vec{y}\). The amount of data available to train the Hamiltonian neural network is thereby increased. Consider original data comprising two samples \((x_{1},y_{1})\) and \((x_{2},y_{2})\) in blue in Figure 3a. With additive separability, two new samples created are \((x_{1},y_{2})\) and \((x_{2},y_{1})\) in red. Figure 3b shows that as more samples from the original data (in blue) are available, quadratically more new samples (in red) can be created. Generally, up to \(N\times(N-1)\) new samples can be created from original data comprising \(N\) samples. The observational bias creates more data to improve coverage of the input domain of the Hamiltonian regression task, and the regressed surrogate Hamiltonian reflects the additive separability of the data. This improves regression performance but increases the time taken for forward and backward propagation of the separable Hamiltonian neural network. In selecting the optimal number of samples, there is a trade-off between regression performance and training time. ### _Embedding a Separability Learning Bias within a Hamiltonian Neural Network_ The baseline Hamiltonian neural network can be informed of additive separability by embedding a learning bias. The resulting separable Hamiltonian neural network with learning bias (centre, Figure 2) favours convergence towards a surrogate Hamiltonian that is additively separable. \(f_{3}\) in Equation 4 is the mixed partial derivative of the surrogate Hamiltonian \(\hat{H}\) corresponding to Equation 2. \(f_{*}^{sep}\) is the loss function of the separable Hamiltonian neural network with learning bias. It is a linear combination of equations \(f_{0}\) to \(f_{3}\) from Equations 3 and 4. It introduces a bias that favours convergence of the Hamiltonian neural network toward a surrogate Hamiltonian that is additively separable. \[f_{3}=\left(\frac{\partial^{2}\hat{H}}{\partial x\partial y}\right)^{2}\quad\forall x\in\vec{x},\,y\in\vec{y},\qquad f_{*}^{sep}=\sum_{k=0}^{3}c_{k}f_{k}. \tag{4}\] A larger \(c_{3}\) allows the separable Hamiltonian neural network to emphasise \(f_{3}\) and additive separability of the surrogate Hamiltonian, but may also decrease the emphasis of \(f_{0}\), \(f_{1}\) and \(f_{2}\), presenting a trade-off in the optimal value of \(c_{3}\).

Fig. 3: (a) \(N\times(N-1)=2\) new samples are created from \(N=2\) samples; (b) \(N\times(N-1)=12\) new samples are created from \(N=4\) samples.

Fig. 2: Architecture of the baseline Hamiltonian neural network and proposed separable Hamiltonian neural network with observational bias (left), proposed separable Hamiltonian neural network with learning bias (centre), and proposed separable Hamiltonian neural network with inductive bias (right).
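The extra term \(f_{3}\) of Equation 4 can be computed with a second pass of automatic differentiation. The sketch below reuses the hypothetical `H_net` from the earlier sketch and is again written for \(n=1\); it shows the mechanism only and is not the authors' implementation.

```python
# Sketch of the separability learning bias f_3 (Equation 4) for n = 1,
# reusing the hypothetical H_net from the earlier sketch.
def mixed_partial_penalty(xy):
    xy = xy.requires_grad_(True)
    H_hat = H_net(xy).sum()
    dH = torch.autograd.grad(H_hat, xy, create_graph=True)[0]
    dH_dy = dH[:, 1].sum()
    # Differentiate dH/dy with respect to the inputs to get d^2H/(dx dy).
    d2H = torch.autograd.grad(dH_dy, xy, create_graph=True)[0]
    return d2H[:, 0].pow(2).mean()  # f_3 drives the mixed partial to zero

# Loss of Equation 4: f_star_sep = f_0 + f_1 + f_2 + c_3 * f_3.
```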
### _Embedding a Separability Inductive Bias within a Hamiltonian Neural Network_ A baseline Hamiltonian neural network can be informed of additive separability by embedding an inductive bias. The resulting separable Hamiltonian neural network with inductive bias (rightmost, Figure 2) regresses a surrogate Hamiltonian that implicitly and strictly satisfies additive separability. The proposed separable Hamiltonian neural network with inductive bias is not fully connected. The model comprises two smaller neural networks with the same number of layers, conjoined only at the output layer. Each smaller conjoined neural network has one output. Their sum (indicated by the dotted lines in Figure 2) is the surrogate Hamiltonian. The architecture of the proposed separable Hamiltonian neural network ensures additive separability as each smaller conjoined neural network has an input of either \(\vec{x}\) or \(\vec{y}\), and is, therefore, a function of either \(\vec{x}\) or \(\vec{y}\). During forward propagation, the sum of the conjoined neural networks ensures the surrogate Hamiltonian is always the sum of two independent functions of \(\vec{x}\) and \(\vec{y}\). The mixed partial derivative of the surrogate Hamiltonian is by design always zero. Additive separability is strictly satisfied. The two conjoined neural networks can be trained consecutively or simultaneously in parallel; training them in parallel minimises training time. Furthermore, the summation layer of the separable Hamiltonian neural network can utilise a simple sum or introduce weights and biases. The different implementations for the separable Hamiltonian neural network with inductive bias present a trade-off in finding its optimal implementation.
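A sketch of this conjoined architecture for \(n=1\) is given below: two branches, one per state variable, summed at the output so the mixed partial derivative is zero by construction. The widths and names are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the inductive-bias architecture: H_hat(x, y) = T(x) + V(y).
import torch
import torch.nn as nn

class SeparableHNN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Linear(1, width), nn.Softplus(),
                                 nn.Linear(width, width), nn.Softplus(),
                                 nn.Linear(width, 1))
        self.T = branch()  # sees only the position x
        self.V = branch()  # sees only the momentum y

    def forward(self, x, y):
        # The simple sum at the output enforces additive separability,
        # so d^2 H_hat / (dx dy) = 0 by design.
        return self.T(x) + self.V(y)

model = SeparableHNN()
H_hat = model(torch.randn(80, 1), torch.randn(80, 1))  # placeholder batch
```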
## V Performance Evaluation The four models are compared on the task of regressing the Hamiltonian and vector field of a Hamiltonian system. To reach the comparison, the proposed separable Hamiltonian neural networks must first be empirically studied and optimised. Thereafter, the baseline Hamiltonian neural network and three optimised models can be compared. In Experiment 1, the optimal number of samples to create from the original data for the separable Hamiltonian neural network with observational bias is empirically studied and identified. In Experiment 2, the optimal value of \(c_{3}\) for the separable Hamiltonian neural network with learning bias is empirically studied and identified. In Experiment 3, the optimal implementation for the separable Hamiltonian neural network with inductive bias is empirically studied and identified. Finally, in Experiment 4, the baseline model and three optimised separable Hamiltonian neural networks are compared on the task of regressing the Hamiltonian and vector field, and their respective training times. In finding the optimal implementations of the three informed variants, we only compare their performance on the task of regressing the vector field. This is because regressing the Hamiltonian involves finding the real-valued sum of the integrated vector field. Errors made by the models in regressing the vector field may be cancelled out when regressing the Hamiltonian. Therefore, a model that regresses the Hamiltonian well may not regress the vector field well, but a model that regresses the Hamiltonian vector field well can also regress the Hamiltonian well. For completeness, and to demonstrate this phenomenon, we regress and present results for both the Hamiltonian and the vector field in Experiment 4. For the regression of the Hamiltonian, the performance of the models is measured by the absolute or L1-error between the surrogate Hamiltonian of a model and the true Hamiltonian from test data. The absolute error is computed following Equation 5. For the regression of the vector field, the performance of the models is measured by the vector error between the derivative of the surrogate Hamiltonian of a model and the true vector field from test data. The vector error is computed following Equation 6 [26]. \(\hat{v}\) is the regressed vector and \(v\) is the true vector. The test data set comprises \(d=s^{2n}\) vectors, with \(s=10\) evenly spaced states in each dimension of the phase space for each system. \[E_{H}=\frac{1}{d}\sum_{k=1}^{d}||\hat{H}_{k}-H_{k}||_{1}, \tag{5}\] \[E_{V}=\frac{1}{d}\sum_{k=1}^{d}\frac{||\hat{v}_{k}-v_{k}||_{2}}{||v_{k}||_{2}}. \tag{6}\] All experiments are evaluated over the Hamiltonian systems shown in Table I. The general experimental setup for all models in all experiments is as follows. Training data comprising \(512\) samples of the state variables and vector field are generated uniformly at random within the sampling domain for each Hamiltonian system. The sampling domains are shown in columns 2 and 3 of Table II. The samples comprise tuples of the state variables \((\vec{x},\vec{y})\) and their corresponding vectors or time derivatives. The models to be experimented on are designed with two hidden layers, an Adam optimizer and softplus activation. In training, 20% of the training data is set aside as validation data for a dynamic stopping criterion using validation-based early stopping [27], and a batch size of 80 is used. All models have an input layer with width \(2\times n\), two hidden layers with width shown in columns 6 and 8 of Table II, and one output layer with width one. Samples of state variables are input to the models. Samples of the vector field are used in the loss function of the models. All models are trained until convergence. In order to find an optimal bias-variance trade-off, the training will terminate if there is no decrease in the validation loss for 4,000 epochs in a row. The output of the models is the surrogate Hamiltonian. The surrogate vector field is computed via automatic differentiation of the surrogate Hamiltonian with respect to its inputs \(\vec{x}\) and \(\vec{y}\). It is equivalent to the vector field following Equation 1. All models are trained in PyTorch on a GeForce GTX1080 GPU with 32 GB RAM. The complete code in Python and results for the models discussed are available at github.com/zykhoo/SeparableNNs. All experiments are repeated for 20 random seeds. ### _Experiment 1: Optimising the Separable Hamiltonian Neural Network with Observational Bias_ This subsection details the experimental setup to empirically study the trade-off between regression performance and regression time taken for the separable Hamiltonian neural network with observational bias, by determining the optimal number of new samples to create from data with \(N\) samples. #### V-A1 Experimental Setup With original training data of size \(512\), 20% of the data is first set aside as validation data. The remaining 80%, or \(N=409\) samples of training data, comprising samples \((\vec{x}_{i},\vec{y}_{i})\) and time derivatives \((\frac{\mathrm{d}\vec{x}_{i}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i}}{\mathrm{d}t})\ \forall i\in N\), is doubled by creating new data comprising samples \((\vec{x}_{i},\vec{y}_{i+1})\) with time derivatives \((\frac{\mathrm{d}\vec{x}_{i}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i+1}}{\mathrm{d}t})\ \forall i\in N\), then appending this new data to the original data. Generally, the training data can be increased \(k\) times by creating new data comprising samples \((\vec{x}_{i},\vec{y}_{i+m})\) with time derivatives \((\frac{\mathrm{d}\vec{x}_{i}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i+m}}{\mathrm{d}t})\) for all \(m\in[1,k]\), then appending all \(k\) sets of new samples to the original data.
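One plausible reading of this augmentation, sketched for \(n=1\), is given below. The cyclic index shifts, and the pairing of each derivative component with the state variable it depends on under Equation 1 (\(\mathrm{d}x/\mathrm{d}t\) depends only on \(y\) for a separable Hamiltonian, and \(\mathrm{d}y/\mathrm{d}t\) only on \(x\)), are assumptions of this reconstruction, not the authors' code.

```python
# Sketch of the observational-bias augmentation: shift the y-columns by
# m = 1..k (cyclically, an assumption) to form new (x_i, y_{i+m}) samples.
import torch

def augment(xy, dxy_dt, k):
    xs, ys = xy[:, :1], xy[:, 1:]
    dxs, dys = dxy_dt[:, :1], dxy_dt[:, 1:]
    new_xy, new_d = [xy], [dxy_dt]
    for m in range(1, k + 1):
        ys_m = torch.roll(ys, -m, dims=0)    # y_{i+m}
        dxs_m = torch.roll(dxs, -m, dims=0)  # dx/dt depends only on y for a
                                             # separable H, so it moves with y
        new_xy.append(torch.cat([xs, ys_m], dim=1))
        new_d.append(torch.cat([dxs_m, dys], dim=1))  # dy/dt stays with x_i
    return torch.cat(new_xy), torch.cat(new_d)

big_xy, big_d = augment(torch.randn(409, 2), torch.randn(409, 2), k=2)
```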
The separable Hamiltonian neural network with observational bias is set up following the leftmost architecture in Figure 2. All details of the setup, training and evaluation follow from the description above in Section V. The separable Hamiltonian neural network is trained and evaluated for models where \(k=\{1,2,3,4,5,10,20,30,40,50,100,200,300,400\}\). #### V-A2 Experimental Results We report the vector error and training time for the separable Hamiltonian neural network with observational bias in Tables III and IV. Generally, from Tables III and IV, as \(k\) increases, the vector error decreases and training time increases. The decrease in vector error is not proportional to the amount of data. The marginal improvement in vector error decreases as \(k\) increases. Furthermore, the time taken for each epoch increases proportionately to \(k\). However, the training time does not, because as \(k\) increases, the number of epochs required for convergence decreases. The time taken for the model to converge when \(k=400\) (where the amount of data is four hundred times the original) is not 200 times more than the time taken when \(k=1\) (where the amount of data is double the original). These suggest that smaller values of \(k\) are sufficient to cover the input domain and emphasise the additive separability of the regression task. Nonetheless, in general, the model with \(k=400\) regresses the vector field best, and the model with \(k=1\) is the fastest. A good trade-off that balances reducing the vector error and training time is the model with \(k=2\). ### _Experiment 2: Optimising the Separable Hamiltonian Neural Network with Learning Bias_ This subsection details the experimental setup to empirically study the trade-off between emphasising additive separability of the Hamiltonian regression task and the terms \(f_{0}\), \(f_{1}\) and \(f_{2}\) in Equation 3. The optimal value of \(c_{3}\) is empirically determined. #### V-B1 Experimental Setup The separable Hamiltonian neural network with learning bias is set up following the middle architecture in Figure 2 with loss function Equation 4. All details of the setup, training and evaluation follow from the description above in Section V. The separable Hamiltonian neural network is trained and evaluated for cases where \(c_{3}=\{0.25,0.50,1.00,2.00,4.00\}\). #### V-B2 Experimental Results We report the vector error calculated following Equation 6 for the separable Hamiltonian neural network with learning bias in Table V. Table V shows that as \(c_{3}\) increases, the vector error decreases. Table VI shows that as \(c_{3}\) increases, the number of epochs and training time required increases. These suggest that as the value of \(c_{3}\) increases, the emphasis on additive separability increases, and the separable Hamiltonian neural network with learning bias places less emphasis on learning Hamilton's equations. As a result, the number of epochs required for the model to learn the surrogate Hamiltonian and converge increases, increasing training time. The model with \(c_{3}=4.00\) regresses the vector field best. The model with \(c_{3}=0.25\) is the fastest.
A good trade-off that balances the importance of additive separability and Hamilton's equations is \(c_{3}=1.00\), as it often outperforms the model with \(c_{3}=2.00\); we therefore take \(c_{3}=1.00\) as the optimal value. ### _Experiment 3: Optimising the Separable Hamiltonian Neural Network with Inductive Bias_ This subsection details the experimental setup to empirically study the trade-off between different implementations of the separable Hamiltonian neural network with inductive bias. #### V-C1 Experimental Setup The separable Hamiltonian neural network with inductive bias is set up following the rightmost architecture in Figure 2. The width of each smaller conjoined neural network is shown in the second-to-last column of Table II. The proposed separable Hamiltonian neural network with inductive bias trains the two conjoined neural networks in two ways. Firstly, consecutively, where \(\vec{x}\) is first input to the first conjoined neural network, which outputs \(f(\vec{x})\), then \(\vec{y}\) is input to the second conjoined neural network, which outputs \(g(\vec{y})\). Their sum is the surrogate Hamiltonian. Secondly, simultaneously and in parallel, by designing each layer of the separable Hamiltonian neural network such that it appends the respective layers of the two conjoined neural networks. For the case where the model is trained in parallel, forward propagation through the proposed model is computed as \(x_{n}=\sigma(W_{n}\times x_{n-1}+B_{n})\), where \(\sigma\) is the activation function, \(x_{n}\) is the output of layer \(n\) calculated from the output of layer \(n-1\), \(W_{n}\) is the weight matrix of layer \(n\) of shape \(2\times L_{n}\times L_{n-1}\), where \(L_{n}\) and \(L_{n-1}\) are the widths of layers \(n\) and \(n-1\) respectively, and \(B_{n}\) is the bias matrix of layer \(n\) of shape \(2\times 1\times L_{n}\). The \(2\) in both the weight matrix and bias matrix corresponds to the _two_ disjoint subsets of the state variables of the additively separable Hamiltonian. With this architecture, both smaller conjoined neural networks are trained simultaneously in parallel, with each forward and backward propagation using one graphics processing unit.
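The batched forward pass described above can be sketched as follows for a single layer; the shapes follow the text (\(W_{n}\) of shape \(2\times L_{n}\times L_{n-1}\), \(B_{n}\) of shape \(2\times 1\times L_{n}\)), while the batch size and layer widths are illustrative assumptions.

```python
# Sketch of one "parallel" layer: the two conjoined branches are stacked
# along a leading dimension of size 2, so a single batched matrix multiply
# advances both branches at once.
import torch
import torch.nn.functional as F

B, L0, L1 = 80, 1, 64          # batch size and layer widths (n = 1)
W1 = torch.randn(2, L1, L0)    # weight matrix, shape 2 x L1 x L0
b1 = torch.randn(2, 1, L1)     # bias matrix,   shape 2 x 1 x L1
xy = torch.randn(B, 2)         # columns: x and y

h = xy.t().unsqueeze(-1)       # shape (2, B, L0): one branch per slice
h = F.softplus(torch.bmm(h, W1.transpose(1, 2)) + b1)  # shape (2, B, L1)
# After the final layer, summing over the leading dimension yields H_hat.
```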
The separable Hamiltonian neural network with inductive bias is also trained and evaluated for five possible implementations of the summation layer. The zeroth implementation is a simple summation (sHNN-I (0)). The first implementation is a linear layer with fixed and equal weights, and no bias (sHNN-I (1)). The second implementation is a linear layer with fixed and equal weights, and a trainable bias (sHNN-I (2)). The third implementation is a linear layer with trainable weights, and no bias (sHNN-I (3)). The fourth implementation is a linear layer with trainable weights and bias (sHNN-I (4)). For the zeroth and first implementations, the last column in Table II shows the number of parameters of the model. The second, third and fourth implementations have one, two and three additional parameters respectively. All other details of the setup, training and evaluation follow from the description above in Section V. #### V-C2 Experimental Results We report the vector error calculated following Equation 6 for the separable Hamiltonian neural network with inductive bias in Tables VII and IX. From Table VII it is observed that training the conjoined neural networks in parallel and consecutively results in different vector errors and training times. Intuitively, they should have the same vector errors, but closer analysis reveals they have different floating point errors, which cause their vector errors to diverge after many iterations. Generally, both separable Hamiltonian neural networks perform well in regressing the vector field as they have similar vector errors. However, from Table VIII, it is observed that the separable Hamiltonian neural network with conjoined neural networks trained in parallel is consistently faster than that which is trained consecutively. Therefore, it is preferred that the conjoined neural networks are trained in parallel. From Table IX, it can be observed that the simple summation (sHNN-I (0)) and the linear layer with fixed and equal weights (sHNN-I (1)) have the same vector errors because they are identical in implementation. However, from Table X, it is observed that the simple summation (sHNN-I (0)) is consistently faster than the linear layer with fixed and equal weights (sHNN-I (1)). Generally, these two models also have the lowest vector errors among the five possible last-layer implementations. The second, third and fourth implementations have more trainable parameters but do not regress the vector field well. Additional trainable weights or a bias obfuscate the contributions of the conjoined neural networks toward the surrogate Hamiltonian and introduce unnecessary complexities when regressing the surrogate Hamiltonian. Therefore, the zeroth implementation, with a simple summation, is the preferred implementation for the summation layer of the separable Hamiltonian neural network with inductive bias. ### _Experiment 4: Comparing the Four Variants on the Task of Regressing the Hamiltonian and Vector Field_ This subsection details the experimental setup to empirically compare the four models on the task of regressing the Hamiltonian and vector field of a Hamiltonian system. The four models are the baseline Hamiltonian neural network and the three proposed separable Hamiltonian neural networks with observational, learning and inductive biases. #### V-D1 Experimental Setup The Hamiltonian neural network is set up following the leftmost architecture in Figure 2. From subsections V-A, V-B and V-C, the optimal implementations of the various separable Hamiltonian neural networks are used. These are the separable Hamiltonian neural network with observational bias where \(k=2\), the separable Hamiltonian neural network with learning bias where \(c_{3}=1.00\), and the separable Hamiltonian neural network with inductive bias with a summation in the last layer. All details of the setup, training and evaluation follow from the description above in Section V. #### V-D2 Experimental Results We report the L1-error in regressing the Hamiltonian following Equation 5 and the vector error calculated following Equation 6 for all models in Table XI and Table XII respectively. We report the time taken to train each model in seconds in Table XIII. From Tables XI and XII, we observe that, in general, all proposed separable Hamiltonian neural networks regress the Hamiltonian and vector field with a lower absolute error and vector error than the baseline Hamiltonian neural network. The proposed models leverage physics information regarding separability to penalise or prevent interaction between the state variables, and this reduces the complexity of the Hamiltonian regression problem. The regression of the Hamiltonian and vector field is therefore improved.
However, from Table XIII, the baseline model is faster than all proposed models. Generally, from Tables XI, XII and XIII, we observe that among the three proposed models, the separable Hamiltonian neural network with observational bias has the lowest absolute error and vector error, while the separable Hamiltonian neural network with inductive bias is the fastest. The observational bias generates more samples of the data, and this emphasises additive separability and covers the input domain of the Hamiltonian system to ease the interpolation task of the model. Conversely, the models with learning and inductive bias only rely on emphasising additive separability to regress the Hamiltonian and vector field. The additional effect of the observational bias in covering the input domain of the Hamiltonian system allows the model to regress the vector field and Hamiltonian better. The model with inductive bias generally outperforms the model with learning bias as it restricts regressions of the Hamiltonian and vector field to strictly satisfy separability, therefore forcing the model to simplify a complex regression problem into two smaller ones. It is also observed that the relative performance of the models between Tables XI and XII changes. Regressing the Hamiltonian involves finding the sum of the integrated vector field and errors made by the models in regressing the vector field may be cancelled out when regressing the Hamiltonian. From Table XIII, the proposed models generally require fewer epochs to converge as the knowledge of additive separability reduces the complexity of the Hamiltonian regression problem. However, the baseline model is the fastest to train. This is because the time taken for each epoch for the proposed models is longer. Compared to the baseline model, the proposed model with inductive bias is slower due to its conjoined architecture with higher dimensional weight and bias matrices that slightly increase the forward and backward propagation time for each epoch. The proposed model with learning bias is even slower due to the additional time taken to compute the mixed partial derivative. The proposed model with observational bias is the slowest as it has several times more samples that linearly scale the training time per epoch given the same batch size. Among the proposed models, the model with inductive bias generally requires the fewest number of epochs to converge and less time per epoch. It is the fastest proposed model. The separable Hamiltonian neural network with inductive bias is the optimal separable Hamiltonian neural network as it outperforms the baseline in regressing the Hamiltonian and vector field and has the smallest trade-off in training time. ## VI Conclusion Four models are compared for the task of regressing the Hamiltonian and vector field of a Hamiltonian system from discrete observations of the vector field. One, the baseline Hamiltonian neural network, is uninformed of the additive separability of the Hamiltonian system. Three proposed separable Hamiltonian neural networks are informed via observational, learning and inductive biases respectively. All proposed separable Hamiltonian neural network models leverage additive separability to avoid artificial complexity between state variables. They are more effective than the baseline in regressing Hamiltonian vector fields and can converge within fewer epochs, but are generally slower in training. 
The best model is the separable Hamiltonian neural network with inductive bias as it outperforms the baseline in the regression tasks and has the smallest trade-off in training time. We are now studying separable Hamiltonian neural networks that are simultaneously embedded with multiple biases. Preliminary results show that models embedded with both observational and inductive biases can regress Hamiltonian vector fields best. We are also working on using an inductive bias to recover the kinetic and potential energies of a Hamiltonian system for better interpretability, and dynamically testing for and embedding separability as an inductive bias by rewiring the Hamiltonian neural network on the fly.